| column | type | length (min–max) |
|---|---|---|
| id | string | 10–10 |
| title | string | 8–162 |
| summary | string | 228–1.92k |
| source | string | 31–31 |
| authors | string | 7–6.97k |
| categories | string | 5–107 |
| comment | string | 4–398 |
| journal_ref | string | 8–194 |
| primary_category | string | 5–17 |
| published | string | 8–8 |
| updated | string | 8–8 |
| content | string | 3.91k–873k |
| references | dict | |
1907.09273
Why Build an Assistant in Minecraft?
In this document we describe a rationale for a research program aimed at building an open "assistant" in the game Minecraft, in order to make progress on the problems of natural language understanding and learning from dialogue.
http://arxiv.org/pdf/1907.09273
Arthur Szlam, Jonathan Gray, Kavya Srinet, Yacine Jernite, Armand Joulin, Gabriel Synnaeve, Douwe Kiela, Haonan Yu, Zhuoyuan Chen, Siddharth Goyal, Demi Guo, Danielle Rothermel, C. Lawrence Zitnick, Jason Weston
cs.AI, cs.CL
null
null
cs.AI
20190722
20190725
# Why Build an Assistant in Minecraft?

Arthur Szlam, Jonathan Gray, Kavya Srinet, Yacine Jernite, Armand Joulin, Gabriel Synnaeve, Douwe Kiela, Haonan Yu, Zhuoyuan Chen, Siddharth Goyal, Demi Guo, Danielle Rothermel, C. Lawrence Zitnick, Jason Weston

# Abstract

In this document we describe a rationale for a research program aimed at building an open "assistant" in the game Minecraft, in order to make progress on the problems of natural language understanding and learning from dialogue.

# 1 Introduction

In the last decade, we have seen a qualitative jump in the performance of machine learning (ML) methods directed at narrow, well-defined tasks. For example, there has been marked progress in object recognition [57], game-playing [73], and generative models of images [40] and text [39]. Some of these methods have achieved superhuman performance within their domain [73, 64]. In each of these cases, a powerful ML model was trained using large amounts of data on a highly complex task to surpass what was commonly believed possible.

Here we consider the transpose of this situation. Instead of superhuman performance on a single difficult task, we are interested in competency across a large number of simpler tasks, specified (perhaps poorly) by humans. In such a setting, understanding the content of a task can already be a challenge. Beyond this, a large number of tasks means that many will have been seen only a few times, or even never, requiring sample efficiency and flexibility.

There has been measured progress in this setting as well, with the mainstreaming of virtual personal assistants. These are able to accomplish thousands of tasks communicated via natural language, using multi-turn dialogue for clarifications or further specification. The assistants are able to interact with other applications to get data or perform actions. Nevertheless, many difficult problems remain open. Automatic natural language understanding (NLU) is still rigid and limited to constrained scenarios. Methods for using dialogue or other natural language for rich supervision remain primitive. In addition, because they need to be able to reliably and predictably solve many simple tasks, and because of their multi-modal inputs and the constraints of their maintenance and deployment, assistants are modular systems, as opposed to monolithic ML models. Modular ML systems that can improve themselves from data while keeping well-defined interfaces are still not well studied.

Despite the numerous important research directions related to virtual assistants, they themselves are not ideal platforms for the research community. They have a broad scope and need a large amount of world knowledge, and they have complex codebases maintained by hundreds, if not thousands, of engineers. Furthermore, their proprietary nature and their commercial importance make experimentation with them difficult.

In this work we argue for building an open interactive assistant, and through it, the tools and platform for researching grounded NLU. Instead of a "real world" assistant, we propose working in the sandbox construction game of Minecraft. The constraints of the Minecraft world (e.g. a coarse 3D voxel grid and simple physics) and the regularities in the head of the distribution of in-game tasks allow numerous hand-holds for NLU research.
Furthermore, since we work in a game environment, players may enjoy interacting with the assistants as they are developed, yielding a rich resource for human-in-the-loop research.

The rest of this document describes in more depth the motivations and framing of this program, and should be read as a problem statement and a call-to-arms. Concurrently, we are releasing https://github.com/facebookresearch/craftassist, which houses data and code for a baseline Minecraft assistant, labeling tools, and infrastructure for connecting players to bots; we hope these will be useful to researchers interested in this program or interactive agents more generally. We detail the contents of the framework (https://github.com/facebookresearch/craftassist) in [28].

Minecraft features: ©Mojang Synergies AB, included courtesy of Mojang AB.

# 2 Minecraft

Minecraft (https://minecraft.net/en-us/) is a popular multiplayer open-world voxel-based building and crafting game. Gameplay starts with a procedurally created world containing trees, mountains, fields, and so on, all created from a set of a few hundred possible atomic blocks, along with (atomic) animals and other non-player characters (collectively referred to as "mobs"). The blocks are placed on a 3D voxel grid. Each voxel in the grid contains one material, of which most are air (empty); mobs and players have floating point positions. Players can move, place or remove blocks of different types, and attack or be attacked by mobs or other players.

The game has two main modes, "creative" and "survival" (there are five modes in total: survival, hardcore, adventure, creative, and spectator). In survival mode the player is resource-limited, can be harmed, and is subject to more restrictive physics. The player must gather and combine resources in a process called crafting to build objects. In creative mode the player is not resource-limited, cannot be harmed, and is subject to less restrictive physics; e.g., the player can fly through the air. Crafting is not necessary in creative mode since all materials are available to the player. An in-depth guide to Minecraft can be found at https://minecraft.gamepedia.com/Minecraft.

Building is a core gameplay component in both modes. Minecraft allows multiplayer servers, and players can collaborate to build and survive, or compete. It has a huge player base (91M monthly active users in October 2018) [1], and many players are active in creating game mods and shareable content. The multiplayer game has built-in text chat for player-to-player communication. Dialogue between users on multi-user servers is a standard part of the game.

# 2.1 Task Space

Minecraft players can build things, gather resources, craft (combine resources), attack other players and mobs (non-player characters), and chat. Even focusing only on building, the set of things a player could possibly do in the game is enormous; in the most naive sense, it is all possible ways of placing all the possible blocks into as big a world as fits in RAM. Minecraft players are creative, and the diversity of player-built objects in Minecraft is astounding; these include landmarks, sculptures, temples, rollercoasters and entire cityscapes. Collaborative building is a common activity in Minecraft.

Nevertheless, the Minecraft task space is constrained, due to restrictions on the environment and player behavior. Physics in Minecraft is particularly simple, and as mentioned above, the world is arranged on a coarse 3D grid. There is a finite (and relatively small) list of atomic game objects (mob types, block types, etc.), and a small finite list of crafting formulae.
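As a rough illustration of the world representation just described (a dense 3D grid of block materials, plus mobs with floating point positions), here is a minimal sketch in Python. The class names and block-id encoding are hypothetical stand-ins, not the actual Minecraft or craftassist data structures.

```python
import numpy as np
from dataclasses import dataclass, field

AIR = 0  # by convention in this toy encoding, block id 0 means an empty voxel

@dataclass
class Mob:
    kind: str    # e.g. "pig", "zombie"
    pos: tuple   # floating point (x, y, z), unlike blocks, which sit on the grid

@dataclass
class VoxelWorld:
    """Toy stand-in for the Minecraft world state: one block id per voxel."""
    size: tuple = (64, 64, 64)
    blocks: np.ndarray = None
    mobs: list = field(default_factory=list)

    def __post_init__(self):
        if self.blocks is None:
            self.blocks = np.full(self.size, AIR, dtype=np.int32)

    def place_block(self, x, y, z, block_id):
        self.blocks[x, y, z] = block_id

    def remove_block(self, x, y, z):
        self.blocks[x, y, z] = AIR

world = VoxelWorld()
world.place_block(10, 0, 10, block_id=1)                  # 1 = "stone" in this sketch
world.mobs.append(Mob(kind="pig", pos=(10.5, 0.0, 12.25)))
```

Operating on a symbolic representation like this, rather than on raw pixels, is part of what makes scripted task execution in this setting tractable.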
We expect that the distribution of player requests of an assistant will be concentrated on a tiny fraction of what is actually possible in the game. For example, the vast majority of block arrangements are unlikely building requests, and modern ML has shown some success at being able to learn to sample (or retrieve) perceptually pleasing 3D structures [79]. Because an assistant bot could already be helpful by successfully completing common tasks from the head of the distribution even if it fails on tasks from the tail, we believe we can make progress towards a useful assistant without having to be able to succeed at every possible request. If true, this could pave the way to further learning during deployment.

The constraints of the environment also mean that executing a task is straightforward once it is specified. Movement, construction, crafting, and even scouting and combat can be reasonably scripted. While there might be value in learned representations (of language, the environment, actions, etc.) that come from learning to execute, it is not necessary to use ML to solve task execution. On the other hand, because it is possible to script execution, to the extent that it is useful to get learned representations, we can easily get supervision to train these.

# 3 Learning and Interaction in Minecraft

We wish to build an intelligent in-game assistant (or playmate, or minion, etc.) that can perform whatever players may want it to do (in-game). Simple examples of the assistant's duties could range from building simple or complex structures to entire cityscapes, breaking down structures, dancing, catching mobs (Minecraft creatures), etc. We intend that the assistant's primary interface should be via natural language using Minecraft chat. Finally, we aim for an agent that is fun, and one that people will want to play and engage with. The purpose of building such an agent is to facilitate the study of the following:

• Synergies of ML components: To explore and evaluate various approaches to building a complex agent. In particular, how the various ML and non-ML components of a system can work together, how to exploit the synergies of these components, and especially how to exploit them to make progress in:

• Grounded natural language understanding: Here, specifying tasks is the challenge, not executing them. How can we build a system that understands what a human wants, and associates language with a task?

• Self improvement: To make progress in building agents with the ability to improve themselves during deployment and interaction with human players. How can the agent learn new tasks and concepts from dialogue or demonstrations?

The basic philosophy of this program is that we should approach the relevant NLU and self-improvement problems by all means available. In contrast to many agent-in-an-environment settings, where the environment is a challenge used to test learning algorithms, we consider the environment a tool for making the NLU problems more tractable. Thus, instead of "what ML methods can learn representations of the environment that allow an agent to act effectively?", we are interested in the problem of "what approaches allow an agent to understand player intent and improve itself via interaction, given the most favorable representations (ML-based or otherwise) of the environment we can engineer?".
While we are sympathetic to arguments suggesting that we will be unable to effectively attack the NLU problems without fundamental advances in methods for representation learning (and we think programs aimed at advancing these are great too!), we think it is time to try anyway. Code and other tools to support this program are available at https://github.com/facebookresearch/craftassist.

# 3.1 Natural Language Interaction

The assistant should interact with players through natural language. This is not meant to handicap the assistant, and other methods of interaction may also be helpful. However, we believe the flexibility and generality of language will be useful (and perhaps necessary). Natural language means that players will need no specialized training to play with the assistant. Furthermore, combined with the fact that the agent is just another player in the game, this makes training and evaluation scenarios where a human pretends to be a bot straightforward.

We intend that the player will be able to specify tasks through dialogue (rather than by just issuing commands), so that the agent can ask for missing information, or the player can interrupt the agent's actions to clarify. In addition, we hope dialogue will be useful for providing rich supervision. The player might label attributes of the environment, for example "that house is too big"; relations between objects in the environment (or other concepts the bot understands), for example "the window is in the middle of the wall"; or rules about such relations or attributes. We expect the player to be able to question the agent's "mental state" to give appropriate feedback, and we expect the bot to ask for confirmation and use active learning strategies.

We consider the natural language understanding (NLU) problem to be the program's central technical challenge. We give some examples in section 3.1.1. While a full solution is likely beyond the reach of current techniques, working with an interactive agent in Minecraft allows opportunities for progress. We can rely on agent grounding (a shared knowledge resource between player and agent, in this case the semantics of Minecraft) to aid learning language semantics. Moreover, we expect to spend lots of human effort on making sure common requests are understood by the bot, in order to bootstrap interaction with players. We discuss these in more detail in section 3.1.2.

# 3.1.1 Challenges

The set of things that Minecraft players do for fun (building, crafting, resource gathering) is combinatorially complex. Even focusing on creative mode (where crafting and resource gathering are not relevant dynamics), there is a huge space of things a player could ask an assistant for help with. We give some examples and discuss their complexities:

PLAYER: build a tower 15 blocks tall and then put a giant smiley on top
ASSISTANT: ok [assistant starts building the tower]

To succeed in executing this command, the assistant needs to understand what a "tower" is (and how to build one), understand that "15 blocks tall" measures the height of the tower, and what "15" is. It needs to know what a "smiley" is (and how to build it) and understand the relative position "top". In our view, with the assumptions of this program, this is nontrivial but straightforward, for example using techniques as in [100, 101, 53, 52].
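As an illustration of what interpreting such a command could look like, here is one hypothetical structured output a semantic parser might produce for it. The schema below is purely illustrative and is not the actual craftassist action grammar.

```python
# Hypothetical "logical form" for:
#   "build a tower 15 blocks tall and then put a giant smiley on top"
# Field names and structure are illustrative only.
parse = {
    "dialogue_type": "command",
    "actions": [
        {
            "action_type": "BUILD",
            "schematic": {"name": "tower", "height": {"value": 15, "unit": "blocks"}},
        },
        {
            "action_type": "BUILD",
            "schematic": {"name": "smiley", "size_modifier": "giant"},
            # "top" is resolved relative to the output of the previous action
            "location": {"relation": "on_top_of", "reference": "previous_action_output"},
        },
    ],
}
```

Given a structure like this, a scripted BUILD primitive, a schematic library covering "tower" and "smiley", and a heuristic for "on top of" can carry out the request without further learning, which is the division of labor this program argues for.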
However, it is easy to imagine small changes that make it more difficult: PLAYER: build a tower 15 blocks tall and then put a giant smiley on top ASSISTANT: ok [assistant starts building the tower] PLAYER: wait, stop, make every other block red Besides needing to know what “every other” means, in this scenario, the assistant has to recognize that the player is referencing a change to the tower and smiley that it is currently building, and that “stop” doesn’t mean “stop activity”. The assistant also needs to know what “red” is but that is more straightforward. If the assistant does not know what “every other” means, we could imagine a dialogue like PLAYER: build a tower 15 blocks tall and then put a giant smiley on top ASSISTANT: ok [assistant starts building the tower] PLAYER: wait, stop, make every other block red [assistant recognizes the instruction refers to a change in its current build task, but doesn’t understand the change (and specifically recognizes “every other” as unknown)] ASSISTANT: What is “every other”? PLAYER: Let me show you [player makes a stack of blocks alternating in color; assistant is able to generalize “every other” to new situations] In our view, this level of flexibility and ability to learn from dialogue and examples is beyond what we can accom- plish with our current technology. However, we believe that situating the assistant in an engaging environment that allows simplified perception and task execution opens opportunities to do research at this frontier. # 3.1.2 Opportunities As mentioned in 2.1, the Minecraft task space and environment have many regularities that we can use to simplify task execution. These regularities also create handholds for the NLU problems of specifying tasks and learning from dialogue. First, they allow us to engineer language primitives or otherwise introduce domain knowledge into data collection and model design. For example, using the knowledge of the basic Minecraft tasks, we can build sets of language/action templates for generating examples commands for these tasks (in addition to crowdsourcing such language). These can be used to build artificial training data and inform the structure of machine learning models that are meant to interpret language. Another example is basic behavioral patterns that elicit data from the user, like asking a player to tag things it sees in the environment, which can be later used as referents. There is a huge space of opportunities for endowing our assistant with a range of language primitives. These allow building useful assistants from which to bootstrap before being able to learn too much from players. Second, the structure of the environment and the head of the task distribution will hopefully allow the assistant to learn language more easily, beyond data generation and model design. This structure can function as a knowledge resource shared between agent and player, and ground concepts the assistant might need to learn. For example, if the user asks the assistant to “build a smiley”, the agent can infer that “a smiley” is some kind of block object, as “build” is a common task the bot should already understand. The agent can make connections between “small” and the number of blocks in an object, and “close” and the number of steps it needed to walk. Also, atomic Minecraft objects give a set of reference objects with rich structure that require no learning to apprehend: mob types, block types etc. are given explicitly to the assistant. 
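The template-based data generation described at the start of this subsection can be made concrete with a small sketch; the template strings, vocabularies, and action schema below are hypothetical, not the actual craftassist templates.

```python
import random

SHAPES = ["tower", "cube", "wall", "smiley"]
COLORS = ["red", "blue", "yellow"]
SIZES = ["small", "big", "giant"]

TEMPLATES = [
    "build a {size} {color} {shape}",
    "make me a {shape} that is {size}",
    "please construct a {color} {shape} over there",
]

def generate_example(rng=random):
    """Sample one (command, action) training pair from the templates."""
    shape, color, size = rng.choice(SHAPES), rng.choice(COLORS), rng.choice(SIZES)
    command = rng.choice(TEMPLATES).format(shape=shape, color=color, size=size)
    action = {"action_type": "BUILD",
              "schematic": {"name": shape, "color": color, "size": size}}
    return command, action

# A few thousand synthetic pairs like these can seed a parser before any
# crowdsourced or in-game data is available.
pairs = [generate_example() for _ in range(3)]
```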
Furthermore, we might expect to be able to find synergies between learning generative models of block objects and discriminative models of those block objects (and their parts), in addition to the language used to instruct the assistant to build or destroy those objects. In our view, one of the most exciting aspects of building an assistant in Minecraft is this grounding of concepts afforded by the environment and the distribution of tasks.

Third, the basic game is fun, and gives us opportunities to understand how to make training (or teaching) an assistant rewarding. We expand on this in section 3.2.1.

# 3.2 Self improvement

One of the oft-repeated criticisms of standard statistical ML is that models cannot improve themselves beyond becoming more accurate at the task for which they were designed (and for which the data has been collected and labeled, etc.). They cannot reconfigure themselves for new tasks without a human building a new dataset (or reward function). While this complaint is not very precise, we do believe it is true in spirit, especially for the most successful ML models, such as those used in object recognition, translation, or language modeling. While the features they produce can be useful in many tasks, the models themselves are inflexible. On the other hand, there are several systems designed explicitly for self improvement, for example [63, 88].

We consider a Minecraft assistant an ideal setting for studying how to flexibly learn from human interaction. As in [63, 88], instead of trying to "solve" self-improvement by making fundamental advances in ML, we think the setting will allow progress because it provides the correct substrate. That is, we hope we can design frameworks allowing an assistant to improve itself within that framework (but not beyond that); and because of the richness of the Minecraft task space, this allows for non-trivial growth.

# 3.2.1 Effective (and fun) training

One of the goals of the program is to study how the assistant should interact with people, and what feedback mechanisms should be employed to make training the assistant both efficient and enjoyable. While efficient learning is probably the key technical ML challenge to making the assistant responsive to training, there are many other UI factors that may be as important for making training fun. As part of the project, we want to study how the assistant should respond to feedback so that it is engaging. Moreover, mistakes should be amusing, and the assistant should more generally fail gracefully. These criteria are important both to incentivize players to engage with the assistant and to enable it to receive a learning signal to improve itself. In terms of efficiency (in ML terms, being very sample efficient) and user enjoyment, some major challenges include:

• For the assistant to be immediately responsive to feedback. We suspect this will have a large effect on the player's impression of the bot's performance.

• To optimally interact with the user, for example seeking clarification that both gives maximal information and does not bother the player with a succession of annoying questions.

• For the assistant to get the most out of the (rich) feedback that is given [75, 34], i.e. to generalize maximally beyond the individual case being addressed.
So, for example, we can imagine the following dialogue:

PLAYER: go over to that tower with the gargoyles
[assistant does not know what "gargoyles" are, but sees there is only one tower; it surmises that the tower must be related to "gargoyles"]
[assistant walks to the tower]
ASSISTANT: does this tower have gargoyles?
PLAYER: yes
ASSISTANT: What is a gargoyle?
PLAYER: its the monster on top of the tower
[assistant recognizes it is a part of the tower because of "top", which it already understands]
ASSISTANT: is it this? [assistant moves to the top of the tower]
PLAYER: yes

The information the assistant could gain from this dialogue could be learned by seeing lots of objects labeled with "gargoyle", or by associating a reward given by a player when it moved to the correct object over lots of plays. But in our view, if the assistant could execute these sorts of language interactions, it could be far more sample efficient. In this case, through the interaction, the agent could learn that a gargoyle is an object made of blocks (and not some other property of a tower, like being tall), find an example of this kind of object and a rough segmentation, and learn an association between "gargoyle" and the word "monster". As before, this level of dialogue is still science fiction, and is included to illustrate the opportunities of rich feedback. However, one can see that in order to even study these kinds of interactions in a non-toy setting with our current technology, we need an environment where the assistant has a good deal of domain knowledge built in. We also need players who are motivated to teach the assistant. We think it will be useful (and interesting) to design learning behaviors explicitly in addition to trying to learn them.

# 3.3 Modularity

Many of the recent successes of ML have come from training end-to-end models. Nevertheless, we believe that in this research program, the best path forward is to bootstrap a modular system with both scripted and ML components. We are not opposed to end-to-end approaches, and think it may be possible for an ML agent to learn to do everything end-to-end from raw pixel input. However, our view is that this is too difficult in the near term, and that the challenges of learning low-level actions and perception are likely to be a distraction from the core NLU and self-improvement problems. In contrast, a modular system allows us to abstract away the lower-level primitives and focus on these problems. Furthermore, using a modular system makes data collection and generation easier. Finally, besides making this research program more accessible, we think modular ML-augmented systems are generally interesting objects of study.

In particular, we do not consider it "cheating" to script useful action, perception, or symbolic manipulation primitives. These might include path-finding and building scripts, and libraries of schematics and shapes; or heuristics for relative directions or sizes of objects. Similarly, we consider it reasonable to use ML components trained for a particular sub-task (perceptual or otherwise). Our frameworks (as in [27]) and models (as in [5, 27, 47]) should be designed to allow for a gradual process of ML-ification where it may be useful.

Abstracting lower-level primitives: The fundamental problem that a Minecraft assistant faces is understanding what the player wants the assistant to do. In contrast, for many requests, execution is more straightforward.
In Minecraft, the sequence of (atomic) actions necessary to complete basic tasks (path-planning, building, etc.) can be scripted by directly accessing the game’s internal world state. This is clearly not true more generally – in the real world, or even in the settings of other games, the execution of similar basic actions cannot be easily scripted. Furthermore, many of the easily scriptable action primitives in Minecraft (like building) might take hundreds or even thousands of steps to complete, making them non-trivial to learn. Similarly, many perceptual primitives are straightforward to implement, despite being non-trivial to learn. For example, mobs are atomic objects, as are blocks, despite having surface forms (as images) that vary based on viewing angle, occlusions, lighting conditions, etc. In this research program, we consider these and other particulars of the environment as tools for directly engaging the fundamental problem of player intent. Gathering data: The successes of end-to-end models have been driven by large data; but in this program, there is no (initial) source of data for end-to-end training. The end-to-end task requires human interaction, and with our current technology, agents are not responsive enough to supervision to learn in a way that would be engaging. Again because of the human in the loop, self-play is not straightforward. Simply recording human players playing normally (without an in-game assistant) for behavioral cloning is not ideal, as standard play is different than what we would want the assistant to learn; and having humans play as assistants with other humans does not easily scale. On the other hand, for many action or perception primitives, it is easy to collect or generate data specifically for that primitive. These considerations suggest approaches with modular components with clearly defined interfaces, and heterogeneous training based on what data is available (in particular, we are interested in models like [5] that can be trained end-to-end as a whole or separately as components). Modularity as a more generally useful ML trait: We consider the study of ML-augmented systems as an interesting endeavor in its own right. An open assistant can be used as a laboratory for studying the interactions between ML and non-ML components and how changing one component affects the others. Or how to design and build systems so that their behavior can be easily modified, explained, and engineered, while still having the capability to flexibly learn. While the specific primitives for this program should be designed with the Minecraft assistant in mind, we expect that many of the things we discover about building systems will be useful more generally. Some researchers might argue that without the fundamental learning algorithms that would allow an agent to build its understanding from the ground up, we cannot scale beyond simple scripts and hand-designed behaviors. We agree that this is a risk, and we certainly agree that scripts and hard-coded perceptual primitives will not be sufficient for real progress6. Our view is that the NLU and self-improvement problems have difficulties beyond representation learning that should be confronted as directly as possible, and scripted or separately trained components should be used to the extent they are useful in making progress in these core problems. 
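To make the modular design concrete, here is a minimal sketch of the kind of narrow interface between interpretation and execution that this section argues for, with scripted execution behind it. The class names are hypothetical, not the craftassist API, and `world` is assumed to expose a place_block method like the toy VoxelWorld sketched in Section 2.

```python
from abc import ABC, abstractmethod

class TaskExecutor(ABC):
    """Narrow interface between interpretation and execution: the language side
    produces a task dict; the executor carries it out against the world state."""

    @abstractmethod
    def execute(self, world, task: dict) -> None:
        ...

class ScriptedBuilder(TaskExecutor):
    """Scripted build primitive: walks the schematic and places blocks directly,
    using privileged access to the game's internal world state (no learning)."""

    def execute(self, world, task: dict) -> None:
        for (x, y, z), block_id in task["schematic_blocks"]:
            world.place_block(x, y, z, block_id)

# A learned policy (e.g. a hypothetical LearnedBuilder(TaskExecutor)) could later
# be swapped in behind the same interface without touching the components that
# interpret language and produce `task`.
```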
6 However: while there can be important generalization and flexibility benefits from an end-to-end system, so too can there be different generalization and flexibility benefits to having the correct heuristic/symbolic substrate.

# 4 Literature Review

Existing Research Using Minecraft
A number of machine learning research projects have been initiated in Minecraft. Microsoft has built the MALMO project [38] as a platform for AI research. It is often used as a testbed for ML model architectures trained using reinforcement learning methods, e.g. [71, 85, 3, 66, 82]. Some of these use (templated) language to describe mid-level macros [71] or tasks [67]. The work of [55] considers the use of human feedback, but at the action level, not via language. The Malmo Collaborative AI Challenge7 was recently proposed to explore the training of collaborative RL agents, but does not address collaboration with humans, nor the use of language. Most recently, [2] collects millions of frames of players doing various tasks in Minecraft, and proposes several (single-player) tasks to evaluate RL and imitation learning methods. [97] considers the use of language in Minecraft to answer templated visual questions. [42] focuses on a dataset and neural model for spatial descriptors ("on top of", "to the left of"). An observational study of how humans would speak to an intelligent game character in Minecraft using a Wizard of Oz approach has been recently conducted [4].

Other Gaming Platforms and Simulated Environments
A number of other games are used as platforms for AI research. Many are built to study the development of reinforcement learning based algorithms and do not study language, e.g. Starcraft [80, 86], Atari games [9], Go [74, 84], TORCS [95], Doom [41] and text adventure games [18, 96]. DeepMind Lab [8] is based on the Quake engine, although it is somewhat removed from the game itself. Some of these systems do not involve a human at all, e.g. one-player Atari games, while others such as Starcraft and Go do involve an agent interacting with a human, but in an adversarial fashion, rather than as an assistant.

Instead of directly using an existing game platform, another approach has been to implement a simulation for embodied agent research, especially for navigation, QA and situated dialogue. Several such environments have been proposed, such as 3D environments like House3D [94], HoME [14], MINOS [70], Matterport3D [16], AI2-THOR [44], and Habitat [58], as well as more simplistic 2D grid worlds [78, 99, 17]. Within those environments, typical tasks to study are language grounding, navigation, and embodied QA [19]. The visual content of Minecraft is much less realistic than that of [94, 14, 70, 16, 44], but the task space is richer. In [11, 10], the authors explore natural language instruction for block placement, first in 2D and then in 3D. Similar to the setting we consider, visual fidelity is secondary to natural language complexity, and their environment is focused on building instruction, although there the agent is not able to navigate or hold a dialogue with players. One crucial difference between all of these environments and Minecraft is that Minecraft is an engaging game that already has a huge player base, hopefully enabling the study of learning agents that interact with humans via natural language at scale.
To a large extent, all the above platforms have been used mostly to answer questions of the form “how can we design algorithms and architectures that are able to learn via acting in an environment (perhaps integrating multiple modalities)?” In particular end-to-end learning and reinforcement is emphasized, and “cheating” by explicitly using domain knowledge or directly using the underlying mechanics of the simulator is discouraged. The agent is supposed to learn these. In our program, we wish to answer questions of the form ”how can we use the simulator as a knowledge resource, shared between player and learning agent, in service of understanding intent from language?”. Anything that makes this easier is fair game, including directly using the underlying game objects and domain knowledge about the environment and task space. Personal Assistants and dialogue Virtual personal assistants have now penetrated the consumer market, with products such as Google Assistant, Siri and Alexa. These systems are grounded in world knowledge; and can be considered “embodied” in an environment consisting of web and device responses. They could be seen as an important platforms for research, especially in terms of dialogue and human interaction. Unfortunately, as they are large proprietary production systems, and as experimentation can negatively affect users and brands, they are not easily used in open (academic) AI research. Currently, such academic research typically involves smaller, less open-ended domains. There is a large body of work on goal-oriented dialogue agents which typically focus on one task at a time, for example restaurant booking [93, 35, 12], airline booking [22, 91], or movie recommendation [50]. Minecraft as a platform is relatively a good setting for goal directed dialogue research, as (1) a Minecraft agent is embodied in an environment with many familiar physical and perceptual primitives, and so offers opportunities for correlating language, perception, and physical actions; and (2) due to its open-ended nature, the setting emphasizes competency at a wide variety of tasks; (3) the game itself is engaging, and (4) amusing enough mistakes are more tolerable. 7https://www.microsoft.com/en-us/research/academic-program/collaborative-ai-challenge/ Semantic Parsing and Program Synthesis Many of the simpler goal-oriented dialogue tasks discussed above can be formulated as slot filling, which can be considered a form of semantic parsing (inferring a machine understandable logical form from a natural language utterance). Semantic parsing has been used for interpreting natural language commands for robots [81, 59] and for virtual assistants [43]. Semantic parsing in turn can be considered a form of the general problem of program synthesis (see [30] for a survey), where the input to the program synthesizer is a natural language description of what the program should do. Recently there have been many works applying ML to semantic parsing [7, 51, 21, 37, 32, 102] and program synthesis from input-output pairs [45, 69, 26, 65, 13, 25] (and non-ML success stories [29]). An assistant in Minecraft offers the opportunity for studying program synthesis through interactive dialogue and demonstrations; and in a setting where the agent can learn over time about a player’s particular task distribution and language. These have been less studied, although see [36, 31] for dialogue based semantic parsing and [60] for multi-turn program synthesis and [24, 87] for “lifelong” learning to synthesize programs. 
Learning to Ground Language outside of Simulators
There are numerous works exploring the link between modalities in order to ground language outside of a simulator by considering static resources, e.g. by combining vision and text. Such works typically consider a fixed database of images and associated text, e.g. for image captioning [54], video captioning [98], visual QA [6] or visual dialogue [20, 72].

Learning by Instruction and Interactive Learning
We are interested in agents that learn from feedback from the user. Learning from dialogue with varying types of feedback, such as verbal cues (e.g., "Yes, that's right!"), was explored for the question answering (QA) setting in [92] and [48], and in a chit-chat setting in [33]. Similar techniques have been applied for teaching machines to describe images via natural language feedback as well [56]. Agents can gather more information by also proactively asking questions rather than just waiting for feedback. This has been studied in (ungrounded) conversational systems [77, 90, 68, 49] and when grounding with images via the CLEVR QA task [62]. Several works have studied how to use language to train a parametric classifier, e.g. [76, 34, 23]. The topic of learning by instruction was also the subject of a NIPS 2018 workshop (https://nips.cc/Conferences/2018/Schedule?showEvent=10918). While there is some work on learning from dialogue in an embodied setting [15, 83], this is currently relatively unexplored.

The program described in this work is related to (and has been inspired by) [89, 87]. In [89], an interactive language learning game is set up between human and machine in order to place blocks appropriately given a final plan. In [87], a more open-ended building task is proposed using a grid of voxels. The machine has to learn the language used by humans in a collaborative setting to perform well-specified but complex actions in order to draw shapes out of blocks. We have also been inspired by [63], [61], and [46], and in particular, we share the goal of exploring agents that can learn autonomously from various forms of supervision beyond labels. Perhaps in a naive sense, what we are suggesting is less ambitious than these: instead of learning about the real world as in [63], the goal is to learn how to do tasks in Minecraft (and process the relevant concepts) that are likely to be given to a Minecraft assistant. Instead of working towards a method that can learn in an unbounded way, as in [61], we aim "only" for learning within the Minecraft frame, with as advantageous a perceptual and knowledge-representation substrate as we can find. The focus in this work on task specification and learning through language in a known environment recalls the "Frostbite Challenge" in [46]; but we do not consider it important whether our agents have human biases. Nevertheless, it may turn out that succeeding in the Minecraft task space requires understanding the real world, that learning as flexibly as we would like in the Minecraft frame necessitates methods that can learn as desired by [61], and that in order to successfully complete human tasks and learn from human instruction, the agent will need human biases. In any case, we think the setting of an interactive assistant in Minecraft is an ideal frontier to work on these problems.

# 5 Conclusion

In this work we have argued for building a virtual assistant situated in the game of Minecraft, in order to study learning from interaction, and especially learning from language interaction.
We have argued that the regularities of the game environment and task space can and should be used explicitly as tools to make the relevant learning problems more tractable, rather than as diagnostics to measure if a method can learn those regularities. We hope the broader research community will be interested in working on this program, and to that end we are opening https://github.com/facebookresearch/craftassist, with infrastructure for connecting bots to players, and data and code for baseline bots.

# References

[1] Minecraft exceeds 90 million monthly active users. https://www.gamesindustry.biz/articles/2018-10-02-minecraft-exceeds-90-million-monthly-active-users. [2] Minerl. https://web.archive.org/web/20190625193739/http://minerl.io/. [3] Stephan Alaniz. Deep reinforcement learning with model learning and monte carlo tree search in minecraft. arXiv preprint arXiv:1803.08456, 2018. [4] Fraser Allison, Ewa Luger, and Katja Hofmann. How players speak to an intelligent game character using natural language messages. Transactions of the Digital Games Research Association, 4(2), 2018. [5] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 39–48, 2016. [6] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433, 2015. [7] Yoav Artzi and Luke Zettlemoyer. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49–62, 2013. [8] Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016. [9] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013. [10] Yonatan Bisk, Kevin J Shih, Yejin Choi, and Daniel Marcu. Learning interpretable spatial operations in a rich 3d blocks world. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. [11] Yonatan Bisk, Deniz Yuret, and Daniel Marcu. Natural language communication with robots. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 751–761, 2016. [12] Antoine Bordes, Y-Lan Boureau, and Jason Weston. Learning end-to-end goal-oriented dialog. In Proceedings of the International Conference on Learning Representations (ICLR), 2017. [13] Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel. Programming with a differentiable forth interpreter. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pages 547–556. JMLR.org, 2017. [14] Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, and Aaron Courville. HoME: A household multimodal environment. arXiv preprint arXiv:1711.11017, 2017. [15] Rehj Cantrell, Paul Schermerhorn, and Matthias Scheutz. Learning actions from human-robot dialogues. In RO-MAN, 2011 IEEE, pages 125–130. IEEE, 2011.
[16] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158, 2017. [17] Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. Babyai: First steps towards grounded language learning with a human in the loop. arXiv preprint arXiv:1810.08272, 2018. [18] Marc-Alexandre Cˆot´e, ´Akos K´ad´ar, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al. Textworld: A learning environment for text-based games. arXiv preprint arXiv:1806.11532, 2018. [19] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), answering. volume 5, page 14, 2018. [20] Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, volume 2, 2017. [21] Li Dong and Mirella Lapata. Language to logical form with neural attention. arXiv preprint arXiv:1601.01280, 2016. [22] Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. Frames: a corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 207–219, Saarbr¨ucken, Germany, August 2017. Association for Computational Linguistics. [23] Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In Proceedings of the IEEE International Conference on Computer Vision, pages 2584– 2591, 2013. [24] Alexander L Gaunt, Marc Brockschmidt, Nate Kushman, and Daniel Tarlow. Lifelong perceptual programming by example. 2016. [25] Alexander L Gaunt, Marc Brockschmidt, Nate Kushman, and Daniel Tarlow. Differentiable programs with neural libraries. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1213–1222. JMLR. org, 2017. [26] Alexander L Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. Terpret: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016. [27] Jonas Gehring, Zeming Lin, Daniel Haziza, Vegard Mella, Daniel Gant, Nicolas Carion, Dexter Ju, Danielle Rothermel, Laura Gustafson, Eugene Kharitonov, Vasil Khalidov, Florentin Guth, Nantas Nardelli, Nicolas Usunier, and Gabriel Synnaeve. TorchCraftAI v1.1. https://torchcraft.github.io/ TorchCraftAI/docs/core-abstractions.html. Accessed: 2019-07-18, DOI: 10.5281/zen- odo.3341787. [28] Jonathan Gray, Kavya Srinet, Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, Siddharth Goyal, C. Lawrence Zitnick, and Arthur Szlam. Craftassist: A framework for dialogue-enabled interactive agents. arXiv preprint arXiv:1907.08584, 2019. [29] Sumit Gulwani, William R Harris, and Rishabh Singh. Spreadsheet data manipulation using examples. Com- munications of the ACM, 55(8):97–105, 2012. 30 Sumit Gulwani, Oleksandr Polozov, Rishabh Singh, et al. Program synthesis. Foundations and Trends®) in Programming Languages, 4(1-2):1-119, 2017. [31] Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 
Dialog-to-action: conversational question an- In Advances in Neural Information Processing Systems, pages swering over a large-scale knowledge base. 2942–2951, 2018. [32] Kelvin Guu, Panupong Pasupat, Evan Zheran Liu, and Percy Liang. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. arXiv preprint arXiv:1704.07926, 2017. [33] Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. Learning from dialogue after deployment: Feed yourself, chatbot! arXiv preprint arXiv:1901.05415, 2019. [34] Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. Train- ing classifiers with natural language explanations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1884–1895. Association for Computational Lin- guistics, 2018. [35] Matthew Henderson, Blaise Thomson, and Jason D Williams. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272, 2014. [36] Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential ques- tion answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1821–1831, 2017. [37] Robin Jia and Percy Liang. Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622, 2016. [38] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The malmo platform for artificial intelli- gence experimentation. In IJCAI, pages 4246–4247, 2016. [39] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016. [40] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. [41] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Ja´skowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. In Computational Intelligence and Games (CIG), 2016 IEEE Conference on, pages 1–8. IEEE, 2016. [42] Nikita Kitaev and Dan Klein. Where is misty? interpreting spatial descriptors by modeling regions in space. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 157–166, 2017. [43] Thomas Kollar, Danielle Berry, Lauren Stuart, Karolina Owczarzak, Tagyoung Chung, Lambert Mathias, Michael Kayser, Bradford Snow, and Spyros Matsoukas. The alexa meaning representation language. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 3 (Industry Papers), volume 3, pages 177–184, 2018. [44] Eric Kolve, Roozbeh Mottaghi, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474, 2017. [45] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015. [46] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and brain sciences, 40, 2017. [47] Dennis Lee, Haoran Tang, Jeffrey O Zhang, Huazhe Xu, Trevor Darrell, and Pieter Abbeel. 
Modular architecture for starcraft ii with deep reinforcement learning. In Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference, 2018. [48] J. Li, A. H. Miller, S. Chopra, M. Ranzato, and J. Weston. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823, 2016. [49] J. Li, A. H. Miller, S. Chopra, M. Ranzato, and J. Weston. Learning through dialogue interactions. arXiv preprint arXiv:1612.04936, 2016. [50] Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. Towards deep conversational recommendations. In Advances in Neural Information Processing Systems, pages 9748–9758, 2018. [51] Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. arXiv preprint arXiv:1611.00020, 2016. [52] Percy Liang. Learning executable semantic parsers for natural language understanding. Commun. ACM, 59(9):68–76, August 2016. [53] Percy Liang, Michael Jordan, and Dan Klein. Learning dependency-based compositional semantics. In Pro- ceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 590–599. Association for Computational Linguistics, 2011. [54] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014. [55] Zhiyu Lin, Brent Harrison, Aaron Keech, and Mark O Riedl. Explore, exploit or listen: Combining human feed- back and policy model to speed up deep reinforcement learning in 3d worlds. arXiv preprint arXiv:1709.03969, 2017. [56] Huan Ling and Sanja Fidler. Teaching machines to describe images via natural language feedback. arXiv preprint arXiv:1706.00130, 2017. [57] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. arXiv preprint arXiv:1805.00932, 2018. [58] Manolis Savva*, Abhishek Kadian*, Oleksandr Maksymets*, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A Platform for Em- bodied AI Research. arXiv preprint arXiv:1904.01201, 2019. [59] Cynthia Matuszek, Evan Herbst, Luke Zettlemoyer, and Dieter Fox. Learning to parse natural language com- mands to a robot control system. In Experimental Robotics, pages 403–415. Springer, 2013. [60] Mika¨el Mayer, Gustavo Soares, Maxim Grechkin, Vu Le, Mark Marron, Oleksandr Polozov, Rishabh Singh, Benjamin Zorn, and Sumit Gulwani. User interaction models for disambiguation in programming by example. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pages 291–301. ACM, 2015. [61] Tomas Mikolov, Armand Joulin, and Marco Baroni. A roadmap towards machine intelligence. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 29–61. Springer, 2016. [62] Ishan Misra, Ross Girshick, Rob Fergus, Martial Hebert, Abhinav Gupta, and Laurens van der Maaten. Learning by asking questions. [63] T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Set- tles, R. Wang, D. Wijaya, A. Gupta, X. 
Chen, A. Saparov, M. Greaves, and J. Welling. Never-ending learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15), 2015. [64] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013. [65] Arvind Neelakantan, Quoc V Le, Martin Abadi, Andrew McCallum, and Dario Amodei. Learning a natural language interface with neural programmer. arXiv preprint arXiv:1611.08945, 2016. [66] Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in minecraft. arXiv preprint arXiv:1605.09128, 2016. [67] Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-shot task generalization with multi-task deep reinforcement learning. arXiv preprint arXiv:1706.05064, 2017. [68] Sudha Rao and Hal Daum´e III. Learning to ask good questions: Ranking clarification questions using neural expected value of perfect information. arXiv preprint arXiv:1805.04655, 2018. [69] Scott Reed and Nando De Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015. [70] Manolis Savva, Angel X Chang, Alexey Dosovitskiy, Thomas Funkhouser, and Vladlen Koltun. Minos: Multi- modal indoor simulator for navigation in complex environments. arXiv preprint arXiv:1712.03931, 2017. [71] Tianmin Shu, Caiming Xiong, and Richard Socher. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. arXiv preprint arXiv:1712.07294, 2017. [72] Kurt Shuster, Samuel Humeau, Antoine Bordes, and Jason Weston. Engaging image chat: Modeling personality in grounded dialogue. arXiv preprint arXiv:1811.00945, 2018. [73] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016. [74] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016. [75] Shashank Srivastava, Igor Labutov, and Tom Mitchell. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1527–1536. Association for Computational Linguistics, 2017. [76] Shashank Srivastava, Igor Labutov, and Tom Mitchell. Zero-shot learning of classifiers from natural language quantification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 306–316, 2018. [77] Florian Strub, Harm De Vries, Jeremie Mary, Bilal Piot, Aaron Courville, and Olivier Pietquin. End-to-end optimization of goal-driven and visually grounded dialogue systems. arXiv preprint arXiv:1703.05423, 2017. [78] Sainbayar Sukhbaatar, Arthur Szlam, Gabriel Synnaeve, Soumith Chintala, and Rob Fergus. Mazebase: A sandbox for learning from games. arXiv preprint arXiv:1511.07401, 2015. [79] Minhyuk Sung, Hao Su, Vladimir G Kim, Siddhartha Chaudhuri, and Leonidas Guibas. Complementme: weakly-supervised component suggestions for 3d modeling. ACM Transactions on Graphics (TOG), 36(6):226, 2017. 
[80] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timoth´ee Lacroix, Zeming Lin, Florian Richoux, and Nicolas Usunier. Torchcraft: a library for machine learning research on real-time strategy games. arXiv preprint arXiv:1611.00625, 2016. [81] Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011. [82] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical ap- proach to lifelong learning in minecraft. In AAAI, volume 3, page 6, 2017. [83] Jesse Thomason, Jivko Sinapov, and Raymond Mooney. Guiding interaction behaviors for multi-modal In Proceedings of the First Workshop on Language Grounding for Robotics, grounded language learning. pages 20–24, 2017. [84] Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and C Lawrence Zitnick. Elf: An extensive, In Advances in Neural Information lightweight and flexible research platform for real-time strategy games. Processing Systems, pages 2659–2669, 2017. [85] Hiroto Udagawa, Tarun Narasimhan, and Shim-Young Lee. Fighting zombies in minecraft with deep reinforce- ment learning. Technical report, Technical report, Stanford University, 2016. [86] Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich K¨uttler, John Agapiou, Julian Schrittwieser, et al. Starcraft ii: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782, 2017. [87] Sida I Wang, Samuel Ginn, Percy Liang, and Christoper D Manning. Naturalizing a programming language via interactive learning. arXiv preprint arXiv:1704.06956, 2017. [88] Sida I. Wang, Samuel Ginn, Percy Liang, and Christopher D. Manning. Naturalizing a programming language In Proceedings of the 55th Annual Meeting of the Association for Computational via interactive learning. Linguistics (Volume 1: Long Papers), pages 929–938. Association for Computational Linguistics, 2017. [89] Sida I Wang, Percy Liang, and Christopher D Manning. Learning language games through interaction. arXiv preprint arXiv:1606.02447, 2016. [90] Yansen Wang, Chenyi Liu, Minlie Huang, and Liqiang Nie. Learning to ask questions in open-domain conver- sational systems with typed decoders. arXiv preprint arXiv:1805.04843, 2018. [91] Wei Wei, Quoc Le, Andrew Dai, and Jia Li. Airdialogue: An environment for goal-oriented dialogue research. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3844– 3854, 2018. [92] J. E. Weston. Dialog-based language learning. In Advances in Neural Information Processing Systems (NIPS), pages 829–837, 2016. [93] Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404–413, 2013. [94] Yi Wu, Yuxin Wu, Georgia Gkioxari, and Yuandong Tian. Building generalizable agents with a realistic and rich 3d environment. arXiv preprint arXiv:1801.02209, 2018. [95] Bernhard Wymann, Eric Espi´e, Christophe Guionneau, Christos Dimitrakakis, R´emi Coulom, and Andrew Sumner. Torcs, the open racing car simulator. Software available at http://torcs. sourceforge. net, 4:6, 2000. [96] Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H Miller, Arthur Szlam, Douwe Kiela, and Jason Weston. 
Mastering the dungeon: Grounded language learning by mechanical turker descent. arXiv preprint arXiv:1711.07950, 2017.
[97] Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. In Advances in Neural Information Processing Systems, pages 1039–1050, 2018.
[98] Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. Video paragraph captioning using hierarchical recurrent neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4584–4593, 2016.
[99] Haonan Yu, Haichao Zhang, and Wei Xu. Interactive grounded language acquisition and generalization in a 2d world. In International Conference on Learning Representations, 2018.
[100] John M. Zelle and Raymond J. Mooney. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2, AAAI'96, pages 1050–1055. AAAI Press, 1996.
[101] Luke S. Zettlemoyer and Michael Collins. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI, pages 658–666. AUAI Press, 2005.
[102] Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103, 2017.
{ "id": "1708.04782" }
1907.09190
ELI5: Long Form Question Answering
We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions. The dataset comprises 270K threads from the Reddit forum ``Explain Like I'm Five'' (ELI5) where an online community provides answers to questions which are comprehensible by five year olds. Compared to existing datasets, ELI5 comprises diverse questions requiring multi-sentence answers. We provide a large set of web documents to help answer the question. Automatic and human evaluations show that an abstractive model trained with a multi-task objective outperforms conventional Seq2Seq, language modeling, as well as a strong extractive baseline. However, our best model is still far from human performance since raters prefer gold responses in over 86% of cases, leaving ample opportunity for future improvement.
http://arxiv.org/pdf/1907.09190
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, Michael Auli
cs.CL
null
null
cs.CL
20190722
20190722
9 1 0 2 l u J 2 2 ] L C . s c [ 1 v 0 9 1 9 0 . 7 0 9 1 : v i X r a # ELI5: Long Form Question Answering # Angela Fan1,2 Yacine Jernite∗1 Ethan Perez∗3 David Grangier4 Jason Weston1 Michael Auli1 # 1Facebook AI Research 2LORIA 3NYU ‡ # 4Google AI ‡ [angelafan,yjernite,jase,michaelauli]@fb.com, [email protected], [email protected] # Abstract Question: How do Jellyfish function without brains or ner- vous systems? [...] (60 words) We introduce the first large-scale corpus for long-form question answering, a task requir- ing elaborate and in-depth answers to open- ended questions. The dataset comprises 270K threads from the Reddit forum “Explain Like I’m Five” (ELI5) where an online community provides answers to questions which are com- prehensible by five year olds. Compared to ex- isting datasets, ELI5 comprises diverse ques- tions requiring multi-sentence answers. We provide a large set of web documents to help answer the question. Automatic and human evaluations show that an abstractive model trained with a multi-task objective outper- forms conventional Seq2Seq, language mod- eling, as well as a strong extractive baseline. However, our best model is still far from hu- man performance since raters prefer gold re- sponses in over 86% of cases, leaving ample opportunity for future improvement.1 Answer: Jellyfish may not have a brain, but they have a rough nervous system and innate behaviours. However, they are very simple creatures. They’re invertebrate: creatures with- out a backbone. Most jellyfish have really short life spans. Sometimes just a couple of hours. [...] As their name im- plies, they are largely composed of basically jelly inside a thin membrane. They’re over 95% water. (327 words) Documents: Jellyfish do not have brains, and most barely have nervous systems. They have primitive nerve cells that help them orient themselves in the water and sense light and touch. [...] While they dont possess brains, the animals still have neurons that send all sorts of signals throughout their body. [...] They may accomplish this through the as- sistance of their nerve rings. Jellyfish don’t have brains, and that’s just where things begin. They don’t have many of the body parts that are typical in other animals. [...] (1070 words) Figure 1: ELI5 example. Models must write multi-sentence answers given questions and supporting web documents. # Introduction Existing question answering datasets have enabled significant progress in models that provide ex- tractive or unambigious short answers. However, less attention has been paid to open-ended ques- tions that require explanations. In this work, we present ELI5: a Long Form Question Answer- ing dataset that emphasizes the dual challenges of isolating relevant information within long source documents and generating paragraph-length ex- planations in response to complex, diverse ques- tions (see illustrations in Figures 1 and 2). The first challenge of ELI5 is the length and di- versity of answers that span multiple sentences: ~* ∗ Equal contribution ‡ Work done while at Facebook AI Research 1Dataset, Pretrained Models, and Additional Informa- https://facebookresearch. https://github.com/ tion is available: github.io/ELI5, facebookresearch/ELI5 questions are complex and cannot be easily ad- dressed by a short response (Nguyen et al., 2016) or by extracting a word or phrase from an evidence document (Rajpurkar et al., 2016). Answers also represent one of several valid ways of addressing the query. 
Many state-of-the-art question answer- ing models perform well compared to human per- formance for extractive answer selection (Radford et al., 2018; Devlin et al., 2018). However, their success does not directly carry over to our setting. The second challenge is the length and diversity of the content from knowledge sources required to answer our questions. We leverage evidence queried from the web for each question. In con- trast to previous datasets where the human written answer could be found with lexical overlap meth- ods (Weissenborn et al., 2017), ELI5 poses a sig- nificant challenge in siphoning out important in- formation, as no single sentence or phrase contains the full answer. While there are some datasets that do require multi-sentence supporting knowl- HOW How do different animals see different colors? How do ISP Internet Service Providers work? How does my car engine work? How exactly does a massive sewer system work in a large city? Is there really such a thing as a friendly/mean drunk? What causes this disparity? ma — iemeud —> |n what order should a you brush, floss and meena aap What is a Turing machine and why is it so important? What exactly is hiccuping and why do we do it? What's the difference between 32 and 64bit operating systems? Can someone explain Venture Capital funding like I'm five? Figure 2: ELI5 questions by starting word, where box size represents frequency. Questions are open ended and diverse. edge such as TriviaQA (Joshi et al., 2017), their answers are still short. We benchmark the performance of several ex- tractive, retrieval, and generative models. Evalua- tion of our task, and of multi-sentence text genera- tion in general, is challenging. We draw upon sev- eral evaluation metrics that quantify performance on intermediary fill-in tasks that lead up to the full answer generation. The overall answer generation quality is measured with ROUGE (Lin, 2004) and various human evaluation studies. strain the answer to a word or short phrase from the input and evaluate using exact match or F1 with the ground truth span. HotpotQA (Yang et al., 2018) extends this approach by building questions which challenge models to conduct multi-hop reasoning across multiple paragraphs, but the answer is still a short span. Further, the answer must be straightforward, as it needs to be copied from the supporting evidence — precluding most “how” or “why” type questions. We develop a strong abstractive baseline by training a Seq2Seq model on multiple tasks over the same data: language modeling, masked word prediction (Devlin et al., 2018) and answer genera- tion. We show this approach outperforms conven- tional Seq2Seq and language modeling, as well as a strong extractive baseline based on BidAF (Seo et al., 2017) but generalized to multi-sentence out- put. However, our best-performing model is still far from the quality of human written answers, with raters preferring the gold answers 86% of the time. Further, we show that model performance is strongly limited by the ability to comprehend long multi-document input and generate long out- puts to form a comprehensive answer, leaving this challenge for future research. Abstractive QA Abstractive datasets include NarrativeQA (Kocisky et al., 2018), a dataset of movie and book summaries and CoQA (Reddy et al., 2018), a multi-domain dialogue dataset. Both collect responses with crowdworkers and find that written answers are mostly extractive and short. 
MS MARCO (Nguyen et al., 2016), a dataset of crowdsourced responses to Bing queries, has written answers around 1 sentence long with short input passages. TriviaQA (Joshi et al., 2017) contains longer multi-document web input, collected using Bing and Wikipedia. As the dataset is built from trivia, most questions can be answered with a short extractive span.

Multi-document summarization The ELI5 task of writing a paragraph length response from multiple supporting documents can be seen as a form of query-based multi-document summarization (Tombros and Sanderson, 1998). Summarization tasks such as DUC 2004 (https://duc.nist.gov/duc2004/) involve long input and multi-sentence generation, but contain much less training data compared to ELI5. WikiSum (Liu et al., 2018) proposes writing Wikipedia articles as a multi-document summarization task. ELI5 requires more directed text generation to answer a question, rather than to write about a general topic. In addition, ELI5 contains a diverse set of questions which can involve more than one Wikipedia concept.

Table 1: Comparing large-scale QA datasets. ELI5 has answers an order of magnitude longer and more open-ended questions. The first three numeric columns are average word counts; the question-word columns are first-question-word frequencies (%).
Dataset | Question | Document(s) | Answer | Why | How | What | When | Where | Who | Which | Other | # Q-A Pairs
ELI5 | 42.2 | 857.6 (212K) | 130.6 | 44.8 | 27.1 | 18.3 | 11.3 | 2.0 | 1.8 | 0.8 | 6.1 | 272K
MS MARCO v2 (Nguyen et al., 2016) | 6.4 | 56 | 13.8 | 1.7 | 16.8 | 35.0 | 2.7 | 3.5 | 3.3 | 1.8 | 35.3 | 183K
TriviaQA (Joshi et al., 2017) | 14 | 2895 | 2.0 | 0.2 | 3.9 | 32.6 | 2.0 | 2.1 | 16.8 | 41.8 | 0.6 | 110K
NarrativeQA (Kocisky et al., 2018) | 9.8 | 656 | 4.7 | 9.8 | 10.7 | 38.0 | 1.7 | 7.5 | 23.4 | 2.2 | 6.8 | 47K
CoQA (Reddy et al., 2018) | 5.5 | 271 | 2.7 | 2 | 5 | 27 | 2 | 5 | 15 | 1 | 43 | 127K
SQuAD (2.0) (Rajpurkar et al., 2018) | 9.9 | 116.6 | 3.2 | 1.4 | 8.9 | 45.3 | 6.0 | 3.6 | 9.6 | 4.4 | 17.6 | 150K
HotpotQA (Yang et al., 2018) | 17.8 | 917 | 2.2 | 0.1 | 2.6 | 37.2 | 2.8 | 2.2 | 13.8 | 28.5 | 12.8 | 113K

# 3 Making a Long Form QA Dataset

# 3.1 Creating the Dataset from ELI5

There are several websites which provide forums to ask open-ended questions such as Yahoo Answers, Quora, as well as numerous Reddit forums, or subreddits. We focus on the subreddit Explain Like I'm Five (ELI5), where users are encouraged to provide answers which are comprehensible by a five year old (https://www.reddit.com/r/explainlikeimfive). ELI5 is appealing because answers are supposed to be entirely self contained, and thus rely less on pre-existing knowledge of the world and use simpler language that is easier to model.

Questions and answers. We select a set of questions and answers from the ELI5 forum up to July 2018 and then filter it based on how users rated these pairs. First, we only retain questions which have a score of at least two, that is two more 'up-votes' than 'down-votes'. Second, there must be at least one answer with a score of at least two. This yields a final number of 272K questions, and ensures that at least one person other than the author has read the thread and deemed it appropriate. For each thread, we select the answer with the highest voting score as the reference. Note that 63% have one or more other valid answers by our upvote criteria, potentially doubling the size of the available training data.
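For concreteness, the thread selection described above amounts to a simple filter; the sketch below is illustrative only, and the field names ("title", "score", "answers", "body") are assumptions about the scraped forum structure rather than the released data schema.

```python
# Illustrative sketch of the thread filtering described above: keep questions
# scoring at least 2 that have at least one answer scoring at least 2, and use
# the highest-scoring answer as the reference. Field names such as "title",
# "score", "answers", and "body" are assumptions, not the released ELI5 schema.
def filter_threads(threads):
    kept = []
    for thread in threads:
        if thread["score"] < 2:
            continue
        valid_answers = [a for a in thread["answers"] if a["score"] >= 2]
        if not valid_answers:
            continue
        reference = max(valid_answers, key=lambda a: a["score"])
        kept.append({"question": thread["title"], "answer": reference["body"]})
    return kept
```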
Preparing supporting information. Next, we collect web sources for every question to provide relevant information that a system can draw upon when generating an answer. Wikipedia has been found effective for factoid-oriented questions (Joshi et al., 2017; Chen et al., 2017). However, early experiments in our setting showed it to be insufficient to cover the wide range of topics present in ELI5 and to address the open-ended nature of the questions. Instead, we use web data provided by Common Crawl (http://commoncrawl.org). Specifically, we consider each of the individual pages in the July 2018 archive (roughly one per URL) as a single document. The data is tokenized with Spacy (https://spacy.io) and we select English documents with FastText language identification (Bojanowski et al., 2017). Finally, we index the data with Apache Lucene (http://lucene.apache.org).

Creating support documents. We query the index for the 272K questions and gather the 100 most relevant web sources for each question, excluding Reddit. Each web source is the extracted text of one page in Common Crawl. This leads to supporting text for each question of a few hundred thousand words. There is a good chance that the supporting text contains the necessary information to answer the question, but the sheer amount of data is far beyond the scope of what many modern models can handle. We therefore filter the 100 web sources by selecting specific passages using a simple heuristic: we split each web source into sentences, find sentences with the highest TFIDF similarity with respect to the question, add some local context for each of these, and concatenate the result into a single support document, with special tokens indicating non-contiguous passages and document shifts. Each support document is the result of this processing to concatenate relevant information from the web sources.

We find that extracting 15 passages with a context of one sentence before and after the initial selection provides the best trade-off between support document length and likelihood of containing relevant information, where relevance is measured as the likelihood of containing a sentence which has high ROUGE with the answer. We release all 100 Common Crawl IDs for each question and a script to create the support document so future research can use the support document or choose to further investigate the information retrieval problem.

Table 2: Annotated subset of ELI5 to assess answerability.
% Correct Human Answers | 94.5
% Correct Human Answers with Explanation | 90.2
% Support Document contains Full Answer | 65.0
% Support Document contains Relevant Info | 92.0

Finalizing the data set. If the training data contains questions that are too similar to the validation and test data, a model may perform well on these examples by memorizing related examples. We prevent this by building the validation and test set to contain questions that are sufficiently different from the training data. We compute the TFIDF similarity between each pair of questions in the entire dataset and sample the validation and test set from the subset which has no close neighbor by TFIDF score. The final dataset contains 237K train examples, 10K for valid, and 25K for test.

# 3.2 Dataset Analysis

Table 1 compares ELI5 to related datasets in terms of the length of the question, support document, answer, as well as statistics on the question types. First, ELI5 questions are much longer than in other datasets.
This is because the initial question is often followed by a clarifying paragraph detail- ing what aspect of the general theme should be addressed or the question’s starting assumptions, which need to be considered to answer well. To get a rough idea of the different questions, we cat- egorize them based on interrogative words. ELI5 focuses on open-ended queries which are less rep- resented in other extractive or abstractive datasets. Figure 2 shows examples of ELI5 questions split by type and Appendix Figure 11 displays random examples from the ELI5 training set. Interestingly, even What questions tend to require paragraph- length explanations (What is the difference. . . ). Support documents contain 22-60 sentences or on average 858 words, which puts ELI5 on the higher end of published datasets for document length. ELI5 contains long-form answers with an average length of 6.6 sentences, or 130 words. Next, we analyze a random subset of ELI5 to assess the feasability of answering the questions in the dataset. We judge if the question is answer- able by reading each question, the gold answer, and the support document we have created with TF-IDF extraction. Note that questions can have multiple parts and all parts of the question must be answered. We sample 500 randomly question- answer pairs from the training set and find that 94.5% of gold answers fully address the question (Table 2) based on the information in the support document. Figure 12 in Appendix F displays ex- amples of human answers that do not correctly an- swer the question. A small proportion of answers are correct but do not explain the answer. On the support document side, 65% of the support docu- ments we construct provide the complete answer to the question, and 92% of support documents provide information relevant to the question. # 4 Evaluation Methods Evaluating long-form answers. There are sev- eral aspects to quality: answers should be topi- cal and accurate, fluent, and coherent from start to end. We judge the accuracy aspect by comparing to the gold answer. ROUGE (Lin, 2004) measures similarity between a model output and one or sev- eral references, and is often used in summariza- tion. While our task presents different challenges, such as the diversity of possible answers to a ques- tion, we still find the corpus-level metric to be use- ful to rank different related models (§6). We report F1 for ROUGE-1, ROUGE-2, and ROUGE-L. Abstractive model metrics. For generative models, perplexity (PPL) measures the ability to predict the next word in a sequence given its con- text. For a variant which focuses on semanti- cally important words, we report FILL-1, the ac- curacy at which models generate different Nouns, Verbs, and Adjectives given the correct preceding tokens in the first 2K examples of the test set. Fi- nally, ROUGE-20% measures the model’s abil- ity to complete an answer given the first 80% of the reference answer, the question, and the sup- port document. Specifically, we generate a num- ber of tokens corresponding to 20% of the aver- age answer length in the validation set, and mea- sure ROUGE between these and the last 20% of the reference. We mentioned that there are several valid ways to answer most questions. This mea- sure abstracts away this variability and evaluates a system’s ability to complete an answer. Extractive: But none of these hypotheses make much sense for a jellyfish because they do nt have brains at all. 
They just have a nerve net — a loose ring of neurons that runs around the rim of their pulsating bells. They have an unusual nervous system, because jellyfish are not bilaterally symmetrical — that is, they dont have a left side and a right side. Jellyfish don’t have brains, but their nervous systems detect smells, light and other stimuli, and they coordinate their physical responses. Language Model: They don’t have brains. They have a nervous system. They do not have a brain. They are not conscious. They can not move. They move. Their nervous system is not a muscle. They use their muscles to move. The nervous system does not have any nerves. They just have a nerve. They also have a muscle that is very similar to a muscle in that it is very much like a muscle, but instead of a muscle it is a muscle and it is like a nerve, but it is not muscle. The nerve is like the muscle, it is just a muscle but it has a nerve and it has nerves. The nerves are like the muscles, but they have nerves. Multi-task Seq2Seq: Jellyfish don’t have brains. Jellyfish have a nervous system. They know how to swim, they don’t even know what to do with their bodies. They have a “nervous system”, which is basically a bunch of nerve cells that connect to the body. The nervous system is made up of neurons, neurons, nerves, etc. Each neuron doesn’t have a specific set of “neurons”, it just has a different set of nerves. It has a set of sensory neurons, which are all connected to each other. It just knows that it does react to the stimulus, and it knows that the stimulus is there. They have a bunch of nervous systems that act like a filter to get information back. Figure 3: Example answers from the extractive BidAF model, Question + Document + Answer language model, and Multi-task Seq2Seq model for the question “How do Jellyfish function without brains or nervous systems?” (cf. Figure 1). Human evaluation. We use crowdworkers to conduct three assessments. First, evaluators rate the fluency of human and model generated answers on a 5-point Likert Scale, from “very poorly writ- ten” to “easily readable” (500 evaluations). Sec- ond, evaluators are given question-answer pairs and are asked if the answer is correct (500 eval- uations) 7. We also evaluated a smaller subset ourselves while additionally looking at the support documents (100 evaluations) to assess answer ac- curacy. Lastly, crowdworkers are given the ques- tion and answers from two models and asked to decide which answer they prefer while consider- ing readability and accuracy (1000 evaluations). Each crowdworker assessment is made by 3 dif- ferent evaluators. The same questions are used for all models and must be at least 5 words long. # 5 Models # 5.1 Extractive and Retrieval Models Retrieval baseline and oracle. We report ROUGE for a retrieval system that returns the answer of the closest question in the training set. Specifically, we perform a nearest neigh- bor search (Johnson et al., 2017) over the aver- age word embeddings of the question using FAST- TEXT (Bojanowski et al., 2017). We also compute an approximate oracle score for extractive systems by using the reference answer to select similar sen- tences from the support document to maximize ROUGE. Computing ROUGE between the ref- erence and all sets of sentences from the source is intractable. Instead, we perform a beam search that adds sentences maximizing TFIDF with re- spect to the answer. The final beam is re-ranked using ROUGE with respect to the reference an- swer. 
We run this algorithm on our support document and on the full set of web sources for each validation and test question, selecting up to 10 sentences with a beam of size 10.

Extractive models. The first baseline we explore simply returns the 7 sentences from the support document which have the highest TFIDF similarity with the question. We also evaluate models which score sentences from the support document based on the question and return the highest scoring sentences in their original order (the number is tuned on the validation set to maximize ROUGE). We train a model based on BidAF (Seo et al., 2017). We create an extractive training set by finding the span of up to 5 contiguous sentences in the support document which have the highest ROUGE with respect to the reference answer, and sub-sample other support document sentences so that the final training document is shorter than 400 words. We then train a BidAF model to predict the extracted span in the sub-sampled support document based on the question. For test, we compute the span score for each individual sentence, and return the 5 with the highest score as it performed best compared to returning 3 or 7 sentences.

# 5.2 Abstractive Models

[Footnote 7: We experimented with a variant where crowdworkers were allowed to select a third "I don't know" option, but found it was used only around 8% of the time.]

Language and Seq2Seq models. We train several models based on the Transformer architecture (Vaswani et al., 2017), both in its language model and sequence-to-sequence (Seq2Seq) configurations. To investigate how much information from the document the model uses, we train a language model on the concatenation of Question, Support Document, and Answer (Q + D + A) as well as on the Question and Answer (Q + A). Similarly, one Seq2Seq configuration goes from Q to A, and the other from Q + D to A. In all cases, Q, D, and A are separated by special tokens.

Table 3: Comparison of oracles, baselines, retrieval, extractive, and abstractive models on the full proposed answers.
Model | PPL | ROUGE-1 | ROUGE-2 | ROUGE-L
Support Document | - | 16.8 | 2.3 | 10.2
Nearest Neighbor | - | 16.7 | 2.3 | 12.5
Extractive (TFIDF) | - | 20.6 | 2.9 | 17.0
Extractive (BidAF) | - | 23.5 | 3.1 | 17.5
Oracle support doc | - | 27.4 | 2.8 | 19.9
Oracle web sources | - | 54.8 | 8.6 | 40.3
LM Q + A | 42.2 | 27.8 | 4.7 | 23.1
LM Q + D + A | 33.9 | 26.4 | 4.0 | 20.5
Seq2Seq Q to A | 52.9 | 28.3 | 5.1 | 22.7
Seq2Seq Q + D to A | 55.1 | 28.3 | 5.1 | 22.8
Seq2Seq Multi-task | 32.7 | 28.9 | 5.4 | 23.1

Table 4: Intermediary fill-in tasks for sequential generation.
Model | FILL-1 acc. N | FILL-1 acc. V | FILL-1 acc. A | ROUGE-20% 1 | ROUGE-20% 2 | ROUGE-20% L
LM Q + A | 31.0 | 29.6 | 20.6 | 26.5 | 7.0 | 21.1
LM Q + D + A | 30.9 | 28.9 | 19.9 | 26.3 | 7.8 | 21.3
S2S Q to A | 21.7 | 23.0 | 15.5 | 33.6 | 11.5 | 29.5
S2S Q + D to A | 27.6 | 26.3 | 19.4 | 32.7 | 10.7 | 28.6
S2S Multi-task | 27.9 | 26.7 | 19.9 | 37.2 | 14.6 | 33.0

Multi-task training. Language models are trained to predict all tokens in the question, web source, and answer. However, the standard Seq2Seq model only receives training signal from predicting the answer, which is much less than the language model gets. This can contribute to learning poor quality representations compared to language models. To address this, we train a multi-task Seq2Seq model: during training, we multi-task between several generation tasks, including language modeling of Q + D + A by the decoder and variations of source/target pairs (see Appendix A).
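A rough sketch of how such marked (source, target) pairs might be assembled is given below; the marker strings, the separator token, and the task subset shown are illustrative assumptions, and the full task list appears in Appendix A.

```python
# Illustrative sketch of building marked (source, target) pairs for the
# multi-task Seq2Seq model; markers, separator, and task subset are assumptions.
SEP = " <sep> "

def make_training_pairs(question, document, answer):
    q_d = question + SEP + document
    q_d_a = q_d + SEP + answer
    tasks = [
        ("<lm_qda>", "", q_d_a),        # decoder-side language modeling of Q + D + A
        ("<q_to_a>", question, answer),
        ("<qd_to_a>", q_d, answer),
        ("<q_to_d>", question, document),
    ]
    # Prepend the task marker to every source sequence.
    return [(marker + " " + source, target) for marker, source, target in tasks]
```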
We add a masked word prediction task (Devlin et al., 2018) where 15% of tokens in the input are masked and must be recovered by the model in the correct order, and append a marker at the start of each sequence to indicate the task.

Data processing. To reduce the vocabulary, we apply byte-pair encoding (Sennrich et al., 2016) to generate 40K codes which are applied to all datasets. We model a vocabulary of 52,863 tokens for answer generation. We use the Transformer implementation of fairseq-py (Gehring et al., 2017) and train with the big architecture following the details in (Vaswani et al., 2017). Given our data length, we train with a large batch size by delaying gradient updates until a sufficient number of examples have been seen (Ott et al., 2018).

Generation. We generate from abstractive models using beam search with beam 5. We disallow repeated trigrams to prevent repetition, a technique commonly used in multi-sentence summarization (Paulus et al., 2017; Fan et al., 2018). For the full answer generation task, we tune a minimum and maximum length for generation on the valid set and apply these settings to the test set.

# 6 Results

# 6.1 Overview of Model Performance

Full answer ROUGE. Table 3 shows that the nearest neighbor baseline performs similarly to simply returning the support document, which indicates that memorizing answers from the training set is insufficient. For extractive models, the oracle provides an approximate upper bound of 27.4 ROUGE-1. The BidAF model is the strongest (23.5), better than TFIDF between the question and the support document to select sentences. However, these approaches are limited by the support document, as an oracle computed on the full web sources achieves 54.8. Abstractive methods achieve higher ROUGE, likely because they can adapt to the domain shift between the web sources and the ELI5 subreddit. In general, Seq2Seq models perform better than language models and the various Seq2Seq settings do not show large ROUGE differences. Figure 3 shows an example of generation for the language model and the best Seq2Seq and extractive settings (see Appendix F for additional random examples).

[Figure 4: Human evaluation of answer fluency and accuracy — with and without access to supporting evidence documents. Panels: fluency; answer accuracy assessed with no support document; answer quality assessed with the support document.]

[Figure 5: Human preferences for pairwise comparisons. The better model's % preference is bolded. * indicates statistical significance.]

Perplexity and fill-in tasks. Tables 3 and 4 present metrics specific to sequential generation models: perplexity of the answer, accuracy of the model's FILL-1 word prediction for Nouns, Verbs, and Adjectives, and ROUGE of the conditional generation of the last 20% answer words. The language model perplexity is much lower than that of the standard Seq2Seq setting – this is likely linked to the number of output tokens the system is required to predict at training time.
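As an illustration of the ROUGE-20% protocol described above, the following self-contained sketch conditions on the first 80% of the reference answer and scores the generated completion against the held-out tail; model.generate is a hypothetical interface, and the unigram-F1 scorer is a simple stand-in for a proper ROUGE implementation.

```python
# Self-contained sketch of the ROUGE-20% protocol: condition on the question,
# support document, and the first 80% of the reference answer, generate roughly
# the remaining 20%, and score it against the held-out tail. `model.generate`
# is a hypothetical interface, and unigram F1 stands in for ROUGE.
from collections import Counter

def unigram_f1(prediction, reference):
    pred, ref = prediction.split(), reference.split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def rouge_20_percent(model, question, document, reference_answer, avg_answer_len):
    tokens = reference_answer.split()
    cut = int(0.8 * len(tokens))
    prefix, tail = " ".join(tokens[:cut]), " ".join(tokens[cut:])
    generated = model.generate(question, document, prefix,
                               max_tokens=int(0.2 * avg_answer_len))
    return unigram_f1(generated, tail)
```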
The multi- task Seq2Seq experiment, in which the Seq2Seq decoder is trained to predict the question and the document, in addition to the answer, can reach the same perplexity as the language model. ROUGE- 20% shows a much starker contrast between lan- guage modeling and Seq2Seq, as well as between standard Seq2Seq and multi-task training. The lat- ter achieves strong performance of 37.2 ROUGE- 1. However, both versions of the language model are still better at FILL-1. These results suggest that the Seq2Seq model is better than the language model in maintaining coherence and that Seq2Seq relies on information over many time steps. Human evaluation. Human answers are rated highest in terms of fluency (Figure 4, left). The ex- tractive model outputs human-written text which is likely fluent but with the failure mode of con- catenating unrelated sentences. The multi-task model performs similarly to the extractive model which indicates that abstractive methods can gen- erate coherent answers. The language model and standard Seq2Seq trail behind. ers in selecting positive (scores 4 and 5), negative (1 and 2), or neutral (3) choices on the 5-point Likert scale, and find that 2 crowdworkers agree almost 100% of the time (Appendix E, Figure 10). In answer accuracy (Figure 4, middle), there is a large gap between human performance and all models. The language model is almost never accu- rate, while the extractive model is slightly more so than the multi-task model. Crowdworkers assess- ing accuracy do not have the support document. We evaluate accuracy ourselves with the support document in Figure 4, right. Similar to crowd- workers, we find 40% of extractive answers to be accurate. We find only 19% of multi-task model answers are fully accurate; even if the model out- put answers the question, it can generate a sen- tence with an incorrect statement. In contrast, the extractive model copies sentences from human- written text. However, the multi-task model is bet- ter at generating relevant answers (84% relevancy compared to 68% for extractive), as the extractive model is constrained by the support document. Figure 5 presents pairwise preference judg- ments of human annotators shown answers from two of the five systems. The reference answer is preferred over the output of all of our trained mod- els in at least 85.5% of cases, indicating there is substantial room for improvement. The multi-task abstractive setting comes next, closely followed by the extractive (multi-task is only preferred in 57% of comparisons), then the standard Seq2Seq and finally the language model, considered worse than any other setting in at least 91% of cases. We use a two-tailed binomial test to test statis- tical significance of the pairwise judgments and it shows that all judgments are statistically signifi- cant at p < 0.05. To get a sense of the stability of our results, we analyzed the standard deviation of three indepen- dent fluency trials conducted on separate days and we find low variation (Appendix E, Figure 10). We also measure agreement between crowdwork- # 6.2 Quantitative and Qualitative Analysis the proposed metrics. We Discussion of present a number of metrics which provide insight into various model behaviors. We recommend Seq2Seq Multi-task ~ wr) panera: ° BOB ogee RBH get™ at Language Model arr? ‘. 
Me ger et QE go 1 oo o x oy LZ MN we os agit 05 way" oo _o 0 hse as cy wo a ee oe ore Figure 6: Attention over the question and supporting evidence for the Multi-task Seq2Seq model and Question + Document + Answer language model. Attention is shown for the first word of answer generation. future work to report full ROUGE and ROUGE- 20%. Perplexity and FILL-1 focus on local prediction and are poor indicators of overall appropriateness for the full task. Full answer ROUGE discriminates reasonably well between models with the same general architecture, but cannot rate an abstractive system against an The ROUGE-20% measure extractive one. abstracts away some variability and focuses on coherence between the beginning and end of an answer. This metric correlates with human judgments of quality but can only be reported for sequential generation. 30 7 ROUGE-1 By Document Overlap 30 28 G 26 8 3 8 p2 20 we wer extractive | Multi-task ROUGE-1 By % Training Data F1 ROUGE-1 sr atk Percentile er sh ash ao Figure 7: (left) Model score by document-answer similarity. (right) Seq2Seq multi-task score by amount of training data. ‘Average Sentence Rank Maximum Sentence Rank ‘% of Sentences % of Sentences 0 100 200 300 400 TFIDF passage rank 500 0 100 200 300 400 500 TFIDF passage rank Analysis of extractive, LM and Seq2Seq models. Language models perform better than Seq2Seq in terms of perplexity and FILL-1, while being significantly worse at ROUGE-20% and human evaluations. To investigate this, we visu- alize the attention mechanism at the start of an- swer generation in Figure 6. The attention of the language model is strongly focused on nearby context when generating the first word of the an- swer, whereas the multi-task Seq2Seq model at- tends more evenly to relevant information in the question and the document. This validates our as- sumption that the language model’s focus on local context is insufficient for high quality answers. In Figure 7 (left), we further investigate how the relevance and quality of the support document ex- traction step affects the answers provided by the extractive and abstractive setting. The ROUGE score is displayed for data subsets, partitioned by percentile of word overlap of the answer with the support document (e.g. how many answer words appear). While both models perform better for documents with higher ROUGE overlap between support document and human answer, the abstrac- tive setting is much better at compensating for when the support document has lower relevance. Data size and initial selection. There is a large difference between the extractive oracle ROUGE using our support document and the oracle on full Figure 8: (left) TFIDF rank of source passage for oracle sen- tences. (right) Highest rank used per question. web sources. This suggests that the initial selec- tion of our support document severely limits ac- cess to relevant information. To assess the impact of support document size, we re-run the selection step for 1000 examples to extract 500 passages in- stead of 20, and run the oracle on these new inputs. Figure 8 shows the TFIDF rank of the passages from which sentences are selected. While slightly more sentences are extracted from the higher rank- ing passages, less than 9% come from the first 20, and most oracles have at least one sentence from the last 100. For a model to perform best, it would have to handle inputs tens of thousands of words long. 
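For reference, TFIDF-based passage ranking of the kind analyzed here (and used to build the support documents in Section 3.1) can be sketched as follows; the splitting of web sources into passages, the value of k, and the use of scikit-learn are assumptions rather than a description of the released pipeline.

```python
# Sketch of TFIDF-based passage ranking; parameters and libraries are
# illustrative assumptions, not the released pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def rank_passages(question, passages, k=15):
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([question] + passages)
    # Similarity of every passage to the question (row 0 of the matrix).
    scores = linear_kernel(matrix[0:1], matrix[1:]).ravel()
    top = scores.argsort()[::-1][:k]
    return [(int(i), float(scores[i]), passages[i]) for i in top]
```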
In Table 3, we show an oracle computed on the full web sources has much higher ROUGE than an oracle computed on the support document. We analyze the impact of data size on perfor- mance in Figure 7. We train the multi-task model on 25%, 50%, and 75%, and the all of the data to compare performance. ROUGE increases as a function of the data used and even though ELI5 is one of the larger QA datasets (§3), this shows that collecting more still helps. While we only used one reference answer per question here, recall that over half of them have multiple answers, which could be leveraged to train better models. Combining challenges. Our task blends the inter-dependent challenges of retrieving informa- tion, reasoning, and writing long outputs. Study- ing each of these aspects in context is particularly important. For example, we show that the abstrac- tive model’s ability to compensate for a (realisti- cally) imperfect support document is essential to its relative success over extractive methods. The fluency gap between the reference and the extrac- tive system in human evaluation also suggests that the latter may require sequential decision capabil- ities. This kind of decision making is necessary to address the dual challenges of reasoning over sev- eral supporting facts and generating long coherent outputs. We see our task’s need to combine com- plementary systems as critical to gaining insights into their individual behaviors. # 7 Conclusion We introduce the first large-scale long form ques- tion answering dataset of open-ended queries with explanatory multi-sentence answers. We show that abstractive models generate coherent answers and are competitive with extractive models in hu- man evaluation. Proposed models are far from human performance, in part due to the inability to exploit the long full web text. We hope ELI5 will inspire future work in all aspects of long-form QA, from the information extraction problem of obtaining information from long, multi-document input to generating more coherent and accurate paragraph-length answers. # References Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135–146. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- domain questions. In ACL. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac: Question answering in context. In EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. CoRR, abs/1810.04805. Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur G¨uney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with con- text from a search engine. CoRR, abs/1704.05179. Angela Fan, David Grangier, and Michael Auli. 2018. In ACL Controllable abstractive summarization. Workshop on Neural Machine Translation and Gen- eration. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional Sequence to Sequence Learning. In Proc. of ICML. Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2017. Billion-scale similarity search with gpus. CoRR, abs/1702.08734. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In ACL. 
Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. TACL. Chin-Yew Lin. 2004. Rouge: a package for automatic evaluation of summaries. In ACL Workshop on Text Summarization Branches Out. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summariz- ing long sequences. In ICLR. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine read- ing comprehension dataset. CoRR. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. In WMT, pages 1–9. Association for Compu- tational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. arXiv preprint arXiv:1705.04304. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable ques- tions for squad. In ACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Anastasios Tombros and Mark Sanderson. 1998. Ad- vantages of query biased summaries in information retrieval. In SIGIR. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. Newsqa: A machine compre- hension dataset. In ACL Workshop on Representa- tion Learning for NLP. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. NIPS. Ellen M. Voorhees. 2003. Overview of the TREC 2003 question answering track. In TREC. Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017. Making neural qa as simple as possible but not simpler. In CoNLL. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answer- ing. arXiv preprint arXiv:1809.09600. # A Details of Multitask Training The seq2seq multi-task model was trained on a va- riety of tasks at training time. Each task is spec- ified by a special token to delineate to the model which task it is. Tasks at training time include the following, in the form of (source, target) pairs. “+” represents a concatenation of inputs, separated by a special token. 
• (empty, question) • (empty, document) • (empty, answer) • (empty, question + document) • (empty, question + document + answer) • (question, answer) • (question, document) • (question + document, answer) • (question, document + answer) • masked word prediction: 15% of source words are replaced by a “[MASK]” token and the corresponding tokens must be predicted as the target in the correct order # B Architectural Details # B.1 Extractive BidAF is trained using the Al- The BidAF model lenNLP8 implementation, using the standard hyper-parameters (specified in the bidaf.jsonnet file9). We only change the batch size, since a 16GB GPU can only fit one example per batch, and as a result the Adam learning rate has to be changed to 5e − 5. We provide the code to se- lect the target span and sub-sample the input in our data, as well as to convert it to the SQUAD format required by the AllenNLP system. # B.2 Abstractive Models Models are trained with the Adam optimizer with beta values (0.9, 0.98), initial learning rate 1e−07 with 4000 warmup steps to learning rate 0.0001. We follow the inverse square root learning rate scheduler described in (Vaswani et al., 2017). Models are trained with a label smoothing value of 0.1. # 8https://allennlp.org/ 9https://github.com/allenai/allennlp/ blob/master/training_config/bidaf. jsonnet Sequence to sequence models are trained with following architecture from (Vaswani et al., 2017): 6 encoder layers, 6 decoder layers, FFN dimen- sion 4096, 16 attention heads, embedding dimen- sion 1024. Gradient updating is delayed until 32 updates have been processed. Models are regular- ized with dropout 0.1 and attention dropout 0.1. Language models are trained with same param- eters described for seq2seq above, with 6 decoder layers. We did not train with 12 decoder layers, as we found the deeper Transformer model was harder to optimize and we achieved worse results compared to a 6-layer language model. For generation, models generate a minimum of 200 words and a maximum of 500 words. # C Comparison of Extractive and Abstractive Methods Figure 13 displays an example of a generated an- swer for an example where the source document is of poor quality but the abstractive answer still has strong ROUGE. In comparison, the extractive answer is heavily affected by the poor document quality and derails in topic. # D Test/Valid Similarity with Train Figure 9 shows the performance of the Multi-task Seq2Seq and LM on Question + Document + An- swer by the similarity of the question in the valida- tion set to a question in the training set. The sim- ilarity is determined by TFIDF. There is very lit- tle effect of answer generation on a question more similar to a training question than less similar. ROUGE-1 By Similarity to Train F1 ROUGE-1 sh Sh gh gah Percentile |) LMQ+D+A Multi-task Figure 9: ROUGE of full answer generation is not strongly affected by similarity of the questions in the validation set to questions in the training set. # E Variance in Human Evaluation Studies We analyze the variation of our human evaluation study for answer generation fluency in Figure 10. We conduct 3 different trials of the same 100 ran- domly sampled question-answer pairs from the test set for the selected models. Each trial is con- ducted on a different day. Our results show that standard deviation across the trials is small and not statistically significant. Further, each answer is evaluated for fluency by 3 different crowdworkers. 
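The agreement statistic analyzed next can be computed by bucketing the 1-5 Likert ratings into negative, neutral, and positive groups; the sketch below assumes three ratings per answer and is illustrative rather than the exact evaluation script.

```python
# Sketch of the agreement statistic: bucket each 1-5 Likert rating into
# negative (1-2), neutral (3), or positive (4-5), and count the fraction of
# answers on which all three annotators land in the same bucket.
def bucket(rating):
    if rating <= 2:
        return "negative"
    if rating == 3:
        return "neutral"
    return "positive"

def full_agreement_rate(ratings_per_answer):
    # ratings_per_answer: iterable of (r1, r2, r3) Likert scores, one tuple per answer.
    ratings_per_answer = list(ratings_per_answer)
    agree = sum(1 for scores in ratings_per_answer
                if len({bucket(r) for r in scores}) == 1)
    return agree / len(ratings_per_answer)
```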
Figure 10 analyzes the agreement rate between crowdworkers that can choose on a scale of five options. We term “agree- ment” if all workers are positive, negative, or neu- tral about the answer fluency. We show that all three crowdworkers agree around 60% of the time for most models and almost 80% of the time for the language model. As the language model gen- eration is significantly less fluent than the other models, most crowdworkers are in agreement. The agreement of at least two of the annotators is al- most 100% for all of our evaluated systems. # F Examples We display randomly sampled examples from the training set in Figure 11 and examples of answers that do not answer the question in Figure 12 (an estimated 5% of the dataset). To better understand the output of our models, we display example generations randomly sam- pled from the test set for the multi-task Seq2Seq model (Figure 14) and the Extractive BidAF model (Figure 15). We additionally show a set of poor generations for the multi-task Seq2Seq model (Figure 16) that display a representative set of problems for this abstractive model. 5.9 Std Deviation Across 3 Trials Likert Scale out of 5 1 eS eve ae we RX) oo ca Inter-Annotator Agreement » © 0 » % full agreement 0 7 Ls 2 ge cP xo 30 ah EO" i? w' ow hho” ww" 9! 0 Inter-Annotator Agreement € ow £ & w e % 4 a» Ro aie p i OF Jo A Og 7 yr ow oF ORY Ope oe ph ae Wh Figure 10: Analysis of Human Fluency Study (a) We an- alyze the variation between three iterations of the same ex- periment, conducted on three different days. We evaluate the fluency rating given to the human answers, LM answers, and Multi-task model answers, and find low variation across mul- tiple trials. (b) We calculate the inter-annotator agreement between the three evaluators that score the fluency of each of the models, and find that the % of time all three annotators agree is high- est for the language model. Agreement is calculated using positive (Likert scale scores 4 and 5), neutral (3), and nega- tive (scores 1 and 2). (c) We calculate the inter-annotator agreement between at least two evaluators for the fluency of each of the models, and find that two annotators usually agree almost 100% of the time. Why do Oreos soften when they get stale, but most other foods (like bread) harden? It has to do with how much moisture is in the food compared to how much moisture is in the air. Oreo? Drier than air, they gain moisture and turn soft. Loaf of bread? More moist than air, it loses moisture and dries up. Kitchen hack: put a slice of bread in with cookies to keep the cookies fresher a little longer. Why is it that cracks seemingly spread on their own? Because of how the mechanics of the crack work. The stress in a material is related to the radius of corners, that is, rounded corners, like the bottom of the letter U, have less stress than sharp corners, like a V. Sharp corners are actually referred to in mechanical engineering as “stress raisers”. And what’s sharper than the edge of a crack? This means that even stress from thermal expansion of the material can be enough to cause the crack to spread. This is especially true of glass, whose brittle composition is quite susceptible to catastrophic failures. Discrete Math: reflexive, symmetric, antisymmetric, transitive A *binary relation* is a function of two variables that is either true or false. I’m going to use Rxy to notate the relation R applied to x and y in that order. 
A relation is *reflexive* if Rxx is true; that is, if both variables are the same the relation always holds. A relation is *symmetric* if Rxy = Ryx; that is, switching the order of the arguments doesn’t change the value. A relation is *antisymmetric* if Rxy never equals Ryx for distinct x and y; that is, switching the order of the arguments always changes the value (unless the arguments are the same, in which case it obviously can’t). A relation is *transitive* if Rxz is true whenever Rxy and Ryz are. Equality is the simplest example of this; if you have x = y and y = z, then you can conclude that x = z. Why does bashing my crt tv make it work? There are several reasons why “percussive maintenance” can work. The most likely is that there’s simply a poor electrical connection somewhere, and banging on it gets it into contact. Once the electric- ity starts to flow, things heat up and expand a bit, maintaining the connection. Until you turn it off and everything cools off again. Is it more efficient to leave the AC running on auto at 74F (in 85 degree whether) or turning it off when leaving the house, and turning it back on when returning when the ambient temp in the apartment is 85? Turn it off, you will use less power. Thermodynamics tells us that heat loss is proportional to temperature difference, so if you let your house warm up the heat from outside enters more slowly. Essentially the product of time and temperature difference is your cooling energy. There is a minor subtlety with maintenance and life cycle, your AC unit may not be designed for continuous duty, so long cool down cycles could be hard on it. Not sure if that is the case in your unit, seems like a bad way to design anything but it could be. Edit: one non-thermodynamic factor is humidity and mold, which will be different at a constant temperature vs a cycling temperature. Figure 11: Examples of Question-Answer Pairs randomly sampled from the training set The differences between Wii, PS, Xbox, etc., and which is considered the best system. I’m 40, out of the loop, and have to buy one for Christmas. If he is 7 go for the Wii its /technically/ more kid friendly and has a more varied option of games for that age range... I think What is it when everything just looks really bright/your eyes white out for a moment, then goes back to normal? What is it? Time to see a doctor. Neurologically what happens someone becomes insane That is waaaaaaaay too broad a question to possibly answer since “insane” covers a multitude of different conditions and disorders, all with their own causes and that’s even assuming that we know the causes in the first place. If my spaceship could withstand the pressures of Jupiter, what would a trip through the planet be like? Read the novel “2010” by Arthur C. Clarke. There’s a chapter that explains this far better than I could. If person A saves person B’s life through CPR but person B’s rib gets broken during chest compressions, why is person B able to sue person A? Why doesn’t the judge through the case out of court? I know Good Samaritan laws play into this somehow, but I can’t remember how. I think that if you have the proper licensing, then they can’t sue you. Or maybe it’s the other way around. Sorry, this response has become useless Figure 12: Examples of Answers in the training set that do not answer the question Question: Why do TV shows hide logos ? Document: Why is that? Why do logos get blurred on television, or in film, or even in music videos? 
The answer, it turns out, is complicated, but is mostly about money in various forms. A whole lot of dysfunction here. Quick Pick: HBO TV Shows Pick the missing word in the title of these HBO shows. September is season premiere month so these logos should be fresh in your mind. TV Shows Venn Diagram II Can you click on the most accurate section of the Venn Diagram for each of the following TV Shows? [...] ELI5 Answer: nothing is free. In most cases, it is a prop for the show, but because apple did NOT pay them for the product placement, the show isn’t going to give it away. In other cases, apple may not want their brand used in association with that media. Extractive: The answer, it turns out, is complicated, but is mostly about money in various forms. There is only one reason they do not trust it and that is fear of loosing their money. All the seals or say reassurance and credibility logos assure your user that a website is safe, protected and personal information is safe. A Lot Of New Disney TV Shows Disney is finally looking to break into the world of TV shows with this streaming service and that’s reportedly going to include ”episodic spinoffs” of beloved franchises like High School Musical (that’s right, it’s coming back. . . ), Monsters Inc., and The Muppets. Seq2Seq Multi-task: It’s because they’re paying for the right to use the show. If you’re a TV show, you’re paying to have the show shown for free, and you’re not paying for a license to use that show for free. the reason they don’t have to pay for the license is because the show doesn’t want to be associated with the show, and the show isn’t paying for it. It’s not the show’s fault, it’s the company that makes the show pay for it, and it’s not their fault. Figure 13: Example of document with low document-answer overlap where the abstractive system still achieves a high ROUGE score. The extractive baseline correctly identifies where the question is reformulated in the source document and selects the following sentence, but then goes on to confuse different types of logos and goes off topic entirely with the Disney analysis. Who controls what the president is allowed to do? But then who controls what they’re allowed to do? Etc .. The president is the commander-in-chief of the armed forces. He is the head of the executive branch. He controls the military, the military and the military. The president controls the army, the navy, the air force, and the army. The President is the president. He has the authority to do whatever he wants, but he doesn’t have the power to do all he wants. He doesn’t want to do anything about it. He wants to do something about it, but it doesn’t really matter what the president is doing. He’s not the president, he’s just a figurehead, and he’s the president of the United States. It’s not really a matter of who controls the government, it’s the people who control the government. There are a lot of people who don’t like the President, but they don’t really like him. There’s a lot more to it than that, but that’s the basic gist of it. The US government is controlled by the US government, but not by the federal government. The federal government is the one that controls most of the US military, but there are a few that don’t. They are the ones that control most of US military operations, and they are not the ones who control most. What makes the stomach “growl”? The sound of your stomach growling is caused by the movement of air in your stomach. 
The sound you hear when your stomach is empty is called peristalsis. This is the sound that your stomach makes when it is empty. When you are hungry your stomach will make the sound of the growling sound, but it will not be as loud as the sound you are making when you are not hungry. It is the same thing that makes your stomach growl when it’s empty. You are not actually hungry, but you are actually contracting the muscles in your intestines to make the noise. The growling is the result of the muscles contracting and contracting to make a loud noise. It’s the same reason why you feel the urge to eat when you’re hungry. Your stomach is trying to empty itself, so it makes the noise as if you aren’t hungry. The same thing happens with your stomach when you have a cold. The muscles in the stomach are contracting and making the noise, but they are not contracting. They are contracting, but not contracting, so they aren’t contracting. The noise you feel when you feel it is the stomach making is your stomach making the sound. Source : I am a gastroenterologist, and I have a lot of experience with gastroenterology. How does a de-humidifier work? A humidifier works by heating the air in the room. When the air is warm, it cools down. When it is cold, it warms up. This cools the air around it, which cools it down. This is why you can see a lot of cool things in a room if you look at it from a distance. The air around the room is warmer than it is, so it rises. This causes it to rise, and cool down. The same thing happens with a humidifier. The water in the air condenses out of the air, and condenses back into liquid water. This water condenses back to liquid water, which is what you see when you blow on a hot air balloon. The cold air cools down, and the warm air condenses back out of it, making the air colder. It’s the same thing as a humidifier, except that it doesn’t have any water in it, it just has a bunch of tiny droplets in it that make up the air. It does have some water, but it has a lot more in it than it does with a cold air balloon, so the air gets colder. The downside to a humidifier is that it isn’t as efficient as a regular humidifier, and it is much more efficient than a regular one. Could anyone explain the suffix nomenclature for diseases/conditions/ailments? Ex. -itis vs -osis vs -algia There are a few different types of suffix. -ous suffix comes from the Latin “-ary”, which means “-ate”. The suffix “-er” comes from a Latin root that means “something” or “something.” “-ous” is a suffix that is used to refer to things that are related to something that is related to that thing. For example, “-es” means “a”. So “-e” is the suffix that comes from “-ide”, and “-o” is “-ic”. In this case, the “-i” is from the Greek “-os”, so “-ia” is just the Latin root of “-ian”. It’s the same way that “-s” is also the suffix for “-ies”, but it ’s not the same thing as “-y”. A lot of the words that are used in the English language don’t have the same suffix, so they are used differently. In English, the word “-us” has a different suffix than “-u”. This means that the -ic suffix is more likely to be used in English than the -ian suffix . In other words, it’s more likely that the suffix will be used more often than it is in English. How do live visualization websites work? I’m not sure what you’re referring to, but I’ll give it a shot. First of all, you have to understand what they mean by the term “ visualization”. 
Basically, they use a technique called “visualization.” Basically, you take a picture of a room, look at it, and compare it to what it looks like to you. Then, you look at the room, and see what the room looks like, and then compare that to what you see in the room. You can then use the visualization to figure out what it actually looks like. For example, let’s say you ’re in a room with a camera, and you want to know what the camera looked like. You know that the camera looks like the camera, but you don’t know what it looked like, so you use it to look at what it was like to see what you were looking at. Then you use the technique to find out what the person was looking at, and figure out how to use that to get a sense of what they were actually looking at at. The visualization is pretty simple, but it’s a lot more complicated than that, so I’ll try to explain it in a bit more detail. First, you need to understand how the visualization works. The first thing to understand is that it’s very easy to use. Let’s say that you’re looking at a room and you’re really interested in the camera. You want to be able to see how far away the camera is from you, so that you can figure out the distance between the camera and your camera. The second thing to remember is that the visualization isn’t very accurate. It’s just a way of looking at things that aren’t directly related to each other. For instance, if you’re interested in learning how to read a book, you might be interested in reading a book. You might also want to read the book in a way that is easy to understand, and it’s easy to get used to it. Figure 14: Example Question-Generated Answer pairs from Seq2Seq multi-task model, randomly sampled from the test set Why doesn’t the color red compress well in JPEGs? PNG will take longer to load on a website, but its sometimes simply worth it if it means the difference between a good or bad image. So with all this hate toward JPEGs, you might be asking yourself why the format continues to even exist when so many better options are available. Also important to note, JPEGs do more than compress the file, they also lose color and contrast range. These numbers and ratios are examples for the sake of easy explanation, but lets say a picture has 100 colors and 100 contrast points. Straight out of the camera, JPEGs often look much more vibrant than raw les, because the colors have been enhanced and sharpening applied in-camera. If you need to archive a large number then you could try placing them in a zip file, but you probably won’t save more than 5%. When reading weather reports and it says 50% chance of rain, what does that actually mean and how is it calculated? I have always maintained this is a confusing concept and its the main reason that I will rarely if ever use a percent chance in a forecast. When they say there is a 50% chance of rain, does that mean that there is a 50% chance it is not going to rain? Then, why does it always rain when the chance of rain is 50%? So, maybe the 50% chance means that it will rain on only 50% of the land while the other 50% rains on the water. This is important to keep in mind because when making claims about the impact of global warming, you need to look at the big picture, not just the last 150 years. Well, there are two input variables you have to keep in mind: first, the geographic location — where youre looking for a forecast, and second, the time window youre looking at. Why does my skin turn a paler white when pressed? Kinda random. Always wanted to know. 
There is a darker shade, but the shade Sunkissed is perfect for the lighter skin wearers. It doesn’t irritate eyes, and it’s gentler on skin than some of their other powders — it’s also very finely milled and thick enough that you can use it as a foundation and it covers even dark broken capillaries. What I don’t like: This is very light peach when it starts out, and it doesn’t turn paler on skin; it also oxidizes. It’s a light peach when it starts out, and then it turns darker. If you are unsure if you have cool skin tone, check if you have bluish coloured veins inside your wrist (just under your forehand). Spots or a rash that do not fade under pressure will still be seen when the side of a clear drinking glass is pressed firmly against the skin. Can psychoactive drugs trigger schizophrenia in people who are at a genetic disposition to the disease already? If so, how high is the risk for different family members that have it? Do you have a higher chance if a parent has it or if a grandparent has it? What about cousins? Aunts and Uncles? The identical twin of a person with schizophrenia is most at risk, with a 40 to 65 percent chance of developing the disorder. Some doctors think that the brain may not be able to process information correctly; and it is believed that genetic factors appear to play a role , as people who have family members with schizophrenia may be more likely to get the disease themselves. As Schizophrenia has a tendency to run in families, scientists already know there is a genetic link but that doesnt mean that if you do have someone in your family that has Schizophrenia that you will too, neither does it mean that if you dont, you wont, so there are other factors involved too. At the moment people with Schizophrenia are usually prescribed anti-psychotic medication, some of which can carry unpleasant side effects. If you have a pre-existing risk for schizophrenia (which most people at risk are unaware of), theres a much higher chance that using cannabis will trigger a schizophrenic episode. Again, it is extremely important to note that this risk applies primarily to people who are already at risk of developing schizophrenia. Why has the Canadian dollar gone down in value over the few years? So far in 2016, the Canadian dollar has lost a lot of value. The days of the Canadian dollar at parity with the US dollar are long gone. A lot of that increase in book value is because of the loss of value of the Canadian dollar. What we have to remember however is that it is not really the Canadian dollar that has gone up in value, it is the American dollar which has gone down. Since the beginning of the Iraq War the American economy has stumbled and one could make the argument that it is because there is no one at the wheel. This means Canadians can now come into the US and get a great deal on just about anything , and they are buying up real estate, to the tune of more than $20 billion a year. How do they make Jello with pineapple bits in it if pineapple ruins Jello? I have a store-bought cup of generic Jello with pineapple bits in it. According to all my google searching and middle school science classes, this shouldn’t be possible because pineapple enzyme just destroys Jello’s jellystuff. So what is this thing in my cupboard? Enzymes: whats in your spit fruit and jello an hour before the lab begins it is possible to make jello with canned pineapple chunks but not fresh. 
Well, just as a little science experiment, I stuck a piece of fresh, uncooked pineapple in a cup of the jello, and this is what it looked like after about 20 minutes. In the cooking world, on the other hand, bromelain is regarded as the enemy of the gelatin dessert if you use fresh pineapple in gelatin, the enzyme eats the. Experiment demonstrates denaturation of an enzyme using an enzyme called catalase which is found in the pineapple and jelly experiment background. In this lab students will investigate a few of the different enzymes from our body pineapple contains the enzyme bromelain, and should break down the jello. Well, to begin with, it’s not true you can’t put any pineapples in jello jello is alive — at least according to a 1974 experiment performed by dr. Figure 15: Example Question-Generated Answer pairs from the extractive BidAF model, randomly sampled from the test set Why is “PC Gamer” Hardware so Flashy? It’s because they don’t want to. They want to make sure that they have a decent amount of battery life, and that they can make a decent battery life. They also want to be able to make a small battery that will last for a very long time, and make sure they have enough power to last a long time. That said, there are a lot of people who don’t know what they’re talking about, and don’t really know what’s going on. There’s also the fact that they’re not going to make it to the end of the day, and it’s going to be a lot harder for them to make the battery last longer than it is for the average person. It’s not like they ’re going to have to make their own battery last a lot longer than they are for a PC, but they’re still going to get a lot more battery life than they’re for a gaming PC, so it’s a lot easier to make them last longer, and they’re also going to need a lot less power than they were for a desktop PC, which means they’ll need more power to run the game, and so they’ll have to have more power for the game to last longer. That’s why it’s so important to keep in mind that there ’s a ton of people out there who aren’t very good at it, and there’s no reason to be afraid of it. What are the benefits of an American city being it’s own county? i.e. San Francisco There are a lot of benefits to an Amer- ican city being “own” a county. It’s not like there’s a whole lot of money to be made in a city, but it’s a lot more complicated than that. There’s also the fact that there are **lots*** of people who aren’t part of the city, and there aren’t **many** people who ** are part of that city. There are also **a lot** more people who don’t want to be part of a city than there are people who just want to live there. There is also a **lot* of **money** in that city, so it’s easier for them to get there than it is for the people who want to stay there. This is why there are so many people in San Francisco, and why it’s so hard to find a city that doesn’t have a whole bunch of people in it. **Edit:** [Here’s a link] (URL 0) to an article that explains it better than I can. I’m not sure if this is the right place to ask this, but I’m sure someone else can give you a better answer than me, so I ’ll give it a shot. How do cruise ships get wifi in the middle of the ocean? There are a couple of ways to do this. One is to turn on the radio and turn off the wifi. The other is to make sure that the wifi is turned on. If you turn the wifi on, the radio will turn on and the wifi will turn off. This is done by turning on the wifi and turning off the radio. 
If the WiFi is turned off, then the WiFi will turn back on, but the radio won’t turn on. There are also other ways to turn off wifi, such as turning on wifi, or turning off wifi. I’m not sure how to explain this to a 5 year old, but I’ll try to explain it to a five year old. I hope this helps! Edit: I forgot to mention that WiFi is not a wireless network, it is a network of wireless networks. WiFi networks are connected to the internet, but they are not connected to WiFi networks. The internet is connected to a WiFi network, but it isn’t connected to Wifi networks. They are connected by WiFi networks, but WiFi networks can not be connected to wifi networks. So, if you turn on WiFi, the WiFi network will be turned on and off. The WiFi network can be turned off by turning off WiFi, but not by turning it back on. How do they know exactly how tall Mt. Everest is? There are a lot of different ways to measure the height of an object. For example, you can measure the distance from the top of a mountain to the bottom of the mountain, and you can calculate the distance between the top and the mountain. The easiest way to do this is by measuring the angle between the mountain and the ground, and then measuring the height between the ground and the top. You can also measure the angle of the ground from the bottom to the top, and the height from the ground to the ground. The most common method is to use a ruler, which is a ruler that measures the length of the base of a piece of paper. The height of the top is measured in meters, and it’s measured in feet. The simplest way to measure height is by using a ruler. A ruler is a measure of the distance to the base, and a ruler measures the height at which the base is measured. A good ruler is also a good ruler, but it’s not a good one. It’s a good measure of how far away a mountain is, and how far it is away from the base. The same is true for measuring height, but you can’t measure it with a ruler because it’s too far away. How/where was the first language invented? It wasn’t until the late 19th century that the first language was invented. It was just a way for people to communicate with each other and to communicate. For example, if you were to talk to someone, they would talk to yo , but you wouldn’t know what they were talking about. If you were talking to them, they’d talk to each other, but they’d never know what you’re talking about, so they wouldn’t be able to tell you what they’re talking to. The same thing happens with languages. For instance, you might say “I ’m talking to you”, but if you’re not talking to me, you won’t know how to say it. You might say, “I want to know you,” but you don’t know where you’re going to be talking to, so you might not know what to say. But you might also say “you want to understand me,” and so on. And so on and so forth. Eventually, people started to learn how to communicate, and eventually, they started to figure out how to use their own language to communicate in a way that they couldn’t understand, and then they developed a new language that they didn’t understand. It’s like how you can tell a friend that you’re in the middle of a conversation, and you can’t tell them how to tell them that they’re in a conversation. Figure 16: Example poor quality Question-Generated Answer pairs from the Seq2Seq multi-task model
{ "id": "1705.04304" }
1907.08584
CraftAssist: A Framework for Dialogue-enabled Interactive Agents
This paper describes an implementation of a bot assistant in Minecraft, and the tools and platform allowing players to interact with the bot and to record those interactions. The purpose of building such an assistant is to facilitate the study of agents that can complete tasks specified by dialogue, and eventually, to learn from dialogue interactions.
http://arxiv.org/pdf/1907.08584
Jonathan Gray, Kavya Srinet, Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, Siddharth Goyal, C. Lawrence Zitnick, Arthur Szlam
cs.AI
null
null
cs.AI
20190719
20190719
9 1 0 2 l u J 9 1 ] I A . s c [ 1 v 4 8 5 8 0 . 7 0 9 1 : v i X r a # CraftAssist: A Framework for Dialogue-enabled Interactive Agents # Jonathan Gray * Kavya Srinet * Yacine Jernite Haonan Yu Zhuoyuan Chen Demi Guo Siddharth Goyal C. Lawrence Zitnick Arthur Szlam Facebook AI Research {jsgray,ksrinet}@fb.com # Abstract This paper describes an implementation of a bot assistant in Minecraft, and the tools and platform allowing players to interact with the bot and to record those interactions. The purpose of build- ing such an assistant is to facilitate the study of agents that can complete tasks specified by dia- logue, and eventually, to learn from dialogue in- teractions. # 1. Introduction While machine learning (ML) methods have achieved impressive performance on difficult but narrowly-defined tasks (Silver et al., 2016; He et al., 2017; Mahajan et al., 2018; Mnih et al., 2013), building more general systems that perform well at a variety of tasks remains an area of active research. Here we are interested in systems that are competent in a long-tailed distribution of simpler tasks, specified (perhaps ambiguously) by humans using natural language. As described in our position paper (Szlam et al., 2019), we propose to study such systems through the de- velopment of an assistant bot in the open sandbox game of Minecraft1 (Johnson et al., 2016; Guss et al., 2019). This paper describes the implementation of such a bot, and the tools and platform allowing players to interact with the bot and to record those interactions. The bot appears and interacts like another player: other players can observe the bot moving around and modifying the world, and communicate with it via in-game chat. Fig- ure 1 shows a screenshot of a typical in-game experience. Neither Minecraft nor the software framework described here provides an explicit objective or reward function; the ultimate goal of the bot is to be a useful and fun assistant in a wide variety of tasks specified and evaluated by human players. Figure 1. An in-game screenshot of a human player using in-game chat to communicate with the bot. Longer term, we hope to build assistants that interact and collaborate with humans to actively learn new concepts and skills. However, the bot described here should be taken as initial point from which we (and others) can iterate. As the bots become more capable, we can expand the scenarios where they can effectively learn. To encourage collaborative research, the code, data, and models are open-sourced2. The design of the framework is purposefully modular to allow research on components of the bot as well as the whole system. The released data includes the human actions used to build 2,586 houses, the labeling of the sub-parts of the houses (e.g., walls, roofs, etc.), human rewordings of templated commands, and the mapping of natural language commands to bot in- terpretable logical forms. To enable researchers to inde- pendently collect data, the infrastructure that allows for the recording of human and bot interaction on a Minecraft server is also released. We hope these tools will help em- power research on agents that can complete tasks specified by dialogue, and eventually, learn form dialogue interac- tions. Equal contribution 'Minecraft features: ©Mojang Synergies AB included cour- tesy of Mojang AB 2https://github.com/facebookresearch/ craftassist CraftAssist: A Framework for Dialogue-enabled Interactive Agents Each voxel in the grid contains one material. 
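Concretely, the world state the assistant reasons about can be thought of as a sparse map from integer (x, y, z) coordinates to block types, with every unlisted voxel treated as the special "air" block (block id 0, as used later for the house dataset). The sketch below is illustrative only; the class and method names are not from the released CraftAssist code.

AIR = (0, 0)  # (block-id, meta-id) pair used here for an empty voxel

class VoxelWorld:
    """Toy sparse voxel grid: any coordinate not in the dict is air."""

    def __init__(self):
        self.blocks = {}  # (x, y, z) -> (block-id, meta-id)

    def place(self, xyz, idm):
        # Place a block of type idm at the absolute coordinate xyz.
        self.blocks[tuple(xyz)] = tuple(idm)

    def dig(self, xyz):
        # Remove the block at xyz; the voxel reverts to air.
        self.blocks.pop(tuple(xyz), None)

    def get(self, xyz):
        # Return the (block-id, meta-id) at xyz, defaulting to air.
        return self.blocks.get(tuple(xyz), AIR)

world = VoxelWorld()
world.place((10, 63, -4), (1, 0))      # block id 1 is stone in the legacy numeric id scheme
assert world.get((10, 63, -4)) == (1, 0)
assert world.get((0, 0, 0)) == AIR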
In this paper, we assume players are in creative mode and we focus on building compound objects. Minecraft, particularly in its creative mode setting, has no win condition and encourages players to be creative. The diversity of objects created in Minecraft is astound- ing; these include landmarks, sculptures, temples, roller- coasters and entire cityscapes. Collaborative building is a common activity in Minecraft. Minecraft allows multiplayer servers, and players can col- laborate to build, survive, or compete. It has a huge player base (91M monthly active users in October 2018) 4, and players actively create game mods and shareable content. The multiplayer game has a built-in text chat for player to player communication. Dialogue between users on multi- user servers is a standard part of the game. Figure 2. An in-game screenshot showing some of the block types available to the user in creative mode. # 2. Minecraft Minecraft3 is a popular multiplayer open world voxel- based building and crafting game. Gameplay starts with a procedurally generated world containing natural features (e.g. trees, mountains, and fields) all created from an atomic set of a few hundred possible blocks. Addition- ally, the world is populated from an atomic set of animals and other non-player characters, commonly referred to as “mobs”. # 3. Client/Server Architecture Minecraft operates through a client and server architecture. The bot acting as a client communicates with the Minecraft server using the Minecraft network protocol5. The server may receive actions from multiple bot or human clients, and returns world updates based on player and mob ac- tions. Our implementation of a Minecraft network client is included in the top-level client directory. Implementing the Minecraft protocol enables a bot to con- nect to any Minecraft server without the need for installing server-side mods, when using this framework. This pro- vides two main benefits: The game has two main modes: “creative” and “survival”. In survival mode the player is resource limited, can be harmed, and is subject to more restrictive physics. In creative mode, the player is not resource limited, cannot be harmed, and is subject to less restrictive physics, e.g. the player can fly through the air. An in-depth guide to Minecraft can be found at https://minecraft. gamepedia.com/Minecraft. In survival mode, blocks can be combined in a process called “crafting” to create other blocks. For example, three wood blocks and three wool can be combined to create an atomic “bed” block. In creative mode, players have access to all block types without the need for crafting. 1. A bot can easily join a multiplayer server along with human players or other bots. 2. A bot can join an alternative server which implements the server-side component of the Minecraft network protocol. The development of the bot described in this paper uses the 3rd-party, open source Cuberite server. Among other benefits, this server can be easily modi- fied to record the game state that can be useful infor- mation to help improve the bot. # 4. Assistant v0 Compound objects are arrangements of multiple atomic ob- jects, such as a house constructed from brick, glass and door blocks. Players may build compound objects in the world by placing or removing blocks of different types in the environment. Figure 2 shows a sample of different block types. The blocks are placed on a 3D voxel grid. 
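As a small illustration of the "compound object" idea, an arrangement of many atomic blocks, the sketch below generates the shell of a simple box-shaped structure as a list of (relative coordinate, block type) placements, which is essentially what the paper later calls a Schematic. The helper name and the particular block id are chosen here for illustration and are not part of the released framework; a Build command can then be thought of as copying such a list into the world at some absolute offset.

def hollow_box(width, height, depth, wall_idm=(45, 0)):
    # Return placements for the shell of a width x height x depth box.
    # Each placement is ((x, y, z), (block-id, meta-id)) relative to an origin;
    # id 45 is the brick block in the legacy numeric id scheme.
    placements = []
    for x in range(width):
        for y in range(height):
            for z in range(depth):
                on_shell = (x in (0, width - 1) or
                            y in (0, height - 1) or
                            z in (0, depth - 1))
                if on_shell:
                    placements.append(((x, y, z), wall_idm))
    return placements

house_shell = hollow_box(5, 4, 6)
print(len(house_shell), "atomic blocks in this compound object")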
3 https://minecraft.net/en-us/ 4 https://www.gamesindustry.biz/articles/2018-10-02-minecraft-exceeds-90-million-monthly-active-users 5 We have implemented protocol version 340, which corresponds to Minecraft Computer Edition v1.12.2, and is described here: https://wiki.vg/index.php?title=Protocol&oldid=14204

This section outlines our initial approach to building a Minecraft assistant, highlighting some of the major design decisions made:
• a modular architecture
• the use of high-level, hand-written composable actions called Tasks
• a pipelined approach to natural language understanding (NLU) involving a neural semantic parser

A simplified module-level diagram is shown in Figure 3, and the code described here is available at: https://github.com/facebookresearch/craftassist. See Section 8 for a discussion of these decisions and our future plans to improve the bot. Rather than directly modelling the action distribution as a function of the incoming chat text, our approach first parses the incoming text into a logical form we refer to as an action dictionary, described later in section 5.2.1. The action dictionary is then interpreted by a dialogue object which queries the memory module – a symbolic representation of the bot's understanding of the world state – to produce an action and/or a chat response to the user.

Figure 3. A simplified block diagram demonstrating how the modular system reacts to incoming events (in-game chats and modifications to the block world). [Diagram residue; legible box labels: Incoming Chat, Neural Semantic Parser, Dialogue Manager, Action Dictionary, Input: Block Change, Neural Semantic Segmentation Model, Push/Pop Dialogue Object, Dialogue Stack, New Task, Output: Outgoing Chat, Output: Action.]

# 4.1. Handling an Example Command

Consider a situation where a human player tells the bot: "go to the blue house". The Dialogue Manager first checks for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces the action dictionary shown in figure 4. The dictionary indicates that the text is a command given by a human, that the high-level action requested is a MOVE, and that the destination of the MOVE is an object that is called a "house" and is "blue" in colour. More details on action dictionaries are provided in section 5.2.1. Based on the output of the semantic parser, the Dialogue Manager chooses the appropriate Dialogue Object to handle the chat, and pushes this Object to the Dialogue Stack.

In the current version of the bot, the semantic parser is a function of only text – it is not aware of the objects present in the world. As shown in figure 3, it is the job of the Dialogue Object (see footnote 6) to interpret the action dictionary in the context of the world state stored in the memory. In this case, the Dialogue Object would query the memory for objects tagged "blue" and "house", and if present, create a Move Task whose target location is the actual (x, y, z) coordinates of the blue house. More details on Tasks are in section 5.1.2. Once the Task is created and pushed onto the Task stack, it is the Move Task's responsibility, when called, to compare the bot's current location to the target location and produce a sequence of low-level step movements to reach the target.
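To make this pipeline concrete, here is a toy sketch of how a Dialogue Object might resolve the action dictionary for "go to the blue house" (shown in Figure 4 below) against a symbolic memory. All names here are illustrative; the real logic lives in interpreter.py and handles far more cases, including asking clarification questions when no matching object is found.

MEMORY = [
    # (tags, location) records of the kind a Dialogue Object might query for
    ({"house", "blue"}, (42, 64, -7)),
    ({"barn", "red"}, (10, 64, 3)),
]

def span_text(chat_words, span):
    # Spans such as [0, [3, 3]] reference words of the original chat by index.
    # The sentence index is ignored here because the toy chat has one sentence.
    sentence_idx, (start, end) = span
    return " ".join(chat_words[start:end + 1])

def interpret_move(action_dict, chat_words):
    # Turn a MOVE action dictionary into a target (x, y, z), or None if the
    # referenced object cannot be found in memory.
    ref = action_dict["action"]["location"]["reference_object"]
    wanted = {span_text(chat_words, ref["has_name"]),
              span_text(chat_words, ref["has_colour"])}
    for tags, location in MEMORY:
        if wanted.issubset(tags):
            return location
    return None

chat = "go to the blue house".split()
action_dict = {
    "dialogue_type": "HUMAN_GIVE_COMMAND",
    "action": {"action_type": "MOVE",
               "location": {"location_type": "REFERENCE_OBJECT",
                            "reference_object": {"has_colour": [0, [3, 3]],
                                                 "has_name": [0, [4, 4]]}}},
}
print(interpret_move(action_dict, chat))  # prints (42, 64, -7)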
Figure 4. An example input and output for the neural semantic parser. References to words in the input (e.g. "house") are written as spans of word indices, to allow generalization to words not present in the dictionary at train-time. For example, the word "house" is represented as the span beginning and ending with word 3, in sentence index 0.
Input: [0] "go to the blue house"
Output: { "dialogue_type": "HUMAN_GIVE_COMMAND", "action": { "action_type": "MOVE", "location": { "location_type": "REFERENCE_OBJECT", "reference_object": { "has_colour": [0, [3, 3]], "has_name": [0, [4, 4]] }}}}

6 The code implementing the dialogue object that would handle this scenario is in interpreter.py

The bot responds to commands using a set of higher-level actions we refer to as Tasks, such as move to location X, or build a Y at location Z. The Tasks act as abstractions of long sequences of low-level movement steps and individual block placements. The Tasks are executed in a stack (LIFO) order. The interpretation of an action dictionary by a dialogue object generally produces one or more Tasks, and the execution of the Task (e.g. performing the path-finding necessary to complete a Move command) is performed in a Task object in the bot's task stack.

A flowchart of the bot's main event loop is shown in figure 5, and the implementation can be found in the step method in craftassist agent.py.

Figure 5. A flowchart of the bot's main event loop. On every loop, the bot responds to incoming chat or block-change events if necessary, and makes progress on the topmost Task on its stack. Note that dialogue context (e.g. if the bot has asked a question and is awaiting a response from the user) is stored in a stack of Dialogue Objects. If this dialogue stack is not empty, the topmost Dialogue Object will handle an incoming chat. [Flowchart residue; legible box labels include: "Dialogue Manager calls Semantic Parser and pushes a Dialogue Object to the Stack", "Pop the Dialogue Stack and call step(chat) on the topmost Dialogue Object", "Pop the Dialogue Stack and call step() on the topmost Dialogue Object", "Is there a Task on t[...]", "Pop the Task Stack and [...]".]

of absolute (x, y, z) [continuation of the BlockObject definition in section 5.1.1]
Mob: A moving object in the world (e.g. cow, pig, sheep, etc.)

# 5.1.2. TASKS
A Task is an interruptible process with a clearly defined objective. A Task can be executed step by step, and must be resilient to long pauses between steps (to allow tasks to be paused and resumed if the user changes their priorities). A Task can also push other Tasks onto the stack, similar to the way that functions can call other functions in a standard programming language. For example, a Build may first require a Move if the bot is not close enough to place blocks at the desired location. The following is a list of basic Tasks:
Move(Location) Move to a specific coordinate in the world. Implemented by an A* search which destroys and replaces blocks if necessary to reach a destination.
Build(Schematic, Location) Build a specific schematic into the world at a specified location.
Destroy(BlockObject) Destroy the specified BlockObject.
Dig(Location, Size) Dig a rectangular hole of a given Size at the specified Location.
Fill(Location) Fill the holes at the specified Location.
Spawn(Mob, Location) Spawn a Mob at a given Location.
Dance(Movement) Perform a defined sequence of moves (e.g. move in a clockwise pattern around a coordinate)
# 5.
Modules There are also control flow actions which take other Tasks as arguments: This section provides a detailed documentation of each module of the system as implemented, at the time of this release. Undo(Task) This Task reverses the effects of a specified Task, or defaults to the last Task executed (e.g. destroy the blocks that resulted from a Build) # 5.1. Task Stack 5.1.1. TASK PRIMITIVES Loop(StopCondition, Task) This Task keeps executing the given Task until a StopCondition is met (e.g keep dig- ging until you hit a bedrock block) The following definitions are concepts used throughout the bot’s Tasks and execution system: # 5.2. Semantic Parser BlockId: A Minecraft building material (e.g. dirt, di- amond, glass, or water), characterized by an 8-bit id and 4-bit metadata7 Location: An absolute position (x, y, z) in the world The core of the bot’s natural language understanding is performed by a neural semantic parser called the Text-to- Action-Dictionary (TTAD) model. This model receives an incoming chat / dialogue and parses it into an action dictio- nary that can be interpreted by the Dialogue Object. Schematic: An object blueprint that can be copied into the world: a map of relative (x, y, z) +> BlockId BlockObject: A real object that exists in the world: a set A detailed report of this model is available at (Jernite et al., 2019). The model is a modification of the approach in (Dong & Lapata, 2016)). We use bi-directional GRU en- coder for encoding the sentences and multi-headed atten- 7See https://minecraft-ids.grahamedgecombe.com/ CraftAssist: A Framework for Dialogue-enabled Interactive Agents tion over the input sentence. # 5.2.1. ACTION DICTIONARIES An action dictionary is an unambiguous logical form of the intent of a chat. An example of an action dictionary is shown in figure 4. Every action dictionary is one of four dialogue types: 1. HUMAN GIVE COMMAND: The human is giving an instruction to the bot to perform a Task, e.g. to Move somewhere or Build something. An action dic- tionary of this type must have an action key that has a dictionary with an action type specifying the Task, along with further information detailing the information for the Task (e.g. “schematic” and “loca- tion” for a Build Task). 2. GET MEMORY: The human is asking a question or otherwise probing the bot’s understanding of the envi- ronment. Stack. For example, the Interpreter Object, while handling a Destroy command and determining which object should be destroyed, may ask the user for clarifi- cation. This places a ConfirmReferenceObject ob- ject on the Stack, which in turn either pushes a Say object to ask the clarification question or AwaitResponse ob- ject (if the question has already been asked) to wait for the user’s response. The Dialogue Manager will then first call the Say and then call the AwaitResponse object to help resolve the Interpreter object. # 5.4. Memory The data stored in the bot’s memory includes the locations of BlockObjects and Mobs (animals), information about them (e.g. user-assigned names, colour etc), the histori- cal and current state of the Task Stack, all the chats and relations between different memory objects. Memory data is queried by DialogueObjects when interpreting an action dictionary (e.g. to interpret the action dictionary in figure 4, the memory is queried for the locations of block objects named “house” with colour “blue”). 3. PUT MEMORY: The human is providing information to the bot for future reference or providing feedback to the bot, e.g. 
assigning a name to an object “that brown thing is a shed”. 4. NOOP: No action is required. The memory module is implemented using an in-memory SQLite8 database. Relations and tags are stored in a single triple store. All memory objects (including triples them- selves) can be referenced as the subject or object of a mem- ory triple. There is a dialogue object associated with each dialogue type. For example, the GetMemoryHandler interprets a GET MEMORY action dictionary, querying the memory, and responding to the user with an answer to the question. For HUMAN GIVE COMMAND action dictionaries, with few exceptions, there is a direct mapping from “ac- tion type” values to Task names in section 5.1.2. How are BlockObjects populated into Memory? At this time, BlockObjects are defined as maximally con- nected components of unnatural blocks (i.e. ignoring blocks like grass and stone that are naturally found in the world, unless those blocks were placed by a human or bot). The bot periodically searches for BlockObjects in its vicin- ity and adds them to Memory. # 5.3. Dialogue Manager & Dialogue Stack The Dialogue Manager is the top-level handler for incom- ing chats. It performs the following : 1. Checking the chat for obscenities or illegal words 2. Calling the neural semantic parser to produce an ac- tion dictionary 3. Routing the handling of the action dictionary to an ap- propriate Dialogue Object How are tags populated into Memory? At this time, tag triples of the form (BlockObject id, "has tag", tag) are inserted as the result of some PUT MEMORY actions, triggered when a user assigns a name or descrip- tion to an object via chat or gives feedback (e.g. “that ob- ject is a house”, “that barn is tall” or “that was really cool”). Some relations (e.g. has colour, indicating BlockOb- ject colours) are determined heuristically. Neural network perception modules may also populate tags into the mem- ory. 4. Storing (in the Dialogue Stack) persistent state and context to allow multi-turn dialogues # 5.5. Perception The Dialogue Stack is to Dialogue Objects what the Task Stack is to Tasks. The execution of a Dialogue Object may require pushing another Dialogue Object onto the The bot has access to two raw forms of visual sensory in- put: 8https://www.sqlite.org/index.html CraftAssist: A Framework for Dialogue-enabled Interactive Agents 2D block vision9 By default, this produces a 64x64 im- age where each “pixel” contains the block type and distance to the block in the bot’s line of sight. For example, instead of a pixel containing RGB colour information representing “brown”, the bot might see block-id 17, indicating “Oak Wood”. 3D block vision10 The bot has access to the underlying block map: the block type at any absolute position nearby. This information is not available to a human player inter- acting normally with the Minecraft game – if it is impor- tant to compare a bot’s sensorimotor capabilities to a hu- man’s (e.g. in playing an adversarial game against a human player), avoid the use of the get blocks function which implements this capability. Other common perceptual capabilities are implemented us- ing ML models or heuristics as appropriate: • Generations: Algorithmically generating action trees (logical forms over the grammar) with associated sur- face forms using templates. 
(The script for generating these is here: generate dialogue.py) • Rephrases: We asked crowd workers to rephrase some of the produced instructions into commands in alternate, natural English that does not change the meaning of the sentence. • Prompts: We presented crowd workers with a de- scription of an assistant bot and asked them for ex- amples of commands they’d give the bot. • Interactive: We asked crowd workers to play creative mode Minecraft with our bot, and used the data from the in-game chat. Semantic segmentation A 3d convolutional neural net- work processes each Block Object and outputs a tag for each voxel, indicating for example whether it is part of a wall, roof, or floor. The code for this model is in python/craftassist/vision/semantic segmentation/ The dataset has four files, corresponding to the settings above: 1. generated dialogues.json : This file has 800000 di- alogue - action dictionary pairs generated using our generation script. More can be generated using the script. Relative directions Referring to objects based on their positions relative to other objects is performed heuristically based on a coordinate shift relative to the speaker’s point of view. For example, referencing “the barn left of the house” is handled by searching for the closest object called “barn” that is to the speaker’s left of the “house”. 2. rephrases.json: This file has 25402 dialogue - action dictionary pairs. These are paraphrases of dialogues generated by our grammar. 3. prompts.json: This file contains 2513 dialogue - ac- tion dictionary pairs. These dialogues came from the prompts setting described above. Size and colour Referring to objects based on their size or colour is handled heuristically. The colour of a Block Object is based on the colours of its most common block types. Adjectives referring to size (e.g. “tiny” or “huge”) are heuristically mapped to ranges of block lengths. 4. humanbot.json: This file contains 708 dialogue - ac- tion dictionary pairs. These dialogues came from the interactive setting above. The format of the data in each file is: # 6. Data This section describes the datasets we are releasing with the framework. • A dialogue is represented as a list of sentences, where each sentence is a sequence of words separated by spaces and tokenized using the spaCy tokenizer (Hon- nibal & Johnson, 2015). # 6.1. The semantic parsing dataset We are releasing a semantic parsing dataset of English- language instructions and their associated “action dictio- naries”, used for human-bot interactions in Minecraft. This dataset was generated in different settings as described be- low: • Each json file is a list of dialogue - action dictionary pair, where “action dictionary” is a nested dictionary described in 5.2.1 For more details on the dataset see: (Jernite et al., 2019) 9The implementation of 2D block vision is found at agent.cpp#L328 # 6.2. House dataset 10The implementation of 3D block vision is found at agent.cpp#L321 We used crowd sourcing to collect examples of humans building houses in Minecraft. Each user is asked to build a CraftAssist: A Framework for Dialogue-enabled Interactive Agents house on a fixed time budget (30 minutes), without any ad- ditional guidance or instructions. Every action of the user is recorded using the Cuberite server. id of the semantic annotation that the coordinate/block belongs to (1-indexed annotation list). 
The data collection was performed in Minecraft’s creative mode, where the user is given unlimited resources, has ac- cess to all material block types and can freely move in the game world. The action space of the environment is straight-forward: moving in x-y-z dimensions, choosing a block type, and placing or breaking a block. • annotation list: List of semantic segmentation for the house. • house name: Name of the house. There are 2050 houses in total and 1038 distinct labels of subcomponents. There are hundreds of different block types someone could use to build a house, including different kinds of wood, stone, dirt, sand, glass, metal, ice, to list a few. An empty voxel is considered as a special block type “air” (block id=0). The datasets described above can be downloaded following the instructions here # 7. Related Work We record sequences of atomic building actions for each user at each step using the following format: [t, userid, [x, y, z], [block-id, meta-id], "P"/"B"] where the time-stamp t is in monotonically increasing or- der; [xt, yt, zt] is the absolute coordinate with respect to the world origin in Minecraft; “P” and “B” refers to placing a new block and breaking (destroying) an existing block; each house is built by a single player in our data collection process with a unique user-id. There are 2586 houses in total. Details of this work is under submission. # 6.3. Instance segmentation data For a subset of the houses collected in the house dataset described above, we asked crowd workers to add semantic segmentation labels for sub-components of the house. The format of the data is explained below. There are two files: • training data.pkl : This file contains data we used for training our 3D semantic segmentation model. A number of projects have been initiated to study Minecraft agents or to build frameworks to make learning in Minecraft possible. The most well known framework is Microsoft’s MALMO project (Johnson et al., 2016). The majority of work using MALMO consider reinforcement learned agents to achieve certain goals e.g (Shu et al., 2017; Udagawa et al., 2016; Alaniz, 2018; Oh et al., 2016; Tessler et al., 2017). Recently the MineRL project (Guss et al., 2019) builds on top of MALMO with playthrough data and specific challenges. Our initial bot has a neural semantic parser (Dong & La- pata, 2016; Jia & Liang, 2016; Zhong et al., 2017) as its core NLU component. We also release the data used to train the semantic parser. There have been a number of datasets of natural language paired with logical forms to evaluate semantic parsing approaches, e.g. (Price, 1990; Tang & Mooney, 2001; Cai & Yates, 2013; Wang et al., 2015; Zhong et al., 2017). Recently (Chevalier-Boisvert et al., 2018) described a gridworld with navigation instruc- tions generated via a grammar. Our bot also needs to up- date its understanding of an initial instruction during sev- eral turns of dialogue with the user, which is reminiscent of the setting of (Bordes et al., 2017). • validation data.pkl: This file contains data used as validation set for the model. Each pickle file has a list of : [schematic, annotated_schematic, annotation_list, house_name] where: • schematic: The 3-d numpy array representing the house, where each element in the array is the block id of the block at that coordinate. 
• annotated schematic: The 3-d numpy array represent- ing the house, where each element in the array is the In addition to mapping natural language to logical forms, our dataset connects both of these to a dynamic environ- ment. In (Tellex et al., 2011; Matuszek et al., 2013) seman- tic parsing has been used for interpreting natural language commands for robots. In our setup, the “robot” is embodied in the Minecraft game instead of in the physical world. Se- mantic parsing in a voxel-world recalls (Wang et al., 2017), where the authors describe a method for building up a pro- gramming language from a small core via interactions with players. Our bot’s NLU pipeline is perhaps most similar to the one proposed in (Kollar et al., 2018), which builds a grammar for the Alexa virtual personal assistant. A task relevant to interactive bots is that of Visual Ques- tion Answering (VQA) (Antol et al., 2015; Krishna et al., 2017; Geman et al., 2015) in which a question is asked CraftAssist: A Framework for Dialogue-enabled Interactive Agents about an image and an answer is provided by the system. Most papers address this task using real images, but syn- thetic images have also been used (Johnson et al., 2017; Andreas et al., 2016). The VQA task has been extended to visual dialogues (Das et al., 2017) and videos (Tapaswi et al., 2016). Recently, the tasks of VQA and navigation have been combined using 3D environments (Gordon et al., 2018; Das et al., 2018; Kolve et al., 2017; Anderson et al., 2018) to explore bots that must navigate to certain locations before a question can be answered, e.g., “How many chairs are in the kitchen?” Similar to our framework, these papers use synthetic environments for exploration. However, these can be expanded to use those generated from real environ- ments (Savva et al., 2019). Instead of the goal being the answering of a question, other tasks can be explored. For instance, the task of guiding navigation in New York City using dialogue (de Vries et al., 2018), or accomplishing tasks such as pushing or opening specific objects (Kolve et al., 2017). having to address the ambiguities of tasks specified through language. 4. It may be possible to transfer the natural language un- derstanding capabilities of the bot to another similar domain by re-implementing the interpretation and ex- ecution of action dictionaries, without needing to re- train the semantic parser. On the other hand, 1. The space of objectives that can be completed by the bot is limited by the specification of the action dictio- naries. Adding a new capability to the bot usually re- quires adding a new structure to the action dictionary spec, adding code to the relevant Dialogue Object to handle it, and updating the semantic parsing dataset. 2. A more end-to-end model with a simpler action space might only require the collection of more data and might generalize better. # 8. Discussion In this work we have described the design of a bot and as- sociated data that we hope can be used as a starting point and baseline for research in learning from interaction in Minecraft. In this section, we discuss some major design decisions that were made for this bot, and contrast against other possible choices. We further discuss ways in which the bot can be improved. 3. The use of a pipelined approach (as described in this paper) introduces the possibility for compounding er- rors. 
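Advantage 1 above relies on scripted execution being easy once the intent is known, for example path-finding for a Move. The minimal A* sketch below, on a blocked 3-d grid, is only an illustration of that point: unlike the bot's Move task it never destroys or replaces blocks, and it assumes the goal is reachable.

import heapq, itertools

def astar(start, goal, blocked):
    # Shortest 6-connected path on a 3-d grid from start to goal, avoiding
    # the coordinates in `blocked`. Assumes the goal is reachable; a real
    # implementation would bound the search.
    def h(p):  # Manhattan-distance heuristic, admissible for unit steps
        return sum(abs(a - b) for a, b in zip(p, goal))
    tie = itertools.count()  # tie-breaker so the heap never compares coordinates
    frontier = [(h(start), next(tie), start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y, z = cur
        for nxt in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                    (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)):
            new_cost = cost[cur] + 1
            if nxt not in blocked and new_cost < cost.get(nxt, float("inf")):
                cost[nxt] = new_cost
                came_from[nxt] = cur
                heapq.heappush(frontier, (new_cost + h(nxt), next(tie), nxt))
    return None

blocked = {(1, 0, 0), (1, 1, 0), (1, 0, 1), (1, -1, 0)}
print(astar((0, 0, 0), (3, 0, 0), blocked))  # a short detour around the blocked wall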
There is a huge space of possible interfaces into the high- level actions we have proposed (and many other interesting constructions of high level actions). In particular, we plan to remove the strict separation between the parser and the world state in our bot. # 8.1. Semantic Parsing Rather than learning a mapping directly from (language, state) to an action or sequence of actions, the bot de- scribed in this paper first parses language into a program over high level tasks, called action dictionaries (see sec- tion 5.2.1). The execution of the program is scripted, rather than learned. # 8.2. Symbolic Memory As described in section 5.4, the bot’s memory is imple- mented using a (discrete, symbolic) relational database. The major advantages of this (compared to an end-to-end machine-learned model that operates on raw sensory in- puts) are: This arrangement has several advantages: 1. Determining a sequence of actions, given a well- specified intent, is usually simple in Minecraft. For example, moving to a known but faraway object might require hundreds of steps, but it is simple to use a path-finding algorithm such as A* search to find the sequence of actions to actually execute the move. 2. Training data for a semantic parsing model is easier to collect, compared to language-action pairs that would necessitate recording the actions of a human player. 3. If it was desired to learn the low-level actions needed to complete a task, approaches such as reinforcement learning could be employed that use the completion of the task in the action dictionary as a reward without 1. Easier to convert semantic parses into fully specified tasks that can query and write to the database. 2. Debugging the bot’s current understanding of the world is easier. 3. Integrating outside information, e.g. crowd-sourced building schematics, is more straightforward: doing so requires pre-loading rows into a database table, rather than re-training a generative model. 4. Reliable symbolic manipulations, especially lookups by keyword. On the other hand, such a memory can be brittle and lim- ited. Even within the space of “discrete” memories, there CraftAssist: A Framework for Dialogue-enabled Interactive Agents are more flexible formats, e.g. raw text; and there have been recent successes using such memories, for example works using the Squad dataset (Rajpurkar et al., 2016). We hope our platform will be useful for studying other sym- bolic memory architectures as well as continuous, learned approaches, and things in between. actions used in building them, instance segmentations of those houses, and templates and rephrases of templates for training a semantic parser. In the future, we plan to con- tinue to release data as it is collected. We hope that the community will find the framework useful and join us in building an assistant that can learn a broad range of tasks from interaction with people. # 8.3. Modularity for ML research The bot’s architecture is modular, and currently many of the modules are not learned. Many machine learning re- searchers consider the sort of tasks that pipelining makes simpler to be tools for evaluating more general learning methodologies. Such a researcher might advocate more end-to-end (or otherwise less “engineered”) approaches be- cause the goal is not necessarily to build something that works well on the tasks that the engineered approach can succeed in, but rather to build something that can scale be- yond those tasks. # References Alaniz, S. 
Deep reinforcement learning with model learn- ing and monte carlo tree search in minecraft. arXiv preprint arXiv:1803.08456, 2018. Anderson, P., Wu, Q., Teney, D., Bruce, J., Johnson, M., S¨underhauf, N., Reid, I., Gould, S., and van den Hen- gel, A. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real envi- In Proceedings of the IEEE Conference on ronments. Computer Vision and Pattern Recognition, 2018. We have chosen this approach in part because it allows us to more easily build an interesting initial assistant from which we can iterate; and in particular allows data collection and creation. We do believe that modular systems are more generally interesting, especially in the setting of compe- tency across a large number of relatively easier tasks. Per- haps most interesting to us are approaches that allow mod- ular components with clearly defined interfaces, and het- erogeneous training based on what data is available. We hope to explore these with further iterations of our bot. Andreas, J., Rohrbach, M., Darrell, T., and Klein, D. Neu- ral module networks. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, pp. 39–48, 2016. Antol, S., Agrawal, A., Lu, J., Mitchell, M., Batra, D., Lawrence Zitnick, C., and Parikh, D. Vqa: Visual ques- tion answering. In Proceedings of the IEEE international conference on computer vision, pp. 2425–2433, 2015. Despite our own pipelined approach, we consider research on more end-to-end approaches worthwhile and interest- ing. Even for researchers primarily interested in these, the pipelined approach still has value beyond serving as a base- line: as discussed above, it allows generating large amounts of training data for end-to-end methods. Bordes, A., Boureau, Y., and Weston, J. Learning end-to- In 5th International Confer- end goal-oriented dialog. ence on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceed- ings, 2017. URL https://openreview.net/ forum?id=S1Bb3D5gg. Finally, we note that from an engineering standpoint, mod- ularity has clear benefits. In particular, it allows many re- searchers to contribute components to the greater whole in parallel. As discussed above, the bot presented here is meant to be a jumping off point, not a final product. We hope that the community will find the framework useful and join us in building an assistant that can flexibly learn from interaction with people. # 9. Conclusion Cai, Q. and Yates, A. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), vol- ume 1, pp. 423–433, 2013. Chevalier-Boisvert, M., Bahdanau, D., Lahlou, S., Willems, L., Saharia, C., Nguyen, T. H., and Ben- gio, Y. Babyai: First steps towards grounded language arXiv preprint learning with a human in the loop. arXiv:1810.08272, 2018. We have described a platform for studying situated natural language understanding in Minecraft. The platform con- sists of code that implements infrastructure for allowing bots and people to play together, tools for labeling data, In addition to the code, we are and a baseline assistant. releasing a diverse set of data we used for building the as- sistant. This includes 2586 houses built in game, and the Das, A., Kottur, S., Gupta, K., Singh, A., Yadav, D., Moura, J. M., Parikh, D., and Batra, D. Visual dialog. 
{ "id": "1704.06956" }
1907.07174
Natural Adversarial Examples
We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that computer vision models have shared weaknesses. The first dataset is called ImageNet-A and is like the ImageNet test set, but it is far more challenging for existing models. We also curate an adversarial out-of-distribution detection dataset called ImageNet-O, which is the first out-of-distribution detection dataset created for ImageNet models. On ImageNet-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%, and its out-of-distribution detection performance on ImageNet-O is near random chance levels. We find that existing data augmentation techniques hardly boost performance, and using other public training datasets provides improvements that are limited. However, we find that improvements to computer vision architectures provide a promising path towards robust models.
http://arxiv.org/pdf/1907.07174
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, Dawn Song
cs.LG, cs.CV, stat.ML
CVPR 2021; dataset and code available at https://github.com/hendrycks/natural-adv-examples
null
cs.LG
20190716
20210304
arXiv:1907.07174v4 [cs.LG] 4 Mar 2021

# Natural Adversarial Examples

Dan Hendrycks (UC Berkeley), Kevin Zhao* (University of Washington), Steven Basart* (UChicago), Jacob Steinhardt and Dawn Song (UC Berkeley). *Equal Contribution.

# Abstract

We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that computer vision models have shared weaknesses. The first dataset is called IMAGENET-A and is like the ImageNet test set, but it is far more challenging for existing models. We also curate an adversarial out-of-distribution detection dataset called IMAGENET-O, which is the first out-of-distribution detection dataset created for ImageNet models. On IMAGENET-A a DenseNet-121 obtains around 2% accuracy, an accuracy drop of approximately 90%, and its out-of-distribution detection performance on IMAGENET-O is near random chance levels. We find that existing data augmentation techniques hardly boost performance, and using other public training datasets provides improvements that are limited. However, we find that improvements to computer vision architectures provide a promising path towards robust models.

# 1. Introduction

Research on the ImageNet [11] benchmark has led to numerous advances in classification [40], object detection [38], and segmentation [23]. ImageNet classification improvements are broadly applicable and highly predictive of improvements on many tasks [39]. Improvements on ImageNet classification have been so great that some call ImageNet classifiers "superhuman" [25]. However, performance is decidedly subhuman when the test distribution does not match the training distribution [29]. The distribution seen at test-time can include inclement weather conditions and obscured objects, and it can also include objects that are anomalous.

[Figure 1 images: example photographs with the actual class in black and a ResNet-50 prediction with its confidence in red, e.g. "Manhole Cover (99%)", "Photosphere", "Jellyfish (99%)"; rows labeled ImageNet-A and ImageNet-O.] Figure 1: Natural adversarial examples from IMAGENET-A and IMAGENET-O. The black text is the actual class, and the red text is a ResNet-50 prediction and its confidence. IMAGENET-A contains images that classifiers should be able to classify, while IMAGENET-O contains anomalies of unforeseen classes which should result in low-confidence predictions. ImageNet-1K models do not train on examples from "Photosphere" nor "Verdigris" classes, so these images are anomalous. Most natural adversarial examples lead to wrong predictions despite occurring naturally.

Recht et al., 2019 [47] remind us that ImageNet test examples tend to be simple, clear, close-up images, so that the current test set may be too easy and may not represent harder images encountered in the real world. Geirhos et al., 2020 argue that image classification datasets contain "spurious cues" or "shortcuts" [18, 2]. For instance, models may use an image's background to predict the foreground object's class; a cow tends to co-occur with a green pasture, and even though the background is inessential to the object's identity, models may predict "cow" primarily using the green pasture background cue.
[Figure 2 bar charts: "ImageNet-A Accuracy of Various Models" and "ImageNet-O Detection with Various Models" across several architectures, with a dashed random chance level for detection.] Figure 2: Various ImageNet classifiers of different architectures fail to generalize well to IMAGENET-A and IMAGENET-O. Higher Accuracy and higher AUPR is better. See Section 4 for a description of the AUPR out-of-distribution detection measure. These specific models were not used in the creation of IMAGENET-A and IMAGENET-O, so our adversarially filtered images transfer across models.

When datasets contain spurious cues, they can lead to performance estimates that are optimistic and inaccurate. To counteract this, we curate two hard ImageNet test sets of natural adversarial examples with adversarial filtration. By using adversarial filtration, we can test how well models perform when simple-to-classify examples are removed, which includes examples that are solved with simple spurious cues. Some examples are depicted in Figure 1, which are simple for humans but hard for models. Our examples demonstrate that it is possible to reliably fool many models with clean natural images, while previous attempts at exposing and measuring model fragility rely on synthetic distribution corruptions [20, 29], artistic renditions [27], and adversarial distortions.

We demonstrate that clean examples can reliably degrade and transfer to other unseen classifiers using our first dataset. We call this dataset IMAGENET-A, which contains images from a distribution unlike the ImageNet training distribution. IMAGENET-A examples belong to ImageNet classes, but the examples are harder and can cause mistakes across various models. They cause consistent classification mistakes due to scene complications encountered in the long tail of scene configurations and by exploiting classifier blind spots (see Section 3.2). Since examples transfer reliably, this dataset shows models have unappreciated shared weaknesses.

The second dataset allows us to test model uncertainty estimates when semantic factors of the data distribution shift. Our second dataset is IMAGENET-O, which contains image concepts from outside ImageNet-1K. These out-of-distribution images reliably cause models to mistake the examples as high-confidence in-distribution examples. To our knowledge this is the first dataset of anomalies or out-of-distribution examples developed to test ImageNet models. While IMAGENET-A enables us to test image classification performance when the input data distribution shifts, IMAGENET-O enables us to test out-of-distribution detection performance when the label distribution shifts.

We examine methods to improve performance on adversarially filtered examples. However, this is difficult because Figure 2 shows that examples successfully transfer to unseen or black-box models. To improve robustness, numerous techniques have been proposed. We find data augmentation techniques such as adversarial training decrease performance, while others can help by a few percent. We also find that a 10× increase in training data corresponds to less than a 10% increase in accuracy. Improving model architectures is a promising avenue toward increasing robustness. Even so, current models have substantial room for improvement. Code and our two datasets are available at github.com/hendrycks/natural-adv-examples.

# 2. Related Work

Adversarial Examples.
Real-world images may be chosen adversarially to cause performance decline. Goodfellow et al. [21] define adversarial examples [54] as "inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake." Most adversarial examples research centers around artificial ℓp adversarial examples, which are examples perturbed by nearly worst-case distortions that are small in an ℓp sense. Su et al., 2018 [52] remind us that most ℓp adversarial examples crafted from one model can only be transferred within the same family of models. However, our adversarially filtered images transfer to all tested model families and move beyond the restrictive ℓp threat model.

[Figure 3 images: rows labeled ImageNet, ImageNet-O, and Previous OOD Datasets.] Figure 3: IMAGENET-O examples are closer to ImageNet examples than previous out-of-distribution (OOD) detection datasets. For example, ImageNet has triceratops examples and IMAGENET-O has visually similar T-Rex examples, but they are still OOD. Previous OOD detection datasets use OOD examples from wholly different data generating processes. For instance, previous work uses the Describable Textures Dataset [10], Places365 scenes [63], and synthetic blobs to test ImageNet OOD detectors. To our knowledge we propose the first dataset of OOD examples collected for ImageNet models.

Out-of-Distribution Detection. For out-of-distribution (OOD) detection [30, 44, 31, 32] models learn a distribution, such as the ImageNet-1K distribution, and are tasked with producing quality anomaly scores that distinguish between usual test set examples and examples from held-out anomalous distributions. For instance, Hendrycks et al., 2017 [30] treat CIFAR-10 as the in-distribution and treat Gaussian noise and the SUN scene dataset [57] as out-of-distribution data. They show that the negative of the maximum softmax probability, or the negative of the classifier prediction probability, is a high-performing anomaly score that can separate in- and out-of-distribution examples, so much so that it remains competitive to this day. Since that time, other work on out-of-distribution detection has continued to use datasets from other research benchmarks as anomaly stand-ins, producing far-from-distribution anomalies. Using visually dissimilar research datasets as anomaly stand-ins is critiqued in Ahmed et al., 2019 [1]. Some previous OOD detection datasets are depicted in the bottom row of Figure 3 [31]. Many of these anomaly sources are unnatural and deviate in numerous ways from the distribution of usual examples. In fact, some of the distributions can be deemed anomalous from local image statistics alone. Next, Meinke et al., 2019 [46] propose studying adversarial out-of-distribution detection by detecting adversarially optimized uniform noise. In contrast, we propose a dataset for more realistic adversarial anomaly detection; our dataset contains hard anomalies generated by shifting the distribution's labels and keeping non-semantic factors similar to the original training distribution.

Spurious Cues and Unintended Shortcuts. Models may learn spurious cues and obtain high accuracy, but for the wrong reasons [43, 18]. Spurious cues are a studied problem in natural language processing [9, 22]. Many recently introduced NLP datasets use adversarial filtration to create "adversarial datasets" by sieving examples solved with simple spurious cues [49, 5, 61, 15, 7, 28].
Like this recent concurrent research, we also use adversarial filtra- tion [53], but the technique of adversarial filtration has not been applied to collecting image datasets until this paper. Additionally, adversarial filtration in NLP removes only the easiest examples, while we use filtration to select only the hardest examples and ignore examples of intermediate difficulty. Adversarially filtered examples for NLP also do not reliably transfer even to weaker models. In Bisk et al., 2019 [6] BERT errors do not reliably transfer to weaker GPT-1 models. This is one reason why it is not obvious a priori whether adversarially filtered images should transfer. In this work, we show that adversarial filtration algorithms can find examples that reliably transfer to both weaker and stronger models. Since adversarial filtration can remove examples that are solved by simple spurious cues, models must learn more robust features for our datasets. Robustness to Shifted Input Distributions. Recht et al., 2019 [47] create a new ImageNet test set resembling the original test set as closely as possible. They found evi- dence that matching the difficulty of the original test set required selecting images deemed the easiest and most ob- vious by Mechanical Turkers. However, Engstrom et al., 2020 [16] estimate that the accuracy drop from ImageNet to ImageNetV2 is less than 3.6%. In contrast, model accu- racy can decrease by over 50% with IMAGENET-A. Bren- del et al., 2018 [8] show that classifiers that do not know the spatial ordering of image regions can be competitive on the ImageNet test set, possibly due to the dataset’s lack of diffi- culty. Judging classifiers by their performance on easier ex- amples has potentially masked many of their shortcomings. For example, Geirhos et al., 2019 [19] artificially overwrite each ImageNet image’s textures and conclude that classi- fiers learn to rely on textural cues and under-utilize infor- mation about object shape. Recent work shows that clas- sifiers are highly susceptible to non-adversarial stochastic corruptions [29]. While they distort images with 75 dif- ferent algorithmically generated corruptions, our sources of distribution shift tend to be more heterogeneous and varied, and our examples are naturally occurring. # 3. IMAGENET-A and IMAGENET-O # 3.1. Design IMAGENET-A is a dataset of real-world adversarially fil- tered images that fool current ImageNet classifiers. To find adversarially filtered examples, we first download numer- ous images related to an ImageNet class. Thereafter we delete the images that fixed ResNet-50 [24] classifiers cor- rectly predict. We chose ResNet-50 due to its widespread use. Later we show that examples which fool ResNet-50 re- liably transfer to other unseen models. With the remaining incorrectly classified images, we manually select visually clear images. Next, IMAGENET-O is a dataset of adversarially fil- tered examples for ImageNet out-of-distribution detectors. To create this dataset, we download ImageNet-22K and delete examples from ImageNet-1K. With the remaining ImageNet-22K examples that do not belong to ImageNet- 1K classes, we keep examples that are classified by a ResNet-50 as an ImageNet-1K class with high confidence. Then we manually select visually clear images. Both datasets were manually constructed by graduate students over several months. This is because a large share of images contain multiple classes per image [51]. 
There- fore, producing a dataset without multilabel images can be challenging with usual annotation techniques. To ensure images do not fall into more than one of the several hundred classes, we had graduate students memorize the classes in order to build a high-quality test set. IMAGENET-A Class Restrictions. We select a 200-class subset of ImageNet-1K’s 1, 000 classes so that errors among these 200 classes would be considered egregious [11]. For instance, wrongly classifying Norwich terriers as Norfolk terriers does less to demonstrate faults in current classifiers than mistaking a Persian cat for a candle. We additionally avoid rare classes such as “snow leopard,” classes that have changed much since 2012 such as “iPod,” coarse classes such as “spiral,” classes that are often image backdrops such as “valley,” and finally classes that tend to overlap such as “honeycomb,” “bee,” “bee house,” and “bee eater”; “eraser,” “pencil sharpener” and “pencil case”; “sink,” “medicine cabinet,” “pill bottle” and “band-aid”; and so on. The 200 IMAGENET-A classes cover most broad categories spanned by ImageNet-1K; see the Supplementary Materials for the full class list. IMAGENET-A Data Aggregation. The first step is to download many weakly labeled images. Fortunately, the website iNaturalist has millions of user-labeled images of animals, and Flickr has even more user-tagged images of objects. We download images related to each of the 200 Im- ageNet classes by leveraging user-provided labels and tags. After exporting or scraping data from sites including iNatu- ralist, Flickr, and DuckDuckGo, we adversarially select im- ages by removing examples that fail to fool our ResNet-50 models. Of the remaining images, we select low-confidence images and then ensure each image is valid through human review. If we only used the original ImageNet test set as a source rather than iNaturalist, Flickr, and DuckDuckGo, some classes would have zero images after the first round of filtration, as the original ImageNet test set is too small to contain hard adversarially filtered images. We now describe this process in more detail. We use a small ensemble of ResNet-50s for filtering, one pre-trained on ImageNet-1K then fine-tuned on the 200 class subset, and one pre-trained on ImageNet-1K where 200 of its 1, 000 logits are used in classification. Both classifiers have similar accuracy on the 200 clean test set classes from ImageNet- 1K. The ResNet-50s perform 10-crop classification for each image, and should any crop be classified correctly by the ResNet-50s, the image is removed. If either ResNet-50 as- signs greater than 15% confidence to the correct class, the image is also removed; this is done so that adversarially filtered examples yield misclassifications with low confi- dence in the correct class, like in untargeted adversarial at- tacks. Now, some classification confusions are greatly over- represented, such as Persian cat and lynx. We would like IMAGENET-A to have great variability in its types of errors and cause classifiers to have a dense confusion matrix. Con- sequently, we perform a second round of filtering to create a shortlist where each confusion only appears at most 15 times. Finally, we manually select images from this shortlist in order to ensure IMAGENET-A images are simultaneously valid, single-class, and high-quality. In all, the IMAGENET- A dataset has 7, 500 adversarially filtered images. 
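The filtering rule just described, remove any candidate that either fixed ResNet-50 classifies correctly on any of its ten crops or to which either model assigns more than 15% confidence on the correct class, can be summarized in a short loop. The sketch below is only an illustration of that rule, not the authors' collection code: two plain pretrained ResNet-50s stand in for the paper's 200-class-restricted filters, and the candidate directory and class index are hypothetical.

```python
# Illustrative sketch of the IMAGENET-A filtration rule: a candidate image
# survives only if every 10-crop prediction from both fixed ResNet-50s is
# wrong AND both models put <= 15% confidence on the true class.
# File layout and the dragonfly class index are assumptions for illustration.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from pathlib import Path

device = "cuda" if torch.cuda.is_available() else "cpu"
# Stand-ins for the two fixed filters (the paper fine-tunes/restricts them to
# the 200-class subset; here we simply use two pretrained ResNet-50s).
filters = [models.resnet50(weights="IMAGENET1K_V1").to(device).eval() for _ in range(2)]

ten_crop = T.Compose([
    T.Resize(256), T.TenCrop(224),
    T.Lambda(lambda crops: torch.stack([
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])(T.ToTensor()(c))
        for c in crops])),
])

def survives_filter(path, true_class_idx, conf_threshold=0.15):
    crops = ten_crop(Image.open(path).convert("RGB")).to(device)  # (10, 3, 224, 224)
    with torch.no_grad():
        for model in filters:
            probs = F.softmax(model(crops), dim=1)  # (10, 1000)
            if (probs.argmax(dim=1) == true_class_idx).any():
                return False  # some crop is classified correctly -> remove
            if probs[:, true_class_idx].max() > conf_threshold:
                return False  # too much confidence on the true class -> remove
    return True

# Hypothetical usage over scraped candidates, assuming index 319 is "dragonfly"
# in ImageNet-1K.
kept = [p for p in Path("candidates/dragonfly").glob("*.jpg") if survives_filter(p, 319)]
print(f"{len(kept)} candidates survive the adversarial filter")
```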
As a specific example, we download 81,413 dragonfly images from iNaturalist, and after running the ResNet-50 filter we have 8,925 dragonfly images. In the algorithmically diversified shortlist, 1,452 images remain. From this shortlist, 80 dragonfly images are manually selected, but hundreds more could be selected if time allows.

The resulting images represent a substantial distribution shift, but images are still possible for humans to classify. The Fréchet Inception Distance (FID) [35] enables us to determine whether IMAGENET-A and ImageNet are not identically distributed. The FID between ImageNet's validation and test set is approximately 0.99, indicating that the distributions are highly similar. The FID between IMAGENET-A and ImageNet's validation set is 50.40, and the FID between IMAGENET-A and ImageNet's test set is approximately 50.25, indicating that the distribution shift is large. Despite the shift, we estimate that our graduate students' IMAGENET-A human accuracy rate is approximately 90%.

[Figure 4 images: photographs with actual classes in black (e.g. Fox Squirrel, Monarch Butterfly, Washing Machine, Jay) and ResNet-50 predictions in red (e.g. Dragonfly, Manhole Cover, Bullfrog).] Figure 4: Additional adversarially filtered examples from the IMAGENET-A dataset. Examples are adversarially selected to cause classifier accuracy to degrade. The black text is the actual class, and the red text is a ResNet-50 prediction.

[Figure 5 images: photographs with actual classes in black (e.g. Ligature, Painting, Dam, Highway, Garlic Bread) and confident ResNet-50 predictions in red (e.g. Jellyfish (99%), Goldfish (99%), Hotdog (99%)).] Figure 5: Additional adversarially filtered examples from the IMAGENET-O dataset. Examples are adversarially selected to cause out-of-distribution detection performance to degrade. Examples do not belong to ImageNet classes, and they are wrongly assigned highly confident predictions. The black text is the actual class, and the red text is a ResNet-50 prediction and the prediction confidence.

IMAGENET-O Class Restrictions. We again select a 200-class subset of ImageNet-1K's 1,000 classes. These 200 classes determine the in-distribution or the distribution that is considered usual. As before, the 200 classes cover most broad categories spanned by ImageNet-1K; see the Supplementary Materials for the full class list.

IMAGENET-O Data Aggregation. Our dataset for adversarial out-of-distribution detection is created by fooling ResNet-50 out-of-distribution detectors. The negative of the prediction confidence of a ResNet-50 ImageNet classifier serves as our anomaly score [30]. Usually in-distribution examples produce higher confidence predictions than OOD examples, but we curate OOD examples that have high confidence predictions. To gather candidate adversarially filtered examples, we use the ImageNet-22K dataset with ImageNet-1K classes deleted. We choose the ImageNet-22K dataset since it was collected in the same way as ImageNet-1K. ImageNet-22K allows us to have coverage of numerous visual concepts and vary the distribution's semantics without unnatural or unwanted non-semantic data shift. After excluding ImageNet-1K images, we process the remaining ImageNet-22K images and keep the images which cause the ResNet-50 to have high confidence, or a low anomaly score. We then manually select a high-quality subset of the remaining images to create IMAGENET-O. We suggest only training models with data from the 1,000 ImageNet-1K classes, since the dataset becomes trivial if models train on ImageNet-22K.
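The anomaly score described above is simple to state in code: the negative of a ResNet-50's maximum softmax probability, with candidates kept only when that confidence is high. The sketch below illustrates this selection rule under stated assumptions; the directory layout and the 50% confidence cutoff are illustrative choices, not details taken from the paper.

```python
# Sketch of the IMAGENET-O candidate selection rule: score each image that is
# outside ImageNet-1K with the negative of a ResNet-50's maximum softmax
# probability, and keep only high-confidence (low anomaly score) candidates.
# The directory layout and the 0.5 confidence cutoff are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from pathlib import Path

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights="IMAGENET1K_V1").to(device).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def anomaly_score(path):
    """Negative maximum softmax probability; lower means 'looks in-distribution'."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    return -probs.max().item()

# Keep candidates (ImageNet-22K with ImageNet-1K classes already removed) that
# the classifier nonetheless assigns high confidence; these fool the MSP detector.
candidates = Path("imagenet22k_minus_1k/").rglob("*.JPEG")
fooling = [p for p in candidates if anomaly_score(p) < -0.5]
print(f"{len(fooling)} candidates assigned more than 50% confidence")
```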
To our knowledge, this dataset is the first anomalous dataset curated for ImageNet models and enables researchers to study adversarial out-of-distribution detection. The IMAGENET-O dataset has 2,000 adversarially filtered examples since anomalies are rarer; this has the same number of examples per class as ImageNetV2 [47]. While we use adversarial filtration to select images that are difficult for a fixed ResNet-50, we will show these examples straightforwardly transfer to unseen models.

[Figure 6 images: IMAGENET-A photographs (e.g. grasshopper, sundial, ladybug, dragonfly, banana, harvestman, sea lion), each shown next to its heatmap.] Figure 6: Examples from IMAGENET-A demonstrating classifier failure modes. Adjacent to each natural image is its heatmap [50]. Classifiers may use erroneous background cues for prediction. These failure modes are described in Section 3.2.

# 3.2. Illustrative Failure Modes

Examples in IMAGENET-A uncover numerous failure modes of modern convolutional neural networks. We describe our findings after having viewed tens of thousands of candidate adversarially filtered examples. Some of these failure modes may also explain poor IMAGENET-O performance, but for simplicity we describe our observations with IMAGENET-A examples.

Consider Figure 6. The first two images suggest models may overgeneralize visual concepts. It may confuse metal with sundials, or thin radiating lines with harvestman bugs. We also observed that networks overgeneralize tricycles to bicycles and circles, digital clocks to keyboards and calculators, and more. We also observe that models may rely too heavily on color and texture, as shown with the dragonfly images. Since classifiers are taught to associate entire images with an object class, frequently appearing background elements may also become associated with a class, such as wood being associated with nails. Other examples include classifiers heavily associating hummingbird feeders with hummingbirds, leaf-covered tree branches being associated with the white-headed capuchin monkey class, snow being associated with shovels, and dumpsters with garbage trucks. Additionally Figure 6 shows an American alligator swimming. With different frames, the classifier prediction varies erratically between classes that are semantically loose and separate. For other images of the swimming alligator, classifiers predict that the alligator is a cliff, lynx, and a fox squirrel. Assessing convolutional networks on IMAGENET-A reveals that even state-of-the-art models have diverse and systematic failure modes.

# 4. Experiments

We show that adversarially filtered examples collected to fool fixed ResNet-50 models reliably transfer to other models, indicating that current convolutional neural networks have shared weaknesses and failure modes. In the following sections, we analyze whether robustness can be improved by using data augmentation, using more real labeled data, and using different architectures. For the first two sections, we analyze performance with a fixed architecture for comparability, and in the final section we observe performance with different architectures. First we define our metrics.

Metrics. Our metric for assessing robustness to adversarially filtered examples for classifiers is the top-1 accuracy on IMAGENET-A. For reference, the top-1 accuracy on the 200 IMAGENET-A classes using usual ImageNet images is usually greater than or equal to 90% for ordinary classifiers.
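For concreteness, the accuracy metric just defined can be computed as follows. Because IMAGENET-A covers only 200 of the 1,000 ImageNet-1K classes, a stock 1,000-way classifier is scored here by restricting its logits to those 200 classes; the dataset path and the wnid-to-index mapping file are assumptions for illustration rather than the released evaluation script.

```python
# Sketch of the top-1 accuracy metric on IMAGENET-A: restrict a 1,000-way
# classifier's logits to the 200 IMAGENET-A classes, then measure accuracy.
# The folder layout and the mapping file are assumptions for illustration.
import json
import torch
import torchvision.datasets as dsets
import torchvision.models as models
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights="IMAGENET1K_V1").to(device).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# IMAGENET-A is assumed to be unpacked as imagenet-a/<wnid>/<image>.jpg.
dataset = dsets.ImageFolder("data/imagenet-a/", preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=4)

# Hypothetical file mapping each IMAGENET-A wnid to its ImageNet-1K class index;
# dataset.classes is sorted by wnid, so targets index into subset_indices' order.
with open("imagenet_a_wnid_to_1k_index.json") as f:
    wnid_to_1k_index = json.load(f)
subset_indices = torch.tensor([wnid_to_1k_index[w] for w in dataset.classes],
                              device=device)

correct = total = 0
with torch.no_grad():
    for images, targets in loader:
        logits = model(images.to(device))[:, subset_indices]  # 200-way logits
        correct += (logits.argmax(dim=1).cpu() == targets).sum().item()
        total += targets.numel()
print(f"IMAGENET-A top-1 accuracy: {100 * correct / total:.2f}%")
```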
Our metric for assessing out-of-distribution detection performance of IMAGENET-O examples is the area under the precision-recall curve (AUPR). This metric requires anomaly scores. Our anomaly score is the negative of the maximum softmax probabilities [30] from a model that can classify the 200 IMAGENET-O classes. The maximum softmax probability detector is a long-standing baseline in OOD detection. We collect anomaly scores with the ImageNet validation examples for the said 200 classes. Then, we collect anomaly scores for the IMAGENET-O examples. Higher performing OOD detectors would assign IMAGENET-O examples lower confidences, or higher anomaly scores. With these anomaly scores, we can compute the area under the precision-recall curve [48]. The random chance level for the AUPR is approximately 16.67% with IMAGENET-O, and the maximum AUPR is 100%.

[Figure 7 bar chart: "The Effect of Data Augmentation on ImageNet-A Accuracy"; y-axis Accuracy (%); bars for Normal, Adversarial Training, Style Transfer, AugMix, Cutout, MoEx, Mixup, and CutMix.] Figure 7: Some data augmentation techniques hardly improve IMAGENET-A accuracy. This demonstrates that IMAGENET-A can expose previously unnoticed faults in proposed robustness methods which do well on synthetic distribution shifts [34].

Data Augmentation. We examine popular data augmentation techniques and note their effect on robustness. In this section we exclude IMAGENET-O results, as the data augmentation techniques hardly help with out-of-distribution detection as well. As a baseline, we train a new ResNet-50 from scratch and obtain 2.17% accuracy on IMAGENET-A. Now, one purported way to increase robustness is through adversarial training, which makes models less sensitive to ℓp perturbations. We use the adversarially trained model from Wong et al., 2020 [56], but accuracy decreases to 1.68%. Next, Geirhos et al., 2019 [19] propose making networks rely less on texture by training classifiers on images where textures are transferred from art pieces. They accomplish this by applying style transfer to ImageNet training images to create a stylized dataset, and models train on these images. While this technique is able to greatly increase robustness on synthetic corruptions [29], Style Transfer increases IMAGENET-A accuracy only 0.13% over the ResNet-50 baseline. A recent data augmentation technique is AugMix [34], which takes linear combinations of different data augmentations. This technique increases accuracy to 3.8%. Cutout augmentation [12] randomly occludes image regions and corresponds to 4.4% accuracy. Moment Exchange (MoEx) [45] exchanges feature map moments between images, and this increases accuracy to 5.5%. Mixup [62] trains networks on elementwise convex combinations of images and their interpolated labels; this technique increases accuracy to 6.6%. CutMix [60] superimposes image regions within other images and yields 7.3% accuracy. At best these data augmentation techniques improve accuracy by approximately 5% over the baseline. Results are summarized in Figure 7. Although some data augmentation techniques are purported to greatly improve robustness to distribution shifts [34, 59], their lackluster results on IMAGENET-A show they do not improve robustness on some distribution shifts. Hence IMAGENET-A can be used to verify whether techniques actually improve real-world robustness to distribution shift.

More Labeled Data.
One possible explanation for consistently low IMAGENET-A accuracy is that all models are trained only with ImageNet-1K, and using additional data may resolve the problem. Bau et al., 2017 [4] argue that Places365 classifiers learn qualitatively distinct filters (e.g., they have more object detectors, fewer texture detectors in conv3) compared to ImageNet classifiers, so one may expect an error distribution less correlated with errors on ImageNet-A. To test this hypothesis we pre-train a ResNet-50 on Places365 [63], a large-scale scene recognition dataset. After fine-tuning the Places365 model on ImageNet-1K, we find that accuracy is 1.56%. Consequently, even though scene recognition models are purported to have qualitatively distinct features, this is not enough to improve IMAGENET-A performance. Likewise, Places365 pre-training does not improve IMAGENET-O detection, as its AUPR is 14.88%. Next, we see whether labeled data from IMAGENET-A itself can help. We take the baseline ResNet-50 with 2.17% IMAGENET-A accuracy and fine-tune it on 80% of IMAGENET-A. This leads to no clear improvement on the remaining 20% of IMAGENET-A since the top-1 and top-5 accuracies are below 2% and 5%, respectively.

Last, we pre-train using an order of magnitude more training data with ImageNet-21K. This dataset contains approximately 21,000 classes and approximately 14 million images. To our knowledge this is the largest publicly available database of labeled natural images. Using a ResNet-50 pretrained on ImageNet-21K, we fine-tune the model on ImageNet-1K and attain 11.41% accuracy on IMAGENET-A, a 9.24% increase. Likewise, the AUPR for IMAGENET-O improves from 16.20% to 21.86%, although this improvement is less significant since IMAGENET-O images overlap with ImageNet-21K images. Because academic researchers rarely use datasets larger than ImageNet due to computational costs, using more data has practical limitations. An order of magnitude increase in labeled training data can provide some improvements in accuracy, though we now show that architecture changes provide greater improvements.

[Figure 8 bar charts: "Model Architecture and ImageNet-A Accuracy" and "Model Architecture and ImageNet-O Detection"; bars for ResNet, ResNeXt, ResNet+SE, and Res2Net at Normal, Large, and XLarge sizes.] Figure 8: Increasing model size and other architecture changes can greatly improve performance. Note Res2Net and ResNet+SE have a ResNet backbone. Normal model sizes are ResNet-50 and ResNeXt-50 (32 × 4d), Large model sizes are ResNet-101 and ResNeXt-101 (32 × 4d), and XLarge model sizes are ResNet-152 and ResNeXt-101 (32 × 8d).

Architectural Changes. We find that model architecture can play a large role in IMAGENET-A accuracy and IMAGENET-O detection performance. Simply increasing the width and number of layers of a network is sufficient to automatically impart more IMAGENET-A accuracy and IMAGENET-O OOD detection performance. Increasing network capacity has been shown to improve performance on ℓp adversarial examples [42], common corruptions [29], and now also improves performance for adversarially filtered images. For example, a ResNet-50's top-1 accuracy and AUPR is 2.17% and 16.2%, respectively, while a ResNet-152 obtains 6.1% top-1 accuracy and 18.0% AUPR. Another architecture change that reliably helps is using the grouped convolutions found in ResNeXts [58].
A ResNeXt- 50 (32 x 4d) obtains a 4.81% top1 IMAGENET-A accuracy and a 17.60% IMAGENET-O AUPR. Another useful architecture change is self-attention. Convolutional neural networks with self-attention [36] are designed to better capture long-range dependencies and in- teractions across an image. We consider the self-attention technique called Squeeze-and-Excitation (SE) [37], which won the final ImageNet competition in 2017. A ResNet-50 with Squeeze-and-Excitation attains 6.17% accuracy. How- ever, for larger ResNets, self-attention does little to improve IMAGENET-O detection. We consider the ResNet-50 architecture with its resid- ual blocks exchanged with recently introduced Res2Net v1b blocks [17]. This change increases accuracy to 14.59% and the AUPR to 19.5%. A ResNet-152 with Res2Net v1b blocks attains 22.4% accuracy and 23.9% AUPR. Com- pared to data augmentation or an order of magnitude more labeled training data, some architectural changes can pro- vide far more robustness gains. Consequently future im- provements to model architectures is a promising path to- wards greater robustness. We now assess performance on a completely different architecture which does not use convolutions, vision Trans- formers [14]. We evaluate with DeiT [55], a vision Trans- former trained on ImageNet-1K with aggressive data aug- mentation such as Mixup. Even for vision Transformers, we find that ImageNet-A and ImageNet-O examples suc- cessfully transfer. In particular, a DeiT-small vision Trans- former gets 19.0% on IMAGENET-A and has a similar num- ber of parameters to a Res2Net-50, which has 14.6% ac- curacy. This might be explained by DeiT’s use of Mixup, however, which provided a 4% ImageNet-A accuracy boost for ResNets. The IMAGENET-O AUPR for the Transformer is 20.9%, while the Res2Net gets 19.5%. Larger DeiT models do better, as a DeiT-base gets 28.2% accuracy on IMAGENET-A and 24.8% AUPR on IMAGENET. Conse- quently, our datasets transfer to vision Transformers and performance for both tasks remains far from the ceiling. # 5. Conclusion We found it is possible to improve performance on our datasets with data augmentation, pretraining data, and ar- chitectural changes. We found that our examples transferred to all tested models, including vision Transformers which do not use convolution operations. Results indicate that im- proving performance on IMAGENET-A and IMAGENET-O is possible but difficult. Our challenging ImageNet test sets serve as measures of performance under distribution shift— an important research aim as models are deployed in in- creasingly precarious real-world environments. # References [1] Faruk Ahmed and Aaron C. Courville. Detecting semantic anomalies. ArXiv, abs/1908.04388, 2019. [2] Mart´ın Arjovsky, L´eon Bottou, Ishaan Gulrajani, and Invariant risk minimization. ArXiv, David Lopez-Paz. abs/1907.02893, 2019. [3] P. Bartlett and M. Wegkamp. Classification with a reject op- tion using a hinge loss. J. Mach. Learn. Res., 9:1823–1840, 2008. [4] David Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba. Network dissection: Quantifying interpretability of deep vi- sual representations. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3319–3327, 2017. [5] Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Yih, and Yejin Choi. Abductive common- sense reasoning. ArXiv, abs/1908.05739, 2019. [6] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 
Piqa: Reasoning about physical common- sense in natural language. ArXiv, abs/1911.11641, 2019. [7] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical common- sense in natural language. ArXiv, abs/1911.11641, 2020. [8] Wieland Brendel and Matthias Bethge. Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. CoRR, abs/1904.00760, 2018. [9] Zheng Cai, Lifu Tu, and Kevin Gimpel. Pay attention to the ending: Strong neural baselines for the roc story cloze task. In ACL, 2017. [10] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. Computer Vision and Pattern Recognition, 2014. [11] Jia Deng, Wei Dong, Richard Socher, Li jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. CVPR, 2009. [12] Terrance Devries and Graham W. Taylor. Improved regular- ization of convolutional neural networks with Cutout. arXiv preprint arXiv:1708.04552, 2017. [13] Terrance Devries and Graham W. Taylor. Learning confi- dence for out-of-distribution detection in neural networks. ArXiv, abs/1802.04865, 2018. [14] A. Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, M. De- hghani, Matthias Minderer, Georg Heigold, S. Gelly, Jakob Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021. [15] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A read- ing comprehension benchmark requiring discrete reasoning over paragraphs. In NAACL-HLT, 2019. [16] L. Engstrom, Andrew Ilyas, Shibani Santurkar, D. Tsipras, Identifying statistical bias in J. Steinhardt, and A. Madry. dataset replication. ArXiv, abs/2005.09619, 2020. [17] Shanghua Gao, Ming-Ming Cheng, Kai Zhao, Xinyu Zhang, Ming-Hsuan Yang, and Philip H. S. Torr. Res2net: A new multi-scale backbone architecture. pattern analysis and machine intelligence, 2019. [18] Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix A. Wichmann. Shortcut learning in deep neural net- works. ArXiv, abs/2004.07780, 2020. [19] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. ICLR, 2019. [20] Robert Geirhos, Carlos R. M. Temme, Jonas Rauber, Heiko H. Sch¨utt, Matthias Bethge, and Felix A. Wich- mann. Generalisation in humans and deep neural networks. NeurIPS, 2018. [21] Ian Goodfellow, Nicolas Papernot, Sandy Huang, Yan Duan, , and Peter Abbeel. Attacking machine learning with adver- sarial examples. OpenAI Blog, 2017. [22] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. An- notation artifacts in natural language inference data. ArXiv, abs/1803.02324, 2018. [23] Kaiming He, Georgia Gkioxari, Piotr Doll´ar, and Ross B. Girshick. Mask r-cnn. In CVPR, 2018. [24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2015. [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level perfor- mance on imagenet classification. 2015 IEEE International Conference on Computer Vision (ICCV), pages 1026–1034, 2015. 
[26] Dan Hendrycks, Steven Basart, Mantas Mazeika, Moham- madreza Mostajabi, J. Steinhardt, and D. Song. Scaling out-of-distribution detection for real-world settings. arXiv: 1911.11132, 2020. [27] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kada- vath, F. Wang, Evan Dorundo, Rahul Desai, Tyler Lixuan Zhu, Samyak Parajuli, M. Guo, D. Song, J. Steinhardt, and J. Gilmer. The many faces of robustness: A critical analysis of out-of-distribution generalization. ArXiv, abs/2006.16241, 2020. [28] Dan Hendrycks, C. Burns, Steven Basart, Andrew Critch, Jerry Li, D. Song, and J. Steinhardt. Aligning ai with shared human values. ArXiv, abs/2008.02275, 2020. [29] Dan Hendrycks and Thomas Dietterich. Benchmarking neu- ral network robustness to common corruptions and perturba- tions. ICLR, 2019. [30] Dan Hendrycks and Kevin Gimpel. A baseline for detect- ing misclassified and out-of-distribution examples in neural networks. ICLR, 2017. [31] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. ICLR, 2019. [32] Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. Advances in Neural In- formation Processing Systems (NeurIPS), 2019. [33] Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and D. Song. Using self-supervised learning can improve model ro- bustness and uncertainty. In NeurIPS, 2019. [34] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. ICLR, 2020. [35] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and S. Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, 2017. [36] Jie Hu, Li Shen, Samuel Albanie, Gang Sun, and Andrea Vedaldi. Gather-excite : Exploiting feature context in convo- lutional neural networks. In NeurIPS, 2018. [37] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation net- works. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018. [38] Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara Balan, Alireza Fathi, Ian Fischer, Zbig- niew Wojna, Yang Song, Sergio Guadarrama, and Kevin Murphy. Speed/accuracy trade-offs for modern convolu- tional object detectors. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [39] Simon Kornblith, Jonathon Shlens, and Quoc V. Le. Do bet- ter imagenet models transfer better? CoRR, abs/1805.08974, 2018. [40] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural net- works. NIPS, 2012. [41] A. Kumar, P. Liang, and T. Ma. Verified uncertainty calibra- tion. In Advances in Neural Information Processing Systems (NeurIPS), 2019. [42] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adver- sarial machine learning at scale. ICLR, 2017. [43] Sebastian Lapuschkin, Stephan W¨aldchen, Alexander Binder, Gr´egoire Montavon, Wojciech Samek, and Klaus- Robert M¨uller. Unmasking clever hans predictors and assess- ing what machines really learn. In Nature Communications, 2019. [44] Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. Training confidence-calibrated classifiers for detecting out- of-distribution samples. ICLR, 2018. [45] Bo-Yi Li, Felix Wu, Ser-Nam Lim, Serge J. Belongie, and Kilian Q. Weinberger. On feature normalization and data augmentation. ArXiv, abs/2002.11102, 2020. 
[46] Alexander Meinke and Matthias Hein. Towards neural net- works that provably know when they don’t know. ArXiv, abs/1909.12180, 2019. [47] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize to im- agenet? ArXiv, abs/1902.10811, 2019. [48] Takaya Saito and Marc Rehmsmeier. The precision-recall plot is more informative than the ROC plot when evaluat- ing binary classifiers on imbalanced datasets. In PLoS ONE. 2015. [49] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. ArXiv, abs/1907.10641, 2019. [50] Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv Ba- tra. Grad-cam: Visual explanations from deep networks via gradient-based localization. International Journal of Com- puter Vision, 128:336 – 359, 2019. [51] Pierre Stock and Moustapha Ciss´e. Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases. In ECCV, 2018. [52] D. Su, Huan Zhang, H. Chen, Jinfeng Yi, P. Chen, and Yu- peng Gao. Is robustness the cost of accuracy? - a comprehen- sive study on the robustness of 18 deep image classification models. In ECCV, 2018. [53] Kah Kay Sung. Learning and example selection for object and pattern detection. 1995. [54] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. In- triguing properties of neural networks, 2014. [55] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv´e J´egou. Training data-efficient image transformers and distillation through at- tention. arXiv preprint arXiv:2012.12877, 2020. [56] Eric Wong, Leslie Rice, and J Zico Kolter. Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994, 2020. [57] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. 2010 IEEE Computer Soci- ety Conference on Computer Vision and Pattern Recognition, pages 3485–3492, 2010. [58] Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. CVPR, 2016. [59] Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, E. Cubuk, and J. Gilmer. A fourier perspective on model ro- bustness in computer vision. ArXiv, abs/1906.08988, 2019. [60] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regu- larization strategy to train strong classifiers with localizable features. 2019 IEEE/CVF International Conference on Com- puter Vision (ICCV), pages 6022–6031, 2019. [61] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In ACL, 2019. [62] Hongyi Zhang, Moustapha Ciss´e, Yann Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. ArXiv, abs/1710.09412, 2018. [63] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. PAMI, 2017. # 6. Appendix # 7. Expanded Results # 7.1. Full Architecture Results Full results with various architectures are in Table 1. # 7.2. More OOD Detection Results and Background Works in out-of-distribution detection frequently use the maximum softmax baseline to detect out-of-distribution ex- amples [30]. 
Before neural networks, using the reject option or a k + 1st class was somewhat common [3], but with neural networks it requires auxiliary anomalous training data. New neural methods that utilize auxiliary anomalous training data, such as Outlier Exposure [31], do not use the reject option and still utilize the maximum softmax probability. We do not use Outlier Exposure since that paper's authors were unable to get their technique to work on ImageNet-1K with 224 × 224 images, though they were able to get it to work on Tiny ImageNet which has 64 × 64 images. We do not use ODIN since it requires tuning hyperparameters directly using out-of-distribution data, a criticized practice [31].

We evaluate three additional out-of-distribution detection methods, though none substantially improve performance. We evaluate the method of [13], which trains an auxiliary branch to represent the model confidence. Using a ResNet trained from scratch, we find this gets a 14.3% AUPR, around 2% less than the MSP baseline. Next we use the recent Maximum Logit detector [26]. With DenseNet-121 the AUPR decreases from 16.1% (MSP) to 15.8% (Max Logit), while with ResNeXt-101 (32 × 8d) the AUPR of 20.5% increases to 20.6%. Across over 10 models we found the MaxLogit technique to be slightly worse. Finally, we evaluate the utility of self-supervised auxiliary objectives for OOD detection. The rotation prediction anomaly detector [33] was shown to help improve detection performance for near-distribution yet still out-of-class examples, and with this auxiliary objective the AUPR for ResNet-50 does not change; it is 16.2% with the rotation prediction and 16.2% with the MSP. Note this method requires training the network and does not work out-of-the-box.

# 7.3. Calibration

In this section we show IMAGENET-A calibration results.

Uncertainty Metrics. The ℓ2 Calibration Error is how we measure miscalibration. We would like classifiers that can reliably forecast their accuracy. Concretely, we want classifiers which give examples 60% confidence to be correct 60% of the time. We judge a classifier's miscalibration with the ℓ2 Calibration Error [41].

[Figure 9 images: a banana photograph and color-modified copies.] Figure 9: A demonstration of color sensitivity. While the leftmost image is classified as "banana" with high confidence, the images with modified color are correctly classified. Not only would we like models to be more accurate, we would like them to be calibrated if they are wrong.

[Figure 10 plot: "ImageNet-A Accuracy vs. Response Rate"; x-axis Response Rate (%) from 0 to 100, y-axis Accuracy (%).] Figure 10: The Response Rate Accuracy curve for a ResNeXt-101 (32×4d) with and without Squeeze-and-Excitation (SE). The Response Rate is the percent classified. The accuracy at an n% response rate is the accuracy on the n% of examples where the classifier is most confident.

Our second uncertainty estimation metric is the Area Under the Response Rate Accuracy Curve (AURRA). Responding only when confident is often preferable to predicting falsely. In these experiments, we allow classifiers to respond to a subset of the test set and abstain from predicting the rest. Classifiers with quality uncertainty estimates should be capable of identifying examples they are likely to predict falsely and abstaining. If a classifier is required to abstain from predicting on 90% of the test set, or equivalently respond to the remaining 10% of the test set, then we should like the classifier's uncertainty estimates to separate correctly and falsely classified examples and have high accuracy on the selected 10%.
At a fixed response rate, we should like the accuracy to be as high as possible. At a 100% response rate, the classifier accuracy is the usual ImageNet-O (AUPR %) 15.44 15.31 16.58 16.80 16.57 16.11 15.23 16.00 16.20 17.20 18.00 17.52 17.91 18.65 14.34 16.20 19.50 22.69 23.90 17.60 19.60 20.51 17.78 21.10 17.4 20.9 24.8 AlexNet SqueezeNet1.1 VGG16 VGG19 VGG19+BN DenseNet121 ResNet-18 ResNet-34 ResNet-50 ResNet-101 ResNet-152 ResNet-50+Squeeze-and-Excite ResNet-101+Squeeze-and-Excite ResNet-152+Squeeze-and-Excite ResNet-50+DeVries Confidence Branch ResNet-50+Rotation Prediction Branch Res2Net-50 (v1b) Res2Net-101 (v1b) Res2Net-152 (v1b) ResNeXt-50 (32 × 4d) ResNeXt-101 (32 × 4d) ResNeXt-101 (32 × 8d) DPN 68 DPN 98 DeiT-tiny DeiT-small DeiT-base ImageNet-A (Acc %) 1.77 1.12 2.63 2.11 2.95 2.16 1.15 1.87 2.17 4.72 6.05 6.17 8.55 9.35 0.35 2.17 14.59 21.84 22.4 4.81 5.85 10.2 3.53 9.15 7.25 19.1 28.2 Table 1: Expanded IMAGENET-A and IMAGENET-O architecture results. Note IMAGENET-O performance is improving more slowly. test set accuracy. We vary the response rates and compute the corresponding accuracies to obtain the Response Rate Accuracy (RRA) curve. The area under the Response Rate Accuracy curve is the AURRA. To compute the AURRA in this paper, we use the maximum softmax probability. For response rate p, we take the p fraction of examples with highest maximum softmax probability. If the response rate is 10%, we select the top 10% of examples with the highest confidence and compute the accuracy on these examples. An example RRA curve is in Figure 10 . # 8. IMAGENET-A Classes The 200 ImageNet classes IMAGENET-A are as follows. goldfish, hammerhead, junco, frog, tarantula, bird, jellyfish, american egret, killer whale, hound, greyhound, whippet, weimaraner, boston terrier, terrier, spaniels, man shepherd dog, bernard, chow chow, dard poodle, cat, that we selected for great white shark, goldfinch, tree scorpion, humming- koala, flamingo, grey whale, afghan italian yorkshire terrier, west highland white cocker ger- saint pomeranian, stan- tabby chee- ostrich, hen, stingray, newt, vulture, axolotl, bald eagle, cobra, lorikeet, black swan, iguana, African chameleon, centipede, peacock, goose, toucan, duck, snail, lobster, hermit crab, pelican, king penguin, sea lion, basset hound, chihuahua, beagle, shih tzu, bloodhound, scottish terrier, golden retriever, labrador retriever, rottweiler, french bulldog, border collie, collie, boxer, dalmatian, pug, husky, toy poodle, red fox, tiger, pembroke welsh corgi, timber wolf, hyena, snow leopard, lion, leopard, The Effect of Self-Attention on ImageNet-A Calibration Mm Normal Mmm +SE Ss 8 wi Cc 2 S £ 2 i) O Rey 152 4d) snet (32x Re pesnext107 The Effect of Self-Attention on ImageNet-A Error Detection # 01 snet-t Re 30 Ma Normal Mmm +SE 15 g = 10 a 2 xt 5 et-101 et-101 pesN pesNet we Next 102 (32x49) Figure 11: Self-attention’s influence on IMAGENET-A (5 calibration and error detection. 
tah, ant, fly, monarch butterfly, cupine, fox squirrel, bra, skunk, gibbon, fish, pack, lighthouse, tie, canoe, boy hat, mask, ladybug, bee, dragon- por- ze- llama, chimpanzee, puffer back- bathtub, bow cannon, cow- gas- harmon- meerkat, fly, polar bear, grasshopper, cockroach, mantis, starfish, wood rabbit, guinea pig, beaver, gazelle, bison, pig, hippopotamus, badger, baboon, accordion, barn, orangutan, panda, ambulance, gorilla, eel, clown fish, assault rifle, basketball, wheelbarrow, binoculars, cauldron, beer glass, bucket, birdhouse, candle, broom, carousel, castle, mobile phone, electric guitar, flute, fire engine, grand piano, guillotine, hammer, # The Effect of Model Size on ImageNet-A Calibration M8 Baseline Ml Larger Model S L ° £ Ww c 2 S © 2 aod oO ng 8 ResNet ResNext DPN The Effect of Model Size on ImageNet-A Error Detection 20 M8 Baseline Mmm Larger Model 15 4 8 = 104 a 2 qt 5 4 0 a ResNet ResNext DPN Figure 12: Model size’s influence on IMAGENET-A £2 cal- ibration and error detection. ica, harp, lawn mower, ten, volver, schooner, web, tennis ball, military aircraft, pretzel, coli, Smith, pomegranate, baseball player, n01443537, jeep, joystick, hatchet, lipstick, lab coat, mit- re- school bus, spider tank, violin, bagel, broc- bell pepper, mushroom, Granny banana, volcano, missile, pirate ship, mailbox, parachute, pickup truck, sandal, soccer ball, rugby ball, shield, saxophone, space shuttle, submarine, vase, ice cream, cabbage, steam locomotive, scarf, trombone, tractor, wine bottle, cheeseburger, hotdog, cucumber, pineapple, espresso, strawberry, lemon, burrito, pizza, scuba diver, acorn, n01484850, n01498041, = n01514859, n01518878, n01494475, n01531178, n01534433, n01632777, n01748264, n01806143, n01847000, n01910747, n02007558, n02066245, n02086240, n02088466, n02094433, n02099601, n02106166, n02108915, n02110958, n02113624, n02119022, n02129165, n02138441, n02219486, n02268443, n02346627, n02391049, n02423022, n02480495, n02486410, n02655020, n02769748, n02808440, n02843684, n02939185, n02966193, n03272010, n03452741, n03495258, n03630383, n03773504, n03947888, n04141076, n04254680, n04325704, n04465501, n04552348, n07695742, n07714990, n07742313, n07753592, n07920052, n12267677, = n01614925, n01644373, n01770393, n01820546, n01855672, n01944390, n02009912, n02071294, n02088094, n02091032, n02096585, n02099712, n02106550, n02109525, n02112018, n02113799, n02123045, n02129604, n02165456, n02226429, n02279972, n02356798, n02395406, n02437616, n02480855, n02510455, n02672831, n02793495, n02814860, n02883205, n02948072, n02980441, n03345487, n03467068, n03498962, n03649909, n03775071, n04086273, n04146614, n04266014, n04347754, n04487394, n04591713, n07697313, n07718472, n07745940, n07768694, n09472597, n01616318, n01677366, n01774750, n01833805, n01860187, n01983481, n02051845, n02077923, n02088238, n02091134, n02097298, n02102318, n02106662, n02110185, n02112137, n02114367, n02128385, n02130308, n02190166, n02233338, n02317335, n02363005, n02398521, n02445715, n02481823, n02526121, n02701002, n02797295, n02823750, n02906734, n02950826, n02992529, n03372029, n03481172, n03594945, n03676483, n03888257, n04118538, n04147183, n04275548, n04389033, n04522168, n07614500, n07697537, n07720875, n07749582, n07873807, n09835506, = = n01630670, n01694178, n01784675, n01843383, n01882714, n01986214, n02056570, n02085620, n02088364, n02092339, n02098286, n02106030, n02108089, n02110341, n02113023, n02117135, n02128757, n02134084, n02206856, n02236044, n02325366, n02364673, 
n02410509, n02447366, n02483362, n02607072, n02749479, n02802426, n02841315, n02909870, n02951358, n03124170, n03424325, n03494278, n03602883, n03710193, n03930630, n04133789, n04192698, n04310018, n04409515, n04536866, n07693725, n07714571, n07734744, n07753275, n07880968, n10565667, = = = ‘Stingray;’ ‘goldfinch, Carduelis carduelis;’ ‘junco, snow- ‘robin, American robin, Turdus migratorius;’ bird;’ ‘jay;’ ‘bald eagle, American eagle, Haliaeetus leuco- cephalus;’ ‘vulture;’ ‘eft;’ ‘bullfrog, Rana catesbeiana;’ iguana, ‘box turtle, box tortoise;’ Iguana iguana;’ ‘agama;’ ‘African chameleon, Chamaeleo chamaeleon;’ ‘American alligator, Alligator mississipi- ensis;’ ‘garter snake, grass snake;’ ‘harvestman, daddy ‘tarantula;’ longlegs, Phalangium opilio;’ ‘centipede;’ ‘sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita;’ ‘toucan;’ ‘drake;’ ‘goose;’ ‘koala, koala bear, kangaroo bear, na- tive bear, Phascolarctos cinereus;’ ‘jellyfish;’ ‘sea anemone, anemone;’ ‘flatworm, platyhelminth;’ ‘snail;’ ‘crayfish, crawfish, crawdad, crawdaddy;’ ‘hermit crab;’ ‘flamingo;’ ‘American egret, great white heron, Egretta albus;’ ‘oyster- catcher, oyster catcher;’ ‘pelican;’ ‘sea lion;’ ‘Chihuahua;’ ‘golden retriever;’ ‘Rottweiler;’ ‘German shepherd, Ger- man shepherd dog, German police dog, alsatian;’ ‘pug, pug-dog;’ ‘red fox, Vulpes vulpes;’ ‘Persian cat;’ ‘lynx, catamount;’ ‘lion, king of beasts, Panthera leo;’ ‘Amer- ican black bear, black bear, Ursus americanus, Euarctos americanus;’ ‘mongoose;’ ‘ladybug, ladybeetle, lady bee- tle, ladybird, ladybird beetle;’ ‘rhinoceros beetle;’ ‘wee- vil;’ ‘fly;’ ‘bee;’ ‘ant, emmet, pismire;’ ‘grasshopper, hop- per;’ ‘walking stick, walkingstick, stick insect;’ ‘cockroach, roach;’ ‘mantis, mantid;’ ‘leafhopper;’ ‘dragonfly, darning needle, devil’s darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk;’ ‘monarch, monarch butterfly, milkweed butterfly, Danaus plexippus;’ ‘cabbage butterfly;’ ‘lycaenid, lycaenid butterfly;’ ‘starfish, sea star;’ ‘wood rabbit, cottontail, cottontail rabbit;’ ‘por- cupine, hedgehog;’ ‘fox squirrel, eastern fox squirrel, Sci- ‘skunk, polecat, wood ‘bison;’ urus niger;’ pussy;’ ‘armadillo;’ ‘baboon;’ ‘capuchin, ringtail, Cebus capucinus;’ ‘African elephant, Loxodonta africana;’ ‘puffer, pufferfish, blowfish, globefish;’ ‘academic gown, academic robe, judge’s robe;’ ‘accordion, piano accordion, squeeze box;’ ‘acoustic guitar;’ ‘airliner;’ ‘ambulance;’ ‘apron;’ ‘balance beam, beam;’ ‘balloon;’ ‘banjo;’ ‘barn;’ ‘barrow, garden cart, lawn cart, wheelbarrow;’ ‘basketball;’ ‘bea- con, lighthouse, beacon light, pharos;’ ‘beaker;’ ‘bikini, two-piece;’ ‘bow;’ ‘bow tie, bow-tie, bowtie;’ ‘breastplate, aegis, egis;’ ‘broom;’ ‘candle, taper, wax light;’ ‘canoe;’ ‘castle;’ ‘cello, violoncello;’ ‘chain;’ ‘chest;’ ‘Christmas stocking;’ ‘cowboy boot;’ ‘cradle;’ ‘dial telephone, dial phone;’ ‘digital clock;’ ‘doormat, welcome mat;’ ‘drum- stick;’ ‘dumbbell;’ ‘envelope;’ ‘feather boa, boa;’ ‘flag- pole, flagstaff;’ ‘forklift;’ ‘fountain;’ ‘garbage truck, dust- cart;’ ‘goblet;’ ‘go-kart;’ ‘golfcart, golf cart;’ ‘grand pi- ano, grand;’ ‘hand blower, blow dryer, blow drier, hair dryer, hair drier;’ ‘iron, smoothing iron;’ ‘jack-o’-lantern;’ ‘jeep, landrover;’ ‘kimono;’ ‘lighter, light, igniter, ignitor;’ ‘limousine, limo;’ ‘manhole cover;’ ‘maraca;’ ‘marimba, xylophone;’ ‘mask;’ ‘mitten;’ ‘mosque;’ ‘nail;’ ‘obelisk;’ ‘ocarina, sweet potato;’ ‘organ, pipe organ;’ ‘parachute, chute;’ ‘parking meter;’ ‘piggy bank, penny 
bank;’ ‘pool table, billiard table, snooker table;’ ‘puck, hockey puck;’ ‘quill, quill pen;’ ‘racket, racquet;’ ‘reel;’ ‘revolver, six- gun, six-shooter;’ ‘rocking chair, rocker;’ ‘rugby ball;’ ‘saltshaker, salt shaker;’ ‘sandal;’ ‘sax, saxophone;’ ‘school bus;’ ‘schooner;’ ‘sewing machine;’ ‘shovel;’ ‘sleeping bag;’ ‘snowmobile;’ ‘snowplow, snowplough;’ ‘soap dis- penser;’ ‘spatula;’ ‘spider web, spider’s web;’ ‘steam lo- comotive;’ ‘stethoscope;’ ‘studio couch, day bed;’ ‘subma- rine, pigboat, sub, U-boat;’ ‘sundial;’ ‘suspension bridge;’ ‘syringe;’ ‘tank, army tank, armored combat vehicle, ar- moured combat vehicle;’ ‘teddy, teddy bear;’ ‘toaster;’ ‘torch;’ ‘tricycle, trike, velocipede;’ ‘umbrella;’ ‘unicy- cle, monocycle;’ ‘viaduct;’ ‘volleyball;’ ‘washer, auto- matic washer, washing machine;’ ‘water tower;’ ‘wine bot- tle;’ ‘wreck;’ ‘guacamole;’ ‘pretzel;’ ‘cheeseburger;’ ‘hot- dog, hot dog, red hot;’ ‘broccoli;’ ‘cucumber, cuke;’ ‘bell pepper;’ ‘mushroom;’ ‘lemon;’ ‘banana;’ ‘custard apple;’ ‘pomegranate;’ ‘carbonara;’ ‘bubble;’ ‘cliff, drop, drop- off;’ ‘volcano;’ ‘ballplayer, baseball player;’ ‘rapeseed;’ ‘yellow lady’s slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum;’ ‘corn;’ ‘acorn.’ Their WordNet IDs are as follows. n01498041, n01580077, n01641577, n01694178, n01770393, n01820546, n01855672, n01924916, n02007558, n02077923, n02106662, n02127052, n02165456, n02206856, n02233338, n02279972, n02325366, n02410509, n02492035, n02672831, n02730930, n02793495, n02815834, n02895154, n02980441, n03026506, n03196217, n03291819, n03388043, n03445924, n03590841, n01531178, n01614925, n01669191, n01698640, n01774750, n01833805, n01882714, n01944390, n02009912, n02085620, n02110958, n02129165, n02174001, n02219486, n02236044, n02280649, n02346627, n02445715, n02504458, n02676566, n02777292, n02797295, n02837789, n02906734, n02992211, n03124043, n03223299, n03325584, n03417042, n03452741, n03594945, n01534433, n01616318, n01677366, n01735189, n01784675, n01843383, n01910747, n01985128, n02037110, n02099601, n02119022, n02133161, n02177972, n02226429, n02259212, n02281787, n02356798, n02454379, n02655020, n02690373, n02782093, n02802426, n02879718, n02948072, n02999410, n03125729, n03250847, n03355925, n03443371, n03483316, n03617480, n01558993, n01631663, n01687978, n01770081, n01819313, n01847000, n01914609, n01986214, n02051845, n02106550, n02123394, n02137549, n02190166, n02231487, n02268443, n02317335, n02361337, n02486410, n02669723, n02701002, n02787622, n02814860, n02883205, n02951358, n03014705, n03187595, n03255030, n03384352, n03444034, n03584829, n03666591, n03670208, n03724870, n03837869, n03891332, n04033901, n04099969, n04141076, n04208210, n04254120, n04317175, n04366367, n04442312, n04509417, n04562935, n07695742, n07718472, n07753592, n09229709, n11879895, n03717622, n03775071, n03840681, n03935335, n04039381, n04118538, n04146614, n04235860, n04270147, n04344873, n04376876, n04456115, n04532670, n04591713, n07697313, n07720875, n07760859, n09246464, n12057211, n03720891, n03788195, n03854065, n03982430, n04067472, n04131690, n04147183, n04252077, n04275548, n04347754, n04389033, n04482393, n04540053, n04606251, n07697537, n07734744, n07768694, n09472597, n03721384, n03804744, n03888257, n04019541, n04086273, n04133789, n04179913, n04252225, n04310018, n04355338, n04399382, n04507155, n04554684, n07583066, n07714990, n07749582, n07831146, n09835506, n12144580, n12267677. # 9. 
IMAGENET-O Classes The 200 ImageNet classes that we selected for IMAGENET-O are as follows. ‘goldfish, Carassius auratus;’ ‘triceratops;’ ‘harvestman, daddy longlegs, Phalangium opilio;’ ‘centipede;’ ‘sulphur- crested cockatoo, Kakatoe galerita, Cacatua galerita;’ ‘lori- keet;’ ‘jellyfish;’ ‘brain coral;’ ‘chambered nautilus, pearly nautilus, nautilus;’ ‘starfish, sea star;’ ‘sea urchin;’ ‘hog, pig, grunter, squealer, Sus scrofa;’ ‘armadillo;’ ‘rock beauty, Holocanthus tricolor;’ ‘puffer, pufferfish, blowfish, globefish;’ ‘abacus;’ ‘accor- dion, piano accordion, squeeze box;’ ‘apron;’ ‘balance beam, beam;’ ‘ballpoint, ballpoint pen, ballpen, Biro;’ ‘Band Aid;’ ‘banjo;’ ‘barbershop;’ ‘bath towel;’ ‘bearskin, busby, shako;’ ‘binoculars, field glasses, opera glasses;’ ‘bolo tie, bolo, bola tie, bola;’ ‘bottlecap;’ ‘brassiere, bra, bandeau;’ ‘broom;’ ‘buckle;’ ‘bulletproof vest;’ ‘candle, taper, wax light;’ ‘car mirror;’ ‘chainlink fence;’ ‘chain saw, chainsaw;’ ‘chime, bell, gong;’ ‘Christmas stock- ing;’ ‘cinema, movie theater, movie theatre, movie house, picture palace;’ ‘corkscrew, bottle screw;’ ‘crane;’ ‘croquet ball;’ ‘dam, dike, dyke;’ ‘dig- ital clock;’ ‘dishrag, dishcloth;’ ‘dogsled, dog sled, dog sleigh;’ ‘doormat, welcome mat;’ ‘drilling platform, off- shore rig;’ ‘electric fan, blower;’ ‘envelope;’ ‘espresso maker;’ ‘face powder;’ ‘feather boa, boa;’ ‘fireboat;’ ‘fire screen, fireguard;’ ‘flute, transverse flute;’ ‘folding chair;’ ‘fountain;’ ‘fountain pen;’ ‘frying pan, frypan, skillet;’ ‘golf ball;’ ‘guillo- tine;’ ‘hamper;’ ‘hand blower, blow dryer, blow drier, hair dryer, hair drier;’ ‘harmonica, mouth organ, harp, mouth harp;’ ‘honeycomb;’ ‘hourglass;’ ‘iron, smoothing iron;’ ‘jack-o’-lantern;’ ‘jigsaw puzzle;’ ‘joystick;’ ‘lawn mower, mower;’ ‘library;’ ‘lighter, light, igniter, ignitor;’ ‘lipstick, lip rouge;’ ‘loupe, jeweler’s loupe;’ ‘magnetic compass;’ ‘manhole cover;’ ‘maraca;’ ‘marimba, xylophone;’ ‘mask;’ ‘matchstick;’ ‘medicine chest, medicine cabinet;’ ‘mortar;’ ‘mosquito net;’ ‘mouse- trap;’ ‘nail;’ ‘neck brace;’ ‘necklace;’ ‘nipple;’ ‘ocarina, sweet potato;’ ‘oil filter;’ ‘organ, pipe organ;’ ‘oscillo- scope, scope, cathode-ray oscilloscope, CRO;’ ‘oxygen mask;’ ‘paddlewheel, paddle wheel;’ ‘panpipe, pandean pipe, syrinx;’ ‘park bench;’ ‘pencil sharpener;’ ‘Petri dish;’ ‘pick, plectrum, plectron;’ ‘picket fence, paling;’ ‘pill bot- tle;’ ‘ping-pong ball;’ ‘pinwheel;’ ‘plate rack;’ ‘plunger, plumber’s helper;’ ‘pool table, billiard table, snooker ta- ble;’ ‘pot, flowerpot;’ ‘power drill;’ ‘prayer rug, prayer mat;’ ‘prison, prison house;’ ‘punching bag, punch bag, punching ball, punchball;’ ‘quill, quill pen;’ ‘radiator;’ ‘reel;’ ‘remote control, remote;’ ‘rubber eraser, rubber, pen- cil eraser;’ ‘rule, ruler;’ ‘safe;’ ‘safety pin;’ ‘saltshaker, salt shaker;’ ‘scale, weighing machine;’ ‘screw;’ ‘screw- driver;’ ‘shoji;’ ‘shopping cart;’ ‘shower cap;’ ‘shower cur- tain;’ ‘ski;’ ‘sleeping bag;’ ‘slot, one-armed bandit;’ ‘snow- mobile;’ ‘soap dispenser;’ ‘solar dish, solar collector, so- lar furnace;’ ‘space heater;’ ‘spatula;’ ‘spider web, spider’s web;’ ‘stove;’ ‘strainer;’ ‘stretcher;’ ‘submarine, pigboat, sub, U-boat;’ ‘swimming trunks, bathing trunks;’ ‘swing;’ ‘switch, electric switch, electrical switch;’ ‘syringe;’ ‘ten- nis ball;’ ‘thatch, thatched roof;’ ‘theater curtain, theatre curtain;’ ‘thimble;’ ‘throne;’ ‘tile roof;’ ‘toaster;’ ‘tricy- cle, trike, velocipede;’ ‘turnstile;’ ‘umbrella;’ ‘vending ma- chine;’ ‘waffle iron;’ ‘washer, automatic washer, washing 
machine;’ ‘water bottle;’ ‘water tower;’ ‘whistle;’ ‘Windsor tie;’ ‘wooden spoon;’ ‘wool, woolen, woollen;’ ‘crossword puzzle, crossword;’ ‘traffic light, traffic signal, stoplight;’ ‘ice lolly, lolly, lollipop, popsicle;’ ‘bagel, beigel;’ ‘pret- zel;’ ‘hotdog, hot dog, red hot;’ ‘mashed potato;’ ‘broccoli;’ ‘cauliflower;’ ‘zucchini, courgette;’ ‘acorn squash;’ ‘cu- cumber, cuke;’ ‘bell pepper;’ ‘Granny Smith;’ ‘strawberry;’ ‘orange;’ ‘lemon;’ ‘pineapple, ananas;’ ‘banana;’ ‘jack- fruit, jak, jack;’ ‘pomegranate;’ ‘chocolate sauce, chocolate syrup;’ ‘meat loaf, meatloaf;’ ‘pizza, pizza pie;’ ‘burrito;’ ‘bubble;’ ‘volcano;’ ‘corn;’ ‘acorn;’ ‘hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa.’ Their WordNet IDs are as follows. n01443537, n01784675, n01917289, n02319095, n02655020, n02777292, n02791270, n02865351, n02910353, n03000134, n01704323, n01819313, n01968897, n02395406, n02666196, n02783161, n02808304, n02877765, n02916936, n03000684, n01820546, n02074367, n02454379, n02672831, n02786058, n02817516, n02892767, n02948072, n03017168, n01770081, n01910747, n02317335, n02606052, n02730930, n02787622, n02841315, n02906734, n02965783, n03026506, n03032252, n03134739, n03218198, n03291819, n03344393, n03388043, n03457902, n03494278, n03590841, n03661043, n03706229, n03724870, n03742115, n03804744, n03840681, n03868863, n03908714, n03937543, n03970156, n03998194, n04040759, n04118776, n04141975, n04204347, n04235860, n04258138, n04330267, n04371430, n04409515, n04429376, n04501370, n04554684, n04591157, n06874185, n07697537, n07716358, n07742313, n07753275, n07836838, n09229709, n13052670. n03075370, n03160309, n03223299, n03297495, n03347037, n03388183, n03467068, n03530642, n03598930, n03666591, n03717622, n03729826, n03786901, n03814639, n03843555, n03874293, n03920288, n03942813, n03982430, n04005630, n04067472, n04125021, n04153751, n04209133, n04243546, n04265275, n04332243, n04371774, n04417672, n04435653, n04507155, n04557648, n04597913, n07615774, n07711569, n07717410, n07745940, n07753592, n07871810, n09472597, n03109150, n03196217, n03240683, n03314780, n03372029, n03400231, n03482405, n03544143, n03602883, n03676483, n03720891, n03733131, n03788365, n03814906, n03854065, n03884397, n03929660, n03944341, n03991062, n04023962, n04074963, n04127249, n04154565, n04209239, n04252077, n04270147, n04336792, n04372370, n04418357, n04442312, n04525305, n04562935, n04599235, n07693725, n07714990, n07718472, n07747607, n07754684, n07873807, n12144580, n03126707, n03207743, n03271574, n03325584, n03376595, n03445777, n03483316, n03584829, n03649909, n03692522, n03721384, n03733281, n03794056, n03825788, n03857828, n03891251, n03930313, n03961711, n03995372, n04033901, n04116512, n04131690, n04201297, n04228054, n04254120, n04275548, n04347754, n04376876, n04423845, n04482393, n04542943, n04579432, n06785654, n07695742, n07715103, n07720875, n07749582, n07768694, n07880968, n12267677,
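In practice, these 200-class lists are consumed by evaluation code: for IMAGENET-A the 1000-way classifier's predictions are restricted to the 200 listed classes before computing accuracy, and for IMAGENET-O the listed classes define the anomaly distribution scored with AUPR. The sketch below shows one way to build such a restriction mask from the WordNet IDs; the 1000-class synset ordering file and the variable names are assumptions used for illustration, not part of the original release.

```python
import numpy as np

def build_subset_mask(wnids_1k, subset_wnids):
    """Boolean mask over the 1000 ImageNet classes selecting a 200-class subset.

    wnids_1k: WordNet IDs of the 1000 ImageNet classes, in the order of the
              classifier's output units (assumed available from ImageNet metadata).
    """
    subset = set(subset_wnids)
    return np.array([w in subset for w in wnids_1k], dtype=bool)

def subset_top1_accuracy(logits, label_wnids, wnids_1k, subset_wnids):
    """Top-1 accuracy after restricting a 1000-way classifier to the subset classes."""
    mask = build_subset_mask(wnids_1k, subset_wnids)
    kept = [w for w, m in zip(wnids_1k, mask) if m]      # subset classes, in output order
    preds = [kept[i] for i in logits[:, mask].argmax(axis=1)]
    return float(np.mean([p == y for p, y in zip(preds, label_wnids)]))
```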
{ "id": "1708.04552" }
1907.06292
TWEETQA: A Social Media Focused Question Answering Dataset
With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous datasets have concentrated on question answering (QA) for formal text like news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, we allow the answers to be abstractive. We show that two recently proposed neural models that perform well on formal texts are limited in their performance when applied to our dataset. In addition, even the fine-tuned BERT model is still lagging behind human performance with a large margin. Our results thus point to the need of improved QA systems targeting social media text.
http://arxiv.org/pdf/1907.06292
Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Yang Wang
cs.CL
ACL 2019
null
cs.CL
20190714
20190714
# TWEETQA: A Social Media Focused Question Answering Dataset

Wenhan Xiong†, Jiawei Wu†, Hong Wang†, Vivek Kulkarni†, Mo Yu∗, Shiyu Chang∗, Xiaoxiao Guo∗, William Yang Wang†
† University of California, Santa Barbara  ∗ IBM Research
{xwhan, william}@cs.ucsb.edu, [email protected], {shiyu.chang, xiaoxiao.guo}@ibm.com

# Abstract

With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous datasets have concentrated on question answering (QA) for formal text like news and Wikipedia, we present the first large-scale dataset for QA over social media data. To ensure that the tweets we collected are useful, we only gather tweets used by journalists to write news articles. We then ask human annotators to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, we allow the answers to be abstractive. We show that two recently proposed neural models that perform well on formal texts are limited in their performance when applied to our dataset. In addition, even the fine-tuned BERT model is still lagging behind human performance with a large margin. Our results thus point to the need of improved QA systems targeting social media text.

Passage: Oh man just read about Paul Walkers death. So young. Ugggh makes me sick especially when it's caused by an accident. God bless his soul. – Jay Sean (@jaysean) December 1, 2013
Q: why is sean torn over the actor's death?
A: walker was young

Table 1: An example showing challenges of TWEETQA. Note the highly informal nature of the text and the presence of social media specific text like usernames which need to be comprehended to accurately answer the question.

# 1 Introduction

Social media is now becoming an important real-time information source, especially during natural disasters and emergencies. It is now very common for traditional news media to frequently probe users and resort to social media platforms to obtain real-time developments of events. According to a recent survey by Pew Research Center2, in 2017, more than two-thirds of Americans read some of their news on social media. Even for American people who are 50 or older, 55% of them report getting news from social media, which is 10% points higher than the number in 2016. Among all major social media sites, Twitter is most frequently used as a news source, with 74% of its users obtaining their news from Twitter. All these statistical facts suggest that understanding user-generated noisy social media text from Twitter is a significant task.

In recent years, while several tools for core natural language understanding tasks involving syntactic and semantic analysis have been developed for noisy social media text (Gimpel et al., 2011; Ritter et al., 2011; Kong et al., 2014; Wang et al., 2014), there is little work on question answering or reading comprehension over social media, with the primary bottleneck being the lack of available datasets. We observe that recently proposed QA datasets usually focus on formal domains, e.g. CNN/DAILYMAIL (Hermann et al., 2015) and NewsQA (Trischler et al., 2016) on news articles; SQuAD (Rajpurkar et al., 2016) and WIKIMOVIES (Miller et al., 2016) that use Wikipedia.

1The Dataset can be found at https://tweetqa.
github.io/. 2http://www.journalism.org/2017/09/07/news-use- across-social-media-platforms-2017/ In this paper, we propose the first large-scale dataset for QA over social media data. Rather than naively obtaining tweets from Twitter using the Twitter API3 which can yield irrelevant tweets with no valuable information, we restrict ourselves only to tweets which have been used by journalists in news articles thus implicitly implying that such tweets contain useful and relevant information. To obtain such relevant tweets, we crawled thousands of news articles that include tweet quotations and then employed crowd-sourcing to elicit questions and answers based on these event-aligned tweets. Table 1 gives an example from our TWEETQA dataset. It shows that QA over tweets raises challenges not only because of the informal na- ture of oral-style texts (e.g. inferring the answer from multiple short sentences, like the phrase “so young” that forms an independent sentence in the example), but also from tweet-specific expressions (such as inferring that it is “Jay Sean” feeling sad about Paul’s death because he posted the tweet). Furthermore, we show the distinctive nature of TWEETQA by comparing the collected data with traditional QA datasets collected primarily from formal domains. In particular, we demonstrate empirically that three strong neural models which achieve good performance on formal data do not generalize well to social media data, bringing out challenges to developing QA systems that work well on social media domains. In summary, our contributions are: the first question answering dataset, TWEETQA, that focuses on social media context; • We conduct extensive analysis of questions and answer tuples derived from social media text and distinguish it from standard question answering datasets constructed from formal- text domains; • Finally, we show the challenges of question answering on social media text by quanti- fying the performance gap between human readers and recently proposed neural models, and also provide insights on the difficulties by analyzing the decomposed performance over different question types. # 2 Related Work Tweet NLP Traditional core NLP research typi- cally focuses on English newswire datasets such as the Penn Treebank (Marcus et al., 1993). In recent 3https://developer.twitter.com/ years, with the increasing usage of social media platforms, several NLP techniques and datasets for processing social media text have been proposed. For example, Gimpel et al. (2011) build a Twitter part-of-speech tagger based on 1,827 manually an- notated tweets. Ritter et al. (2011) annotated 800 tweets, and performed an empirical study for part- of-speech tagging and chunking on a new Twitter dataset. They also investigated the task of Twit- ter Named Entity Recognition, utilizing a dataset of 2,400 annotated tweets. Kong et al. (2014) an- notated 929 tweets, and built the first dependency parser for tweets, whereas Wang et al. (2014) built the Chinese counterpart based on 1,000 annotated Weibo posts. To the best of our knowledge, ques- tion answering and reading comprehension over short and noisy social media data are rarely stud- ied in NLP, and our annotated dataset is also an order of magnitude large than the above public social-media datasets. Reading Comprehension Machine reading comprehension (RC) aims to answer questions by comprehending evidence from passages. 
This direction has recently drawn much attention due to the fast development of deep learning techniques and large-scale datasets. The early development of the RC datasets focuses on either the cloze-style (Hermann et al., 2015; Hill et al., 2015) or quiz-style problems (Richardson et al., 2013; Lai et al., 2017). The former one aims to generate single-token answers from automatically constructed pseudo-questions while the latter requires choosing from multiple answer candi- dates. However, such unnatural settings make them fail to serve as the standard QA bench- marks. Instead, researchers started to ask human annotators to create questions and answers given passages in a crowdsourced way. Such efforts give the rise of large-scale human-annotated RC datasets, many of which are quite popular in the community such as SQuAD (Rajpurkar et al., 2016), MS MARCO (Nguyen et al., 2016), NewsQA (Trischler et al., 2016). More recently, researchers propose even challenging datasets that require QA within dialogue or conversational context (Reddy et al., 2018; Choi et al., 2018). According to the difference of the answer format, these datasets can be further divided to two major categories: extractive and abstractive. In the first category, the answers are in text spans of the given passages, while in the latter case, the answers may not appear in the passages. It is worth mentioning that in almost all previously developed datasets, the passages are from Wikipedia, news articles or fiction stories, which are considered as the formal language. Yet, there is little effort on RC over informal one like tweets. # 3 TweetQA In this section, we first describe the three-step data collection process of TWEETQA: tweet crawling, question-answer writing and answer validation. Next, we define the specific task of TWEETQA and discuss several evaluation metrics. To better understand the characteristics of the TWEETQA task, we also include our analysis on the answer and question characteristics using a subset of QA pairs from the development set. # 3.1 Data Collection Tweet Crawling One major challenge of build- ing a QA dataset on tweets is the sparsity of in- formative tweets. Many users write tweets to ex- press their feelings or emotions about their per- sonal lives. These tweets are generally uninforma- tive and also very difficult to ask questions about. Given the linguistic variance of tweets, it is gener- ally hard to directly distinguish those tweets from informative ones. In terms of this, rather than starting from Twitter API Search, we look into the archived snapshots4 of two major news websites (CNN, NBC), and then extract the tweet blocks that are embedded in the news articles. In order to get enough data, we first extract the URLs of all section pages (e.g. World, Politics, Money, Tech) from the snapshot of each home page and then crawl all articles with tweets from these section pages. Note that another possible way to collect informative tweets is to download the tweets that are posted by the official Twitter accounts of news media. However, these tweets are often just the summaries of news articles, which are written in formal text. As our focus is to develop a dataset for QA on informal social media text, we do not consider this approach. After we extracted tweets from archived news articles, we observed that there is still a portion of tweets that have very simple semantic struc- tures and thus are very difficult to raise meaningful questions. 
An example of such tweets can be like: 4https://archive.org/ Example: FC Bayern English © y @FCBayemEN #F Bayer celebrate their 2-0 win in London with the travelling #FCBayern fans. #AFCFCB 1:49 PM- Feb 19, 2014 © 120 © 304 people are taking about this Good Question: Who is celebrating the win? Good Answer: FCBayern Bad Answer: A soccer team (require background knowledge) Bad Question: What happened? (too short, too general) Are FCBayern celebrating? (yes-no questions are not allowed) Figure 1: An example we use to guide the crowdwork- ers when eliciting question answer pairs. We elicit question that are neither too specific nor too general, do not require background knowledge. “Wanted to share this today - @IAmSteveHar- vey”. This tweet is actually talking about an im- age attached to this tweet. Some other tweets with simple text structures may talk about an inserted link or even videos. To filter out these tweets that heavily rely on attached media to convey informa- tion, we utilize a state-of-the-art semantic role la- beling model trained on CoNLL-2005 (He et al., 2017) to analyze the predicate-argument structure of the tweets collected from news articles and keep only the tweets with more than two labeled argu- ments. This filtering process also automatically filters out most of the short tweets. For the tweets collected from CNN, 22.8% of them were filtered via semantic role labeling. For tweets from NBC, 24.1% of the tweets were filtered. Question-Answer Writing We then use Ama- zon Mechanical Turk to collect question-answer pairs for the filtered tweets. For each Human In- telligence Task (HIT), we ask the worker to read three tweets and write two question-answer pairs for each tweet. To ensure the quality, we require the workers to be located in major English speak- ing countries (i.e. Canada, US, and UK) and have an acceptance rate larger than 95%. Since we use tweets as context, lots of important information are contained in hashtags or even emojis. Instead of only showing the text to the workers, we use javascript to directly embed the whole tweet into each HIT. This gives workers the same experience as reading tweets via web browsers and help them to better compose questions. To avoid trivial questions that can be simply answered by superficial text matching methods or too challenging questions that require back- ground knowledge. We explicitly state the follow- ing items in the HIT instructions for question writ- ing: • No Yes-no questions should be asked. • The question should have at least five words. • Videos, images or inserted links should not be considered. • No background knowledge should be re- quired to answer the question. To help the workers better follow the instructions, we also include a representative example showing both good and bad questions or answers in our in- structions. Figure 1 shows the example we use to guide the workers. As for the answers, since the context we con- sider is relatively shorter than the context of previ- ous datasets, we do not restrict the answers to be in the tweet, otherwise, the task may potentially be simplified as a classification problem. The work- ers are allowed to write their answers in their own words. We just require the answers to be brief and can be directly inferred from the tweets. After we retrieve the QA pairs from all HITs, we conduct further post-filtering to filter out the pairs from workers that obviously do not follow instructions. We remove QA pairs with yes/no answers. 
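As an illustration only (this is not the released collection script, and the field names are assumptions), a post-filter implementing these rules could look like the sketch below.

```python
def keep_qa_pair(question: str, answer: str) -> bool:
    """Post-filtering rules described in the text: drop yes/no answers and
    enforce the five-word minimum question length required in the HITs."""
    if answer.strip().lower().rstrip(".!") in {"yes", "no"}:
        return False
    if len(question.split()) < 5:
        return False
    return True

# filtered = [qa for qa in qa_pairs if keep_qa_pair(qa["question"], qa["answer"])]
```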
Questions with fewer than five words are also filtered out. This process filtered 13% of the QA pairs. The dataset now includes 10,898 articles, 17,794 tweets, and 13,757 crowdsourced question-answer pairs. The collected QA pairs will be directly available to the public, and we will provide a script to download the original tweets and detailed documentation on how we build our dataset. Also note that since we keep the original news article and news titles for each tweet, our dataset can also be used to explore more challenging generation tasks. Table 2 shows the statistics of our current collection, and the frequency of different types of questions is shown in Table 3. All QA pairs were written by 492 individual workers.

Dataset Statistics
# of Training triples                10,692
# of Development triples              1,086
# of Test triples                     1,979
Average question length (#words)       6.95
Average answer length (#words)         2.45

Table 2: Basic statistics of TWEETQA

Question Type    Percentage
What             42.33%
Who              29.36%
How               7.79%
Where             7.00%
Why               2.61%
Which             2.43%
When              2.16%
Others            6.32%

Table 3: Question Type statistics of TWEETQA

Answer Validation  For the purposes of human performance evaluation and inter-annotator agreement checking, we launch a different set of HITs to ask workers to answer questions in the test and development set. The workers are shown the tweet blocks as well as the questions collected in the previous step. At this step, workers are allowed to label the questions as "NA" if they think the questions are not answerable. We find that 3.1% of the questions are labeled as unanswerable by the workers (for SQuAD, the ratio is 2.6%). Since the answers collected at this step and the previous step are written by different workers, the answers can be written in different text forms even when they are semantically equal to each other. For example, one answer can be "Hillary Clinton" while the other is "@HillaryClinton". As it is not straightforward to automatically calculate the overall agreement, we manually check the agreement on a subset of 200 random samples from the development set and ask an independent human moderator to verify the result. It turns out that 90% of the answer pairs are semantically equivalent, 2% of them are partially equivalent (one of them is incomplete) and 8% are totally inconsistent. The answers collected at this step are also used to measure the human performance. In total, 59 individual workers participated in this process.

# 3.2 Task and Evaluation

As described in the question-answer writing process, the answers in our dataset are different from those in some existing extractive datasets. Thus we consider the task of answer generation for TWEETQA and we use several standard metrics for natural language generation to evaluate QA systems on our dataset, namely we consider BLEU-1 (Papineni et al., 2002), Meteor (Denkowski and Lavie, 2011) and Rouge-L (Lin, 2004) in this paper.

To evaluate machine systems, we compute the scores using both the original answer and validation answer as references. For human performance, we use the validation answers as generated ones and the original answers as references to calculate the scores.

# 3.3 Analysis

In this section, we analyze our dataset and outline the key properties that distinguish it from standard QA datasets like SQuAD (Rajpurkar et al., 2016).
First, our dataset is derived from social media text which can be quite informal and user-centric as opposed to SQuAD which is derived from Wikipedia and hence more for- the shared mal vocabulary between SQuAD and TWEETQA is only 10.79%, suggesting a significant difference in their lexical content. Figure 2 shows the 1000 most distinctive words in each domain as extracted from SQuAD and TWEETQA. Note the stark differences in the words seen in the TWEETQA dataset, which include a large num- ber of user accounts with a heavy tail. Examples include @realdonaldtrump, @jdsutter, @justinkirkland and #cnnworldcup, #goldenglobes. the SQuAD dataset rarely has usernames or hashtags that are used to signify events or refer to the authors. It is also worth noting that the data collected from social media can not only capture events and de- velopments in real-time but also capture individ- ual opinions and thus requires reasoning related to the authorship of the content as is illustrated in Ta- ble 1. In addition, while SQuAD requires all an- swers to be spans from the given passage, we do not enforce any such restriction and answers can be free-form text. In fact, we observed that 43% of our QA pairs consists of answers which do not have an exact substring matching with their corre- sponding passages. All of the above distinguish- ing factors have implications to existing models 5The answer phrases in our dataset are relatively short so we do not consider other BLEU scores in our experiments which we analyze in upcoming sections. We conduct analysis on a subset of TWEETQA to get a better understanding of the kind of reason- ing skills that are required to answer these ques- tions. We sample 150 questions from the develop- ment set, then manually label their reasoning cat- egories. Table 4 shows the analysis results. We use some of the categories in SQuAD (Rajpurkar et al., 2016) and also proposes some tweet-specific reasoning types. Our first observation is that almost half of the questions only require the ability to identify para- phrases. Although most of the “paraphrasing only” questions are considered as fairly easy ques- tions, we find that a significant amount (about 3/4) of these questions are asked about event-related topics, such as information about “who did what to whom, when and where”. This is actually con- sistent with our motivation to create TWEETQA, as we expect this dataset could be used to de- velop systems that automatically collect informa- tion about real-time events. Apart from these questions, there are also a group of questions that require understanding common sense, deep semantics (i.e. the answers cannot be derived from the literal meanings of the tweets), and relations of sentences6 (including co- reference resolution), which are also appeared in other RC datasets (Rajpurkar et al., 2016). On the other hand, the TWEETQA also has its unique properties. Specifically, a significant amount of questions require certain reasoning skills that are specific to social media data: • Understanding authorship: Since tweets are highly personal, it is critical to understand how questions/tweets related to the authors. Tweets are often oral and informal. QA over tweets requires the understanding of common oral English. Our TWEETQA also requires un- derstanding some tweet-specific English, like conversation-style English. • Understanding of user IDs & hashtags: Tweets often contains user IDs and hashtags, which are single special tokens. 
Understand- ing these special tokens is important to an- swer person- or event-related questions. 6There are more instances of this reasoning type com- pared to formal datasets since tweets are usually short sen- tences. Type Fraction (%) Example Paraphrasing only 47.3 P: Belgium camp is 32 miles from canceled game at US base. Surprised Klinsmann didn’t offer to use his helicopter pilot skills to give a ride. – Grant Wahl (@GrantWahl) Q: what expertise does klinsmann possess? A: helicopter pilot skills Types Beyond Paraphrasing Sentence relations 10.7 P: My heart is hurting. You were an amazing tv daddy! Proud and honored to have worked with one of the best. Love and Prayers #DavidCassidy— Alexa PenaVega (@alexavega) November 22, 2017 Q: who was an amazing tv daddy? A: #davidcassidy Authorship 17.3 P: Oh man just read about Paul Walkers death. So young. Ugggh makes me sick especially when it’s caused by an accident. God bless his soul. – Jay Sean (@jaysean) Q: why is sean torn over the actor’s death? A: walker was young Oral/Tweet English habits 10.7 P: I got two ways to watch the OLYMPICS!! CHEAH!! USA!! Leslie Jones (@Lesdoggg) August 6, 2016 Q: who is being cheered for? A: usa UserIDs & Hashtags 12.0 P: Started researching this novel in 2009. Now it is almost ready for you to read. Excited! #InTheUnlikelyEvent – Judy Blume (@judyblume) Q: what is the name of the novel? A: in the unlikely event. Other commonsense 6.7 P: Don’t have to be Sherlock Holmes to figure out what Russia is up to ... – Lindsey Graham (@LindseyGrahamSC) Q: what literary character is referenced? A: sherlock holmes. Deep semantic 3.3 P: @MayorMark its all fun and games now wait until we are old enough to vote #lastlaugh – Dylan (@DFPFilms1) Q: when does the author suggest a change? A: when he’s of voting age. Ambiguous (Meaningless questions) 5.3 P: The #endangeredriver would be a sexy bastard in this channel if it had water. Quick turns. Narrow. (I’m losing it) – John D. Sutter (@jdsutter) Q: what is this user ”losing” A: he is losing it Table 4: Types of reasoning abilities required by TWEETQA. Underline indicates tweet-specific reasoning types, which are common in TWEETQA but are rarely observed in previous QA datasets. Note that the first type repre- sents questions that only require the ability of paraphrasing, while the rest of the types require some other more salient abilities besides paraphrasing. Overlaps could exist between different reasoning types in the table. For example, the second example requires both the understanding of sentences relations and tweet language habits to answer the question; and the third example requires both the understanding of sentences relations and authorship. cypruscoldiy® “oghtenna dialects rhine units warsaw principle component italiananalysispatent distinct contains constructed migration genetic Consists tribes dialect apolloterrttories gainedhyderabadorthodox egyptian ruled later settlementegypt Feiuenty ‘a anpicatnnaning nteraldell iron igey Pe, 30 scholars tesla relativelyconstitutionaltucson inQtctlprominent re yf eagles distant eH0 sont incomelS| islands W itindows rea LY ci itationinfluenced: inne renia POPU! faujoneregions universities designedprovincecells dotware SOvietroman, \dynastydevelo pedir", courtsplymouthcontemporary souuht Paver ay eioped Gonessteam mune ener, ally drogen per YY fanslation census Oy atudien ilosophy, shipsdevel lopment chop ping ie? 
typicallyoxgends” sogthees primarily se Owe er ‘Conturies classical practices torch ; tcatoniegtiny rms HOTTA eee te on eee ne BET tibetanbuddhism protestant catalansciences invasion &stablis Contrast andatn Huet tuval! neumann "=Ptune, #ennworldcup serén TUbiO js Pel @cnn ay wasnt | fi iat i f acs acer ‘e that's! gonnanl aa she'sqon tivegantos there's won't VUE) Gtayoronift YG arat dsutter 0 cute - @Fealdonaldtrump couldn't they're ong here's jy) he’s a wo ele randpaul yy jp Clenadunham axatyperry ssn : tweeting' j let's wojnarow helin today’s, kelly Figure 2: Visualization of vocabulary differences between SQuAD (left) and TWEETQA (right). Note the presence of a heavy tail of hash-tags and usernames on TWEETQA that are rarely found on SQuAD. The color range from red to gray indicates the frequency (red the highest and gray the lowest). # 4 Experiments To show the challenge of TweetQA for existing approaches, we consider four representative meth- ods as baselines. For data processing, we first re- move the URLs in the tweets and then tokenize the QA pairs and tweets using NLTK.7 This process is consistent for all baselines. posed generative model (Song et al., 2017) that first encodes the context and question into a multi-perspective memory via four different neu- ral matching layers, then decodes the answer using an attention-based model equipped with both copy and coverage mechanisms. The model is trained on our dataset for 15 epochs and we choose the model parameters that achieve the best BLEU-1 score on the development set. # 4.1 Query Matching Baseline We first consider a simple query matching base- line similar to the IR baseline in Kocisk´y et al. (2017). But instead of only considering several genres of spans as potential answers, we try to match the question with all possible spans in the tweet context and choose the span with the highest BLEU-1 score as the final answer, which follows the method and implementation8 of answer span selection for open-domain QA (Wang et al., 2017). We include this baseline to show that TWEETQA is a nontrivial task which cannot be easily solved with superficial text matching. # 4.2 Neural Baselines We then explore three typical neural models that perform well on existing formal-text datasets. One takes a generative perspective and learns to decode the answer conditioned on the question and con- text, while the others learns to extract a text span from the context that best answers the question. BiDAF Unlike aforementioned genera- the Bi-Directional Attention Flow tive model, (BiDAF) (Seo et al., 2016) network learns to directly predict the answer span in the context. BiDAF first utilizes multi-level embedding layers to encode both the question and context, then uses bi-directional attention flow to get a query-aware context representation, which is further modeled by an RNN layer to make the span predictions. Since our TWEETQA does not have labeled answer spans as in SQuAD, we need to use the human-written answers to retrieve the answer- span labels for training. To get the approximate answer spans, we consider the same matching approach as in the query matching baseline. But instead of using questions to do matching, we use the human-written answers to get the spans that achieve the best BLEU-1 scores. Generative QA RNN-based encoder-decoder models (Cho et al., 2014; Bahdanau et al., 2014) have been widely used for natural language gen- eration tasks. 
Here we consider a recently proposed generative model (Song et al., 2017).

7http://www.nltk.org
8https://github.com/shuohangwang/mprc

Fine-Tuning BERT  This is another extractive RC model that benefits from the recent advance in pretrained general language encoders (Peters et al., 2018; Devlin et al., 2018). In our work, we select the BERT model (Devlin et al., 2018) which has achieved the best performance on SQuAD. In our experiments, we use the PyTorch reimplementation9 of the uncased base model. The batch size is set as 12 and we fine-tune the model for 2 epochs with learning rate 3e-5.

9https://github.com/huggingface/pytorch-pretrained-BERT

Models            BLEU-1        METEOR        ROUGE-L
HUMAN             76.4 | 78.2   63.7 | 66.7   70.9 | 73.5
EXTRACT-UB        79.5 | 80.3   68.8 | 69.8   74.3 | 75.6
Query-Matching    30.3 | 29.4   12.0 | 12.1   17.0 | 17.4
Neural Baselines:
BiDAF             48.3 | 48.7   31.6 | 31.4   38.9 | 38.6
Generative        53.4 | 53.7   32.1 | 31.8   39.5 | 39.0
BERT              67.3 | 69.6   56.9 | 58.6   62.6 | 64.1

Table 5: Overall performance of baseline models on the development | test data. EXTRACT-UB refers to our estimation of the upper bound of extractive methods.

# 5 Evaluation

# 5.1 Overall Performance

We test the performance of all baseline systems using the three generative metrics mentioned in Section 3.2. As shown in Table 5, there is a large performance gap between human performance and all baseline methods, including BERT, which has achieved superhuman performance on SQuAD. This confirms that TWEETQA is more challenging than formal-text RC tasks.

We also show the upper bound of the extractive models (denoted as EXTRACT-UB in Table 5). In the upper bound method, the answers are defined as n-grams from the tweets that maximize the BLEU-1/METEOR/ROUGE-L compared to the annotated groundtruth. From the results, we can see that the BERT model still lags behind the upper bound significantly, showing great potential for future research. It is also interesting to see that the HUMAN performance is slightly worse compared to the upper bound. This indicates (1) the difficulty of our problem also exists for human beings and (2) for the answer verification process, the workers tend to also extract texts from tweets as answers.

According to the comparison between the two non-pretraining baselines, our generative baseline yields better results than BiDAF. We believe this is largely due to the abstractive nature of our dataset, since the workers can sometimes write the answers using their own words.

# 5.2 Performance Analysis over Human-Labeled Question Types

To better understand the difficulty of the TWEETQA task for current neural models, we analyze the decomposed model performance on the different kinds of questions that require different types of reasoning (we tested on the subset which has been used for the analysis in Table 4). Table 6 shows the results of the best performed non-pretraining and pretraining approach, i.e., the generative QA baseline and the fine-tuned BERT.

Reasoning Types       METEOR (Generative | BERT)   ROUGE-L (Generative | BERT)
Paraphrasing          37.6 | 73.4                  44.1 | 81.8
Sentence relations    34.0 | 46.1                  42.2 | 51.1
Authorship            38.4 | 55.9                  46.1 | 61.9
Oral/Tweet habits     37.2 | 50.3                  40.7 | 51.0
UserIDs & Hashtags    3.8° | 13.0†                 9.9° | 16.2†
Commonsense           20.1 | 63.5                  33.1 | 67.1
Deep semantics        7.19° | 7.1†                 13.4° | 10.3†
Ambiguous             4.1° | 25.0†                 11.0° | 67.1

Table 6: The Generative model's and BERT's performance on questions that require different types of reasoning. ° and † denote the three most difficult reasoning types for the Generative and the BERT models.
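The query-matching baseline, the approximate answer-span labels used to train BiDAF, and the EXTRACT-UB upper bound all rely on the same primitive: enumerate spans of the tweet and keep the span that scores highest against a reference (the question, the human answer, or the ground truth, respectively). A minimal sketch of that primitive is given below; it is our illustration rather than the released implementation, and it uses NLTK's sentence-level BLEU with unigram weights as the scorer.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def best_matching_span(tweet_tokens, reference_tokens, max_span_len=10):
    """Return the tweet span with the highest BLEU-1 score w.r.t. the reference tokens."""
    smooth = SmoothingFunction().method1
    best_span, best_score = None, float("-inf")
    for i in range(len(tweet_tokens)):
        for j in range(i + 1, min(i + max_span_len, len(tweet_tokens)) + 1):
            span = tweet_tokens[i:j]
            score = sentence_bleu([reference_tokens], span,
                                  weights=(1.0, 0.0, 0.0, 0.0),
                                  smoothing_function=smooth)
            if score > best_score:
                best_span, best_score = span, score
    return best_span, best_score
```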
Our full comparison including the BiDAF per- formance and evaluation on more metrics can be found in Appendix A. Following previous RC re- search, we also include analysis on automatically- labeled question types in Appendix B. As indicated by the results on METEOR and ROUGE-L (also indicated by a third metric, BLEU-1, as shown in Appendix A), both baselines perform worse on questions that require the un- derstanding deep semantics and userID&hashtags. The former kind of questions also appear in other benchmarks and is known to be challenging for many current models. The second kind of ques- tions is tweet-specific and is related to specific properties of social media data. Since both mod- els are designed for formal-text passages and there is no special treatment for understanding user IDs and hashtags, the performance is severely limited on the questions requiring such reasoning abili- ties. We believe that good segmentation, disam- biguation and linking tools developed by the so- cial media community for processing the userIDs and hashtags will significantly help these question types. On non-pretraining model Besides the easy questions requiring mainly paraphrasing skill, we also find that the questions requiring the un- derstanding of authorship and oral/tweet English habits are not very difficult. We think this is due to the reason that, except for these tweet-specific tokens, the rest parts of the questions are rather simple, which may require only simple reasoning skill (e.g. paraphrasing). On pretraining model Although BERT was demonstrated to be a powerful tool for reading comprehension, this is the first time a detailed analysis has been done on its reasoning skills. From the results, the huge improvement of BERT mainly comes from two types. The first is para- phrasing, which is not surprising because that a well pretrained language model is expected to be able to better encode sentences. Thus the derived embedding space could work better for sentence comparison. The second type is commonsense, which is consistent with the good performance of BERT (Devlin et al., 2018) on SWAG (Zellers et al., 2018). We believe that this provides fur- ther evidence about the connection between large- scaled deep neural language model and certain kinds of commonsense. # 6 Conclusion We present the first dataset for QA on social me- dia data by leveraging news media and crowd- sourcing. The proposed dataset informs us of the distinctiveness of social media from formal do- mains in the context of QA. Specifically, we find that QA on social media requires systems to com- prehend social media specific linguistic patterns like informality, hashtags, usernames, and author- ship. These distinguishing linguistic factors bring up important problems for the research of QA that currently focuses on formal text. We see our dataset as a first step towards enabling not only a deeper understanding of natural language in social media but also rich applications that can extract essential real-time knowledge from social media. # References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Neural machine translation by CoRR, Bengio. 2014. jointly learning to align and translate. abs/1409.0473. Kyunghyun Cho, Bart van Merrienboer, C¸ aglar G¨ulc¸ehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP. 
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036. Michael J. Denkowski and Alon Lavie. 2011. Me- teor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In WMT@EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Kevin Gimpel, Nathan Schneider, Brendan T. O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part- of-speech tagging for twitter: Annotation, features, and experiments. In ACL. Luheng He, Kenton Lee, Mike Lewis, and Luke S. Zettlemoyer. 2017. Deep semantic role labeling: What works and what’s next. In ACL. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- In Proc. of Conf. chines to read and comprehend. on Advances in NIPS. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representa- tions. arXiv preprint arXiv:1511.02301. Tom´as Kocisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2017. The narrativeqa reading comprehension challenge. CoRR, abs/1712.07040. Swabha Swayamdipta, Archna Bhatia, Chris Dyer, and Noah A. Smith. 2014. A dependency parser for tweets. In EMNLP. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. Proc. of Conf. on EMNLP. Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computa- tional linguistics, 19(2):313–330. Alexander Miller, Adam Fisch, Jesse Dodge, Amir- Hossein Karimi, Antoine Bordes, and Jason We- ston. 2016. Key-value memory networks for directly reading documents. EMNLP. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine arXiv preprint reading comprehension dataset. arXiv:1611.09268. Kishore Papineni, Salim E. Roucos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proc. of Conf. on EMNLP. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proc. of Conf. on EMNLP. Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the conference on empiri- cal methods in natural language processing, pages 1524–1534. Association for Computational Linguis- tics. 
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional at- tention flow for machine comprehension. CoRR, abs/1611.01603. Linfeng Song, Zhiguo Wang, and Wael Hamza. 2017. A unified query-based generative model for ques- tion generation and question answering. CoRR, abs/1709.01058. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. NewsQA: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2017. R3: Reinforced reader-ranker for open-domain question answering. arXiv preprint arXiv:1709.00023. William Yang Wang, Lingpeng Kong, Kathryn Mazaitis, and William W. Cohen. 2014. Depen- dency parsing for weibo: An efficient probabilistic logic programming approach. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), Doha, Qatar. ACL. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. # A Full results of Performance Analysis over Human-Labeled Question Types Table 7 gives our full evaluation on human anno- tated question types. Compared with the BiDAF model, one interest- ing observation is that the generative baseline gets much worse results on ambiguous questions. We conjecture that although these questions are mean- ingless, they still have many words that overlapped with the contexts. This can give BiDAF potential advantage over the generative baseline. # B Performance Analysis over Automatically-Labeled Question Types Besides the analysis on different reasoning types, we also look into the performance over questions with different first tokens in the development set, which provide us an automatic categorization of questions. According to the results in Table 8, the three neural baselines all perform the best on “Who” and “Where” questions, to which the an- swers are often named entities. Since the tweet contexts are short, there are only a small num- ber of named entities to choose from, which could make the answer pattern easy to learn. On the other hand, the neural models fail to perform well on the “Why” questions, and the results of neural baselines are even worse than that of the match- ing baseline. We find that these questions gener- ally have longer answer phrases than other types of questions, with the average answer length being 3.74 compared to 2.13 for any other types. Also, since all the answers are written by humans in- stead of just spans from the context, these abstrac- tive answers can make it even harder for current models to handle. We also observe that when peo- ple write “Why” questions, they tend to copy word spans from the tweet, potentially making the task easier for the matching baseline. 
BLEU-1 METEOR ROUGE-L Reasoning Types BiDAF|Generative|BERT Paraphrasing 49.1|56.8|81.7 35.4|37.6|73.4 44.5|44.181.8 Sentence relations 43.3|53.4|50.0 26.8|34.0|46.1 32.8/42.2|51.1 Authorship 52.5|65.4|63.0 30.5|38.4|55.9 42.3|46.1|61.9 Oral/Tweet habits 45.8|60.8|60.4 34.8|37.2|50.3 35.1|40.7|51.0' UserIDs&Hashtags | 30.0*|41.5°|29.3' | 8.30*|3.81°|13.0' | 13.7*|9.88°|16.2" Commonsense 27.6*|38.1°|72.9 22.4*|20.1|63.5 31.0*|33.1|67.1 Deep semantics 34.8*|53.8/25.0' | 7.85*|7.19°|7.1' | 17.5*|13.4°|10.3¢ Ambiguous 35.1]18.1°|31.6' | 29.2/4.11°|25.07 | 34.3]11.0°|67.1 Table 7: BiDAF’s and the Generative model’s performance on questions that require different types of reasoning. *, ° and t denote the three most difficult reasoning types for BIDAF/Generative/BERT models. Models What Who How Where When Why Which Others HUMAN 74.1 83.5 61.1 74.8 72.2 66.0 76.8 76.0 Query-Matching 32.4 29.8 28.4 27.1 22.9 51.9 22.7 21.1 BiDAF Generative BERT 44.5 46.8 64.8 54.9 63.8 72.5 Neural Baselines 41.0 53.4 57.7 60.2 61.7 78.1 46.5 45.4 64.5 36.1 44.3 61.0 44.7 51.4 67.2 41.6 43.1 59.2 Table 8: BLEU-1 scores on different types of questions. Calculated on the development set.
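The automatic categorization behind Table 8 can be reproduced with a few lines: group development-set questions by their first token and average a per-example score within each group. The sketch below is illustrative only; the dictionary keys and the score function are assumptions.

```python
from collections import defaultdict

QUESTION_TYPES = {"what", "who", "how", "where", "when", "why", "which"}

def breakdown_by_first_token(examples, score_fn):
    """Average a per-example metric (e.g. BLEU-1) over questions grouped by first token."""
    buckets = defaultdict(list)
    for ex in examples:
        tokens = ex["question"].split()
        first = tokens[0].lower() if tokens else ""
        q_type = first if first in QUESTION_TYPES else "others"
        buckets[q_type].append(score_fn(ex))
    return {q_type: sum(scores) / len(scores) for q_type, scores in buckets.items()}
```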
{ "id": "1511.02301" }
1907.05686
And the Bit Goes Down: Revisiting the Quantization of Neural Networks
In this paper, we address the problem of reducing the memory footprint of convolutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights. The principle of our approach is that it minimizes the loss reconstruction error for in-domain inputs. Our method only requires a set of unlabelled data at quantization time and allows for efficient inference on CPU by using byte-aligned codebooks to store the compressed weights. We validate our approach by quantizing a high performing ResNet-50 model to a memory size of 5MB (20x compression factor) while preserving a top-1 accuracy of 76.1% on ImageNet object classification and by compressing a Mask R-CNN with a 26x factor.
http://arxiv.org/pdf/1907.05686
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, Hervé Jégou
cs.CV
ICLR 2020 camera-ready
null
cs.CV
20190712
20201109
0 2 0 2 v o N 9 ] V C . s c [ 5 v 6 8 6 5 0 . 7 0 9 1 : v i X r a Published as a conference paper at ICLR 2020 # AND THE BIT GOES DOWN: REVISITING THE QUAN- TIZATION OF NEURAL NETWORKS Pierre Stock1,2, Armand Joulin1, R´emi Gribonval2, Benjamin Graham1, Herv´e J´egou1 1Facebook AI Research, 2Univ Rennes, Inria, CNRS, IRISA # ABSTRACT In this paper, we address the problem of reducing the memory footprint of con- volutional network architectures. We introduce a vector quantization method that aims at preserving the quality of the reconstruction of the network outputs rather than its weights. The principle of our approach is that it minimizes the loss recon- struction error for in-domain inputs. Our method only requires a set of unlabelled data at quantization time and allows for efficient inference on CPU by using byte- aligned codebooks to store the compressed weights. We validate our approach by quantizing a high performing ResNet-50 model to a memory size of 5 MB (20× compression factor) while preserving a top-1 accuracy of 76.1% on ImageNet ob- ject classification and by compressing a Mask R-CNN with a 26× factor.1 # INTRODUCTION There is a growing need for compressing the best convolutional networks (or ConvNets) to sup- port embedded devices for applications like robotics and virtual/augmented reality. Indeed, the performance of ConvNets on image classification has steadily improved since the introduction of AlexNet (Krizhevsky et al., 2012). This progress has been fueled by deeper and richer ar- chitectures such as the ResNets (He et al., 2015) and their variants ResNeXts (Xie et al., 2017) or DenseNets (Huang et al., 2017). Those models particularly benefit from the recent progress made with weak supervision (Mahajan et al., 2018; Yalniz et al., 2019; Berthelot et al., 2019). Compres- sion of ConvNets has been an active research topic in the recent years, leading to networks with a 71% top-1 accuracy on ImageNet object classification that fit in 1 MB (Wang et al., 2018b). In this work, we propose a compression method particularly adapted to ResNet-like architectures. Our approach takes advantage of the high correlation in the convolutions by the use of a structured quantization algorithm, Product Quantization (PQ) (J´egou et al., 2011). More precisely, we exploit the spatial redundancy of information inherent to standard convolution filters (Denton et al., 2014). Besides reducing the memory footprint, we also produce compressed networks allowing efficient inference on CPU by using byte-aligned indexes, as opposed to entropy decoders (Han et al., 2016). Our approach departs from traditional scalar quantizers (Han et al., 2016) and vector quantiz- ers (Gong et al., 2014; Carreira-Perpi˜n´an & Idelbayev, 2017) by focusing on the accuracy of the activations rather than the weights. This is achieved by leveraging a weighted k-means technique. To our knowledge this strategy (see Section 3) is novel in this context. The closest work we are aware of is the one by Choi et al. (2016), but the authors use a different objective (their weighted term is derived from second-order information) along with a different quantization technique (scalar quantization). Our method targets a better in-domain reconstruction, as depicted by Figure 1. Finally, we compress the network sequentially to account for the dependency of our method to the activations at each layer. 
To prevent the accumulation of errors across layers, we guide this compression with the activations of the uncompressed network on unlabelled data: training by dis- tillation (Hinton et al., 2014) allows for both an efficient layer-by-layer compression procedure and a global fine-tuning of the codewords. Thus, we only need a set of unlabelled images to adjust the codewords. As opposed to recent works by Mishra & Marr (2017), Lopes et al. (2017), our distilla- tion scheme is sequential and the underlying compression method is different. Similarly, Wu et al. (2016) use use Vector Quantization (VQ) instead PQ and do not finetune the learned codewords. Contrary to our approach, they do not compress the classifier’s weights and simply finetune them. # 1Code and compressed models: https://github.com/facebookresearch/kill-the-bits. 1 Published as a conference paper at ICLR 2020 in-domain standard Pactivations out-of-domain Figure 1: Illustration of our method. We approximate a binary classifier y that labels images as dogs or cats by quantizing its weights. Standard method: quantizing y with the standard objective function promotes a classifier Aandara that tries to approximate ¢y over the entire input space and can thus perform badly for in-domain inputs. Our method: quantizing y with our objective function promotes a classifier Pactivations that performs well for in-domain inputs. Images lying in the hatched area of the input space are correctly classified by activations but incorrectly by Ystandard- We show that applying our approach to the semi-supervised ResNet-50 of Yalniz et al. (Yalniz et al., 2019) leads to a 5 MB memory footprint and a 76.1% top-1 accuracy on ImageNet object classification (hence 20× compression vs. the original model). Moreover, our approach generalizes to other tasks such as image detection. As shown in Section 4.3, we compress a Mask R-CNN (He et al., 2017) with a size budget around 6 MB (26× compression factor) while maintaining a competitive performance. # 2 RELATED WORK There is a large body of literature on network compression. We review the works closest to ours and refer the reader to two recent surveys (Guo, 2018; Cheng et al., 2017) for a comprehensive overview. Low-precision training. Since early works like those of Courbariaux et al. (2015), researchers have developed various approaches to train networks with low precision weights. Those approaches include training with binary or ternary weights (Shayer et al., 2017; Zhu et al., 2016; Li & Liu, 2016; Rastegari et al., 2016; McDonnell, 2018), learning a combination of binary bases (Lin et al., 2017) and quantizing the activations (Zhou et al., 2016; 2017; Mishra et al., 2017). Some of these methods assume the possibility to employ specialized hardware that speed up inference and improve power efficiency by replacing most arithmetic operations with bit-wise operations. However, the back-propagation has to be adapted to the case where the weights are discrete. Quantization. Vector Quantization (VQ) and Product Quantization (PQ) have been extensively studied in the context of nearest-neighbor search (Jegou et al., 2011; Ge et al., 2014; Norouzi & Fleet, 2013). The idea is to decompose the original high-dimensional space into a cartesian product of subspaces that are quantized separately with a joint codebook. To our knowledge, Gong et al. (2014) were the first to introduce these stronger quantizers for neural network quantization, followed by Carreira-Perpi˜n´an & Idelbayev (2017). 
As we will see in the remainder of this paper, employing this discretization off-the-shelf does not optimize the right objective function, and leads to a catastrophic drift of performance for deep networks. Pruning. Network pruning amounts to removing connections according to an importance criteria (typically the magnitude of the weight associated with this connection) until the desired model size/accuracy tradeoff is reached (LeCun et al., 1990). A natural extension of this work is to prune structural components of the network, for instance by enforcing channel-level (Liu et al., 2017) or filter-level (Luo et al., 2017) sparsity. However, these methods alternate between pruning and re-training steps and thus typically require a long training time. 2 Published as a conference paper at ICLR 2020 Dedicated architectures. Architectures such as SqueezeNet (Iandola et al., 2016), NASNet (Zoph et al., 2017), ShuffleNet (Zhang et al., 2017; Ma et al., 2018), MobileNets (Sandler et al., 2018) and EfficientNets (Tan & Le, 2019) are designed to be memory efficient. As they typically rely on a combination of depth-wise and point-wise convolutional filters, sometimes along with channel shuffling, they are less prone than ResNets to structured quantization techniques such as PQ. These architectures are either designed by hand or using the framework of architecture search (Howard et al., 2019). For instance, the respective model size and test top-1 accuracy of ImageNet of a MobileNet are 13.4 MB for 71.9%, to be compared with a vanilla ResNet-50 with size 97.5 MB for a top-1 of 76.2%. Moreover, larger models such as ResNets can benefit from large-scale weakly- or semi-supervised learning to reach better performance (Mahajan et al., 2018; Yalniz et al., 2019). Combining some of the mentioned approaches yields high compression factors as demonstrated by Han et al. with Deep Compression (DC) (Han et al., 2016) or more recently by Tung & Mori (Tung & Mori, 2018). Moreover and from a practical point of view, the process of compressing networks depends on the type of hardware on which the networks will run. Recent work directly quantizes to optimize energy-efficiency and latency time on a specific hardware (Wang et al., 2018a). Finally, the memory overhead of storing the full activations is negligible compared to the storage of the weights for two reasons. First, in realistic real-time inference setups, the batch size is almost always equal to one. Second, a forward pass only requires to store the activations of the current layer –which are often smaller than the size of the input– and not the whole activations of the network. # 3 OUR APPROACH In this section, we describe our strategy for network compression and we show how to extend our ap- proach to quantize a modern ConvNet architecture. The specificity of our approach is that it aims at a small reconstruction error for the outputs of the layer rather than the layer weights themselves. We first describe how we quantize a single fully connected and convolutional layer. Then we describe how we quantize a full pre-trained network and finetune it. # 3.1 QUANTIZATION OF A FULLY-CONNECTED LAYER We consider a fully-connected layer with weights W ∈ RCin×Cout and, without loss of generality, we omit the bias since it does not impact reconstruction error. Product Quantization (PQ). Applying the PQ algorithm to the columns of W consists in evenly splitting each column into m contiguous subvectors and learning a codebook on the resulting mCout subvectors. 
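Before the formal description that follows, here is a minimal NumPy sketch of this plain PQ step on a fully-connected weight matrix: each column is cut into m contiguous subvectors of dimension d = Cin/m and a single codebook is fit with ordinary k-means on the pooled subvectors. This corresponds to the weight-only objective of Equation (1), not the activation-weighted variant introduced below, and it is our own illustration rather than the authors' released implementation.

```python
import numpy as np

def split_into_subvectors(W, m):
    """Split each column of W (C_in x C_out) into m contiguous subvectors of size d = C_in / m.
    Returns an array of shape (m * C_out, d): the pool on which the codebook is learned."""
    c_in, c_out = W.shape
    assert c_in % m == 0, "C_in must be a multiple of m"
    d = c_in // m
    # Row-major reshape of W.T turns column j of W into its m chunks W[0:d, j], W[d:2d, j], ...
    return W.T.reshape(c_out * m, d)

def kmeans(X, k, n_iter=25, seed=0):
    """Plain (unweighted) k-means on the pooled subvectors: learns the PQ codebook."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each subvector to its nearest codeword (exhaustive search by broadcasting).
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return centers, assign

def quantize_weights(W, m, k):
    """Return the codebook, per-subvector assignments, and the reconstruction of W from codewords."""
    subvectors = split_into_subvectors(W, m)
    codebook, assign = kmeans(subvectors, k)
    W_hat = codebook[assign].reshape(W.shape[1], W.shape[0]).T
    return codebook, assign, W_hat

if __name__ == "__main__":
    W = np.random.randn(512, 128).astype(np.float32)   # C_in x C_out
    codebook, assign, W_hat = quantize_weights(W, m=4, k=256)
    print("relative reconstruction error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

Only the assignment indices and the k codewords need to be stored, which is where the memory savings come from.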
Then, a column of W is quantized by mapping each of its subvectors to its nearest codeword in the codebook. For simplicity, we assume that Cin is a multiple of m, i.e., that all the subvectors have the same dimension d = Cin/m. More formally, the codebook C = {c1, . . . , ck} contains k codewords of dimension d. Any column wj of W is mapped to its quantized version q(wj) = (ci1, . . . , cim), where i1 denotes the index of the codeword assigned to the first subvector of wj, and so forth. The codebook is then learned by minimizing the following objective function:

‖W − Ŵ‖²₂ = Σj ‖wj − q(wj)‖²₂,   (1)

where Ŵ denotes the quantized weights. This objective can be efficiently minimized with k-means. When m is set to 1, PQ is equivalent to vector quantization (VQ), and when m is equal to Cin, it is the scalar k-means algorithm. The main benefit of PQ is its expressivity: each column wj is mapped to a vector in the product C̃ = C × · · · × C, thus PQ generates an implicit codebook of size kᵐ.

Our algorithm. PQ quantizes the weight matrix of the fully-connected layer. However, in practice, we are interested in preserving the output of the layer, not its weights. This is illustrated in the case of a non-linear classifier in Figure 1: preserving the weights of a layer does not necessarily guarantee preserving its output. In other words, the Frobenius approximation of the weights of a layer is not guaranteed to be the best approximation of the output over some arbitrary domain (in particular for in-domain inputs). We thus propose an alternative to PQ that directly minimizes the reconstruction error on the output activations obtained by applying the layer to in-domain inputs. More precisely, given a batch of B input activations x ∈ R^(B×Cin), we are interested in learning a codebook C that minimizes the difference between the output activations and their reconstructions:

‖y − ŷ‖²₂ = Σj ‖x(wj − q(wj))‖²₂,   (2)

where y = xW is the output and ŷ = xŴ its reconstruction. Our objective is a re-weighting of the objective in Equation (1). We can thus learn our codebook with a weighted k-means algorithm. First, we unroll x of size B × Cin into x̃ of size (B × m) × d, i.e., we split each row of x into m subvectors of size d and stack these subvectors. Next, we adapt the EM algorithm as follows.

(1) E-step (cluster assignment). Recall that every column wj is divided into m subvectors of dimension d. Each subvector v is assigned to the codeword cj such that

cj = argmin_{c ∈ C} ‖x̃(c − v)‖²₂.   (3)

This step is performed by exhaustive exploration. Our implementation relies on broadcasting to be computationally efficient.

(2) M-step (codeword update). Let us consider a codeword c ∈ C. We denote (vp)_{p ∈ Ic} the subvectors that are currently assigned to c. Then, we update c ← c*, where

c* = argmin_{c ∈ R^d} Σ_{p ∈ Ic} ‖x̃(c − vp)‖²₂.   (4)

This step explicitly computes the solution of the least-squares problem.² Our implementation performs the computation of the pseudo-inverse of x̃ before alternating between the Expectation and Minimization steps, as it does not depend on the learned codebook C.

We initialize the codebook C by uniformly sampling k vectors among those we wish to quantize. After performing the E-step, some clusters may be empty. To resolve this issue, we iteratively perform the following additional steps for each empty cluster of index j. 
(1) Find codeword co corresponding to the most populated cluster ; (2) define new codewords cj = co +e and ci, = cy—e, where e ~ N’(0, eI) and (3) perform again the E-step. We proceed to the M-step after all the empty clusters are resolved. We set ¢ = le—8 and we observe that its generally takes less than 1 or 2 E-M iterations to resolve all the empty clusters. Note that the quality of the resulting compression is sensitive to the choice of x. 3.2 CONVOLUTIONAL LAYERS Despite being presented in the case of a fully-connected layer, our approach works on any set of vectors. As a consequence, our apporoach can be applied to a convolutional layer if we split the associated 4D weight matrix into a set of vectors. There are many ways to split a 4D matrix in a set of vectors and we are aiming for one that maximizes the correlation between the vectors since vector quantization based methods work the best when the vectors are highly correlated. Given a convolutional layer, we have Cout filters of size K × K × Cin, leading to an overall 4D weight matrix W ∈ RCout×Cin×K×K. The dimensions along the output and input coordinate have no particular reason to be correlated. On the other hand, the spatial dimensions related to the filter size are by nature very correlated: nearby patches or pixels likely share information. As depicted in Figure 2, we thus reshape the weight matrix in a way that lead to spatially coherent quantization. More precisely, we quantize W spatially into subvectors of size d = K × K using the following procedure. We first reshape W into a 2D matrix of size (Cin × K × K) × Cout. Column j of the reshaped matrix Wr corresponds to the jth filter of W and is divided into Cin subvectors of size K × K. Similarly, we reshape the input activations x accordingly to xr so that reshaping back the matrix xrWr yields the same result as x ∗ W. In other words, we adopt a dual approach to the one using bi-level Toeplitz matrices to represent the weights. Then, we apply our method exposed in Section 3.1 to quantize each column of Wr into m = Cin subvectors of size d = K × K with k codewords, using xr as input activations in (2). As a natural extension, we also quantize with larger subvectors, for example subvectors of size d = 2 × K × K, see Section 4 for details. 2 emp — . pe ning* — L Denoting x™ the Moore-Penrose pseudoinverse of x, we obtain c* = # L Ste aX # (Sper ve) # p∈Ic 4 a conference paper at ICLR 2020 Filters Reshaped filters Codebook Cout : L} 7" | Na > ——_— k YY —_ Cin Cout Published as a conference paper at ICLR 2020 Figure 2: We quantize Cout filters of size Cin × K × K using a subvector size of d = K × K. In other words, we spatially quantize the convolutional filters to take advantage of the redundancy of information in the network. Similar colors denote subvectors assigned to the same codewords. In our implementation, we adapt the reshaping of W and x to various types of convolutions. We ac- count for the padding, the stride, the number of groups (for depthwise convolutions and in particular for pointwise convolutions) and the kernel size. We refer the reader to the code for more details. 3.3 NETWORK QUANTIZATION In this section, we describe our approach for quantizing a neural network. We quantize the network sequentially starting from the lowest layer to the highest layer. We guide the compression of the student network by the non-compressed teacher network, as detailled below. Learning the codebook. We recover the current input activations of the layer, i.e. 
the input activa- tions obtained by forwarding a batch of images through the quantized lower layers, and we quantize the current layer using those activations. Experimentally, we observed a drift in both the reconstruc- tion and classification errors when using the activations of the non-compressed network rather than the current activations. Finetuning the codebook. We finetune the codewords by distillation (Hinton et al., 2014) using the non-compressed network as the teacher network and the compressed network (up to the cur- rent layer) as the student network. Denoting yt (resp. ys) the output probabilities of the teacher (resp. student) network, the loss we optimize is the Kullback-Leibler divergence L = KL(ys, yt). Finetuning on codewords is done by averaging the gradients of each subvector assigned to a given codeword. More formally, after the quantization step, we fix the assignments once for all. Then, denoting (bp)p∈Ic the subvectors that are assigned to codeword c, we perform the SGD update with a learning rate η 1 OL cHe-n—) ~: (5) [Tel pele dbp # p∈Ic Experimentally, we find the approach to perform better than finetuning on the target of the images as demonstrated in Table 3. Moreover, this approach does not require any labelled data. 3.4 GLOBAL FINETUNING In a final step, we globally finetune the codebooks of all the layers to reduce any residual drifts and we update the running statistics of the BatchNorm layers: We empirically find it beneficial to finetune all the centroids after the whole network is quantized. The finetuning procedure is exactly the same as described in Section 3.3, except that we additionally switch the BatchNorms to the training mode, meaning that the learnt coefficients are still fixed but that the batch statistics (running mean and variance) are still being updated with the standard moving average procedure. We perform the global finetuning using the standard ImageNet training set for 9 epochs with an initial learning rate of 0.01, a weight decay of 10−4 and a momentum of 0.9. The learning rate is decayed by a factor 10 every 3 epochs. As demonstrated in the ablation study in Table 3, finetuning on the true labels performs worse than finetuning by distillation. A possible explanation is that the supervision signal coming from the teacher network is richer than the one-hot vector used as a traditional learning signal in supervised learning (Hinton et al., 2014). 5 Published as a conference paper at ICLR 2020 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP We quantize vanilla ResNet-18 and ResNet-50 architectures pretrained on the ImageNet dataset (Deng et al., 2009). Unless explicit mention of the contrary, the pretrained models are taken from the PyTorch model zoo3. We run our method on a 16 GB Volta V100 GPU. Quantizing a ResNet- 50 with our method (including all finetuning steps) takes about one day on 1 GPU. We detail our experimental setup below. Our code and the compressed models are open-sourced. Compression regimes. We explore a large block sizes (resp.small block sizes) compression regime by setting the subvector size of regular 3×3 convolutions to d = 9 (resp.d = 18) and the sub- vector size of pointwise convolutions to d = 4 (resp.d = 8). For ResNet-18, the block size of pointwise convolutions is always equal to 4. The number of codewords or centroids is set to k ∈ {256, 512, 1024, 2048} for each compression regime. Note that we clamp the number of cen- troids to min(k, Cout × m/4) for stability. 
For instance, the first layer of the first stage of the ResNet-50 has size 64× 64× 1 ×1, thus we always use k = 128 centroids with a block size d = 8. For a given number of centroids k, small blocks lead to a lower compression ratio than large blocks. Sampling the input activations. Before quantizing each layer, we randomly sample a batch of 1024 training images to obtain the input activations of the current layer and reshape it as described in Section 3.2. Then, before each iteration (E+M step) of our method, we randomly sample 10, 000 rows from those reshaped input activations. Hyperparameters. We quantize each layer while performing 100 steps of our method (sufficient for convergence in practice). We finetune the centroids of each layer on the standard ImageNet training set during 2,500 iterations with a batch size of 128 (resp 64) for the ResNet-18 (resp.ResNet- 50) with a learning rate of 0.01, a weight decay of 10−4 and a momentum of 0.9. For accuracy and memory reasons, the classifier is always quantized with a block size d = 4 and k = 2048 (resp. k = 1024) centroids for the ResNet-18 (resp., ResNet-50). Moreover, the first convolutional layer of size 7 × 7 is not quantized, as it represents less than 0.1% (resp., 0.05%) of the weights of a ResNet-18 (resp.ResNet-50). Metrics. We focus on the tradeoff between accuracy and memory. The accuracy is the top-1 error on the standard validation set of ImageNet. The memory footprint is calculated as the indexing cost (number of bits per weight) plus the overhead of storing the centroids in float16. As an example, quantizing a layer of size 128 × 128 × 3 × 3 with k = 256 centroids (1 byte per subvector) and a block size of d = 9 leads to an indexing cost of 16 kB for m = 16, 384 blocks plus the cost of storing the centroids of 4.5 kB. IMAGE CLASSIFICATION RESULTS We report below the results of our method applied to various ResNet models. First, we compare our method with the state of the art on the standard ResNet-18 and ResNet-50 architecture. Next, we show the potential of our approach on a competitive ResNet-50. Finally, an ablation study validates the pertinence of our method. Vanilla ResNet-18 and ResNet-50. We evaluate our method on the ImageNet benchmark for ResNet-18 and ResNet-50 architectures and compare our results to the following methods: Trained Ternary Quantization (TTQ) (Zhu et al., 2016), LR-Net (Shayer et al., 2017), ABC-Net (Lin et al., 2017), Binary Weight Network (XNOR-Net or BWN) (Rastegari et al., 2016), Deep Compression (DC) (Han et al., 2016) and Hardware-Aware Automated Quantization (HAQ) (Wang et al., 2018a). We report the accuracies and compression factors in the original papers and/or in the two surveys (Guo, 2018; Cheng et al., 2017) for a given architecture when the result is available. We do not compare our method to DoReFa-Net (Zhou et al., 2016) and WRPN (Mishra et al., 2017) as those approaches also use low-precision activations and hence get lower accuracies, e.g., 51.2% top-1 accuracy for a XNOR-Net with ResNet-18. The results are presented in Figure 4.2. For better read- ability, some results for our method are also displayed in Table 1. We report the average accuracy and standard deviation over 3 runs. 
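To make the storage accounting of the Metrics paragraph concrete, the small helper below reproduces the indexing-plus-codebook cost of a quantized layer (one index per subvector, centroids stored in float16). It is our own back-of-the-envelope script, not the paper's evaluation code; the function name and argument names are ours.

```python
import math

def quantized_layer_size_bytes(c_out, c_in, k_h, k_w, block_size, n_centroids):
    """Storage cost of one quantized conv layer: per-subvector indices plus a float16 codebook."""
    n_weights = c_out * c_in * k_h * k_w
    assert n_weights % block_size == 0, "weights must split evenly into blocks"
    n_blocks = n_weights // block_size
    bits_per_index = max(1, math.ceil(math.log2(n_centroids)))
    index_bytes = n_blocks * bits_per_index / 8      # 1 byte per block when k = 256
    codebook_bytes = n_centroids * block_size * 2    # centroids stored in float16 (2 bytes each)
    return index_bytes + codebook_bytes

if __name__ == "__main__":
    # Worked example from the text: a 128 x 128 x 3 x 3 layer, k = 256 centroids, block size d = 9.
    size = quantized_layer_size_bytes(128, 128, 3, 3, block_size=9, n_centroids=256)
    print(f"{size / 1024:.1f} kB")   # roughly 16 kB of indices plus 4.5 kB of codebook
```

Summing this quantity over all quantized layers (plus the non-quantized first convolution) gives the model sizes reported in the figures and tables below.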
³ https://pytorch.org/docs/stable/torchvision/models

(Figure 3 consists of two panels, ResNet-18 on ImageNet and ResNet-50 on ImageNet, plotting top-1 accuracy against the compression factor — original network sizes 44.6 MB and 97.5 MB — for our method with small and large blocks (k = 256 to 2048) and for reference methods including TTQ, LR-Net, ABC-Net, BWN, DC and HAQ.)

Figure 3: Compression results for ResNet-18 and ResNet-50 architectures. We explore two compression regimes as defined in Section 4.1: small block sizes (block sizes of d = 4 and 9) and large block sizes (block sizes d = 8 and 18). The results of our method for k = 256 centroids are of practical interest as they correspond to a byte-compatible compression scheme.

Table 1: Results for vanilla ResNet-18 and ResNet-50 architectures for k = 256 centroids.

Model (original top-1) | Compression  | Size ratio | Model size | Top-1 (%)
ResNet-18 (69.76%)     | Small blocks | 29x        | 1.54 MB    | 65.81 ±0.04
                       | Large blocks | 43x        | 1.03 MB    | 61.10 ±0.03
ResNet-50 (76.15%)     | Small blocks | 19x        | 5.09 MB    | 73.79 ±0.05
                       | Large blocks | 31x        | 3.19 MB    | 68.21 ±0.04

Our method significantly outperforms the state of the art at various operating points. For instance, for a ResNet-18, our method with large blocks and k = 512 centroids reaches a higher accuracy than ABC-Net (M = 2) with a compression ratio that is 2x larger. Similarly, on the ResNet-50, our compressed model with k = 256 centroids in the large-blocks setup yields an accuracy comparable to DC (2 bits) with a compression ratio that is 2x larger. The work by Tung & Mori (2018) is likely the only one that remains competitive with ours, with a 6.8 MB network after compression, using a technique that prunes the network and therefore implicitly changes the architecture. The authors report the delta accuracy, for which we have no directly comparable top-1 accuracy, but their method is arguably complementary to ours.

Semi-supervised ResNet-50. Recent works (Mahajan et al., 2018; Yalniz et al., 2019) have demonstrated the possibility of leveraging a large collection of unlabelled images to improve the performance of a given architecture. In particular, Yalniz et al. (2019) use the publicly available YFCC-100M dataset (Thomee et al., 2015) to train a ResNet-50 that reaches 79.3% top-1 accuracy on the standard validation set of ImageNet. In the following, we use this particular model and refer to it as the semi-supervised ResNet-50. In the low compression regime (block sizes of 4 and 9), with k = 256 centroids (practical for implementation), our compressed semi-supervised ResNet-50 reaches 76.12% top-1 accuracy. In other words, the model compressed to 5.20 MB attains the performance of a vanilla, non-compressed ResNet-50 (vs. 97.5 MB for the non-compressed ResNet-50).

Comparison for a given size budget. To ensure a fair comparison, we compare our method for a given model size budget against the reference methods in Table 2. It should be noted that our method can further benefit from advances in semi-supervised learning to boost the performance of the non-compressed, and hence of the compressed, network.

Ablation study. 
We perform an ablation study on the vanilla ResNet-18 to study the respective effects of quantizing using the activations and of finetuning by distillation (here, finetuning refers both to the per-layer finetuning and to the global finetuning after quantization described in Section 3). We refer to our method as Act + Distill. First, we still finetune by distillation but change the quantization: instead of quantizing using our method (see Equation (2)), we quantize using the standard PQ algorithm and do not take the activations into account, see Equation (1). We refer to this method as No act + Distill. Second, we quantize using our method but perform a standard finetuning using the image labels (Act + Labels). The results are displayed in Table 3. Our approach consistently yields significantly better results. As a side note, quantizing all the layers of a ResNet-18 with the standard PQ algorithm and without any finetuning leads to top-1 accuracies below 25% for all operating points, which illustrates the drift in accuracy that occurs when compressing deep networks with standard methods (as opposed to our method).

Table 2: Best test top-1 accuracy on ImageNet for a given size budget (no architecture constraint).

Best reference method                            | Ours
70.90% (HAQ (Wang et al., 2018a), MobileNet v2)  | 64.01% (vanilla ResNet-18)
71.74% (HAQ (Wang et al., 2018a), MobileNet v1)  | 76.12% (semi-sup. ResNet-50)
75.30% (HAQ (Wang et al., 2018a), ResNet-50)     | 77.85% (semi-sup. ResNet-50)

Table 3: Ablation study on ResNet-18 (test top-1 accuracy on ImageNet).

Compression  | Centroids k | No act + Distill | Act + Labels | Act + Distill (ours)
Small blocks | 256         | 64.76            | 65.55        | 65.81
             | 512         | 66.31            | 66.82        | 67.15
             | 1024        | 67.28            | 67.53        | 67.87
             | 2048        | 67.88            | 67.99        | 68.26
Large blocks | 256         | 60.46            | 61.01        | 61.18
             | 512         | 63.21            | 63.67        | 63.99
             | 1024        | 64.74            | 65.48        | 65.72
             | 2048        | 65.94            | 66.21        | 66.50

4.3 IMAGE DETECTION RESULTS

To demonstrate the generality of our method, we compress the Mask R-CNN architecture used for image detection in many real-life applications (He et al., 2017). We compress the backbone (ResNet-50 FPN) in the small-blocks compression regime and refer the reader to the open-sourced compressed model for the block sizes used in the various heads of the network. We use k = 256 centroids for every layer. We perform the fine-tuning (layer-wise and global) using distributed training on 8 V100 GPUs. Results are displayed in Table 4. We argue that this provides an interesting point of comparison for future work aiming at compressing such architectures for various applications.

# 5 CONCLUSION

We presented a quantization method based on Product Quantization that gives state-of-the-art results on ResNet architectures and that generalizes to other architectures such as Mask R-CNN. Our compression scheme does not require labeled data and the resulting models are byte-aligned, allowing for efficient inference on CPU. Further research directions include testing our method on a wider variety of architectures. In particular, our method can be readily adapted to simultaneously compress and transfer ResNets trained on ImageNet to other domains. Finally, we plan to take the non-linearity into account to improve our reconstruction error.

Table 4: Compression results for Mask R-CNN (backbone ResNet-50 FPN) for k = 256 centroids (compression factor 26×). 
Model          | Size    | Box AP | Mask AP
Non-compressed | 170 MB  | 37.9   | 34.6
Compressed     | 6.65 MB | 33.9   | 30.8

# ACKNOWLEDGMENTS

The authors thank Julieta Martinez for pointing out small discrepancies in the compressed sizes of the semi-supervised ResNet-50 and the Mask R-CNN.

# REFERENCES

David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. MixMatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249, 2019.

Miguel Á. Carreira-Perpiñán and Yerlan Idelbayev. Model compression as constrained optimization, with application to neural nets. Part II: quantization, 2017.

Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. A survey of model compression and acceleration for deep neural networks. CoRR, 2017.

Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee. Towards the limit of network quantization. CoRR, 2016.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training deep neural networks with binary weights during propagations. CoRR, 2015.

J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009.

Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems 27, 2014.

Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization. IEEE Trans. Pattern Anal. Mach. Intell., 2014.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Yunhui Guo. A survey on methods and theories of quantized neural networks. CoRR, 2018.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. International Conference on Learning Representations, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, 2015.

Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask R-CNN. International Conference on Computer Vision (ICCV), 2017.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. NIPS Deep Learning Workshop, 2014.

Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for MobileNetV3. arXiv e-prints, 2019.

Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. Conference on Computer Vision and Pattern Recognition, 2017.

Forrest Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. CoRR, 2016.

Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011.

Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product Quantization for Nearest Neighbor Search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. 
In Advances in Neural Information Processing Systems, 2012.

Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, 1990.

Fengfu Li and Bin Liu. Ternary weight networks. CoRR, 2016.

Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. CoRR, 2017.

Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. International Conference on Computer Vision, 2017.

Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. Data-free knowledge distillation for deep neural networks, 2017.

Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. ThiNet: A filter level pruning method for deep neural network compression. CoRR, 2017.

Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. CoRR, 2018.

Dhruv Mahajan, Ross B. Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. CoRR, 2018.

Mark D. McDonnell. Training wide residual networks for deployment using a single bit for each weight, 2018.

Asit K. Mishra and Debbie Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. CoRR, 2017.

Asit K. Mishra, Eriko Nurvitadhi, Jeffrey J. Cook, and Debbie Marr. WRPN: Wide reduced-precision networks. CoRR, 2017.

Mohammad Norouzi and David J. Fleet. Cartesian k-means. In Conference on Computer Vision and Pattern Recognition, 2013.

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, 2016.

Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. CoRR, 2018.

Oran Shayer, Dan Levi, and Ethan Fetaya. Learning discrete weights using the local reparameterization trick. CoRR, 2017.

Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking model scaling for convolutional neural networks, 2019.

Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. The new data and new challenges in multimedia research. CoRR, 2015.

Frederick Tung and Greg Mori. Deep neural network compression by in-parallel pruning-quantization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.

Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. HAQ: Hardware-aware automated quantization. CoRR, 2018a.

Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. HAQ: Hardware-aware automated quantization. arXiv preprint arXiv:1811.08886, 2018b.

Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized convolutional neural networks for mobile devices. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Conference on Computer Vision and Pattern Recognition, 2017.

I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, and Dhruv Mahajan. Billion-scale semi-supervised learning for image classification. arXiv e-prints, 2019. 
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. CoRR, 2017.

Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless CNNs with low-precision weights. CoRR, 2017.

Shuchang Zhou, Zekun Ni, Xinyu Zhou, He Wen, Yuxin Wu, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR, 2016.

Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained ternary quantization. CoRR, 2016.

Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. CoRR, 2017.
{ "id": "1811.08886" }
1907.05012
Making AI Forget You: Data Deletion in Machine Learning
Intense recent discussions have focused on how to provide individuals with control over when their data can and cannot be used --- the EU's Right To Be Forgotten regulation is an example of this effort. In this paper we initiate a framework studying what to do when it is no longer permissible to deploy models derivative from specific user data. In particular, we formulate the problem of efficiently deleting individual data points from trained machine learning models. For many standard ML models, the only way to completely remove an individual's data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. We investigate algorithmic principles that enable efficient data deletion in ML. For the specific setting of k-means clustering, we propose two provably efficient deletion algorithms which achieve an average of over 100X improvement in deletion efficiency across 6 datasets, while producing clusters of comparable statistical quality to a canonical k-means++ baseline.
http://arxiv.org/pdf/1907.05012
Antonio Ginart, Melody Y. Guan, Gregory Valiant, James Zou
cs.LG, stat.ML
To appear in NeurIPS 2019
null
cs.LG
20190711
20191104
9 1 0 2 # v o N 4 ] G L . s c [ 2 v 2 1 0 5 0 . 7 0 9 1 : v i X r a # Making AI Forget You: Data Deletion in Machine Learning Antonio A. Ginart1, Melody Y. Guan2, Gregory Valiant2, and James Zou3 1Dept. of Electrical Engineering 2Dept. of Computer Science 3Dept. of Biomedial Data Science Stanford University, Palo Alto, CA 94305 {tginart, mguan, valiant, jamesz}@stanford.edu # Abstract Intense recent discussions have focused on how to provide individuals with control over when their data can and cannot be used — the EU’s Right To Be Forgotten regulation is an example of this effort. In this paper we initiate a framework studying what to do when it is no longer permissible to deploy models derivative from specific user data. In particular, we formulate the problem of efficiently deleting individual data points from trained machine learning models. For many standard ML models, the only way to completely remove an individual’s data is to retrain the whole model from scratch on the remaining data, which is often not computationally practical. We investigate algorithmic principles that enable efficient data deletion in ML. For the specific setting of k-means clustering, we propose two provably efficient deletion algorithms which achieve an average of over 100× improvement in deletion efficiency across 6 datasets, while producing clusters of comparable statistical quality to a canonical k-means++ baseline. # Introduction Recently, one of the authors received the redacted email below, informing us that an individual’s data cannot be used any longer. The UK Biobank [79] is one of the most valuable collections of genetic and medical records with half a million participants. Thousands of machine learning classifiers are trained on this data, and thousands of papers have been published using this data. EMAIL –– UK BIOBANK –– Subject: UK Biobank Application [REDACTED], Participant Withdrawal Notification [REDACTED] Dear Researcher, As you are aware, participants are free to withdraw form the UK Biobank at any time and request that their data no longer be used. Since our last review, some participants involved with Application [REDACTED] have requested that their data should longer be used. The email request from the UK Biobank illustrates a fundamental challenge the broad data science and policy community is grappling with: how should we provide individuals with flexible control over how corporations, governments, and researchers use their data? Individuals could decide at any time that they do not wish for their personal data to be used for a particular purpose by a particular entity. This ability is sometimes legally enforced. For example, the European Union’s General Data Protection Regulation (GDPR) and former Right to Be Forgotten [24, 23] both require that companies and organizations enable users to withdraw consent to their data at any time under certain circumstances. These regulations broadly affect international companies and technology platforms with EU customers and users. Legal scholars have pointed out that the continued use of AI systems directly trained on deleted data could be considered illegal under certain interpretations and ultimately concluded that: it may be impossible to fulfill the legal aims of the Right to be Forgotten in artificial intelligence environments [86]. Furthermore, so-called model-inversion attacks have demonstrated the capability of adversaries to extract user information from trained ML models [85]. 
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. (Preprint) Concretely, we frame the problem of data deletion in machine learning as follows. Suppose a statistical model is trained on n datapoints. For example, the model could be trained to perform disease diagnosis from data collected from n patients. To delete the data sampled from the i-th patient from our trained model, we would like to update it such that it becomes independent of sample i, and looks as if it had been trained on the remaining n − 1 patients. A naive approach to satisfy the requested deletion would be to retrain the model from scratch on the data from the remaining n−1 patients. For many applications, this is not a tractable solution – the costs (in time, computation, and energy) for training many machine learning models can be quite high. Large scale algorithms can take weeks to train and consume large amounts of electricity and other resources. Hence, we posit that efficient data deletion is a fundamental data management operation for machine learning models and AI systems, just like in relational databases or other classical data structures. Beyond supporting individual data rights, there are various other possible use cases in which efficient data deletion is desirable. To name a few examples, it could be used to speed-up leave-one- out-cross-validation [2], support a user data marketplace [75, 80], or identify important or valuable datapoints within a model [37]. Deletion efficiency for general learning algorithms has not been previously studied. While the desired output of a deletion operation on a deterministic model is fairly obvious, we have yet to even define data deletion for stochastic learning algorithms. At present, there is only a handful of learning algorithms known to support fast data deletion operations, all of which are deterministic. Even so, there is no pre-existing notion of how engineers should think about the asymptotic deletion efficiency of learning systems, nor understanding of the kinds of trade-offs such systems face. The key components of this paper include introducing deletion efficient learning, based on an intuitive and operational notion of what it means to (efficiently) delete data from a (possibly stochastic) statistical model. We pose data deletion as an online problem, from which a notion of optimal deletion efficiency emerges from a natural lower bound on amortized computation time. We do a case-study on deletion efficient learning using the simple, yet perennial, k-means clustering problem. We propose two deletion efficient algorithms that (in certain regimes) achieve optimal deletion efficiency. Empirically, on six datasets, our methods achieve an average of over 100× speedup in amortized runtime with respect to the canonical Lloyd’s algorithm seeded by k-means++ [53, 5]. Simultaneously, our proposed deletion efficient algorithms perform comparably to the canonical algorithm on three different statistical metrics of clustering quality. Finally, we synthesize an algorithmic toolbox for designing deletion efficient learning systems. We summarize our work into three contributions: (1) We formalize the problem and notion of efficient data deletion in the context of machine learning. (2) We propose two different deletion efficient solutions for k-means clustering that have theoretical guarantees and strong empirical results. 
(3) From our theory and experiments, we synthesize four general engineering principles for designing deletion efficient learning systems. # 2 Related Works Deterministic Deletion Updates As mentioned in the introduction, efficient deletion operations are known for some canonical learning algorithms. They include linear models [55, 27, 83, 81, 18, 74], certain types of lazy learning [88, 6, 11] techniques such as non-parametric Nadaraya-Watson kernel regressions [61] or nearest-neighbors methods [22, 74], recursive support vector machines [19, 81], and co-occurrence based collaborative filtering [74]. Data Deletion and Data Privacy Related ideas for protecting data in machine learning — e.g. cryptography [63, 16, 14, 13, 62, 31], and differential privacy [30, 21, 20, 64, 1] — do not lead to efficient data deletion, but rather attempt to make data private or non-identifiable. Algorithms that support efficient deletion do not have to be private, and algorithms that are private do not have to support efficient deletion. To see the difference between privacy and data deletion, note that every learning algorithm supports the naive data deletion operation of retraining from scratch. The algorithm is not required to satisfy any privacy guarantees. Even an operation that outputs the entire dataset in the clear could support data deletion, whereas such an operation is certainly not private. In this sense, the challenge of data deletion only arises in the presence of computational limitations. Privacy, on the other hand, presents statistical challenges, even in the absence of any computational limitations. With that being said, data deletion has direct connections and consequences in data privacy and security, which we explore in more detail in Appendix A. 2 # 3 Problem Formulation We proceed by describing our setting and defining the notion of data deletion in the context of a machine learning algorithm and model. Our definition formalizes the intuitive goal that after a specified datapoint, x, is deleted, the resulting model is updated to be indistinguishable from a model that was trained from scratch on the dataset sans x. Once we have defined data deletion, we define a notion of deletion efficiency in the context of an online setting. Finally, we conclude by synthesizing high-level principles for designing deletion efficient learning algorithms. Throughout we denote dataset D = {x1,...,xn} as a set consisting of n datapoints, with each datapoint xi ∈ Rd; for simplicity, we often represent D as a n × d real-valued matrix as well. Let A denote a (possibly randomized) algorithm that maps a dataset to a model in hypothesis space H. We allow models to also include arbitrary metadata that is not necessarily used at inference time. Such metadata could include data structures or partial computations that can be leveraged to help with subsequent deletions. We also emphasize that algorithm A operates on datasets of any size. Since A is often stochastic, we can also treat A as implicitly defining a conditional distribution over H given dataset D. Definition 3.1. Data Deletion Operation: We define a data deletion operation for learning algorithm A, RA(D,A(D),i), which maps the dataset D, model A(D), and index i ∈ {1,...,n} to some model in H. Such an operation is a data deletion operation if, for all D and i, random variables A(D−i) and RA(D,A(D),i) are equal in distribution, A(D−i) =d RA(D,A(D),i). 
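To make Definition 3.1 and its trivial baseline concrete, the sketch below spells out the deletion operation that every learning algorithm admits — drop the point and retrain from scratch — which is exact but pays a full training run per request. The class and method names are illustrative only and are not taken from the paper's code.

```python
import numpy as np

class RetrainOnDeleteModel:
    """Baseline wrapper: supports exact deletion by retraining from scratch on the remaining data.

    `train_fn` maps a data matrix to a model object, so this wrapper realizes the trivial
    deletion operation R_A(D, A(D), i) = A(D_{-i}) that any algorithm A supports.
    """

    def __init__(self, train_fn, data):
        self.train_fn = train_fn
        self.data = list(data)                       # keep the raw dataset so we can retrain
        self.model = self.train_fn(np.asarray(self.data))

    def delete(self, index):
        """Remove datapoint `index` and retrain: correct, but costs a full training run."""
        del self.data[index]
        self.model = self.train_fn(np.asarray(self.data))
        return self.model

if __name__ == "__main__":
    # Toy "model": the dataset mean, standing in for a trained estimator.
    wrapped = RetrainOnDeleteModel(lambda D: D.mean(axis=0), np.random.randn(1000, 5))
    wrapped.delete(3)
    print(wrapped.model.shape)
```

The rest of the section asks when this naive operation can be replaced by something whose amortized cost is sublinear in the training time.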
Here we focus on exact data deletion: after deleting a training point from the model, the model should be as if this training point had never been seen in the first place. The above definition can naturally be relaxed to approximate data deletion by requiring a bound on the distance (or divergence) between dis- tributions of A(D−i) and RA(D,A(D),i). Refer to Appendix A for more details on approximate data deletion, especially in connection to differential privacy. We defer a full discussion of this to future work. A Computational Challenge Every learning algorithm, A, supports a trivial data deletion operation corresponding to simply retraining on the new dataset after the specified datapoint has been removed — namely running algorithm A on the dataset D−i. Because of this, the challenge of data deletion is computational: 1) Can we design a learning algorithm A, and supporting data structures, so as to allow for a computationally efficient data deletion operation? 2) For what algorithms A is there a data deletion operation that runs in time sublinear in the size of the dataset, or at least sublinear in the time it takes to compute the original model, A(D)? 3) How do restrictions on the memory-footprint of the metadata contained in A(D) impact the efficiency of data deletion algorithms? Data Deletion as an Online Problem One convenient way of concretely formulating the computa- tional challenge of data deletion is via the lens of online algorithms [17]. Given a dataset of n datapoints, a specific training algorithm A, and its corresponding deletion operation RA, one can consider a stream of m ≤ n distinct indices, i1,i2,...,im ∈ {1,...,n}, corresponding to the sequence of datapoints to be deleted. The online task then is to design a data deletion operation that is given the indices {ij} one at a time, and must output A(D−{i1,...,ij }) upon being given index ij. As in the extensive body of work on online algorithms, the goal is to minimize the amortized computation time. The amortized runtime in the proposed online deletion setting is a natural and meaningful way to measure deletion efficiency. A formal definition of our proposed online problem setting can be found in Appendix A. In online data deletion, a simple lower bound on amortized runtime emerges. All (sequential) learning algorithms A run in time Ω(n) under the natural assumption that A must process each datapoint at least once. Furthermore, in the best case, A comes with a constant time deletion operation (or a deletion oracle). Remark 3.1. In the online setting, for n datapoints and m deletion requests we establish an asymptotic lower bound of Ω( n m ) for the amortized computation time of any (sequential) learning algorithm. We refer to an algorithm achieving this lower bound as deletion efficient. Obtaining tight upper and lower bounds is an open question for many basic learning paradigms including ridge regression, decision tree models, and settings where A corresponds to the solution to a stochastic optimization problem. In this paper, we do a case study on k-means clustering, showing that we can achieve deletion efficiency without sacrificing statistical performance. # 3.1 General Principles for Deletion Efficient Machine Learning Systems We identify four design principles which we envision as the pillars of deletion efficient learning algorithms. 3 Linearity Use of linear computation allows for simple post-processing to undo the influence of a single datapoint on a set of parameters. 
Generally speaking, the Sherman-Morrison-Woodbury matrix identity and matrix factorization techniques can be used to derive fast and explicit formulas for updating linear models [55, 27, 83, 43]. For example, in the case of linear least squares regressions, QR factorization can be used to delete datapoints from learned weights in time O(d2) [41, 90]. Linearity should be most effective in domains in which randomized [70], reservoir [89, 76], domain-specific [54], or pre-trained feature spaces elucidate linear relationships in the data. Laziness Lazy learning methods delay computation until inference time [88, 11, 6], resulting in trivial deletions. One of the simplest examples of lazy learning is k-nearest neighbors [32, 4, 74], where deleting a point from the dataset at deletion time directly translates to an updated model at inference time. There is a natural affinity between lazy learning and non-parametric techniques [61, 15]. Although we did not make use of laziness for unsupervised learning in this work, pre-existing literature on kernel density estimation for clustering would be a natural starting place [44]. Laziness should be most effective in regimes when there are fewer constraints on inference time and model memory than training time or deletion time. In some sense, laziness can be interpreted as shifting computation from training to inference. As a side effect, deletion can be immensely simplified. Modularity In the context of deletion efficient learning, modularity is the restriction of dependence of computation state or model parameters to specific partitions of the dataset. Under such a modularization, we can isolate specific modules of data processing that need to be recomputed in order to account for deletions to the dataset. Our notion of modularity is conceptually similar to its use in software design [10] and distributed computing [67]. In DC-k-means, we leverage modularity by managing the dependence between computation and data via the divide-and-conquer tree. Modularity should be most effective in regimes for which the dimension of the data is small compared to the dataset size, allowing for partitions of the dataset to capture the important structure and features. Quantization Many models come with a sense of continuity from dataset space to model space — small changes to the dataset should result in small changes to the (distribution over the) model. In statistical and computational learning theory, this idea is known to as stability [60, 47, 50, 29, 77, 68]. We can leverage stability by quantizing the mapping from datasets to models (either explicitly or implicitly). Then, for a small number of deletions, such a quantized model is unlikely to change. If this can be efficiently verified at deletion time, then it can be used for fast average-case deletions. Quantization is most effective in regimes for which the number of parameters is small compared to the dataset size. # 4 Deletion Efficient Clustering Data deletion is a general challenge for machine learning. Due to its simplicity we focus on k-means clustering as a case study. Clustering is a widely used ML application, including on the UK Biobank (for example as in [33]). We propose two algorithms for deletion efficient k-means clustering. In the context of k-means, we treat the output centroids as the model from which we are interested in deleting datapoints. We summarize our proposed algorithms and state theoretical runtime complexity and statistical performance guarantees. 
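As an illustration of the Linearity principle discussed above, the sketch below deletes one observation from a ridge-regression model in O(d²) time using a Sherman-Morrison rank-one downdate of a cached inverse, instead of refitting on the remaining n − 1 points. This is a generic example in the spirit of the cited linear-model updates, not one of the two clustering algorithms proposed in this paper; the class name is ours.

```python
import numpy as np

class DeletableRidge:
    """Ridge regression with O(d^2) exact deletion via the Sherman-Morrison identity."""

    def __init__(self, X, y, lam=1.0):
        d = X.shape[1]
        self.A_inv = np.linalg.inv(X.T @ X + lam * np.eye(d))  # cached (X^T X + lam I)^{-1}
        self.b = X.T @ y                                        # cached X^T y
        self.w = self.A_inv @ self.b

    def delete(self, x, y):
        """Remove one observation (x, y): rank-one downdate of A_inv, then refresh the weights."""
        Ax = self.A_inv @ x
        self.A_inv += np.outer(Ax, Ax) / (1.0 - x @ Ax)   # (A - x x^T)^{-1} by Sherman-Morrison
        self.b -= y * x
        self.w = self.A_inv @ self.b
        return self.w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(500, 8)), rng.normal(size=500)
    model = DeletableRidge(X, y, lam=0.1)
    w_fast = model.delete(X[0], y[0])
    w_slow = DeletableRidge(X[1:], y[1:], lam=0.1).w      # retrain-from-scratch reference
    print(np.allclose(w_fast, w_slow))                    # True up to numerical error
```

The same pattern underlies the deletion updates cited for linear models: cache a sufficient statistic whose dependence on each datapoint is additive, then subtract the deleted point's contribution.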
Please refer to [32] for background concerning k-means clustering.

# 4.1 Quantized k-Means

We propose a quantized variant of Lloyd's algorithm as a deletion efficient solution to k-means clustering, called Q-k-means. By quantizing the centroids at each iteration, we show that the algorithm's centroids are constant with respect to deletions with high probability. Under this notion of quantized stability, we can support efficient deletion, since most deletions can be resolved without re-computing the centroids from scratch. Our proposed algorithm is distinct from other quantized versions of k-means [73], which quantize the data to minimize memory or communication costs. We present an abridged version of the algorithm here (Algorithm 1). Detailed pseudo-code for Q-k-means and its deletion operation may be found in Appendix B.

Q-k-means follows the same iterative protocol as the canonical Lloyd's algorithm (and makes use of the k-means++ initialization). There are four key differences from Lloyd's algorithm. First and foremost, the centroids are quantized in each iteration before updating the partition. The quantization maps each point to the nearest vertex of a uniform ε-lattice [38]. To de-bias the quantization, we apply a random phase shift to the lattice. The particulars of the quantization scheme are discussed in Appendix B. Second, at various steps throughout the computation, we memoize the optimization state into the model's metadata for use at deletion time (incurring an additional O(ktd) memory cost). Third, we introduce a balance correction step, which compensates for γ-imbalanced clusters by averaging current centroids with a momentum term based on the previous centroids. Explicitly, for some γ ∈ (0,1), we consider any partition πκ to be γ-imbalanced if |πκ| ≤ γn/k. We may think of γ as being the ratio of the smallest cluster size to the average cluster size. Fourth, because of the quantization, the iterations are no longer guaranteed to decrease the loss, so we have an early termination if the loss increases at any iteration. Note that the algorithm terminates almost surely.

Deletion in Q-k-means is straightforward. Using the metadata saved from training time, we can verify if deleting a specific datapoint would have resulted in a different quantized centroid than was actually computed during training. If this is the case (or if the point to be deleted is one of the randomly chosen initial centroids according to k-means++) we must retrain from scratch to satisfy the deletion request. Otherwise, we may satisfy deletion by updating our metadata to reflect the deletion of the specified datapoint, but we do not have to recompute the centroids. Q-k-means directly relies on the principle of quantization to enable fast deletion in expectation. It is also worth noting that Q-k-means also leverages the principle of linearity to recycle computation. Since centroid computation is linear in the datapoints, it is easy to determine the centroid update due to a removal at deletion time.

# Algorithm 1 Quantized k-means (abridged)
Input: data matrix D ∈ R^{n×d}
Parameters: k ∈ N, T ∈ N, γ ∈ (0,1), ε > 0
c ← k-means++(D) // initialize centroids with k-means++
Save initial centroids: save(c)
L ← k-means loss of initial partition π(c)
for τ = 1 to T do
    Store current centroids: c′ ← c
    Compute centroids: c ← c(π)
    Apply correction to γ-imbalanced partitions
    Quantize to random ε-lattice: ĉ ← Q(c; θ)
    Update partition: π′ ← π(ĉ)
    Save state to metadata: save(c, θ, ĉ, |π′|)
    Compute loss L′
    if L′ < L then
        (c, π, L) ← (ĉ, π′, L′)
    else
        break
    end if
end for
return c // output final centroids as model
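The quantization step is the crux of the method. A minimal NumPy sketch of the ε-lattice rounding with a random phase (our own illustration; the released implementation referenced in Appendix B may differ in details) is:

```python
import numpy as np

def quantize_to_lattice(c, eps, theta):
    """Map each centroid to the nearest vertex of a uniform eps-lattice whose
    vertices are eps*(theta + z), z integer; theta in [-1/2, 1/2]^d de-biases
    the rounding."""
    return eps * (theta + np.round(c / eps - theta))

# Example: k = 3 centroids in d = 2, quantized with granularity eps = 0.05.
rng = np.random.default_rng(0)
centroids = rng.random((3, 2))
theta = rng.uniform(-0.5, 0.5, size=2)
print(quantize_to_lattice(centroids, 0.05, theta))
```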
Deletion Time Complexity We turn our attention to an asymptotic time complexity analysis of the Q-k-means deletion operation. Q-k-means supports deletion by quantizing the centroids, so they are stable against small perturbations (caused by the deletion of a point).

Theorem 4.1. Let D be a dataset on (0,1]^d of size n. Fix parameters T, k, ε, and γ for Q-k-means. Then, Q-k-means supports m deletions in time O(m²d^{5/2}/ε) in expectation, where the expectation is over the randomness in the quantization phase and the k-means++ initialization.

The proof for the theorem is given in Appendix C. The intuition is as follows. Centroids are computed by taking an average. With enough terms in an average, the effect of a small number of those terms is negligible. The removal of those terms from the average can be interpreted as a small perturbation to the centroid. If that small perturbation is on a scale far below the granularity of the quantizing ε-lattice, then it is unlikely to change the quantized value of the centroid. Thus, beyond stability verification, no additional computation is required for a majority of deletion requests. This result is in expectation with respect to the randomized initializations and randomized quantization phase, but is actually worst-case over all possible (normalized) dataset instances. The number of clusters k, iterations T, and cluster imbalance ratio γ are usually small constants in many applications, and are treated as such here. Interestingly, for constant m and ε, the expected deletion time is independent of n due to the stability probability increasing at the same rate as the problem size (see Appendix C).

Deletion time for this method may not scale well in the high-dimensional setting. In the low-dimensional case, the most interesting interplay is between ε, n, and m. To obtain as high-quality statistical performance as possible, it would be ideal if ε → 0 as n → ∞. In this spirit, we can parameterize ε = n^{−β} for β ∈ (0,1). We will use this parameterization for theoretical analysis of the online setting in Section 4.3.

Theoretical Statistical Performance We proceed to state a theoretical guarantee on the statistical performance of Q-k-means, which complements the asymptotic time complexity bound of the deletion operation. Recall that the loss for a k-means problem instance is given by the sum of squared Euclidean distances from each datapoint to its nearest centroid. Let L∗ be the optimal loss for a particular problem instance. Achieving the optimal solution is, in general, NP-hard [3]. Instead, we can approximate it with k-means++, which achieves EL++ ≤ (8logk+16)L∗ [5].

Corollary 4.1.1. Let L be a random variable denoting the loss of Q-k-means on a particular problem instance of size n. Then EL ≤ (8logk+16)L∗ + ε√(nd(8logk+16)L∗) + ¼ndε².

This corollary follows from the theoretical guarantees already known to apply to Lloyd's algorithm when initialized with k-means++, given by [5]. The proof can be found in Appendix C. We can interpret the bound by looking at the ratio of expected loss upper bounds for k-means++ and Q-k-means.
If we assume our problem instance is generated by iid samples from some arbitrary non-atomic distribution, then it follows that L∗ = Ω(n). Taking the ratio of the upper bounds yields EL/EL++ ≤ 1 + O(dε² + √d·ε). Ensuring that ε ≪ 1/√d implies the upper bound is as good as that of k-means++.

# 4.2 Divide-and-Conquer k-Means

We turn our attention to another variant of Lloyd's algorithm that also supports efficient deletion, albeit through quite different means. We refer to this algorithm as Divide-and-Conquer k-means (DC-k-means). At a high level, DC-k-means works by partitioning the dataset into small sub-problems, solving each sub-problem as an independent k-means instance, and recursively merging the results. We present pseudo-code for DC-k-means here, and we refer the reader to Appendix B for pseudo-code of the deletion operation.

# Algorithm 2 DC-k-means
Input: data matrix D ∈ R^{n×d}
Parameters: k ∈ N, T ∈ N, tree width w ∈ N, tree height h ∈ N
Initialize a w-ary tree of height h such that each node has a pointer to a dataset and centroids
for i = 1 to n do
    Select a leaf node uniformly at random
    node.dataset.add(Di)
end for
for l = h down to 0 do
    for each node in level l do
        c ← k-means++(node.dataset, k, T)
        node.centroids ← c
        if l > 0 then
            node.parent.dataset.add(c)
        else
            save all nodes as metadata
            return c // model output
        end if
    end for
end for

DC-k-means operates on a perfect w-ary tree of height h (this could be relaxed to any rooted tree). The original dataset is partitioned into each leaf in the tree as a uniform multinomial random variable with datapoints as trials and leaves as outcomes. At each of these leaves, we solve for some number of centroids via k-means++. When we merge leaves into their parent node, we construct a new dataset consisting of all the centroids from each leaf. Then, we compute new centroids at the parent via another instance of k-means++. For simplicity, we keep k fixed throughout all of the sub-problems in the tree, but this could be relaxed. We make use of the tree hierarchy to modularize the computation's dependence on the data. At deletion time, we need only recompute the sub-problems from one leaf up to the root. This observation allows us to support fast deletion operations. Our method has close similarities to pre-existing distributed k-means algorithms [69, 67, 9, 7, 39, 8, 92], but is in fact distinct (not only in that it is modified for deletion, but also in that it operates over general rooted trees).

For simplicity, we restrict our discussion to only the simplest of divide-and-conquer trees. We focus on depth-1 trees with w leaves where each leaf solves for k centroids. This requires only one merge step with a root problem size of kn/w. Analogous to how ε serves as a knob to trade off between deletion efficiency and statistical performance in Q-k-means, for DC-k-means, we imagine that w might also serve as a similar knob. For example, if w = 1, DC-k-means degenerates into canonical Lloyd's (as does Q-k-means as ε → 0). The dependence of statistical performance on tree width w is less theoretically tractable than that of Q-k-means on ε, but in Appendix D, we empirically show that statistical performance tends to decrease as w increases, which is perhaps somewhat expected. As we show in our experiments, depth-1 DC-k-means demonstrates an empirically compelling trade-off between deletion time and statistical performance.
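A simplified depth-1 sketch of this scheme (our own illustration: it re-solves sub-problems with scikit-learn's KMeans rather than following the exact protocol above, and the helper names are hypothetical) is:

```python
import numpy as np
from sklearn.cluster import KMeans

def dc_kmeans_fit(X, k, w, seed=0):
    """Depth-1 divide-and-conquer k-means: assign points to w leaves uniformly at
    random, solve k-means in each leaf, then cluster the w*k leaf centroids at the root."""
    rng = np.random.default_rng(seed)
    leaf_of = rng.integers(w, size=len(X))          # uniform multinomial partition
    deleted = np.zeros(len(X), dtype=bool)
    leaf_models = [KMeans(n_clusters=k, n_init=1).fit(X[leaf_of == l]) for l in range(w)]
    root_data = np.vstack([m.cluster_centers_ for m in leaf_models])
    root = KMeans(n_clusters=k, n_init=1).fit(root_data)
    return {"leaf_of": leaf_of, "deleted": deleted, "leaf_models": leaf_models, "root": root}

def dc_kmeans_delete(X, model, i, k):
    """Honor a deletion request for index i: re-solve only its leaf, then the root."""
    model["deleted"][i] = True
    l = model["leaf_of"][i]
    keep = (model["leaf_of"] == l) & ~model["deleted"]
    model["leaf_models"][l] = KMeans(n_clusters=k, n_init=1).fit(X[keep])
    root_data = np.vstack([m.cluster_centers_ for m in model["leaf_models"]])
    model["root"] = KMeans(n_clusters=k, n_init=1).fit(root_data)
    return model["root"].cluster_centers_
```

Only the affected leaf problem (of expected size n/w) and the root problem (of size kw) are re-solved per request, which is the source of the per-deletion cost stated in Proposition 4.2 below.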
There are various other potential extensions of this algorithm, such as weighting centroids based on cluster mass as they propagate up the tree or exploring the statistical performance of deeper trees.

Deletion Time Complexity For the ensuing asymptotic analysis, we may consider parameterizing the tree width w as w = Θ(n^ρ) for ρ ∈ (0,1). As before, we treat k and T as small constants. Although intuitive, there are some technical minutiae to account for to prove correctness and runtime for the DC-k-means deletion operation. The proof of Proposition 4.2 may be found in Appendix C.

Proposition 4.2. Let D be a dataset on R^d of size n. Fix parameters T and k for DC-k-means. Let w = Θ(n^ρ) for ρ ∈ (0,1). Then, with a depth-1, w-ary divide-and-conquer tree, DC-k-means supports m deletions in time O(m·max{n^ρ, n^{1−ρ}}·d) in expectation, where the expectation is over the randomness in the dataset partitioning.

# 4.3 Amortized Runtime Complexity in Online Deletion Setting

We state the amortized computation time for both of our algorithms in the online deletion setting defined in Section 3. We are in an asymptotic regime where the number of deletions m = Θ(n^α) for 0 < α < 1 (see Appendix C for more details). Recall the Ω(n/m) lower bound from Section 3. For a particular fractional power α, an algorithm achieving the optimal asymptotic lower bound on amortized computation is said to be α-deletion efficient. This corresponds to achieving an amortized runtime of O(n^{1−α}). The following corollaries result from direct calculations which may be found in Appendix C. Note that Corollary 4.2.2 assumes DC-k-means is training sequentially.

Corollary 4.2.1. With ε = Θ(n^{−β}), for 0 < β < 1, the Q-k-means algorithm is α-deletion efficient in expectation if α ≤ (1−β)/2.

Corollary 4.2.2. With w = Θ(n^ρ), for 0 < ρ < 1, and a depth-1 w-ary divide-and-conquer tree, DC-k-means is α-deletion efficient in expectation if α ≤ 1 − max{1−ρ, ρ}.

# 5 Experiments

With a theoretical understanding in hand, we seek to empirically characterize the trade-off between runtime and performance for the proposed algorithms. In this section, we provide proof-of-concept for our algorithms by benchmarking their amortized runtimes and clustering quality on a simulated stream of online deletion requests. As a baseline, we use the canonical Lloyd's algorithm initialized by k-means++ seeding [53, 5]. Following the broader literature, we refer to this baseline simply as k-means, and refer to our two proposed methods as Q-k-means and DC-k-means.

Datasets We run our experiments on five real, publicly available datasets: Celltype (N = 12,009, D = 10, K = 4) [42], Covtype (N = 15,120, D = 52, K = 7) [12], MNIST (N = 60,000, D = 784, K = 10) [51], Postures (N = 74,975, D = 15, K = 5) [35, 34], Botnet (N = 1,018,298, D = 115, K = 11) [56], and a synthetic dataset made from a Gaussian mixture model which we call Gaussian (N = 100,000, D = 25, K = 5). We refer the reader to Appendix D for more details on the datasets. All datasets come with ground-truth labels as well. Although we do not make use of the labels at learning time, we can use them to evaluate the statistical quality of the clustering methods.

Online Deletion Benchmark We simulate a stream of 1,000 deletion requests, selected uniformly at random and without replacement. An algorithm trains once, on the full dataset, and then runs its deletion operation to satisfy each request in the stream, producing an intermediate model at each request. For the canonical k-means baseline, deletions are satisfied by re-training from scratch.
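A compact sketch of this benchmark loop (our own illustration; `fit` and `delete` stand in for whichever training algorithm and deletion operation are being measured, and are not names from the released code) follows:

```python
import time
import numpy as np

def run_online_deletion_benchmark(X, fit, delete, num_requests=1000, seed=0):
    """Train once, then time a stream of uniformly random deletion requests
    (without replacement), returning the amortized wall-clock cost per request."""
    rng = np.random.default_rng(seed)
    stream = rng.choice(len(X), size=num_requests, replace=False)
    start = time.perf_counter()
    model = fit(X)
    for i in stream:
        model = delete(X, model, i)   # must yield a valid model after every request
    total = time.perf_counter() - start
    return total / num_requests
```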
Protocol To measure statistical performance, we evaluate with three metrics (see Section 5.1) that measure cluster quality. To measure deletion efficiency, we measure the wall-clock time to complete our online deletion benchmark. For both of our proposed algorithms, we always fix 10 iterations of Lloyd's, and all other parameters are selected with simple but effective heuristics (see Appendix D). This alleviates the need to tune them. To set a fair k-means baseline, when reporting runtime on the online deletion benchmark, we also fix 10 iterations of Lloyd's, but when reporting statistical performance metrics, we run until convergence. We run five replicates for each method on each dataset and include standard deviations with all our results. We refer the reader to Appendix D for more experimental details.

# 5.1 Statistical Performance Metrics

To evaluate the clustering performance of our algorithms, the most obvious metric is the optimization loss of the k-means objective. Recall that this is the sum of squared Euclidean distances from each datapoint to its nearest centroid. To thoroughly validate the statistical performance of our proposed algorithms, we additionally include two canonical clustering performance metrics.

Silhouette Coefficient [72]: This coefficient measures a type of correlation (between −1 and +1) that captures how dense each cluster is and how well-separated different clusters are. The silhouette coefficient is computed without ground-truth labels, and uses only spatial information. Higher scores indicate denser, more well-separated clusters.

Normalized Mutual Information (NMI) [87, 49]: This quantity measures the agreement of the assigned clusters to the ground-truth labels, up to permutation. NMI is upper bounded by 1, achieved by perfect assignments. Higher scores indicate better agreement between clusters and ground-truth labels.
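For concreteness, a hedged sketch of how these three metrics can be computed with scikit-learn (our own helper, not the evaluation script used to produce the tables that follow) is:

```python
import numpy as np
from sklearn.metrics import silhouette_score, normalized_mutual_info_score

def clustering_report(X, labels_true, centroids):
    """Evaluate a set of centroids with the three metrics of Section 5.1."""
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = np.argmin(dists, axis=1)
    loss = ((X - centroids[assign]) ** 2).sum()              # k-means objective
    sil = silhouette_score(X, assign)                        # spatial quality, in [-1, 1]
    nmi = normalized_mutual_info_score(labels_true, assign)  # agreement with labels
    return loss, sil, nmi
```

On very large datasets the silhouette computation can be subsampled via silhouette_score's sample_size argument.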
# 5.2 Summary of Results

We summarize our key findings in four tables. In Tables 1-3, we report the statistical clustering performance of the 3 algorithms on each of the 6 datasets. In Table 1, we report the optimization loss ratios of our proposed methods over the k-means++ baseline. In Table 2, we report the silhouette coefficient for the clusters. In Table 3, we report the NMI. In Table 4, we report the amortized total runtime of training and deletion for each method. Overall, we see that the statistical clustering performance of the three methods is competitive.

Table 1: Loss Ratio
Dataset    k-means      Q-k-means      DC-k-means
Celltype   1.0±0.0      1.158±0.099    1.439±0.157
Covtype    1.0±0.029    1.033±0.017    1.017±0.031
MNIST      1.0±0.002    1.11±0.004     1.014±0.003
Postures   1.0±0.004    1.014±0.015    1.034±0.017
Gaussian   1.0±0.014    1.019±0.019    1.003±0.014
Botnet     1.0±0.126    1.018±0.014    1.118±0.102

Table 2: Silhouette Coefficients (higher is better)
Dataset    k-means        Q-k-means      DC-k-means
Celltype   0.384±0.001    0.367±0.048    0.422±0.057
Covtype    0.238±0.027    0.203±0.026    0.222±0.017
MNIST      0.036±0.002    0.031±0.002    0.035±0.001
Postures   0.107±0.003    0.107±0.004    0.109±0.005
Gaussian   0.066±0.007    0.053±0.003    0.071±0.004
Botnet     0.583±0.042    0.639±0.028    0.627±0.046

Table 3: Normalized Mutual Information (higher is better)
Dataset    k-means        Q-k-means      DC-k-means
Celltype   0.36±0.0       0.336±0.032    0.294±0.067
Covtype    0.311±0.009    0.332±0.024    0.335±0.02
MNIST      0.494±0.006    0.459±0.011    0.494±0.004
Gaussian   0.319±0.024    0.245±0.024    0.318±0.024
Postures   0.163±0.018    0.169±0.012    0.173±0.011
Botnet     0.708±0.048    0.73±0.015     0.705±0.039

Furthermore, we find that both proposed algorithms yield orders of magnitude of speedup. As expected from the theoretical analysis, Q-k-means offers greater speed-ups when the dimension is lower relative to the sample size, whereas DC-k-means is more consistent across dimensionalities.

Table 4: Amortized Runtime in Online Deletion Benchmark (Train once + 1,000 Deletions)
Dataset    k-means Runtime (s)   Q-k-means Runtime (s)   Speedup     DC-k-means Runtime (s)   Speedup
Celltype   4.241±0.248           0.026±0.011             163.286×    0.272±0.007              15.6×
Covtype    6.114±0.216           0.454±0.276             13.464×     0.469±0.021              13.048×
MNIST      65.038±1.528          29.386±0.728            2.213×      2.562±0.056              25.381×
Postures   26.616±1.222          0.413±0.305             64.441×     1.17±0.398               22.757×
Gaussian   206.631±67.285        0.393±0.104             525.63×     5.992±0.269              34.483×
Botnet     607.784±64.687        1.04±0.368              584.416×    8.568±0.652              70.939×

Figure 1: Online deletion efficiency: # of deletions vs. amortized runtime (secs) for 3 algorithms on 6 datasets (one log-log panel per dataset: Celltype, Covtype, MNIST, Postures, Gaussian, Botnet).

In particular, note that MNIST has the highest d/n ratio of the datasets we tried, followed by Covtype. These two datasets are, respectively, the datasets for which Q-k-means offers the least speedup. On the other hand, DC-k-means offers consistently increasing speedup as n increases, for fixed d. Furthermore, we see that Q-k-means tends to have higher variance around its deletion efficiency, due to the randomness in centroid stabilization having a larger impact than the randomness in the dataset partitioning. We remark that 1,000 deletions is less than 10% of every dataset we test on, and statistical performance remains virtually unchanged throughout the benchmark. In Figure 1, we plot the amortized runtime on the online deletion benchmark as a function of the number of deletions in the stream. We refer the reader to Appendix D for supplementary experiments providing more detail on our methods.

# 6 Discussion

At present, the main options for deletion efficient supervised methods are linear models, support vector machines, and non-parametric regressions. While our analysis here focuses on the concrete problem of clustering, we have proposed four design principles which we envision as the pillars of deletion efficient learning algorithms. We discuss the potential application of these methods to other supervised learning techniques.

Segmented Regression Segmented (or piece-wise) linear regression is a common relaxation of canonical regression models [58, 59, 57]. It should be possible to support a variant of segmented regression by combining Q-k-means with linear least squares regression. Each cluster could be given a separate linear model, trained only on the datapoints in said cluster. At deletion time, Q-k-means would likely keep the clusters stable, enabling a simple linear update to the model corresponding to the cluster to which the deleted point belonged.

Kernel Regression Kernel regressions in the style of random Fourier features [70] could be readily adapted to support efficient deletions for large-scale supervised learning.
Random features do not depend on data, and thus only the linear layer over the feature space requires updating for deletion. Furthermore, random Fourier feature methods have been shown to have affinity for quantization [91]. Decision Trees and Random Forests Quantization is also a promising approach for decision trees. By quantizing or randomizing decision tree splitting criteria (such as in [36]) it seems possible to support efficient deletion. Furthermore, random forests have a natural affinity with bagging, which naturally can be used to impose modularity. Deep Neural Networks and Stochastic Gradient Descent A line of research has observed the robustness of neural network training robustness to quantization and pruning [84, 46, 40, 71, 25, 52]. It could be possible to leverage these techniques to quantize gradient updates during SGD-style optimization, enabling a notion of parameter stability analgous to that in Q-k-means. This would require larger batch sizes and fewer gradient steps in order to scale well. It is also possible that approximate deletion methods may be able to overcome shortcomings of exact deletion methods for large neural models. # 7 Conclusion In this work, we developed a notion of deletion efficiency for large-scale learning systems, proposed provably deletion efficient unsupervised clustering algorithms, and identified potential algorithmic principles that may enable deletion efficiency for other learning algorithms and paradigms. We have only scratched the surface of understanding deletion efficiency in learning systems. Throughout, we made a number of simplifying assumptions, such that there is only one model and only one database in our system. We also assumed that user-based deletion requests correspond to only a single data point. Understanding deletion efficiency in a system with many models and many databases, as well as complex user-to-data relationships, is an important direction for future work. Acknowledgments: This research was partially supported by NSF Awards AF:1813049, CCF:1704417, and CCF 1763191, NIH R21 MD012867-01, NIH P30AG059307, an Office of Naval Research Young Investigator Award (N00014-18-1-2295), a seed grant from Stanford’s Institute for Human-Centered AI, and the Chan-Zuckerberg Initiative. We would also like to thank I. Lemhadri, B. He, V. Bagaria, J. Thomas and anonymous reviewers for helpful discussion and feedback. 9 # References [1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308–318. ACM, 2016. [2] Y. S. Abu-Mostafa, M. Magdon-Ismail, and H.-T. Lin. Learning from data, volume 4. AMLBook New York, NY, USA:, 2012. [3] D. Aloise, A. Deshpande, P. Hansen, and P. Popat. Np-hardness of euclidean sum-of-squares clustering. Machine learning, 75(2):245–248, 2009. [4] N. S. Altman. An introduction to kernel and nearest-neighbor nonparametric regression. The American Statistician, 46(3):175–185, 1992. [5] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, pages 1027–1035. Society for Industrial and Applied Mathematics, 2007. [6] C. G. Atkeson, A. W. Moore, and S. Schaal. Locally weighted learning for control. In Lazy learning, pages 75–113. Springer, 1997. [7] O. Bachem, M. Lucic, and A. Krause. 
Distributed and provably good seedings for k-means in constant rounds. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 292–300. JMLR. org, 2017. [8] B. Bahmani, B. Moseley, A. Vattani, R. Kumar, and S. Vassilvitskii. Scalable k-means++. Proceedings of the VLDB Endowment, 5(7):622–633, 2012. [9] M.-F. F. Balcan, S. Ehrlich, and Y. Liang. Distributed k-means and k-median clustering on general topologies. In Advances in Neural Information Processing Systems, pages 1995–2003, 2013. [10] O. Berman and N. Ashrafi. Optimization models for reliability of modular software systems. IEEE Transactions on Software Engineering, 19(11):1119–1123, 1993. [11] M. Birattari, G. Bontempi, and H. Bersini. Lazy learning meets the recursive least squares algorithm. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pages 375–381, Cambridge, MA, USA, 1999. MIT Press. [12] J. A. Blackard and D. J. Dean. Comparative accuracies of artificial neural networks and dis- criminant analysis in predicting forest cover types from cartographic variables. Computers and electronics in agriculture, 24(3):131–151, 1999. [13] D. Bogdanov, L. Kamm, S. Laur, and V. Sokk. Implementation and evaluation of an algorithm for cryptographically private principal component analysis on genomic data. IEEE/ACM transactions on computational biology and bioinformatics, 15(5):1427–1432, 2018. [14] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1175–1191. ACM, 2017. [15] G. Bontempi, H. Bersini, and M. Birattari. The local paradigm for modeling and control: from neuro-fuzzy to lazy learning. Fuzzy sets and systems, 121(1):59–72, 2001. [16] R. Bost, R. A. Popa, S. Tu, and S. Goldwasser. Machine learning classification over encrypted data. In NDSS, 2015. [17] L. Bottou. Online learning and stochastic approximations. On-line learning in neural networks, 17(9):142, 1998. [18] Y. Cao and J. Yang. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, pages 463–480. IEEE, 2015. [19] G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In Advances in neural information processing systems, pages 409–415, 2001. 10 [20] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimiza- tion. Journal of Machine Learning Research, 12(Mar):1069–1109, 2011. [21] K. Chaudhuri, A. D. Sarwate, and K. Sinha. A near-optimal algorithm for differentially-private principal components. The Journal of Machine Learning Research, 14(1):2905–2943, 2013. [22] D. Coomans and D. L. Massart. Alternative k-nearest neighbour rules in supervised pattern recognition: Part 1. k-nearest neighbour classification by using alternative voting rules. Analytica Chimica Acta, 136:15–27, 1982. [23] Council of European Union. Council regulation (eu) no 2012/0011, 2014. https://eur-lex. europa.eu/legal-content/EN/TXT/?uri=CELEX:52012PC0011. [24] Council of European Union. Council regulation (eu) no 2016/678, 2014. https://eur-lex. europa.eu/eli/reg/2016/679/oj. [25] M. Courbariaux, Y. Bengio, and J.-P. David. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014. [26] T. M. Cover and J. A. Thomas. 
Elements of information theory. John Wiley & Sons, 2012. [27] R. E. W. D. A. Belsley, E. Kuh. Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. John Wiley & Sons, Inc., New York, NY, USA, 1980. [28] S. Dasgupta and A. Gupta. An elementary proof of a theorem of johnson and lindenstrauss. Random Structures and Algorithms, 22(1):60–65, 2003. [29] L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. IEEE Transactions on Information Theory, 25(5):601–604, 1979. [30] C. Dwork, A. Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014. [31] Z. Erkin, T. Veugen, T. Toft, and R. L. Lagendijk. Generating private recommendations efficiently using homomorphic encryption and data packing. IEEE transactions on information forensics and security, 7(3):1053–1066, 2012. [32] J. Friedman, T. Hastie, and R. Tibshirani. The elements of statistical learning. Number 10. Springer series in statistics New York, 2001. [33] K. J. Galinsky, P.-R. Loh, S. Mallick, N. J. Patterson, and A. L. Price. Population structure of uk biobank and ancient eurasians reveals adaptation at genes influencing blood pressure. The American Journal of Human Genetics, 99(5):1130–1139, 2016. [34] A. Gardner, C. A. Duncan, J. Kanno, and R. Selmic. 3d hand posture recognition from small unlabeled point sets. In 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 164–169. IEEE, 2014. [35] A. Gardner, J. Kanno, C. A. Duncan, and R. Selmic. Measuring distance between unordered sets of different sizes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 137–143, 2014. [36] P. Geurts, D. Ernst, and L. Wehenkel. Extremely randomized trees. Machine learning, 63(1):3–42, 2006. [37] A. Ghorbani and J. Zou. Data shapley: Equitable valuation of data for machine learning. arXiv preprint arXiv:1904.02868, 2019. [38] R. M. Gray and D. L. Neuhoff. Quantization. IEEE transactions on information theory, 44(6):2325–2383, 1998. [39] S. Guha, R. Rastogi, and K. Shim. Cure: an efficient clustering algorithm for large databases. In ACM Sigmod Record, pages 73–84. ACM, 1998. [40] P. Gysel, J. Pimentel, M. Motamedi, and S. Ghiasi. Ristretto: A framework for empirical study of resource-efficient inference in convolutional neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2018. 11 [41] S. Hammarling and C. Lucas. Updating the qr factorization and the least squares problem. Tech. Report, The University of Manchester (2008), 2008. [42] X. Han, R. Wang, Y. Zhou, L. Fei, H. Sun, S. Lai, A. Saadatpour, Z. Zhou, H. Chen, F. Ye, et al. Mapping the mouse cell atlas by microwell-seq. Cell, 172(5):1091–1107, 2018. [43] N. J. Higham. Accuracy and stability of numerical algorithms, volume 80. Siam, 2002. [44] A. Hinneburg and H.-H. Gabriel. Denclue 2.0: Fast clustering based on kernel density estimation. In International symposium on intelligent data analysis, pages 70–80. Springer, 2007. [45] W. B. Johnson and J. Lindenstrauss. Extensions of lipschitz mappings into a hilbert space. Contemporary mathematics, 26(189-206):1, 1984. [46] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Computer Architecture (ISCA), 2017 ACM/IEEE 44th Annual International Symposium on, pages 1–12. IEEE, 2017. [47] M. 
Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross- validation. Neural computation, 11(6):1427–1453, 1999. [48] A. Knoblauch. Closed-form expressions for the moments of the binomial probability distribution. SIAM Journal on Applied Mathematics, 69(1):197–204, 2008. [49] Z. F. Knops, J. A. Maintz, M. A. Viergever, and J. P. Pluim. Normalized mutual information based registration using k-means clustering and shading correction. Medical image analysis, 10(3):432–439, 2006. [50] S. Kutin and P. Niyogi. Almost-everywhere algorithmic stability and generalization error: Tech. rep. Technical report, TR-2002-03: University of Chicago, Computer Science Department, 2002. [51] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [52] D. Lin, S. Talathi, and S. Annapureddy. Fixed point quantization of deep convolutional networks. In International Conference on Machine Learning, pages 2849–2858, 2016. [53] S. Lloyd. Least squares quantization in pcm. IEEE transactions on information theory, 28(2):129– 137, 1982. [54] D. G. Lowe et al. Object recognition from local scale-invariant features. In ICCV, number 2, pages 1150–1157, 1999. [55] J. H. Maindonald. Statistical Computation. John Wiley & Sons, Inc., New York, NY, USA, 1984. [56] Y. Meidan, M. Bohadana, Y. Mathov, Y. Mirsky, A. Shabtai, D. Breitenbacher, and Y. Elovici. N-baiot—network-based detection of iot botnet attacks using deep autoencoders. IEEE Pervasive Computing, 17(3):12–22, 2018. [57] V. M. Muggeo. Estimating regression models with unknown break-points. Statistics in medicine, 22(19):3055–3071, 2003. [58] V. M. Muggeo. Testing with a nuisance parameter present only under the alternative: a score- based approach with application to segmented modelling. Journal of Statistical Computation and Simulation, 86(15):3059–3067, 2016. [59] V. M. Muggeo et al. Segmented: an r package to fit regression models with broken-line relation- ships. R news, 8(1):20–25, 2008. [60] S. Mukherjee, P. Niyogi, T. Poggio, and R. Rifkin. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics, 25(1-3):161–193, 2006. [61] E. A. Nadaraya. On estimating regression. Theory of Probability & Its Applications, 9(1):141–142, 1964. 12 [62] V. Nikolaenko, U. Weinsberg, S. Ioannidis, M. Joye, D. Boneh, and N. Taft. Privacy-preserving ridge regression on hundreds of millions of records. In Security and Privacy (SP), 2013 IEEE Symposium on, pages 334–348. IEEE, 2013. [63] O. Ohrimenko, F. Schuster, C. Fournet, A. Mehta, S. Nowozin, K. Vaswani, and M. Costa. Oblivious multi-party machine learning on trusted processors. In USENIX Security Symposium, pages 619–636, 2016. [64] N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, and K. Talwar. Semi-supervised knowledge transfer for deep learning from private training data. arXiv preprint arXiv:1610.05755, 2016. [65] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, et al. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825–2830, 2011. [66] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pret- tenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 
Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. [67] D. Peleg. Distributed computing. SIAM Monographs on discrete mathematics and applications, 5:1–1, 2000. [68] T. Poggio, R. Rifkin, S. Mukherjee, and P. Niyogi. General conditions for predictivity in learning theory. Nature, 428(6981):419, 2004. [69] J. Qin, W. Fu, H. Gao, and W. X. Zheng. Distributed k-means algorithm and fuzzy c-means algorithm for sensor networks based on multiagent consensus theory. IEEE transactions on cybernetics, 47(3):772–783, 2016. [70] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems, pages 1177–1184, 2008. [71] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525–542. Springer, 2016. [72] P. J. Rousseeuw. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics, 20:53–65, 1987. [73] V. Schellekens and L. Jacques. Quantized compressive k-means. IEEE Signal Processing Letters, 25(8):1211–1215, 2018. [74] S. Schelter. “amnesia”–towards machine learning models that can forget user data very fast. In 1st International Workshop on Applied AI for Database Systems and Applications (AIDB’19), 2019. [75] F. Schomm, F. Stahl, and G. Vossen. Marketplaces for data: an initial survey. ACM SIGMOD Record, 42(1):15–26, 2013. [76] B. Schrauwen, D. Verstraeten, and J. Van Campenhout. An overview of reservoir computing: theory, applications and implementations. In Proceedings of the 15th european symposium on artificial neural networks. p. 471-482 2007, pages 471–482, 2007. [77] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. Journal of Machine Learning Research, 11(Oct):2635–2670, 2010. [78] C. E. Shannon. Communication theory of secrecy systems. Bell system technical journal, 28(4):656–715, 1949. [79] C. Sudlow, J. Gallacher, N. Allen, V. Beral, P. Burton, J. Danesh, P. Downey, P. Elliott, J. Green, M. Landray, et al. Uk biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS medicine, 12(3):e1001779, 2015. [80] H.-L. Truong, M. Comerio, F. De Paoli, G. Gangadharan, and S. Dustdar. Data contracts for cloud-based data marketplaces. International Journal of Computational Science and Engineering, 7(4):280–295, 2012. 13 [81] C.-H. Tsai, C.-Y. Lin, and C.-J. Lin. Incremental and decremental training for linear classification. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 343–352. ACM, 2014. [82] S. Van Der Walt, S. C. Colbert, and G. Varoquaux. The numpy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2):22, 2011. [83] C. F. Van Loan and G. H. Golub. Matrix computations. Johns Hopkins University Press, 1983. [84] V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on cpus. Citeseer. [85] M. Veale, R. Binns, and L. Edwards. Algorithms that remember: model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133):20180083, 2018. [86] E. F. Villaronga, P. Kieseberg, and T. Li. 
Humans forget, machines remember: Artificial intelligence and the right to be forgotten. Computer Law & Security Review, 34(2):304–313, 2018. [87] N. X. Vinh, J. Epps, and J. Bailey. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(Oct):2837–2854, 2010. [88] G. I. Webb. Lazy Learning, pages 571–572. Springer US, 2010. [89] J. Yin and Y. Meng. Self-organizing reservior computing with dynamically regulated cortical neural networks. In The 2012 International Joint Conference on Neural Networks (IJCNN), pages 1–7. IEEE, 2012. [90] S. Zeb and M. Yousaf. Updating qr factorization procedure for solution of linear least squares problem with equality constraints. Journal of inequalities and applications, 2017(1):281, 2017. [91] J. Zhang, A. May, T. Dao, and C. Ré. Low-precision random fourier features for memory- constrained kernel approximation. arXiv preprint arXiv:1811.00155, 2018. [92] W. Zhao, H. Ma, and Q. He. Parallel k-means clustering based on mapreduce. In IEEE Interna- tional Conference on Cloud Computing, pages 674–679. Springer, 2009. 14 # A Supplementary Materials Here we provide material supplementary to the main text. While some of the material provided here may be somewhat redundant, it also contains technical minutia perhaps too detailed for the main body. # A.1 Online Data Deletion We precisely define the notion of a learning algorithm for theoretical discussion in the context of data deletion. Definition A.1. Learning Algorithm A learning algorithm A is an algorithm (on some standard model of computation) taking values in some hypothesis space and metadata space H×M based on an input dataset D. Learning algorithm A may be randomized, implying a conditional distribution over H ×M given D. Finally, learning algorithms must process each datapoint in D at least once, and are constrained to sequential computation only, yielding a runtime bounded by Ω(n). We re-state the definition of data deletion. We distinguish between a deletion operation and a robust deletion operation. We focus on the former throughout our main body, as it is appropriate for average-case analysis in a non-security context. We use =d to denote distributional equality. Definition A.2. Data Deletion Operation Fix any dataset D and learning algorithm A. Operation RA is a deletion operation for A if RA(D,A(D),i) =d A(D−i) for any i selected independently of A(D). For notational simplicity, we may let RA refer to an entire sequence of deletions (∆ = {i1,i2,...,im}) by writing RA(D,A(D),∆). This notation means the output of a sequence of applications of RA to each i in deletion sequence ∆. We also may drop the dependence on A when it is understood for which A the deletion operation R corresponds. We also drop the arguments for A and R when they are understood from context. For example, when dataset D can be inferred from context, we let A−i directly mean A(D−i) and when and deletion stream ∆ can be inferred, we let R directly mean R(D,A(D),∆). Our definition is somewhat analogous to information-theoretic (or perfect) secrecy in cryptography [78]. Much like in cryptography, it is possible to relax to weaker notions – for example, by statistically approximating deletion and bounding the amount of computation some hypothetical adversary could use to determine if a genuine deletion took place. 
Such relaxations are required for encryption algorithms because perfect secrecy can only be achieved via one-time pad [78]. In the context of deletion operations, retraining from scratch is, at least slightly, analogous to one-time pad encryption: both are simple solutions that satisfy distributional equality requirements, but both solutions are impractical. However, unlike in encryption, when it comes to deletion, we can, in fact, at least for some learning algorithms, find deletion operations that would be both practical and perfect. The upcoming robust definition may be of more interest in a worst-case, security setting. In such a setting, an adaptive adversary makes deletion requests while also having perfect eavesdropping capabilities to the server (or at least the internal state of the learning algorithm, model and metadata). Definition A.3. Robust Data Deletion Operation Fix any dataset D and learning algorithm A. Operation RA is a robust deletion operation if RA(D,A(D),i) =d A(D−i) in distribution, for any i, perhaps selected by an adversarial agent with knowledge of A(D). To illustrate the difference between these two definitions, consider Q-k-means and DC-k-means. Assume an adversary has compromised the server with read-access and gained knowledge of the algorithm’s internal state. Further assume that said adversary may issue deletion requests. Such a powerful adversary could compromise the exactness of DC-k-means deletion operations by deleting datapoints from specific leaves. For example, if the adversary always deletes datapoints partitioned to the first leaf, then the number of datapoints assigned to each leaf is no longer uniform or independent of deletion requests. In principle, this, at least rigorously speaking, violates equality in distribution. Note that this can only occur if requests are somehow dependent on the partition. However, despite an adversary being able to compromise the correctness of the deletion operation, it cannot compromise the efficiency. That is because efficiency depends on the maximum number of datapoints partitioned to a particular leaf, a quantity which is decided randomly without input from the adversary. 15 In the case of Q-k-means we can easily see the deletion is robust to the adversary by the enforced equality of outcome imposed by the deletion operation. However, an adversary with knowledge of algorithm state could make the Q-k-means deletion operation entirely inefficient by always deleting an initial centroid. This causes every single deletion to be satisfied by retraining from scratch. From the security perspective, it could be of interest to study deletion operations that are both robust and efficient. We continue by defining the online data deletion setting in the average-case. # Definition A.4. Online Data Deletion (Average-Case) We may formally define the runtime in the online deletion setting as the expected runtime of Algorithm 3. We amortize the total runtime by m. # Algorithm 3 Online Data Deletion Input: Dataset D, learning algorithm A, deletion operation R Parameters: α ∈ (0,1) µ ← A(D) m ← Θ(|D|α) for τ = 1 to m do i ← Unif[1,...,|D|] //constant time µ ← R(D,µ,i) D ← D−i //constant time end for Fractional power regime. For dataset size n, when m = Θ(nα) for 0 < α < 1, we say we are in the fractional power regime. For k-means, our proposed algorithms achieve the ideal lower bound for small enough α, but not for all α in (0,1). 
Online data deletion is interesting in both the average-case setting (treated here), where the indices {i1,...,im} are chosen uniformly and independently without replacement, as well as in a worst-case setting, where the sequence of indices is computed adversarially (left for future work). It may also be practical to include a bound on the amount of memory available to the data deletion operation and model (including metadata) as an additional constraint.

# Definition A.5. Deletion Efficient Learning Algorithm

Recall the Ω(n/m) lower bound on amortized computation for any sequential learning algorithm in the online deletion setting (Section 3). Given some fractional power scaling m = Θ(n^α), we say an algorithm A is α-deletion efficient if it runs Algorithm 3 in amortized time O(n^{1−α}).

Inference Time Of course, lazy learning and non-parametric techniques are a clear exception to our notion of a learning algorithm. For these methods, data is processed at inference time rather than training time; a more complete study of the systems trade-offs between training time, inference time, and deletion time is left for future work.

# A.2 Approximate Data Deletion

We present one possible relaxation from exact to approximate data deletion.

# Definition A.6. Approximate Deletion

We say that a data deletion operation RA is a δ-deletion for algorithm A if, for all D and for every measurable subset S ⊆ H×M:
Pr[A(D−i) ∈ S | D−i] ≥ δ · Pr[RA(D,A(D),i) ∈ S | D−i].
The above definition corresponds to requiring that the probability that the data deletion operation returns a model in some specified set, S, cannot be more than a δ^{−1} factor larger than the probability that algorithm A retrained on the dataset D−i returns a model in that set. We note that the above definition does allow for the possibility that some outcomes that have positive probability under A(D−i) have zero probability under the deletion operation. In such a case, an observer could conclude that the model was returned by running A from scratch.

# A.2.1 Approximate Deletion and Differential Privacy

We recall the definition of differential privacy [30]. A map, A, from a dataset D to a set of outputs, H, is said to be ε-differentially private if, for any two datasets D1, D2 that differ in a single datapoint, and any subset S ⊆ H,
Pr[A(D1) ∈ S] ≤ e^ε · Pr[A(D2) ∈ S].
Under the relaxed notion of data deletion, it is natural to consider privatization as a means to support approximate deletion. The idea could be to privatize the model, and then resolve deletion requests by ignoring them. However, there are some nuances involved here that one should be careful of. For example, differential privacy does not privatize the number of datapoints, but this should not be leaked in data deletion. Furthermore, since we wish to support a stream of deletions in the online setting, we would need to use group differential privacy [30], which can greatly increase the amount of noise needed for privatization. Even worse, this requires selecting the group size (i.e. total privacy budget) during training time (at least for canonical constructions such as the Laplace mechanism). In differential privacy, this group size is not necessarily a hidden parameter. In the context of deletion, it could leak information about the total dataset size as well as how many deletions any given model instance has processed.
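For reference, the standard group-privacy property from [30] that the remark above appeals to can be stated as follows (a known fact, restated here for convenience):

```latex
% Group privacy (Dwork and Roth [30]): if A is \varepsilon-differentially private,
% then for any datasets D, D' differing in at most m points and any measurable S,
\Pr[A(D) \in S] \;\le\; e^{m\varepsilon} \cdot \Pr[A(D') \in S],
% so hiding a stream of m deletions behind a single privatized model requires
% noise calibrated to a budget of m\varepsilon chosen at training time.
```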
While privatization-like methods are perhaps a viable approach to support approximate deletion, there remain some technical details to work out, and this is left for future work.

# B Algorithmic Details

In Appendix B, we present pseudo-code for the algorithms described in Section 4. We also reference https://github.com/tginart/deletion-efficient-kmeans for Python implementations of our algorithms.

# B.1 Quantized k-Means

We present the pseudo-code for Q-k-means (Algo. 4). Q-k-means follows the same iterative protocol as canonical Lloyd's (and makes use of the k-means++ initialization). As mentioned in the main body, there are four key variations from the canonical Lloyd's algorithm that make this method different: quantization, memoization, balance correction, and early termination. The memoization of the optimization state and the early termination for increasing loss are self-explanatory from Algo. 4. We provide more details concerning the balance correction and the quantization step in Appendix B.1.1 and B.1.2, respectively.

# Algorithm 4 Quantized k-means
Input: data matrix D ∈ R^{n×d}
Parameters: k ∈ N, T ∈ N, γ ∈ (0,1), ε > 0
c ← k-means++(D) // initialize centroids with k-means++
Save initial centroids: save(c)
L ← k-means loss of initial partition π
for τ = 1 to T do
    Store current centroids: c′ ← c
    Compute centroids: c ← c(π)
    for κ = 1 to k do
        if |π(cκ)| < γn/k then
            Apply correction to γ-imbalanced partition: cκ ← (|π(cκ)|cκ + (γn/k − |π(cκ)|)c′κ)·k/(γn)
        end if
    end for
    Generate random phase θ ∼ Unif[−1/2, 1/2]^d
    Quantize to ε-lattice: ĉ ← Q(c; θ)
    Update partition: π′ ← π(ĉ)
    Save state to metadata: save(c, θ, ĉ, |π′|)
    Compute loss L′
    if L′ < L then
        (c, π, L) ← (ĉ, π′, L′) // update state
    else
        break
    end if
end for
return c // output final centroids as model

Although it is rare, it is possible for a Lloyd's iteration to result in a degenerate (empty) cluster. In this scenario, we have two reasonable options. All of the theoretical guarantees remain valid under both of the following options. The first option is to re-initialize a new cluster via a k-means++ seeding. Since the number of clusters k and iterations T are constant, this does not impact any of the asymptotic deletion efficiency results. The second option is to simply leave a degenerate partition. This does not impact the upper bound on expected statistical performance, which is derived only as a function of the k-means++ initialization. For most datasets, this issue hardly matters in practice, since Lloyd's iterations do not usually produce degenerate clusters (even in the presence of quantization noise). In our implementation, we have chosen to re-initialize degenerate clusters, and are careful to account for this in our metadata, since such a re-initialization could trigger the need to retrain at deletion time if the re-initialized point is part of the deletion stream.

We present the pseudo-code for the deletion operation (Algo. 5), and then elaborate on the quantization scheme and balance correction.

# Algorithm 5 Deletion Op for Q-k-means
Input: data matrix D ∈ R^{n×d}, target deletion index i, training metadata
Obtain target deletion point p ← Di
Retrieve initial centroids from metadata: load(c0)
if p ∈ c0 then // selected as an initial point
    return Q-k-means(D−i) // need to retrain from scratch
else
    for τ = 1 to T do
        Retrieve state for iteration τ: load(c, θ, ĉ, |π′|)
        Perturbed centroid: c′κ ← the centroid of π(cκ) with p removed, for the cluster κ containing p
        Apply γ-correction to c′κ if necessary
        Quantize perturbed centroid: ĉ′κ ← Q(c′κ; θ)
        if ĉ′κ ≠ ĉκ then // centroid perturbed: unstable quantization
            return Q-k-means(D−i) // need to retrain from scratch
        end if
        Update metadata with perturbed state: save(c′, θ, ĉ′, |π′|)
    end for
end if
Update D ← D−i
return ĉ // successfully verified centroid stability

We proceed to elaborate on the details of the balance correction and quantization steps.

# B.1.1 γ-Balanced Clusters

# Definition B.1. γ-Balanced
Given a partition π, we say it is γ-balanced if |πκ| ≥ γn/k for all partitions κ. The partition is γ-imbalanced if it is not γ-balanced.

In Q-k-means, imbalanced partitions can lead to unstable quantized centroids. Hence, it is preferable to avoid such partitions. As can be seen in the pseudo-code, we add mass to small clusters to correct for γ-imbalance. At each iteration, we apply the following formula to every cluster κ such that |πκ| < γn/k:
cκ ← (|π(cκ)|cκ + (γn/k − |π(cκ)|)c′κ)·k/(γn),
where c′κ is the corresponding centroid from the previous iteration. In prose, for small clusters, current centroids are averaged with the centroids from the previous iteration to increase stability.

For use in practice, a choice of γ must be made. If no class balance information is known, then, based on our observations, setting γ = 0.2 is a solid heuristic for all but severely imbalanced datasets, in which case it is likely that DC-k-means would be preferable to Q-k-means.
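A literal NumPy transcription of this correction (a sketch that follows the formula exactly as written above) is:

```python
import numpy as np

def balance_correct(c_new, c_prev, cluster_size, n, k, gamma=0.2):
    """Blend an under-populated cluster's new centroid with its previous centroid so
    that the effective mass is gamma*n/k, as in the correction step of Algorithm 4."""
    target = gamma * n / k
    if cluster_size >= target:
        return c_new
    return (cluster_size * c_new + (target - cluster_size) * c_prev) * (k / (gamma * n))
```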
end if Update metadata with perturbed state: save(c’ ,0,¢’,|7’|) end for end if Update D+ D_; return é //Successfully verified centroid stability We proceed elaborate on the details of the balance correction and quantization steps # B.1.1 γ-Balanced Clusters # Definition B.1. γ-Balanced Given a partition π, we say it us γ-balanced if |πκ| ≥ γn γ-imbalanced if it is not γ-balanced. k for all partitions κ. The partition is In Q-k-means, imbalanced partitions can lead to unstable quantized centroids. Hence, it is preferable to avoid such partitions. As can be seen in the pseudo-code, we add mass to small clusters to correct for γ-unbalance. At each iteration we apply the following formula on all clusters such that |πκ| ≥ γn k : cκ ← |π(cκ)|cκ +( γn In prose, for small clusters, current centroids are averaged with the centroids from the previous iteration to increase stability. For use in practice, a choice of γ must be made. If no class balance information is known, then, based on our observations, setting γ = 0.2 is a solid heuristic for all but severely imbalanced datasets, in which case it is likely that DC-k-means would be preferable to Q-k-means. # B.1.2 Quantizing with an ¢-Lattice We detail the quantization scheme used. A quantization maps analog values to a discrete set of points. In our scheme, we uniformly cover R“ with an ¢-lattice, and round analog values to the nearest vertex 18 on the lattice. It is also important to add an independent, uniform random phase shift to each dimension of lattice, effectively de-biasing the quantization. We proceed to formally define our quantization Q(,9)- Q(c,9) is parameterized by a phase shift de[- 4,4)? and an granularity parameter ¢ > 0. For brevity, we omit the explicit dependence on phase and granularity when it is clear from context. For a given (€,0): a(x) =argmin <zi{\|2—(0+3)||2} Q(x) =€(0+a(e)) We set {θ}t 0 with an iid random sequence such that θτ ∼ Unif [− 1 2 , 1 2 ]d. # B.2 Divide-and-Conquer k-Means We present pseudo-code for the deletion operation of divide-and-conquer k-means. The pseudo-code for the training algorithm may be found in the main body. The deletion update is conceptually simple. Since a deleted datapoint only belong to one leaf’s dataset, we only need recompute the sub-problems on the path from said leaf to the root. # Algorithm 6 Deletion Op for DC-k-means Input: data matrix D ∈ Rn×d, target deletion index i, model metadata M Obtain target deletion point p ← Di node ← leaf node assignment of p node.dataset ← node.dataset\p while node is not root do node.parent.data ← node.parent.data ode.centroids node.centroids ← k-means++(node.data,k,T ) node.parent.dataset.add(node.centroids) node ← node.parent end while node.centroids ← k-means++(node.dataset,k,T ) Update D ← D−i return node.centroids # B.3 Initialization for k-Means For both of our algorithms, we make use of the k-means++ initialization scheme [5]. This initialization is commonplace, and is the standard initialization in many scientific libraries, such as Sklearn [66]. In order to provide a more self-contained presentation, we provide some pseudo-code for the k-means++ initialization. # Algorithm 7 Initialization by k-means++ Input: data matrix D€ R”* 4 number of clusters k i<Uni{l,...,n} I< {Di}. uO" ER”. for 1<l<kdo uj =min,e1| —Dj\|? forall 1<j<n = jai Sample i~ gu Ie IU{Di} end for return [ # C Mathematical Details Here, we provide proofs for the claims in the main body. We follow notation introduced in the main body and Appendix A. 
As a notational shorthand, we denote A(D−i) by A−i and R(D, A(D), D−i) by R when there is only one dataset in context. Also, when it is unambiguous, we use A to denote the specific learning algorithm in question and R to denote its corresponding deletion operation.

# C.1 Proof of Theorem 4.1

Refer to the main body for the statement of Theorem 4.1. Here is an abridged version:

Theorem. Q-k-means supports m deletions in expected time O(m²d^{5/2}/ε).

Note that we assume the dataset is scaled onto the unit hypercube. Otherwise, the theorem still holds under an assumed constant-factor radial bound. We prove the theorem in three successive steps, given by Lemma C.1 through Lemma C.3.

Lemma C.1. Define C = [−ε/2, ε/2]^d for some ε > 0; C is the hypercube in Euclidean d-space of side length ε centered at the origin. Let C′ = [−(ε−2ε′)/2, (ε−2ε′)/2]^d for some ε′ < ε. Let X be a uniform random variable with support C. Then Pr[X ∈ C \ C′] ≤ 2dε′/ε.

Proof. (Lemma C.1) If X ∈ C \ C′, then there exists some i ∈ {1,...,d} such that X_i ∈ [−ε/2, −(ε−2ε′)/2] ∪ [(ε−2ε′)/2, ε/2]. Marginally, Pr[X_i ∈ [−ε/2, −(ε−2ε′)/2] ∪ [(ε−2ε′)/2, ε/2]] = 2ε′/ε. Taking a union bound over the d dimensions obtains the bound.

We make use of Lemma C.1 in proving the following lemma. First, recall the definition of our quantization scheme Q from Section 3:

Q(x) = ε(θ + argmin_{z∈Z^d} {||x − ε(θ + z)||₂}).

We take θ ∼ Unif[−1/2, 1/2]^d, implying a distribution for Q.

Lemma C.2. Let Q be a uniform ε-lattice quantization over R^d with uniform phase shift θ. Let Q(·) denote the quantization mapping over R^d and let Q[·] denote the quantization image for subsets of R^d. Let X ∈ R^d. Then Pr[{Q(X)} ≠ Q[B_{ε′}(X)]] ≤ 2dε′/ε, where B_{ε′}(X) is the ε′-ball about X under the Euclidean norm.

Proof. (Lemma C.2) Due to invariance of measure under translation, we may apply a coordinate transformation translating Q(X) to the origin of R^d. Under this coordinate transform, X ∼ Unif[−ε/2, ε/2]^d. Further, note that Pr[B_{ε′}(X) ⊂ [−ε/2, ε/2]^d] is precisely equivalent to Pr[X ∈ [−(ε−2ε′)/2, (ε−2ε′)/2]^d]. Because X is uniform, applying Lemma C.1 as an upper bound completes the proof.

With Lemma C.2 in hand, we proceed to state and prove Lemma C.3.

Lemma C.3. Let D be a dataset on [0,1]^d of size n. Let c̃_I(D) be the centroids computed by Q-k-means with initialization I and parameters T, k, ε, and γ. Then, with probability at least 1 − 2mT k²d^{3/2}/(γnε), it holds that c̃_I(D) = c̃_I(D−∆) for any ∆ ⊂ D with |∆| ≤ m and ∆ ∩ I = ∅, where the probability is with respect to the randomness in the quantization phase.

Proof. (Lemma C.3) We analyze two instances of the Q-k-means algorithm operating with the same initial centroids and the same sequence of iid quantization maps {Q_τ}_{τ=1}^T. One instance runs on input dataset D and the other runs on input dataset D−∆. This is the only difference between the two instances.

Let c^{(τ,κ)}_I(δ) denote the κ-th analog (i.e. non-quantized) centroid at the τ-th iteration of Q-k-means on some input dataset δ with initialization I. By construction, for any datasets δ, δ′, we have that c̃_I(δ) = c̃_I(δ′) if Q_τ(c^{(τ,κ)}_I(δ)) = Q_τ(c^{(τ,κ)}_I(δ′)) for all τ ∈ {1,...,T} and all κ ∈ {1,...,k}.

Fix any particular τ and κ. We can bound ||c^{(τ,κ)}_I(D) − c^{(τ,κ)}_I(D−∆)||₂ as follows. Note that c^{(τ,κ)}_I(D) = (1/|π_κ|) Σ_i D_i 1(D_i ∈ π_κ), where 1(·) denotes the indicator function. Furthermore, c^{(τ,κ)}_I(D−∆) = (1/|π_κ \ ∆|) Σ_i D_i 1(D_i ∈ π_κ) 1(D_i ∉ ∆). Assume that |π_κ| ≥ γn/k. Because |∆| ≤ m and ||D_i||₂ ≤ √d, these centroids can differ by at most mk√d/(γn). On the other hand, assume that |π_κ| < γn/k. In this case, the γ-correction still ensures that the centroids differ by at most mk√d/(γn). This bounds ||c^{(τ,κ)}_I(D) − c^{(τ,κ)}_I(D−∆)||₂ ≤ mk√d/(γn).

To complete the proof, apply Lemma C.2 setting ε′ = mk√d/(γn). Taking a union bound over τ ∈ {1,...,T} and κ ∈ {1,...,k} yields the desired result.

We are now ready to complete the theorem. We briefly sketch and summarize the argument before presenting the proof. Recall the deletion algorithm for Q-k-means (Appendix B). Using the runtime memo, we verify that the deletion of a point does not change what would have been the algorithm's output. If it would have changed the output, then we retrain the entire algorithm from scratch. Thus, we take a weighted average of the computational expense of these two scenarios. Recall that retraining from scratch takes time O(nkT d) and verifying the memo at deletion time takes time O(kT d). Finally, note that we must sequentially process a sequence of m deletions, with a valid model output after each request. We are mainly interested in the scaling with respect to m, ε and n, treating other factors as non-asymptotic constants in our analysis. We now proceed with the proof.

Theorem. Q-k-means supports m deletions in expected time O(m²d^{5/2}/ε).

Proof. (Correctness) In order for R to be a valid deletion operation, we require that R =_d A−i for any i. In this setting, we identify models with the output centroids: A(D) = c̃_I(D). Consider the sources of randomness: the iid sequence of random phases and the k-means++ initialization. Let I(·) be a set-valued random function computing the k-means++ initialization over a given dataset. Let E denote the event that D_i ∉ I(D). Then, from the construction of k-means++, we have that for all j ≠ i, Pr[D_j ∈ I(D−i)] = Pr[D_j ∈ I(D) | E]. Thus, I(D) and I(D−i) are equal in distribution conditioned on i not being an initial centroid. Note that this is evident from the construction of k-means++ (see Algorithm 7).

Let θ denote the iid sequence of random phases for A and let θ−i denote the iid sequence of random phases for A−i. Within event E, we define a set of events E′(θ̄), parameterized by θ̄, as the event that the output centroids are stable under deletion conditioned on a given sequence of phases θ̄:

E = {D_i ∉ I(D)},  E′(θ̄) = {A | (θ = θ̄) = A−i | (θ−i = θ̄)}.

By construction of R, we have R = A·1(E′(θ)) + A−i·1(E′(θ)ᶜ), where the event E′(θ) is verified using the training-time memo. To conclude, let S be any Borel set:

Pr[R ∈ S] = Pr[E′(θ)] Pr[R ∈ S | E′(θ)] + (1 − Pr[E′(θ)]) Pr[R ∈ S | E′(θ)ᶜ]   (by the law of total probability)
= Pr[E′(θ)] Pr[A ∈ S | E′(θ)] + (1 − Pr[E′(θ)]) Pr[A−i ∈ S]   (by construction of R)
= Pr[E′(θ)] Pr[A−i ∈ S | θ−i = θ] + (1 − Pr[E′(θ)]) Pr[A−i ∈ S]   (by definition of E′)
= Pr[E′(θ)] Pr[A−i ∈ S] + (1 − Pr[E′(θ)]) Pr[A−i ∈ S] = Pr[A−i ∈ S]   (since θ =_d θ−i).

Proof. (Runtime) Let W be the total runtime of R after training A once and then satisfying m deletion requests with R. Let ∆ = {i₁, i₂, ..., i_m} denote the deletion sequence, with each deletion sampled uniformly without replacement from D. Let Ψ be the event that the centroids are stable for all m deletions. Using Theorem 3.1 to bound the probability of the complement event Pr[Ψ̄]:

E[W] ≤ E[W | Ψ] + Pr[Ψ̄] E[W | Ψ̄] = O(mkT d) + O(ε⁻¹ m² T² k³ d^{5/2}) = O(m² d^{5/2}/ε).

In Ψ the centroids are stable, and verifying the memo takes time O(mkT d) in total. In Ψ̄ we coarsely upper bound W by assuming we re-train from scratch to satisfy each deletion.
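To make the verify-or-retrain argument concrete, here is a minimal sketch of the deletion operation it analyzes. The memo layout (the per-iteration, per-cluster state saved by the training loop) and the helper names are our assumptions rather than the released implementation, and the γ-balance correction is omitted for brevity.

```python
import numpy as np

def centroids_stable_after_deletion(memo, x, eps):
    """Would removing point x leave every quantized centroid unchanged?

    memo[t][j] is assumed to hold, for iteration t and cluster j, the saved state
    {"c": analog centroid, "size": |pi|, "theta": phase, "q": quantized centroid,
     "contains_x": whether x was assigned to this cluster at this iteration}.
    """
    for iteration in memo:                                  # T iterations
        for cl in iteration:                                # k clusters
            if not cl["contains_x"]:
                continue
            analog = (cl["c"] * cl["size"] - x) / max(cl["size"] - 1, 1)
            requantized = eps * (cl["theta"] + np.round(analog / eps - cl["theta"]))
            if not np.allclose(requantized, cl["q"]):
                return False                                # output would change
    return True

def delete_point(memo, D, i, eps, retrain_fn, update_memo_fn):
    """O(kTd) verification; fall back to full retraining only when necessary."""
    if centroids_stable_after_deletion(memo, D[i], eps):
        update_memo_fn(memo, i)                             # cheap bookkeeping, no retraining
        return [cl["q"] for cl in memo[-1]]                 # quantized centroids are unchanged
    return retrain_fn(np.delete(D, i, axis=0))              # rerun Q-k-means from scratch
```

The expensive branch is only taken on the low-probability event that some quantized centroid would move, which is exactly the trade-off the runtime bound above averages over.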
# C.2 Proofs of Corollaries and Propositions

We present the proofs of the corollaries and propositions from the main body. We are primarily interested in the asymptotic effects of n, m, ε, and w. We treat other variables as constants. For the purposes of the online analysis, we let ε = Θ(n^{−β}) for some β ∈ (0,1) and w = Θ(n^ρ) for some ρ ∈ (0,1).

# C.2.1 Proof of Corollary 4.1.1

We state the following theorem of Arthur and Vassilvitskii concerning the k-means++ initialization [5]:

Theorem C.4. (Arthur and Vassilvitskii) Let L* be the optimal loss for a k-means clustering problem instance. Then k-means++ achieves expected loss E L++ ≤ (8 ln k + 16)L*.

We re-state Corollary 4.1.1:

Corollary. Let L be a random variable denoting the loss of Q-k-means on a particular problem instance of size n. Then E L ≤ (8 ln k + 16)L* + ε√(nd(8 ln k + 16)L*) + (1/4)ndε².

Proof. Let c be the initialization produced by k-means++. Let L++ = Σ_i ||c(i) − x_i||₂², where c(i) is the centroid closest to the i-th datapoint. Let ||c(i) − x_i||₂² = Σ_j δ_{ij}², with δ_{ij} denoting the scalar distance between x_i and c(i) in the j-th dimension. Then we may upper bound L ≤ Σ_i Σ_j (δ_{ij}² + ε²/4 + δ_{ij}ε) by adding a worst-case ε/2 quantization penalty in each dimension. This sum reduces to:

L ≤ Σ_{i,j} δ_{ij}² + (1/4) Σ_{i,j} ε² + Σ_{i,j} δ_{ij} ε ≤ L++ + (1/4)ndε² + ε√(nd L++).

The third term comes from the fact that √(nd L++) ≥ Σ_{i,j} δ_{ij} whenever Σ_{i,j} δ_{ij}² = L++ and δ_{ij} ≥ 0 (to see this, treat it as a constrained optimization over the δ_{ij}). Taking expectations:

E L ≤ E L++ + (1/4)ndε² + ε√(nd) E√(L++).

Using Jensen's inequality [26] yields E√(L++) ≤ √(E L++):

E L ≤ E L++ + (1/4)ndε² + ε√(nd) √(E L++).

To complete the proof, apply Theorem C.4:

E L ≤ C L* + (1/4)ndε² + ε√(nd C L*), where C = 8 ln k + 16.

# C.2.2 Proof of Proposition 4.2

Proposition. Let D be a dataset on R^d of size n. Fix parameters T and k for DC-k-means. Let w = Θ(n^ρ) with ρ ∈ (0,1). Then, with a depth-1, w-ary divide-and-conquer tree, DC-k-means supports m deletions in time O(m n^{max(ρ,1−ρ)} d) in expectation, where the expectation is with respect to the randomness in the dataset partitioning.

Proof. (Correctness) We require that R(D, A(D), i) =_d A(D−i). Since each datapoint is assigned to a leaf independently, the removal of a datapoint does not change the distribution of the remaining datapoints over leaves. However, one must be careful when it comes to the number of leaves, which cannot change due to a deletion. This is problematic if the number of leaves is ⌈n^ρ⌉ (or another similar quantity based on n). The simplest way to address this (without any impact on asymptotic rates) is to round the number of leaves to the nearest power of 2. This works because the intended number of leaves will only be off from n^ρ by at most a factor of 2. In the rare event this rounding changes due to a deletion, we default to retraining from scratch; asymptotically, in the fractional-power regime, this can only happen a constant number of times, which does not affect an amortized or average-case time complexity analysis. We proceed to prove the runtime analysis.

Proof. (Runtime) Let W be the total runtime of R after training A once and then satisfying m deletion requests with R. Let ∆ = {i₁, i₂, ..., i_m} denote the deletion sequence, with each deletion sampled uniformly without replacement from D.

Let S be the uniform distribution over n^ρ elements and let Ŝ be the empirical distribution of n independent samples of S. The fraction of datapoints assigned to the i-th leaf is then modeled by Ŝ_i. We treat Ŝ as a probability vector. Let the random variable J = n Ŝ_i with probability Ŝ_i. Thus, J models the distribution over sub-problem sizes for a randomly selected datapoint. Direct calculation yields the following upper bound on runtime:

W ≤ m(O(kT dJ) + O(n^ρ k² T d)),

where the first term is due to the total deletion time at the leaves, the second term is due to the total deletion time at the root, and the factor of m is due to the number of deletions. Hence, we have E(W) ≤ O(mkT d) E(J) + O(m n^ρ k² T d), with E(J) the quantity of interest. Computing E(J) is simple using the second moment of the binomial distribution, denoted by B:

E(J) = E(E(J | Ŝ)) = n Σ_i E(Ŝ_i²).

Noting that Ŝ_i ∼ (1/n) B(n, n^{−ρ}) and E(B(n,p)²) = n(n−1)p² + np [48] yields:

E(J) = n^{ρ−1} E(B(n, n^{−ρ})²) = O(n^{1−ρ}).

This yields the final bound:

E(W) ≤ O(n^{1−ρ} m k T d) + O(n^ρ m k² T d) = O(m max{n^{1−ρ}, n^ρ} d) = O(m n^{max(ρ,1−ρ)} d).

# C.2.3 Proof of Corollary 4.2.1

Corollary. With ε = Θ(n^{−β}) for 0 < β < 1, the Q-k-means algorithm is deletion efficient in expectation if α ≤ (1−β)/2.

Proof. We are interested in the asymptotic scaling of n, m, and ε, and treat other factors as constants. We begin with the expected deletion time from Theorem 4.1, given by O(m²d^{5/2}ε⁻¹). Recall we are using rates ε = Θ(n^{−β}) and m = Θ(n^α). Applying the rates, adding in the training time, and amortizing yields O(n^{1−α} + n^{α+β}). Thus, deletion efficiency follows if 1−α ≥ α+β. Rearranging terms completes the calculation.

# C.2.4 Proof of Corollary 4.2.2

Corollary. With w = Θ(n^ρ) and a depth-1, w-ary divide-and-conquer tree, DC-k-means is deletion efficient in expectation if α ≤ 1 − max(ρ, 1−ρ).

Proof. We are interested in the asymptotic scaling of n, m, and w, and treat other factors as constants. Recall we are using rates w = Θ(n^ρ) and m = Θ(n^α). By Proposition 4.2, the runtime of each deletion is upper bounded by O(n^{max(ρ,1−ρ)}) and the training time is O(n). Amortizing and comparing the terms, deletion efficiency follows if max{ρ, 1−ρ} ≤ 1−α. Rearranging terms completes the calculation.

# D Implementation and Experimental Details

We elaborate on implementation details and the experimental protocol used in the main body, and present some supplementary experiments that inform our understanding of the proposed techniques.

# D.1 Experimental Protocol

We run a k-means baseline (i.e. a k-means++ seeding followed by Lloyd's algorithm), Q-k-means, and DC-k-means on 6 datasets in the simulated online deletion setting. As a proxy for deletion efficiency, we report the wall-clock time of the program execution on a single core of an Intel Xeon E5-2640v4 (2.4GHz) machine. We are careful to only clock the time used by the algorithm and pause the clock when executing test-bench infrastructure operations. We do not account for random OS-level interruptions such as context switches, but we are careful to allocate at most one job per core and we maintain high CPU utilization throughout. For each of the three methods and six datasets, we run five replicates of each benchmark to obtain standard deviation estimates. To initialize the benchmark, each algorithm trains on the complete dataset, which is timed by wall-clock. We then evaluate the loss and clustering performance of the centroids (untimed).
Then, each model must sequentially satisfy a sequence of 1,000 uniformly random (without replacement) deletion requests. The time it takes to satisfy each request is also timed and added to the training time to compute a total computation time. The total computation time of the benchmark is then amortized by dividing by 1,000 (the number of deletion requests). This produces a final amortized wall-clock time. For the k-means baseline, we satisfy deletion via naive re-training. For Q-k-means and DC-k-means we use the respective deletion operations. As part of our benchmark, we also evaluate the statistical performance of each method after deletions 1,10,100, and 1,000. Since we are deleting less than 10% of any of our datasets, the statistical performance metrics do not change significantly throughout the benchmark and neither do the training times (when done from scratch). However, a deletion operation running in time significantly less than it takes to train from scratch should greatly reduce the total runtime of the benchmark. Ideally, this can be achieved without sacrificing too much cluster quality, as we show in our results (Section 5). # Implementation Framework We are interested in fundamental deletion efficiency, however, empirical runtimes will always be largely implementation specific. In order to minimize the implementation dependence of our results, we control by implementing an in-house version of Lloyd’s iterations which is used as the primary optimization sub-routine in all three methods. Our solver is based on the Numpy Python library [82]. Thus, Q-k-means and DC-k-means use the same sub-routine for computing partitions and centroids as does the k-means baseline. Our implementation for all three algorithms can be found at https://github.com/tginart/deletion-efficient-kmeans. # D.1.2 Heuristic Parameter Selection Hyperparameter tuning poses an issue for deletion efficiency. In order to be compliant to the strictest notions of deletion, we propose the following heuristics to select the quantization granularity parameter e and the number of leaves w for Q-k-means and DC-k-means, respectively. Recall that we always set iterations to 10 for both methods. Heurstic Parameter Selection for Q-k-means. Granularity € tunes the centroid stability versus the quantization noise. Intuitively, when the number of datapoints in a cluster is high compared to the dimension, we need lower quantization noise to stabilize the centroids. A good rule-of-thumb is to use ¢= 2! 1810872 )-3 1 , which yields an integer power of 2. The heuristic can be conceptualized as capturing the effective cluster mass per dimension of a dataset. We use an exponent of 1.5 for d, which scales like the stability probability (see Lemmas C.1 - C.3). The balance correction parameter 7 is always set to 0.2, which should work well for all but the most imbalanced of datasets. Heurstic Parameter Selection for DC-k-means. Tree width w tunes the sub-problem size versus the number of sub-problems. Intuitively, it is usually better to have fewer larger sub-problems than many smaller ones. A good rule-of-thumb is to set w to n0.3, rounded to the nearest power of two. # D.1.3 Clustering Performance Metrics We evaluate our cluster quality using the silhouette coefficient and normalized mutual information, as mentioned in the main body. To do this evaluation, we used the routines provided in the Scikit-Learn Python library [65]. 
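The heuristics and evaluation just described are easy to express in code. The following helpers are an illustrative sketch consistent with Appendices D.1.2–D.1.3; the function names and the train_fn/delete_fn signatures are placeholders of ours, not the benchmark code released with the paper.

```python
import time
import numpy as np
from sklearn.metrics import silhouette_score, normalized_mutual_info_score

def heuristic_width(n):
    """Rule-of-thumb DC-k-means tree width: n**0.3 rounded to the nearest power of two."""
    return int(2 ** round(np.log2(n ** 0.3)))

def cluster_quality(X, pred_labels, true_labels, seed=0):
    """Silhouette on a 10k sub-sample plus NMI, as in Appendix D.1.3."""
    sil = silhouette_score(X, pred_labels, sample_size=10_000, random_state=seed)
    nmi = normalized_mutual_info_score(true_labels, pred_labels)
    return sil, nmi

def amortized_benchmark(train_fn, delete_fn, D, n_requests=1_000, seed=0):
    """Wall-clock time of training plus n_requests random deletions, amortized per request."""
    rng = np.random.default_rng(seed)
    requests = rng.permutation(len(D))[:n_requests]     # uniform, without replacement
    start = time.perf_counter()
    model = train_fn(D)
    for i in requests:
        model = delete_fn(model, D, i)                  # dataset bookkeeping left to delete_fn
    return (time.perf_counter() - start) / n_requests
```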
Because computing the silhouette is expensive, for each instance we randomly sub-sample 10,000 datapoints to compute the score. # D.1.4 Scaling We note that all datasets except MNIST undergo a minmax scaling in order to map them into the unit hypercube (MNIST is already a scaled greyscale image). In our main body, we treat this as a one-time scaling inherit to the dataset itself. In practice, the scaling of a dataset can change due to deletions. However, this is a minor concern (at least for minmax scaling) as only a small number of extremal datapoints affect the scale. Retraining from scratch when these points come up as a deletion request does not impact asymptotic runtime, and has a negligible impact on empirical runtime. Furthermore, we point out that scaling is not necessary for our methods to work. In fact, in datasets where the notion 24 of distance remains coherent across dimensions, one should generally refrain from scaling. Our theory holds equally well in the case of non-scaled data, albeit with an additional constant scaling factor such as a radial bound. # D.2 Datasets • Celltypes [42] consists of 12, 009 single cell RNA sequences from a mixture of 4 cell types: microglial cells, endothelial cells, fibroblasts, and mesenchymal stem cells. The data was retrieved from the Mouse Cell Atlas and consists of 10 feature dimensions, reduced from an original 23,433 dimensions using principal component analysis. Such dimensionality reduction procedures are a common practice in computational biology. • Postures [35, 34] consists of 74,975 motion capture recordings of users performing 5 different hand postures with unlabeled markers attached to a left-handed glove. • Covtype [12] consists of 15,120 samples of 52 cartographic variables such as elevation and hillshade shade at various times of day for 7 forest cover types. • Botnet [56] contains statistics summarizing the traffic between different IP addresses for a com- mercial IoT device (Danmini Doorbell). We aim to distinguish between benign traffic data (49,548 instances) and 11 classes of malicious traffic data from botnet attacks, for a total of 1,018,298 instances. • MNIST [51] consists of 60,000 images of isolated, normalized, handwritten digits. The task is to classify each 28×28 image into one of the ten classes. • Gaussian consists of 5 clusters, each generated from 25-variate Gaussian distribution centered at randomly chosen locations in the unit hypercube. 20,000 samples are taken from each of the 5 clusters, for a total of 100,000 samples. Each Gaussian cluster is spherical with variance of 0.8. # D.3 Supplementary Experiments We include three supplementary experiments. Our first is specific to Q-k-means (See Appendix D.3.1), and involves the stability of the quantized centroids against the deletion stream. In our second experiment we explore how the choices of key parameters (€ and w) in our proposed algorithms contribute to the statistical performance of the clustering. In our third experiment, we explore how said choices contribute to the deletion efficiency in the online setting. # D.3.1 Re-training During Deletion Stream for Q-k-means In this experiment, we explore the stability of the quantized centroids throughout the deletion stream. This is important to understand since it is a fundamental behavior of the Q-k-means, and is not an implementation or hardware specific as a quantity like wall-clock time is. 
We plot, as a function of the deletion request index, the average number of times that Q-k-means was forced to re-train from scratch to satisfy a deletion request.

Figure 2: Average retrain occurrences during the deletion stream for Q-k-means (one panel per dataset: Celltype, Covtype, MNIST, Postures, Gaussian, Botnet; x-axis: number of deletion requests, y-axis: number of retrain occurrences).

As we can see in Fig. 2, when the effective dimensionality is higher relative to the sample size, as in the case of MNIST, the retraining count grows at a roughly constant slope across the deletion stream, indicating that the quantization is unable to stabilize the centroids for an extended number of deletion requests.

# D.3.2 Effects of Quantization Granularity and Tree Width on Optimization Loss

Although the viability of hyperparameter tuning in the context of deletion efficient learning is dubious, from a pedagogical point of view it is still interesting to sweep the main parameters (quantization granularity ε and tree width w) for the two proposed methods. In this experiment, we compare the k-means optimization loss for a range of ε and w. As in the main body, we normalize the k-means objective loss to the baseline and restrict ourselves to depth-1 trees.

Figure 3: Loss ratio vs. ε for Q-k-means on 6 datasets.

Figure 4: Loss ratio vs. w for DC-k-means on 6 datasets.

In Fig. 3, Q-k-means performance rapidly deteriorates as ε → 1. This is expected given our theoretical analysis, and is consistent across the six datasets.

On the other hand, in Fig. 4, we see that the relationship between w and loss is far weaker. The general trend among the datasets is that performance decreases as width increases, but this is not always monotonically the case. As mentioned in the main body, it is difficult to analyze the relationship between loss and w theoretically, and, for some datasets, variance among different random seeds can dominate the impact of w.

# D.3.3 Effects of Quantization Granularity and Tree Width on Deletion Efficiency

On the Covtype dataset, we plot the amortized runtimes on the deletion benchmark for a sweep of ε and w for Q-k-means and DC-k-means, respectively. As expected, the runtimes for Q-k-means monotonically increase as ε → 0. The runtimes for DC-k-means are minimized at an optimal tree width of approximately 32-64 leaves.

Figure 5: Amortized runtime (seconds) for Q-k-means as a function of quantization granularity ε on Covtype.

Figure 6: Amortized runtime (seconds) for DC-k-means as a function of tree width w on Covtype.
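The sweeps in Figures 3-6 amount to re-running the pipeline over a grid of parameter values and normalizing to the baseline loss; a minimal sketch of such a sweep is shown below, where run_q_kmeans and run_baseline are placeholder callables standing in for the solvers of Appendix D.1.1.

```python
def loss_ratio_sweep(run_q_kmeans, run_baseline, D, k, epsilons):
    """Normalized k-means objective for Q-k-means across quantization granularities."""
    base = run_baseline(D, k)
    return {eps: run_q_kmeans(D, k, eps) / base for eps in epsilons}

# e.g. loss_ratio_sweep(..., epsilons=[2.0 ** p for p in range(-6, 1)])
```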
Modern learning algorithms involve data processing that is highly complex, costly, and stochastic. This makes it difficult to efficiently quantify the effect of an individual datapoint on the entire model. Complex data processing may result in high-quality statistical learning performance, but results in models for which data deletion is inefficient, and, in the worst case, would require re-training from scratch. On the other hand, simple and structured data processing yields efficient data deletion operations (such as in relational databases) but may not boast as strong statistical performance. This is the central difficulty and trade-off engineers would face in designing deletion efficient learning systems. Hence, we are primarily concerned with deletion efficiency and statistical performance (i.e. the performance of the model in its intended learning task). In principle, these quantities can both be measured theoretically or empirically. We believe that the amortized runtime in the proposed online deletion setting is a natural and meaningful way to measure deletion efficiency. For deletion time, theoretical analysis involves finding the amortized complexity in a particular asymptotic deletion regime. In the empirical setting, we can simulate sequences of online deletion requests from real datasets and measure the amortized deletion time on wall-clocks. For statistical performance, theoretical analysis can be difficult but might often take the shape of a generalization bound or an approximation ratio. In the empirical setting, we can take the actual optimization loss or label accuracy of the model on a real dataset. # E.2 Overparametrization, Dimensionality Reduction and Quantization One primary concern with quantization is that it performs poorly in the face of overparameterized models. In some situations, metric-preserving dimensionality reduction techniques [45, 28] could potentially be used. 27 # E.3 Hyperparameter Tuning Hyperparameter tuning is an essential part of many machine learning pipelines. From the perspective of deletion efficient learning, hyperparameter tuning presents somewhat of a conundrum. Ultimately, in scenarios in which hyperparameter tuning does indeed fall under the scope of deletion, one of the wisest solutions may be to tune on a subset of data that is unlikely to be deleted in the near future, or to pick hyperparameters via good heuristics that do not depend on specific datapoints. 28
{ "id": "1811.00155" }
1907.05019
Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges
We introduce our efforts towards building a universal neural machine translation (NMT) system capable of translating between any language pair. We set a milestone towards this goal by building a single massively multilingual NMT model handling 103 languages trained on over 25 billion examples. Our system demonstrates effective transfer learning ability, significantly improving translation quality of low-resource languages, while keeping high-resource language translation quality on-par with competitive bilingual baselines. We provide in-depth analysis of various aspects of model building that are crucial to achieving quality and practicality in universal NMT. While we prototype a high-quality universal translation system, our extensive empirical analysis exposes issues that need to be further addressed, and we suggest directions for future research.
http://arxiv.org/pdf/1907.05019
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, Yonghui Wu
cs.CL, cs.LG
null
null
cs.CL
20190711
20190711
9 1 0 2 l u J 1 1 ] L C . s c [ 1 v 9 1 0 5 0 . 7 0 9 1 : v i X r a # Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges Naveen Arivazhagan ∗ Ankur Bapna ∗ Orhan Firat ∗ Dmitry Lepikhin Melvin Johnson Maxim Krikun Mia Xu Chen Yuan Cao George Foster Colin Cherry Wolfgang Macherey Zhifeng Chen Yonghui Wu Google AI # Abstract We introduce our efforts towards building a universal neural machine translation (NMT) system capable of translating between any language pair. We set a milestone towards this goal by building a single massively multilingual NMT model handling 103 lan- guages trained on over 25 billion examples. Our system demonstrates effective trans- fer learning ability, significantly improv- ing translation quality of low-resource lan- guages, while keeping high-resource lan- guage translation quality on-par with com- petitive bilingual baselines. We provide in- depth analysis of various aspects of model building that are crucial to achieving qual- ity and practicality in universal NMT. While we prototype a high-quality universal trans- lation system, our extensive empirical anal- ysis exposes issues that need to be further addressed, and we suggest directions for fu- ture research. # Introduction Sequence-to-sequence neural models (seq2seq) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014) have been widely adopted as the state- of-the-art approach for machine translation, both in the research community (Bojar et al., 2016a, 2017, 2018b) and for large-scale production sys- tems (Wu et al., 2016; Zhou et al., 2016; Crego et al., 2016; Hassan et al., 2018). As a highly ex- pressive and abstract framework, seq2seq models can be trained to perform several tasks simulta- neously (Luong et al., 2015), as exemplified by multilingual NMT (Dong et al., 2015; Firat et al., # ∗ Equal ∗ Equal contribution. navari,ankurbpn,[email protected] Correspondence to 2016a; Ha et al., 2016c; Johnson et al., 2017) - us- ing a single model to translate between multiple languages. Multilingual NMT models are appealing for several reasons. Let’s assume we are interested in mapping between N languages; a naive approach that translates between any language pair from the given N languages requires O(N 2) individ- ually trained models. When N is large, the huge number of models become extremely difficult to train, deploy and maintain. By contrast, a mul- tilingual model, if properly designed and trained, can handle all translation directions within a sin- gle model, dramatically reducing the training and serving cost and significantly simplifying deploy- ment in production systems. Apart from reducing operational costs, multi- lingual models improve performance on low and zero-resource language pairs due to joint train- ing and consequent positive transfer from higher- resource languages (Zoph et al., 2016; Firat et al., 2016b; Nguyen and Chiang, 2017; Johnson et al., 2017; Neubig and Hu, 2018; Aharoni et al., 2019; Escolano et al., 2019; Arivazhagan et al., 2019; Hokamp et al., 2019). Unfortunately, this si- multaneously results in performance degradation on high-resource languages due to interference and constrained capacity (Johnson et al., 2017; Tan et al., 2019). Improving translation perfor- mance across the board on both high and low re- source languages is an under-studied and chal- lenging task. 
While multilingual NMT has been widely stud- ied, and the above benefits realized, most ap- proaches are developed under constrained set- tings; their efficacy is yet to be demonstrated in real-world scenarios. In this work, we attempt to study multilingual neural machine translation in the wild, using a massive open-domain dataset containing over 25 billion parallel sentences in 103 languages. We first survey the relevant work in various ar- eas: vocabulary composition, learning techniques, modeling and evaluation. In each area, we iden- tify key determinants of performance and assess their impact in our setting. The result is a map of the landscape on this still largely unexplored frontier of natural language processing (NLP) and machine learning (ML). To the best of our knowl- edge, this is the largest multilingual NMT sys- tem to date, in terms of the amount of training data and number of languages considered at the same time. Based on experiments centered around different aspects of multilingual NMT we high- light key challenges and open problems on the way to building a real-world massively multilin- gual translation system. # 2 Towards Universal Machine Translation Enabling a single model to translate between an arbitrary language pair is the ultimate goal of uni- versal MT. In order to reach this goal, the under- lying machinery, the learner, must model a mas- sively multi-way input-output mapping task under strong constraints: a huge number of languages, different scripting systems, heavy data imbalance across languages and domains, and a practical limit on model capacity. Now let us take a look at the problem from a machine learning perspec- tive. Machine learning algorithms are built on induc- tive biases in order to enable better generaliza- tion (Mitchell, 1997). In the setting of multilin- gual NMT, the underlying inductive bias is that the learning signal from one language should bene- fit the quality of other languages (Caruana, 1997). Under this bias, the expectation is that as we in- crease the number of languages, the learner will generalize better due to the increased amount of information1 added by each language (or task). This positive transfer is best observed for low- resource languages (Zoph et al., 2016; Firat et al., 2016a; Neubig and Hu, 2018). Unfortunately the above mentioned constraints prevent these gains 1Which can be sharing of semantic and/or syntactic struc- ture, or ease of optimization via shared error signals etc. 2 from being progressively applicable: as we in- crease the number of languages with a fixed model capacity2, the positive/negative transfer boundary becomes salient, and high resource languages start to regress due to a reduction in per-task capacity. From a function of mapping perspective, there are three common categories of multilingual NMT models in the literature, depending on the lan- guages covered on the source and target sides: many-to-one, one-to-many and many-to-many models (Dong et al., 2015; Firat et al., 2016a; Johnson et al., 2017). Many-to-one multilingual NMT models learn to map any of the languages in the source language set into the selected target language, usually chosen to be English due to the easy availability of parallel corpora with English on one side. Similarly, one-to-many multilingual NMT models aim to translate a single source lan- guage into multiple target languages. 
Many-to- one multilingual NMT can be categorized as a multi-domain3 learning problem (Dredze et al., 2010; Joshi et al., 2012; Nam and Han, 2015), where the task of translating into a selected lan- guage remains the same, but the input distribution is different across source languages (Arivazhagan et al., 2019). On the other hand, one-to-many multilingual NMT can be considered a multi-task problem (Caruana, 1997; Thrun and Pratt, 1998; Dong et al., 2015), where each source-target pair is a separate task. Many-to-many translation is the super-set of both these tasks. Regardless of the number of languages considered on the source or the target side, improvements in multilingual NMT are expected to arise from positive trans- fer between related domains and transferable tasks. Multi-domain and multi-task learning across a very large number of domains/tasks, with wide data imbalance, and very heterogeneous inter- task relationships arising from dataset noise and topic/style discrepancies, differing degrees of lin- guistic similarity, etc., make the path towards uni- versal MT highly challenging. These problems are typically approached individually and in con- 2Loosely measured in terms of the number of free param- eters for neural networks. 3Note that we use the machine learning notion of domain here where each domain refers to a particular distribution of the input, while the target distribution remain unchanged. strained settings. Here we approach this challenge from the opposite end of the spectrum, determin- ing what is possible with our current technology and understanding of NMT and ML, and probing for the effect of various design strategies in an ex- treme, real-world setting. Our desired features of a truly multilingual translation model can be characterized as: • Maximum throughput in terms of the num- ber of languages considered within a single model. • Maximum inductive (positive) transfer to- wards low-resource languages. • Minimum interference (negative transfer) for high-resource languages. • Robust multilingual NMT models that per- form well in realistic, open-domain settings. In the next sections we analyze different aspects of multilingual NMT, and investigate the implica- tions of scaling up the dataset size and the number of languages considered within a single model. In each section we first discuss approaches described in recent literature, followed by our experimental setup, findings, and analysis. We also highlight some challenges and open problems identified in recent literature or as a result of our analyses. We start by describing our data setup, followed by an analysis of transfer and interference in the mas- sively multilingual setting. We then analyze the pre-processing and vocabulary problems in mul- tilingual NLP, discuss modeling approaches and the effect of increasing model capacity. We close with a discussion of evaluation and open problems in massively multilingual NMT. # 3 Data and Baselines As with any machine learning problem, the qual- ity and the amount of the data has significant im- pact on the systems that can be developed (Good- fellow et al., 2016). Multilingual NMT is typi- cally studied using various public datasets or com- binations of them. 
The most commonly used (i) TED talks (Cettolo et al., datasets include: 2012) which includes 59 languages (Qi et al., 2018) with around 3k to 200k sentence pairs per 3 Data distribution over language pairs 1000000000 100000000 10000000 1000000 100000 High Resource — — Low Resource 10000 {French, German, Spanish, ...} {Yoruba, Sindhi, Hawaiian, ...} Figure 1: Per language pair data distribution of the training dataset used for our multilingual ex- periments. The x-axis indicates the language pair index, and the y-axis depicts the number of train- ing examples available per language pair on a log- arithmic scale. Dataset sizes range from 35k for the lowest resource language pairs to 2 billion for the largest. (ii) European parliamentary doc- language pair. uments (Koehn, 2005) which include versions in 21 European languages having 1M to 2M sentence pairs. (iii) The UNCorpus (Ziemski et al., 2016) is another multilingual dataset of parliamentary doc- uments of United Nations, consisting of around 11 million sentences in 6 languages. (iv) A com- pilation of the datasets used for the WMT News Translation shared task from 2005-19 (Bojar et al., 2016b, 2018a) covers a broader set of domains in around 15 languages, each containing between 10k to 50M sentence pairs. (v) Other smaller par- allel corpora for specific domains are indexed by OPUS (Tiedemann, 2012) for various language pairs. (vi) The Bible corpus is perhaps the most multilingual corpus publicly available, containing 30k sentences in over 900 languages (Tiedemann, 2018). The results of various works on these datasets have greatly contributed to the progress of mul- tilingual NMT. However, methods developed on these datasets and the consequent findings are not immediately applicable to real world settings out- side those datasets due to their narrow domains, the number of languages covered or the amount of training data used for training. # 3.1 Our Data Setup Our problem significantly extends those studied by these previous works: we study multilingual NMT on a massive scale, using an in-house cor- pus generated by crawling and extracting parallel sentences from the web (Uszkoreit et al., 2010). This corpus contains parallel documents for 102 languages, to and from English, containing a to- tal of 25 billion sentence pairs.4 The number of parallel sentences per language in our cor- pus ranges from around tens of thousands to almost 2 billion. Figure 1 illustrates the data distribution across language pairs for all 204 lan- guage pairs we study. The following specifics of our dataset distinguish our problem from previous work on multilingual NMT: • Scale: Even on our lowest resource lan- guages we often exceed the amount of data available in a majority of the previously stud- ied datasets. Given the amount of data, tech- niques developed in relatively low-resource setups may not be as effective. • Distribution: The availability of quality par- allel data follows a sharp power law, and data becomes increasingly scarce as we expand the scope of the system to more languages. There is a discrepancy of almost 5 orders of magnitude between our highest and our low- est resource languages. Balancing between these different language pairs now becomes a very challenging problem. • Domain and Noise: Having been mined from the web, our dataset spans a vast range of domains. 
However, such web crawled data is also extremely noisy; this problem gets worse in the multilingual setting where the level of noise might be different across dif- ferent languages. While clean sources of par- allel data are also available, they are often limited to narrow domains and high resource languages. To summarize, the training data used in our study is drawn from 102 languages (+ English), 4Limited to approximately this amount for experimenta- tion. 4 exhibits a power-law in terms of number of train- ing examples across language pairs, and spans a rich set of domains with varying noise levels— making our overall attempt as realistic as possible. Please see Table 8 in the Appendix for the full list of languages. # 3.2 Experiments and Baselines Throughout this paper we perform several exper- iments on the training dataset described above, to highlight challenges associated with different as- pects of multilingual models. We first train ded- icated bilingual models on all language pairs to ground our multilingual analyses. We perform all our experiments with variants of the Trans- former architecture (Vaswani et al., 2017), using the open-source Lingvo framework (Shen et al., 2019). For most bilingual experiments, we use a larger version of Transformer Big (Chen et al., 2018a) containing around 375M parameters, and a shared source-target sentence-piece model (SPM) (Kudo and Richardson, 2018) vocabulary with 32k tokens. We tune different values of regular- ization techniques (e.g. dropout (Srivastava et al., 2014)) depending on the dataset size for each lan- guage pair. For most medium and low resource languages we also experiment with Transformer Base. All our models are trained with Adafactor (Shazeer and Stern, 2018) with momentum factor- ization, a learning rate schedule of (3.0, 40k),5 and a per-parameter norm clipping threshold of 1.0. For Transformer Base models, we use a learning rate schedule of (2.0, 8k), unless otherwise neces- sary for low-resource experiments. In order to minimize confounding factors and control the evaluation set size and domain, we cre- ated our validation (development) and test sets as multi-way aligned datasets containing more than 3k and 5k sentence pairs respectively for all lan- guages. For our bilingual baselines, BLEU scores are computed on the checkpoint with the best val- idation set performance, while we compute test BLEU on the final checkpoint (after training for around 1M steps) for our multilingual models, on true-cased output and references6. For all our 5(3.0, 40k) schedule is the shorthand for a learning rate of 3.0, with 40k warm-up steps for the schedule, which is de- cayed with the inverse square root of the number of training steps after warm-up. 6We used an in-house implementation of mteval-v13a.pl Bilingual En—Any translation performance vs dataset size Bilingual Any—En translation performance vs dataset size Figure 2: Quality (measured by BLEU) of in- dividual bilingual models on all 204 supervised language pairs, measured in terms of BLEU (y- axes). Languages are arranged in decreasing order of available training data from left to right on the x-axes (pair ids not shown for clarity). Top plot reports BLEU scores for translating from English to any of the other 102 languages. Bottom plot reports BLEU scores for translating from any of the other 102 languages to English. Performance on individual language pairs is reported using dots and a trailing average is used to show the trend. 
baselines we use a batch size of 1M tokens per- batch, by using large scale data parallelism over 16 TPUv3 chips (Jouppi et al., 2017). We find that increasing the batch size offers noticeable im- provements in model quality, while also signifi- cantly speeding up convergence. We plot the BLEU scores for different lan- guage pairs in Figure 2. These results are also summarized in Table 1. For brevity, we plot two main directions separately in different plots. When the source language is in English and we are translating from English to any other language, En→Any is used for convenience, and similarly Any→En for the opposite directions. We notice from Moses to evaluate BLEU scores. 5 En→Any High 25 Med. 52 Low 25 Bilingual 11.72 Any→En High 25 Med. 52 Low 25 21.63 Bilingual Table 1: Average translation quality (BLEU) of bilingual models over different groups of lan- guages. High 25 refers to the top 25 languages by dataset size (left-most portion of Fig. 1), while low 25 refers to the bottom 25 (right-most portion of Fig. 1). that translation performance on both En→Any and Any→En falls as the size of the training dataset In the next section we decreases, as expected. empirically analyze how multilingual models fare on the transfer-interference trade-off by using and comparing against the baselines introduced in this section. # 4 Learning Multilingual NMT is one of the largest multi-task problems being studied in academia or industry (Neubig and Hu, 2018; Aharoni et al., 2019), with hundreds of tasks (one per language pair) being learned in a single model. As is evident from Figure 1, multilingual NMT suffers from a severe data imbalance problem when studied in an un- constrained realistic setting.7 While there is an abundance of data for some language pairs, mak- ing it difficult to go through even a single epoch over the entire dataset before model convergence, low resource languages suffer from data scarcity, making learning difficult. To make things worse, these learning problems might have varying levels of learning ‘difficulty’ due to the linguistic proper- ties of particular languages; designing a learning algorithm to train a single model on all of these tasks simultaneously is non-trivial. In this section we will study the learning as- pect of multilingual NMT, first examining the in- teraction between transfer and interference in our setup. Next, we will touch upon solutions to counter interference and lay out a set of future di- rections. And last, we will delve deeper into the 7The data-imbalance problem is also apparent in academ- ical settings when multiple datasets are mixed, e.g. mixing TED talks with UN corpus. transfer dynamics towards universal NMT. # 4.1 Transfer and Interference Multitask learning (Caruana, 1997) has been suc- cessfully applied to multiple domains, includ- ing NLP, speech processing, drug discovery and many others (Collobert and Weston, 2008; Deng et al., 2013; Ramsundar et al., 2015; Maurer et al., 2016; Ruder, 2017; Kaiser et al., 2017). Other problems closely related to multitask learning in- clude zero or few-shot learning (Lake et al., 2011; Romera-Paredes and Torr, 2015; Vinyals et al., 2016; Pan and Yang, 2010), meta-learning and life-long learning (Thrun and Mitchell, 1995; Sil- ver et al., 2013; Chen and Liu, 2016; Parisi et al., 2018; Golkar et al., 2019; Sodhani et al., 2018; Lopez-Paz et al., 2017). 
Although these learning paradigms make different assumptions about the underlying problem setting, they share the com- mon goal of leveraging the inductive bias and reg- ularization from a set of tasks to benefit another set of tasks. This inductive transfer could be par- allel or sequential. Multilingual NMT has been studied under many of these settings to various extents. Most existing literature deals with sequential transfer, where the goal is to leverage a set of high re- source tasks that are already mastered, to improve the performance on a new (predominantly) low re- source task (Zoph et al., 2016). We consider the parallel learning problem, where the goal is to learn a single multi-task model trained concur- rently on all tasks and hence is capable of per- forming all tasks simultaneously once the train- ing is accomplished. In this section we investigate the effect of data imbalance across languages on these learning dy- namics, particularly through the lens of trans- fer and interference (Caruana, 1997; Rosenstein et al., 2005). Reiterating from Section 2, two desired characteristics of our universal machine translation model are (1) maximum (positive) transfer to low-resource languages and (2) min- imum interference (negative transfer) for high- resource languages. Now let us examine the inter- action between variables considered. For the base- line experiment, we start by following common conventions in literature on multilingual NMT (Firat et al., 2017; Lee et al., 2017; Johnson et al., 6 2017). We compare the performance of two train- ing approaches against bilingual baselines follow- ing two strategies: (i) all the available training data is combined as it is, with the data distribution in Figure 1, (ii) we over-sample (up-sample) low-resource languages so that they appear with equal probability in the combined dataset. In order to guide the translation with the in- tended target language, we pre-pend a target lan- guage token to every source sequence to be trans- lated (Johnson et al., 2017). Further, to study the effect of transfer and interference at its limit, we shared a single encoder and decoder across all the language pairs. During training, mini- batches are formed by randomly sampling exam- ples from the aggregated dataset following strate- gies (i) or (ii), as described above. We train a single Transformer-Big with a shared vocabulary of 64k tokens, all Transformer dropout options turned on with probability 0.1, and the same val- ues as the bilingual baselines used for other hyper- parameters. We use batch sizes of 4M tokens for all our multilingual models to improve the rate of convergence. All Transformer-Big runs utilize data parallelism over 64 TPUv3 chips. The results are depicted in Figure 3. The performance of these two models high- lights the trade-off between transfer and interfer- ence. If we sample equally from all datasets by over-sampling low-resource languages (strategy (ii)), we maximize transfer (right-most portion of Figure 3) and beat our bilingual baselines by large margins, especially in the Any→En direction. However, this also has the side-effect of signifi- cantly deteriorated performance on high resource languages (left-most portion of Figure 3). On the other hand, sampling based on the true data dis- tribution (strategy (i)) retains more performance on high resource languages, at the cost of sacrific- ing performance on low resource languages. 
We also note that the transfer-interference trade- off is more pronounced in the Any→En direc- tion: the cost-benefit trade-off between low and high-resource languages is more severe than that for the En→Any direction. Another interesting finding is the performance deterioration on high resource languages when En—Any translation performance with multilingual baselines e- Data Distributior versampling @ — Origir Any—En translation performance with multilingual baselines @ = Oversampling @ = Original Data Distributior Figure 3: Effect of sampling strategy on the per- formance of multilingual models. From left to right, languages are arranged in decreasing order of available training data. While the multilingual models are trained to translate both directions, Any→En and En→Any, performance for each of these directions is depicted in separate plots to highlight differences. Results are reported rela- tive to those of the bilingual baselines (2). Per- formance on individual language pairs is reported using dots and a trailing average is used to show the trend. The colors correspond to the following sampling strategies: (i) Blue: original data distri- bution, (ii) Green: equal sampling from all lan- guage pairs. Best viewed in color. translating from Any→En, contrary to existing re- sults in multilingual NMT (Firat et al., 2016a; Johnson et al., 2017), exaggerated by the limited model capacity and the scale and imbalance of our dataset. All these observations again under- score the difficulty in multitask learning, espe- cially when hundreds of tasks are to be learned simultaneously, each of which may come from a different distribution. Although (Maurer et al., 2016) demonstrates the benefit of multitask learn- ing when invariant features can be shared across all tasks, such a premise is not guaranteed when 7 hundreds of languages from different families are jointly considered. # 4.2 Countering Interference: Baselines and Open Problems The results in Figure 3 indicate that, in a large multi-task setting, high resource tasks are starved for capacity while low resource tasks benefit sig- nificantly from transfer, and the extent of inter- ference and transfer are strongly related. How- ever, this trade-off could be controlled by applying proper data sampling strategies. To enable more control over sampling, we in- vestigate batch balancing strategies (Firat et al., 2017; Lee et al., 2017), along the lines of the tem- perature based variant used for training multilin- gual BERT (Devlin et al., 2018). For a given lan- guage pair, l, let Dl be the size of the available parallel corpus. Then if we adopt a naive strat- egy and sample from the union of the datasets, the probability of the sample being from language l is pl = Dl . However, this strategy would starve ΣkDk low resource languages. To control the ratio of samples from different language pairs, we sam- ple a fixed number of sentences from the training data, with the probability of a sentence belong- 1 ing to language pair l being proportional to p T , l where T is the sampling temperature. As a result, T = 1 corresponds to true data distribution and T = 100 corresponds to (almost) equal number of samples for each language (close to a uniform distribution with over-sampled low-resource lan- guages). Please see Figure 4 for an illustration of the effect of temperature based sampling overlaid on our dataset distribution. 
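The temperature-based sampling rule above is simple to state in code. The following is an illustrative NumPy sketch of the sampling probabilities (names and the example corpus sizes are ours, not the paper's data pipeline).

```python
import numpy as np

def sampling_probs(dataset_sizes, temperature):
    """p_l proportional to (D_l / sum_k D_k)**(1/T); T=1 is the data distribution, large T is near-uniform."""
    p = np.asarray(dataset_sizes, dtype=np.float64)
    p = p / p.sum()
    p = p ** (1.0 / temperature)
    return p / p.sum()

sizes = [2e9, 5e7, 1e6, 35e3]            # example corpus sizes, high- to low-resource
for T in (1, 5, 100):
    print(T, np.round(sampling_probs(sizes, T), 4))
```

Raising the temperature flattens the distribution, which is exactly the knob used below to trade interference on high-resource pairs against transfer to low-resource pairs.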
T=1 T=5 T=100 High Resource (HR) Medium Resource (MR) Low Resource (LR) Sampling Probability Figure 4: Temperature based data sampling strate- gies overlaid on the data distribution. We repeat the experiment in Section 4.1 with temperature based sampling, setting T = 5 for En—Any translation performance at sampling temperatures @ = Data Dist @ = DataDistT-100 @ = Data Dist Any—En translation performance at sampling temperatures Figure 5: Effect of varying the sampling temper- ature on the performance of multilingual models. From left to right, languages are arranged in de- creasing order of available training data. Results are reported relative to those of the bilingual base- lines (2). Performance on individual language pairs is reported using dots and a trailing aver- age is used to show the trend. The colors cor- respond to the following sampling strategies: (i) Green: True data distribution (T = 1) (ii) Blue: Equal sampling from all language pairs (T = 100) (iii) Red: Intermediate distribution (T = 5). Best viewed in color. a balanced sampling strategy, and depict our re- sults in Figure 5. Results over different language groups by resource size are also summarized in Table 2. We notice that the balanced sampling strategy improves performance on the high re- source languages for both translation directions (compared to T = 100), while also retaining high transfer performance on low resource languages. However, performance on high and medium re- source languages still lags behind their bilingual baselines by significant margins. We unveiled one of the factors responsible for interference while training massively multilingual NMT models under heavy dataset imbalance, and 8 En→Any High 25 Med. 52 Low 25 11.72 Bilingual 6.24 T=1 T=100 12.87 12.75 T=5 Any→En High 25 Med. 52 Low 25 21.63 Bilingual 18.14 T=1 27.32 T=100 26.96 T=5 Table 2: Average translation quality (BLEU) of multilingual models using different sampling tem- peratures, over different groups of languages. High 25 refers to the top 25 languages by dataset size, while low 25 refers to the bottom 25. hinted that an appropriate data sampling strategy can potentially mitigate the interference. But the imbalance in dataset across tasks is not the only variable interacting with the transfer - interfer- ence dilemma. In all the experiments described above, multilingual models have the same capac- ity as the baselines, a strategy which could be in- terpreted as reducing their per-task capacity. To highlight the exacerbating effect of interference with increasing multilinguality, we train three ad- ditional models on a growing subset of 10, 25, and 50 languages. The specific languages are chosen to get a mixed representation of data size, script, morphological complexity and inter-language re- latedness. Results for the 10 languages, with a data sampling strategy T = 5, that are com- mon across all subsets are reported in Figure 6 and clearly highlight how performance degrades for all language pairs, especially the high and medium resource ones, as the number of tasks grows. While simple data balancing/sampling strate- gies might reduce the effects of interference with- out reducing transfer, our experiments also high- light a few research directions worth further ex- ploration. Most notably, Should we be using the same learning algorithms for multilingual and single language pair models? Can we still rely on data-sampling heuristics when the number of tasks are excessively large? 
We highlight a few open problems along these lines: En—Any translation performance with increasing tasks e angs @ 2S5langs @ SOlangs @ Any—En translation performance with increasing tasks e angs @ 2Slangs @ SOlangs @ Figure 6: Effect of increasing the number of lan- guages on the translation performance of multi- lingual models. From left to right, languages are arranged in decreasing order of available training data. Results are reported relative to those of the bilingual baselines (2). The colors correspond to the following groupings of languages: (i) Blue: 10 languages ↔ En, (ii) Red: 25 languages ↔ En, (iii) Yellow: 50 languages ↔ En (yellow), and (iv) Green: 102 languages ↔ En. Note that, we only plot performance on the 10 languages common across all the compared models while keeping the x-axis intact for comparison with other plots. Best viewed in color. Task Scheduling: Scheduling tasks has been widely studied in the context of multitask learning and meta-learning, but remains relatively under- explored for multilingual NMT (Bengio et al., 2009; Pentina et al., 2015). The scheduling of the tasks, or the scheduling of the correspond- ing data associated with the task can be studied under two distinct categories, static and dynamic (curriculum learning). Temperature based sam- pling or co-training with related languages to im- prove adaptation (Zoph et al., 2016; Neubig and Hu, 2018) fall within the class of static strate- gies. On the other hand, dynamic or curriculum 9 learning strategies refine the ratio of tasks simul- taneously with training, based on metrics derived from the current state of the learner (Graves et al., 2017). In the context of NMT, (Kumar et al., 2019) learn a RL agent to schedule between dif- ferent noise levels in the training data, while (Pla- tanios et al., 2019) combined heuristics into a data curriculum. (Kiperwasser and Ballesteros, 2018) designed schedules favoring one target language and (Jean et al., 2018) learn an adaptive scheduler for multilingual NMT. Similar approaches could be extended in the context of learning different language pairs in massively multilingual settings. Optimization for Multitask Learning: While task scheduling alters the data distribution or the dynamics of the data distribution seen by the learner, optimization algorithms, regularization and loss formulations determine how these exam- ples effect the learning process. While most lit- erature in multilingual NMT, including our work, relies on the same monolithic optimization ap- proach for single and multitask models,8 this choice might be far from optimal. There is no dearth of literature exploring loss formulations or regularization techniques that unravel and exploit task relatedness, reformulate loss functions to ac- count for adaptation and exploit meta-learning ap- proaches in multitask models (Vilalta and Drissi, 2002; Zhang and Yang, 2017). Applying op- timization approaches designed specifically for multitask models to multilingual NMT might be a fruitful avenue for future research. # 4.3 Understanding Transfer From our experiments on data sampling, we no- tice that multilingual training with shared weights helps promote transfer to low-resource languages. However, these improvements are imbalanced in how they affect languages when translating to or from English. 
# 4.3 Understanding Transfer

From our experiments on data sampling, we notice that multilingual training with shared weights helps promote transfer to low-resource languages. However, these improvements are imbalanced in how they affect languages when translating to or from English.

To better understand transfer in multilingual models we individually inspect three different settings: 1) translation from English (En→Any), 2) translation to English (Any→En), and 3) translation between non-English language pairs (Any→Any). We compare the performance of our model trained on all language pairs against two models: (i) an identical model trained on all En→Any tasks, the one-to-many setup, and (ii) an identical model trained on all Any→En tasks, the many-to-one setup. We depict the performance of these models in Figure 7 (and summarize it in Table 3). We notice that the many-to-one Any→En model achieves huge improvements over our bilingual baselines for all low resource languages (rightmost portion of Figure 7). On the other hand, for the one-to-many En→Any model, we notice lesser deterioration in the performance on high resource languages, while the performance on low resource languages does not improve by much.

En→Any | High 25 | Med. 52 | Low 25
Bilingual | 29.34 | 17.50 | 11.72
All→All | 28.03 | 16.91 | 12.75
En→Any | – | – | 12.98

Any→En | High 25 | Med. 52 | Low 25
Bilingual | 37.61 | 31.41 | 21.63
All→All | 33.85 | 30.25 | 26.96
Any→En | – | – | 30.56

Table 3: Average translation quality (BLEU) of multilingual models trained on differing groups of languages. High 25 refers to the top 25 languages by dataset size, while low 25 refers to the bottom 25. All→All reports the performance of the multilingual model trained on all language pairs, En→Any was trained on all language pairs with English as the source, and Any→En was trained on all language pairs with English as the target.

Figure 7: Results comparing the performance of the model trained to translate English to and from all languages against two separate models trained only from or only to English. From left to right, languages are arranged in decreasing order of available training data. Results are reported relative to those of the bilingual baselines (2). The colors correspond to the following models: (i) Green: dedicated (individual) En→Any model for the top plot, (ii) Blue: dedicated Any→En model for the bottom plot, and (iii) Red: shared model for both Any→En and En→Any. Best viewed in color.

This discrepancy between the transfer for Any→En and En→Any can be better understood by characterizing the many-to-one Any→En model as a multi-domain model, where each source language constitutes a separate domain, and the one-to-many En→Any model as a multi-task model, with each target language representing a separate task (Section 2). This formulation of multilingual NMT helps explain the aforementioned observations, which suggest that multilingual models might be more amenable to transfer across input domains than to transfer across tasks. Simple joint training does not do much to benefit one-to-many multilingual NMT; while some improvements may arise from the model seeing much more English source data, there is little to no transfer occurring at the task/decoder level. On the other hand, it is much easier for many-to-one multilingual models to reap the benefits of joint training when the inputs are from different domains: the output distribution remains the same, so the model learns a much stronger English language model, without any interference from the other target languages or tasks. This is also reflected in other works on low-resource machine translation, where Any→En typically benefits the most (Zoph et al., 2016; Nguyen and Chiang, 2017; Gu et al., 2018a,b).
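The group-level numbers reported in Tables 2 and 3, and the per-language curves in Figures 5 and 7, boil down to simple bookkeeping over per-language scores; the sketch below shows that bookkeeping with made-up BLEU values and language groupings.

```python
# Sketch of the bookkeeping behind the group averages and the "relative to the
# bilingual baseline" curves: per-language BLEU deltas, averaged over resource
# buckets. All scores and language lists below are made up for illustration.

def delta_by_group(multilingual, bilingual, groups):
    """Average (multilingual - bilingual) BLEU per resource group."""
    return {
        name: sum(multilingual[l] - bilingual[l] for l in langs) / len(langs)
        for name, langs in groups.items()
    }

bilingual    = {"fr": 41.2, "uk": 26.0, "yo": 6.1}
multilingual = {"fr": 39.8, "uk": 27.1, "yo": 11.4}
groups = {"high": ["fr"], "medium": ["uk"], "low": ["yo"]}
print(delta_by_group(multilingual, bilingual, groups))
# Roughly {'high': -1.4, 'medium': 1.1, 'low': 5.3}: interference at the head
# of the distribution, transfer at the tail.
```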
Another strong indicator of transfer in multilingual models is the quality of zero-shot translation. Multilingual models possess a unique advantage over single task models, in that they are capable of translating between any pair of supported input and output languages, even when no direct parallel data is available (Firat et al., 2016b). However, without supervision from parallel data between non-English language pairs, zero-shot translation quality often trails the performance of pivoting/bridging through a common language (Johnson et al., 2017). Given the lack of transfer across different En→Any translation tasks, it isn't hard to imagine why transferring across Yy→En and En→Xx, to learn Yy→Xx, is an even more challenging task.

Enabling direct translation between arbitrary languages has been widely studied, and has the potential to obviate the need for two-step pivoting, which suffers from higher latency and accumulated errors. The most effective approach has been to simply synthesize parallel data (Firat et al., 2016b; Chen et al., 2017, 2018b) and incorporate it into the training process. However, this two-stage approach becomes intractable when dealing with a large number of languages; the amount of synthesized data required to enable zero-shot translation grows quadratically with the number of languages. More recent work has demonstrated that direct translation quality may be improved even in a zero-shot setting, by incorporating more languages (Aharoni et al., 2019), adding regularization to promote cross-lingual transfer (Arivazhagan et al., 2019; Gu et al., 2019; Al-Shedivat and Parikh, 2019), and modeling strategies that encourage shared cross-lingual representations (Lu et al., 2018; Escolano et al., 2019).

 | De→Fr | Be→Ru | Yi→De | Fr→Zh | Hi→Fi | Ru→Fi
10 langs | 11.15 | 36.28 | 8.97 | 15.07 | 2.98 | 6.02
102 langs | 14.24 | 50.26 | 20.00 | 11.83 | 8.76 | 9.06

Table 4: Effect of increasing the number of languages on the zero-shot performance of multilingual models.

We report the zero-shot performance of our 10 language and 102 language models, discussed in Section 4.2, on selected language pairs in Table 4. We observe that zero-shot performance on similar languages, in this case Be→Ru and Yi→De, is extremely high. We also notice that the zero-shot performance for most language pairs increases as we move from the 10 language model to the 102 language model, possibly due to the regularization effect in the capacity constrained setting, similar to what was observed in (Aharoni et al., 2019). This also indicates why methods that explicitly force languages to share the same representation space (Arivazhagan et al., 2019; Gu et al., 2019; Lu et al., 2018) may be key to improving zero-shot translation performance.
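To spell out the two decoding strategies being compared, the sketch below contrasts pivoting through English with direct zero-shot decoding; `translate` is a stand-in for whatever decoding interface a trained multilingual model exposes, an assumption made for illustration rather than an API defined in this paper.

```python
# Pivoting versus direct zero-shot translation for a Yy->Xx pair. `translate`
# is an assumed placeholder for the decoding call of a multilingual model.

def pivot_translate(translate, sentence, src, tgt, pivot="en"):
    """Two-step bridging src -> pivot -> tgt: higher latency, errors accumulate."""
    intermediate = translate(sentence, src_lang=src, tgt_lang=pivot)
    return translate(intermediate, src_lang=pivot, tgt_lang=tgt)

def zero_shot_translate(translate, sentence, src, tgt):
    """Single direct pass src -> tgt, even if no src-tgt parallel data was seen."""
    return translate(sentence, src_lang=src, tgt_lang=tgt)

if __name__ == "__main__":
    # Dummy model so the sketch runs end to end.
    def translate(sentence, src_lang, tgt_lang):
        return f"[{src_lang}->{tgt_lang}] {sentence}"

    print(pivot_translate(translate, "прывітанне, свет", "be", "ru"))
    print(zero_shot_translate(translate, "прывітанне, свет", "be", "ru"))
```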
We next delve into pre-processing and vocabulary generation when dealing with hundreds of languages.

# 5 Pre-processing and Vocabularies

Pre-processing and vocabulary construction are central to the generalization ability of natural language processing systems.

To generalize to unseen examples, machine learning algorithms must decompose the input data into a set of basic units (e.g. pixels, characters, phonemes etc.) or building blocks. For text, this set of basic units or building blocks is referred to as the vocabulary. Given some text and a vocabulary, a segmentation algorithm or pre-processor is applied to fragment the text into its building blocks, upon which the learner may then apply inductive reasoning.

In essence, a properly defined vocabulary needs to (i) maintain high coverage, being able to compose most text while producing a minimal number of Out-of-Vocabulary (OOV) tokens during segmentation, (ii) have a tractable size, to limit computational and spatial costs, and (iii) operate at the right level of granularity to enable inductive transfer, with manageable sequence lengths, since longer sequences increase computational costs.

Early NMT models operated at the word level (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2014). Coverage issues arising from the difficulty of capturing all words of a language within a limited vocabulary budget promoted the development of character level systems (Ling et al., 2015; Luong and Manning, 2016; Chung et al., 2016; Costa-jussà and Fonollosa, 2016; Lee et al., 2017; Cherry et al., 2018). These trivially achieve high coverage, albeit with the downside of increased computational and modeling challenges due to increased sequence lengths. Sub-word level vocabularies (Sennrich et al., 2016) have since found a middle ground and are used in most state-of-the-art NMT systems.

Constructing vocabularies that can model hundreds of languages with a vast number of character sets, compositional units, and morphological variance is critical to developing massively multilingual systems, yet remains challenging. Early multilingual NMT models utilized separate vocabularies for each language (Dong et al., 2015; Luong et al., 2015; Firat et al., 2016a); later ones used multilingual vocabularies shared across languages (Sennrich et al., 2016; Ha et al., 2016c; Johnson et al., 2017). Recently, hybrid (shared + separate) multilingual vocabularies and approaches for adding new languages to existing vocabularies (Lakew et al., 2018) have also been explored.

In this section we describe the simple approach to multilingual vocabulary construction used in our setup, inspect implicit characteristics of such vocabularies, and finally evaluate the downstream effects on multilingual machine translation.

# 5.1 Vocabulary Construction and Characteristics

We construct all our vocabularies using the Sentence Piece Model (SPM) (Kudo and Richardson, 2018), to remove complexity that may arise from language specific pre-processing (e.g. tokenization, special character replacements etc.). The large number of languages used in our setup makes separate per-language vocabularies infeasible; we therefore picked shared sub-word vocabularies. While using a single shared vocabulary across all languages reduces complexity, it introduces other challenges that we discuss below.

With the large number of scripts introduced in a multilingual setting, the chance of a vocabulary producing unknowns (OOV) increases.
Note that since only unrecognized characters will be encoded as OOV, if the vocabulary provides sufficient coverage over the alphabets of the various scripts, OOV rates will be low. For SPMs, this is tuned using the character_coverage option, which we set to a high value of 1 − (5 × 10−6); this yields an alphabet size of around 12000, ensuring very low unknown rates for all the languages in our study.

Figure 8: Average number of sentence-piece tokens per sentence on random samples drawn from the training set. We compare the increase in the number of tokens per sentence for different languages, when moving from a standard monolingual vocabulary with 32k tokens to a multilingual vocabulary for which we vary the vocabulary size (size = {32k, 64k, 128k} tokens) and the vocabulary sampling temperature (T = {1, 5, 100}).

While ensuring low unknown rates, the shift towards a vocabulary which largely consists of characters or short sub-words (in terms of the number of characters they consist of) results in longer sequences after tokenization (Chung et al., 2016; Costa-jussà and Fonollosa, 2016). Longer sequence lengths, however, increase computational complexity; they may also introduce optimization challenges due to longer range dependencies, and require the learner to model a more complex mapping from a finer grained sequence to meaning. To avoid exceedingly long sequence lengths we need to increase the vocabulary size so that it may include longer tokens (sub-sequences consisting of a larger number of characters).

En→Any | 32k Vocab | 64k Vocab
High 25 | 27.69 | 28.03
Med. 52 | 16.84 | 16.91
Low 25 | 12.90 | 12.75

Any→En | 32k Vocab | 64k Vocab
High 25 | 33.24 | 33.85
Med. 52 | 29.40 | 30.25
Low 25 | 26.18 | 26.96

Table 5: Average translation quality (BLEU) of multilingual models using different SPM vocabulary sizes, over different groups of languages. High 25 refers to the top 25 languages by dataset size, while low 25 refers to the bottom 25.

Finally, a shared multilingual vocabulary runs the risk of favoring some languages over others, due to the imbalance of the dataset sizes from which the vocabulary is extracted. To reduce the effect of imbalanced dataset sizes we apply the same temperature sampling strategy discussed in Section 4 to vocabulary sampling. In Figure 8 we report the effect of varying the vocabulary size (size = {32k, 64k, 128k} tokens) and sampling temperature (T = {1, 5, 100}) on the average number of tokens per sentence for 10 indicative languages. We see that it is important to balance different languages by using a higher sampling temperature of T = 5 or T = 100, in order to avoid exceedingly long sequence lengths for low resource languages. Orthogonally, increasing the vocabulary size also helps to reduce sequence lengths, especially for low resource languages. Here we continue experiments with a vocabulary sampling temperature of TV = 5, to stay aligned with our sampling approach during NMT training.

We next train and evaluate a few multilingual models to study the effect of different vocabulary sizes and sampling strategies on overall translation quality.
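The vocabulary construction described in this subsection maps onto the SentencePiece library (Kudo and Richardson, 2018) roughly as sketched below; the file names, the 64k size, the corpus-sampling helper and the line budget are illustrative assumptions, while the character_coverage value mirrors the 1 − 5 × 10−6 setting mentioned above.

```python
# Sketch: build a temperature-balanced training corpus, train a shared SPM
# vocabulary, and measure average tokens per sentence (as in Figure 8).
# Paths, sizes and the sampling helper are illustrative placeholders.
import random
import sentencepiece as spm

def write_sampled_corpus(corpora, out_path, temperature=5.0, budget=1_000_000):
    """corpora: {lang: path}. Draw lines per language with weight ∝ size^(1/T)."""
    lines = {lang: open(path, encoding="utf-8").read().splitlines()
             for lang, path in corpora.items()}
    weights = {lang: len(ls) ** (1.0 / temperature) for lang, ls in lines.items()}
    norm = sum(weights.values())
    with open(out_path, "w", encoding="utf-8") as out:
        for lang, ls in lines.items():
            for line in random.choices(ls, k=int(budget * weights[lang] / norm)):
                out.write(line + "\n")

# write_sampled_corpus({"fr": "fr.txt", "uk": "uk.txt", "yo": "yo.txt"},
#                      "spm_train.txt", temperature=5.0)
# spm.SentencePieceTrainer.train(
#     input="spm_train.txt", model_prefix="spm64k", vocab_size=64000,
#     model_type="unigram", character_coverage=1 - 5e-6)

# Average tokens per sentence for one language under the trained vocabulary:
# sp = spm.SentencePieceProcessor()
# sp.load("spm64k.model")
# lengths = [len(sp.EncodeAsPieces(line))
#            for line in open("sample.yo.txt", encoding="utf-8")]
# print(sum(lengths) / len(lengths))
```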
Following the experiments in Section 4, we train single multilingual models on all language pairs, using a data sampling temperature of T = 5. All our models are Transformer-Big, trained with the same set of hyper-parameters and differing only in the vocabulary used. Table 5 compares the quality of two models trained using vocabularies of size 32k and 64k.

27.81 27.83 28.03
33.82 33.70 33.85

Table 6: Average translation quality (BLEU) of multilingual models using different sampling temperatures for vocabulary generation. High 25 refers to the top 25 languages by dataset size, while low 25 refers to the bottom 25.

We notice that the model with the smaller 32k token vocabulary does noticeably worse on high resource languages when translating in both directions, and on Any→En translation in all resource settings. On the other hand, the smaller vocabulary model performs marginally better when translating into low resource languages on En→Any. For medium resource languages, the larger vocabulary appears to be better in both directions. Our results here agree with existing literature (Cherry et al., 2018; Kreutzer and Sokolov, 2018) suggesting that using smaller sub-word tokens, as is the case for smaller vocabularies, performs better in low resource settings due to improved generalization. Notable languages where the smaller vocabulary performs better include Corsican (co) and Uzbek (uz), both low resource languages which have known similar high resource languages to aid with generalization.

We compare the translation quality of models that vary only in their vocabulary sampling temperature in Table 6. While not very pronounced, we do notice some differences in quality based on the vocabulary sampling temperature. Languages that perform better with a higher temperature of TV = 5 or TV = 100 include low and medium resource languages like Mongolian (mn), Zulu (zu), Corsican (co) and Uzbek (uz). These gains may have originated from two potential factors: (i) smaller tokens for high resource languages resulting in better transfer to related low resource languages, and (ii) better character coverage for languages with distinct character sets.

While the effects of the vocabulary are much smaller than the trends observed for data sampling, failure to ensure careful character coverage and fair representation of all languages could nonetheless significantly impact translation quality.

So far in our study, we have presented our analysis and experimentation with data, training and vocabularies in multilingual scenarios. We next analyze the effect of several architectural choices on the quality of our massively multilingual NMT model.

# 6 Modeling

The quality of any neural network is largely dependent on its architecture and parametrization. The choice of the model, and the constraints on its parameters, determine the extent of the transfer-interference trade-off, the learning dynamics and, ultimately, the performance limits of the model. In this section we analyze how choices regarding parameter sharing and model capacity can impact translation quality in the setting of massively multilingual NMT.

# 6.1 Architecture

In recent years, several revolutionary architectures have been developed to improve MT quality (Gehring et al., 2017; Vaswani et al., 2017; Chen et al., 2018a; Wu et al., 2019).
However, in the context of multilingual NMT, the most common prior imposed on models is typically in the form of (hard or soft) parameter sharing constraints across different language pairs. In designing a multilingual NMT model, we would like to take advantage of the common structures and features shared by multiple languages, which imposes constraints on the model architecture. These constraints range from sharing a fixed length representation across all source and target pairs (Luong et al., 2015), sharing small network modules, for example the attention parameters (Firat et al., 2016a), and sharing everything except the attention parameters (Ha et al., 2016b; Blackwood et al., 2018), to sharing all parameters in the model across all language pairs (Johnson et al., 2017; Ha et al., 2016c). Some studies also explore partial parameter sharing strategies tailored for specific architectures (Sachan and Neubig, 2018; Wang et al., 2018b). More recently, with the increased number of languages (Qi et al., 2018; Aharoni et al., 2019), there has been work along the lines of applying soft sharing for MT, in the form of a shared set of meta-learned parameters used to generate task level parameters (Platanios et al., 2018). All of these sharing strategies come paired with their associated set of transfer-interference trade-offs and, regardless of the sharing strategy, in the absence of a complete and accurate atlas of task relatedness, more transferability also implies more interference. This phenomenon is known to be a common problem for multitask learning, and is also related to the stability vs. plasticity dilemma (Carpenter and Grossberg, 1987; Grossberg, 1988). See (Parisi et al., 2018) for a categorization of such work.

Considering the scale of our experimental setup and end-goal, combinatorial approaches to parameter sharing do not emerge as plausible solutions. While “learning to share” approaches (Kang et al., 2011; Ruder et al., 2017; Platanios et al., 2018) are promising alternatives, a more straightforward solution could be to implicitly increase the per-task capacity by increasing overall model capacity. Next we look into a brute-force approach to mitigating interference by enhancing model capacity.

# 6.2 Capacity

Over the last few years, scaling up model capacity has been shown to yield huge improvements on several supervised and transfer learning benchmarks, including MT (Brock et al., 2018; Devlin et al., 2018; Radford et al.; Shazeer et al., 2018). Scale often comes bundled with new hardware, infrastructure, and algorithmic approaches meant to optimize accelerator memory utilization and speed up computation, including methods like gradient checkpointing and low precision training (Courbariaux et al., 2014; Ott et al., 2018), memory efficient optimizers (Shazeer and Stern, 2018; Gupta et al., 2018) and frameworks supporting model parallelism (Shazeer et al., 2018; Harlap et al.; Huang et al., 2018).

While massive models have been shown to improve performance in single task settings (Shazeer et al., 2018), we would expect the gains to be even larger on a capacity-constrained massively multilingual task.
En→Any | High 25 | Med. 52 | Low 25
Bilingual | 29.34 | 17.50 | 11.72
400M | 28.03 | 16.91 | 12.75
1.3B Wide | 28.36 | 16.66 | 11.14
1.3B Deep | 29.46 | 17.67 | 12.52

Any→En | High 25 | Med. 52 | Low 25
Bilingual | 37.61 | 31.41 | 21.63
400M | 33.85 | 30.25 | 26.96
1.3B Wide | 37.13 | 33.21 | 27.75
1.3B Deep | 37.47 | 34.63 | 31.21

Table 7: Average translation quality (BLEU) of multilingual models with increasing capacity. High 25 refers to the top 25 languages by dataset size, while low 25 refers to the bottom 25.

We next try to quantify how the performance of our model scales with capacity. Additionally, we compare two dimensions along which model capacity can be increased, depth and width, and examine how performance across different tasks is affected when scaling along each of these dimensions.

We start with our Transformer-Big baseline with a 64k vocabulary, trained with a data sampling temperature of T = 5. This model has around 400M parameters, including the embeddings and the softmax layers. We compare its performance against two scaled-up models with around 1.3B parameters each. Our first model is the wide model, with 12 layers in both the encoder and the decoder (24 layers in total), feed-forward hidden dimension set to 16384, 32 attention heads and an attention hidden dimension set to 2048 (Shazeer et al., 2018). The deep model has 24 layers in both the encoder and the decoder (48 layers in total), with all other hyper-parameters equivalent to the Transformer-Big. To avoid trainability hurdles, both of these models are trained with transparent attention (Bapna et al., 2018). Further, in order to enable training these massive models, we utilize GPipe (Huang et al., 2018) for efficient model parallelism. The 1.3B parameter wide and deep models are trained with 128 TPUv3 chips, parallelized over 2 and 4 cores respectively (note that each TPUv3 chip has 2 cores). We use the same 4M token batch size used for all our multilingual experiments.
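For clarity, the three capacity settings described above are restated below as plain configuration dictionaries; this is a framework-agnostic summary of the stated hyper-parameters, not a runnable model definition, and values not given in the text are deliberately omitted.

```python
# Capacity configurations described in the text, restated as plain dictionaries.
transformer_big_400m = {
    "vocab_size": 64_000,
    "data_sampling_temperature": 5,
    "approx_params": 400e6,          # includes embeddings and softmax
}
wide_1_3b = {
    "encoder_layers": 12, "decoder_layers": 12,   # 24 layers in total
    "ffn_hidden_dim": 16_384,
    "attention_heads": 32,
    "attention_hidden_dim": 2_048,
    "transparent_attention": True,                # Bapna et al. (2018)
    "approx_params": 1.3e9,
}
deep_1_3b = {
    "encoder_layers": 24, "decoder_layers": 24,   # 48 layers in total
    # remaining dimensions as in the Transformer-Big baseline
    "transparent_attention": True,
    "approx_params": 1.3e9,
}
training = {"batch_size_tokens": 4_000_000, "tpu_v3_chips": 128}  # GPipe model parallelism
```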
Figure 9: Effect of increasing capacity on the performance of multilingual models. From left to right, languages are arranged in decreasing order of available training data. Results are reported relative to those of the bilingual baselines (2). The plots correspond to the following models: blue: 400M parameter ‘Transformer-Big’; green: 1.3B parameter, 12 layer wide model; red: 1.3B parameter, 24 layer deep model. Best viewed in color.

The performance of these two models and the baseline Transformer-Big is depicted in Figure 9 (and summarized in Table 7). We notice that both of these models improve performance by significant amounts on the high resource languages when compared against the capacity-constrained massively multilingual baseline (blue curves in Figure 9). However, the deep model handily beats both the baseline and the equivalent-capacity wide model, by significant margins, on most of the language pairs. We also notice that, unlike the wide model, the deep model does not overfit on low resource languages and, in fact, significantly enhances transfer to low resource languages on the Any→En translation tasks.

Our results suggest that model capacity might be one of the most important factors determining the extent of the transfer-interference trade-off. However, naively scaling capacity might result in poor transfer performance to low resource languages. For example, our wide Transformer, while significantly improving performance on the high resource languages, fails to show similar gains in the low resource setting. While deeper models show great performance improvements, they come bundled with high decoding latency, a significantly larger computational footprint, and trainability concerns including vanishing/exploding gradients, early divergence, ill-conditioned initial conditions etc. (Hestness et al., 2017). Further research into various aspects of scalability, trainability and optimization dynamics is expected to be a fruitful avenue towards universal NMT.

We next delve into the evaluation challenges posed by multilingual models.

# 7 Evaluation

Metrics for automatic quality evaluation (Papineni et al., 2002) have been critical to the rapid progress in machine translation, by making evaluation fast, cheap and reproducible. For multilingual NMT, new concerns arise due to the multi-objective nature of the problem and the inherent quality trade-offs between languages.

Inter-language quality trade-offs arise due to various decisions made while constructing, training and selecting a model. When constructing the model, the vocabulary may be built to favor a certain script or group of languages, or the language specific parameters may be unevenly distributed across languages. During training, the optimization settings or the rate at which training data from different languages are sampled strongly influence the eventual performance on different languages. Finally, when selecting a checkpoint (a particular snapshot of the model parameters), the model may perform better on high resource languages at later checkpoints, but may have regressed on low resource languages by that time due to over-training (or under-training, in the opposite case). Each of these choices may naturally favor certain languages over others.

To choose between the aforementioned trade-offs, we need a translation quality metric that is both effective and comparable across languages. This is in and of itself a hard problem, with an ever growing list of hundreds of metrics to choose from. Oftentimes these metrics vary in their effectiveness across languages; the WMT shared tasks (Ma et al., 2018b) report that the specific language, dataset, and system significantly affect which metric has the strongest correlation with human ratings. Even when metrics are sufficiently effective across languages, they are not always comparable. N-gram based metrics (Papineni et al., 2002; Doddington, 2002; Wang et al., 2016; Popović, 2015) that measure lexical overlap require tokenization, which is highly affected by language specific factors such as alphabet size and morphological complexity. In fact, even within the same language, tokenization discrepancies pose significant challenges to reliable evaluation (Post, 2018). Embedding based approaches (Stanojevic and Sima’an, 2014) may be language agnostic and help address these issues.

Equally important to choosing a metric is choosing an evaluation set. Most existing metrics are not consistent (Banerjee and Lavie, 2005): for the same model they vary based on the domain, or even the specific sentences, that they are being evaluated on. For example, there are significant differences of 3-5 BLEU between the WMT dev and test sets for the same language pair. Such consistency issues may further exacerbate the difficulty of comparing system quality across languages if we use different test sets for each language. This may be addressed by ensuring that evaluation is performed on the same corpora translated into all the languages, i.e. multi-way parallel data. Even so, attention must be paid to the original language (Freitag et al., 2019) and the domain such data is collected from.
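One concrete mitigation for the tokenization issues above is to fix the scoring pipeline with the sacreBLEU library (Post, 2018); the snippet below shows a minimal corpus-level call with made-up hypothesis and reference strings. Note that this standardizes scoring within a language but does not, by itself, make scores comparable across languages.

```python
# Minimal corpus-level BLEU with sacreBLEU (Post, 2018): the reference
# tokenization is handled internally, avoiding tokenization discrepancies
# between systems. Strings below are made up for illustration.
import sacrebleu

hypotheses = ["the cat sat on the mat", "a quick brown fox"]
# One reference stream, aligned with the hypotheses (more streams may be added).
references = [["the cat sat on the mat", "a swift brown fox"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```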
# 8 Open Problems in Massively Multilingual NMT

Data and Supervision: Whereas we focus solely on supervised learning, for many low resource languages it becomes essential to learn from monolingual data. There has been a lot of recent work on incorporating monolingual data to improve translation performance in low and zero resource settings, including research on back-translation (Sennrich et al., 2015; Edunov et al., 2018), language model fusion (Gulcehre et al., 2015; Sriram et al., 2017), self-supervised pre-training (Dai and Le, 2015; Ramachandran et al., 2016; Zhang and Zong, 2016; Song et al., 2019) and unsupervised NMT (Lample et al., 2017; Artetxe et al., 2017). Languages where large swathes of monolingual data are not easily available might require more sample efficient approaches to language modeling, including grounding with visual modalities or other sources of information (Huang et al., 2016), and learning from meta-data or other forms of context. Being able to represent and ground information from multiple modalities in the same representational space is the next frontier for ML research.

The scope of our study is limited to 103 languages, a minuscule fraction of the thousands of existing languages. The heuristics and approaches we develop will become less and less applicable as we include more languages and incorporate other forms of data. As we scale up model sizes and the number of languages learned within a single model, approaches that require multi-stage training or inference steps will likely become infeasible. Work towards better integration of self-supervised learning with the supervised training process is more likely to scale well with larger model capacities and an increasing number of languages.

Learning: Developing learning approaches that work well for multitask models is essential to improving the quality of multilingual models. Our analysis from Section 4 demonstrates that even simple heuristics for data sampling/balancing can significantly improve the extent of transfer and interference observed by individual tasks. However, our heuristic strategy only takes dataset size into account when determining the fraction of per-task samples seen by the model. Research on exploiting task-relatedness (Lee et al., 2016; Neubig and Hu, 2018), curriculum learning on noisy data (van der Wees et al., 2017; Wang et al., 2018a), and automated curriculum learning from model state (Bengio et al., 2009; Graves et al., 2017) has demonstrated success in multitask learning, including for NMT. Other relevant threads of work include research on meta-learning to learn model hyper-parameters (Nichol et al., 2018; Baydin et al., 2017), model parameters (Ha et al., 2016a; Platanios et al., 2018) and models that learn new tasks with high sample efficiency (Finn et al., 2017) without forgetting existing tasks or languages (Rusu et al., 2016; Kirkpatrick et al., 2017; Lakew et al., 2018). When scaling to thousands of languages, approaches that can automatically learn data sampling, curricula, model hyper-parameters and parameters, in order to train models that quickly adapt to new tasks, are again expected to become increasingly important.
Increasing Capacity: Increasing model capacity has been demonstrated to be a sure-shot approach to improving model quality in the presence of supervised data for several tasks, including image generation (Brock et al., 2018), language modeling and transfer learning (Radford et al.; Devlin et al., 2018), and NMT (Shazeer et al., 2018). Validating this trend, one key result of our study is the need for sufficient model capacity when training large multitask networks. Beyond the systems and engineering challenges (Jouppi et al., 2017; Huang et al., 2018; Shazeer et al., 2018), training deep and high capacity neural networks poses significant trainability challenges (Montavon et al., 2018). A better theoretical understanding of the generalization ability of deep and wide models (Raghu et al., 2017; Arora et al., 2018) and of trainability challenges such as exploding and vanishing gradients (Glorot and Bengio, 2010; Balduzzi et al., 2017), together with empirical approaches to counter these challenges (Bapna et al., 2018; Zhang et al., 2019), are critical to further scaling of model capacity.

Architecture and Vocabulary: Extending existing approaches for vocabulary construction and neural modeling to work better in multitask settings (Ma et al., 2018a; Houlsby et al., 2019), in order to strike the right balance between shared and task-specific capacity, or learning the network structure in order to maximally exploit task relatedness (Li et al., 2019), are exciting avenues for future research. As we scale up to thousands of languages, vocabulary handling becomes a significantly harder challenge. Successfully representing thousands of languages might require character (Lee et al., 2017; Cherry et al., 2018), byte (Gillick et al., 2015) or even bit level modeling. Modeling extremely long sequences of bit/byte-level tokens would present its own set of modeling challenges. Finally, while scaling up existing approaches is one way to improve model quality, some approaches or architectures are more efficient to train (Shazeer et al., 2017; Vaswani et al., 2017; Wu et al., 2019), more sample efficient, or faster at inference (Gu et al., 2017; Roy et al., 2018). As models get larger, improving training and inference efficiency becomes more important to keep training and inference times within reasonable limits, from both a practical and an environmental perspective (Strubell et al., 2019).

# 9 Conclusion

Although we believe that we have achieved a milestone with the present study, building on five years of multilingual NMT research, we still have a long way to go towards truly universal machine translation. In the open problems and future directions enumerated above, many promising solutions appear to be interdisciplinary, making multilingual NMT a plausible general test bed for other machine learning practitioners and theoreticians.

# Acknowledgments

We would like to thank the Google Translate and Google Brain teams for their useful input and discussion, and the entire Lingvo development team for their foundational contributions to this project. We would also like to thank Katherine Lee, Thang Luong, Colin Raffel, Noam Shazeer, and Geoffrey Hinton for their insightful comments.

# References

Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. CoRR, abs/1903.00089.

Maruan Al-Shedivat and Ankur P Parikh. 2019. Consistency by agreement in zero-shot neural machine translation. arXiv preprint arXiv:1904.02338.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Roee Aharoni, Melvin Johnson, and Wolfgang Macherey. 2019. The missing ingredient in zero-shot neural machine translation. arXiv preprint arXiv:1903.07091.

Sanjeev Arora, Nadav Cohen, and Elad Hazan. 2018. On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509.

Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. 2017. The shattered gradients problem: If resnets are the answer, then what is the question? In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 342–350. JMLR.org.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.

Ankur Bapna, Mia Xu Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training deeper neural machine translation models with transparent attention. arXiv preprint arXiv:1808.07561.

Atılım Güneş Baydin, Robert Cornish, David Martinez Rubio, Mark Schmidt, and Frank Wood. 2017. Online learning rate adaptation with hypergradient descent. arXiv preprint arXiv:1703.04782.

Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48. ACM.

Graeme Blackwood, Miguel Ballesteros, and Todd Ward. 2018. Multilingual neural machine translation with task-specific attention. arXiv preprint arXiv:1806.03280.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, et al. 2018a. Proceedings of the Third Conference on Machine Translation: Research Papers. Belgium, Brussels. Association for Computational Linguistics.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, et al. 2017. Findings of the 2017 conference on machine translation (WMT17). In Proceedings of the Second Conference on Machine Translation, pages 169–214.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, et al. 2016a. Findings of the 2016 conference on machine translation. In ACL 2016 First Conference on Machine Translation (WMT16), pages 131–198. The Association for Computational Linguistics.

Ondřej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018b. Findings of the 2018 conference on machine translation (WMT18). In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272–303, Belgium, Brussels. Association for Computational Linguistics.

Ondřej Bojar, Christian Federmann, Barry Haddow, Philipp Koehn, Matt Post, and Lucia Specia. 2016b. Ten years of WMT evaluation campaigns: Lessons learnt. In Proceedings of the LREC 2016 Workshop “Translation Evaluation – From Fragmented Tools and Data Sets to an Integrated Ecosystem”, pages 27–34.

Andrew Brock, Jeff Donahue, and Karen Simonyan. 2018. Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096.

Gail A Carpenter and Stephen Grossberg. 1987.
Art 2: Self-organization of stable category recognition codes for analog input patterns. Ap- plied optics, 26(23):4919–4930. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. 19 Mauro Cettolo, C Girardi, and M Federico. 2012. Wit3: Web inventory of transcribed and trans- lated talks. Proceedings of EAMT, pages 261– 268. Mia Xu Chen, Orhan Firat, Ankur Bapna, et al. 2018a. The best of both worlds: Combin- ing recent advances in neural machine transla- tion. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 76–86, Melbourne, Australia. Association for Compu- tational Linguistics. Yun Chen, Yang Liu, Yong Cheng, and Vic- tor OK Li. 2017. A teacher-student frame- work for zero-resource neural machine transla- tion. arXiv preprint arXiv:1705.00753. Yun Chen, Yang Liu, and Victor OK Li. 2018b. Zero-resource neural machine translation with In Thirty- multi-agent communication game. Second AAAI Conference on Artificial Intelli- gence. Zhiyuan Chen and Bing Liu. 2016. Lifelong ma- chine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 10(3):1– 145. Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisit- ing character-based neural machine translation with capacity and compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4295– 4305, Brussels, Belgium. Association for Com- putational Linguistics. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine trans- lation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724–1734, Doha, Qatar. Association for Computational Linguis- tics. Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder with- out explicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147. Ronan Collobert and Jason Weston. 2008. A uni- fied architecture for natural language process- ing: Deep neural networks with multitask learn- In Proceedings of the 25th International ing. Conference on Machine Learning, pages 160– 167. Marta R. Costa-jussà and José A. R. Fonollosa. 2016. Character-based neural machine transla- tion. CoRR, abs/1603.00810. Matthieu Courbariaux, Yoshua Bengio, and Jean- Pierre David. 2014. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024. Josep Maria Crego, Jungi Kim, Guillaume Klein, Systran’s pure neural machine et al. 2016. translation systems. CoRR, abs/1610.05540. Andrew M Dai and Quoc V Le. 2015. Semi- supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087. L. Deng, G. Hinton, and B. Kingsbury. 2013. New types of deep neural network learning for speech recognition and related applications: an overview. In 2013 IEEE International Confer- ence on Acoustics, Speech and Signal Process- ing, pages 8599–8603. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Bert: Pre- transformers arXiv preprint George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In Proceedings of the sec- ond international conference on Human Lan- guage Technology Research, pages 138–145. Morgan Kaufmann Publishers Inc. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 
2015. Multi-task learning for In Proceedings multiple language translation. of the 53rd Annual Meeting of the Association 20 for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), volume 1, pages 1723–1732. Mark Dredze, Alex Kulesza, and Koby Crammer. 2010. Multi-domain learning by confidence- Mach. weighted parameter combination. Learn., 79(1-2):123–149. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding arXiv preprint back-translation at scale. arXiv:1808.09381. Carlos Escolano, Marta R Costa-jussà, and José AR Fonollosa. 2019. Towards interlin- gua neural machine translation. arXiv preprint arXiv:1905.06831. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org. Orhan Firat, Kyunghyun Cho, and Yoshua Ben- gio. 2016a. Multi-way, multilingual neural ma- chine translation with a shared attention mech- anism. arXiv preprint arXiv:1601.01073. Baskaran Firat, Kyunghyun Cho, Sankaran, Fatos T. Yarman Vural, and Yoshua Bengio. 2017. Multi-way, multilingual neural machine translation. Computer Speech & Language, 45:236 – 252. Orhan Firat, Baskaran Sankaran, Yaser Al- and Onaizan, Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neural machine the 2016 translation. Conference on Empirical Methods in Natural Language Processing, pages 268–277. Markus Freitag, Isaac Caswell, and Scott Roy. 2019. Text repair model for neural machine translation. arXiv preprint arXiv:1904.04790. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. CoRR, abs/1705.03122. Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual lan- guage processing from bytes. arXiv preprint arXiv:1512.00103. Xavier Glorot and Yoshua Bengio. 2010. Under- standing the difficulty of training deep feedfor- In Proceedings of the ward neural networks. thirteenth international conference on artificial intelligence and statistics, pages 249–256. Siavash Golkar, Michael Kagan, and Kyunghyun Cho. 2019. Continual learning via neural prun- ing. CoRR, abs/1903.04476. Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. The MIT Press. Alex Graves, Marc G Bellemare, Jacob Menick, Remi Munos, and Koray Kavukcuoglu. 2017. Automated curriculum learning for neural net- works. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1311–1320. JMLR. org. Neurocomputing: Foundations of research. chapter How Does the Brain Build a Cognitive Code?, pages 347–399. MIT Press, Cambridge, MA, USA. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Non-autoregressive neural machine translation. arXiv preprint arXiv:1711.02281. Jiatao Gu, Hany Hassan, Jacob Devlin, and Vic- tor O.K. Li. 2018a. Universal neural machine translation for extremely low resource lan- guages. In Proceedings of the 2018 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 344–354, New Orleans, Louisiana. Association for Computational Linguistics. Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. 2018b. Meta-learning for low-resource neural machine translation. arXiv preprint arXiv:1808.08437. 
21 Jiatao Gu, Yong Wang, Kyunghyun Cho, and Vic- tor OK Li. 2019. Improved zero-shot neural machine translation via ignoring spurious cor- relations. arXiv preprint arXiv:1906.01181. Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535. Vineet Gupta, Tomer Koren, and Yoram Singer. Shampoo: Preconditioned stochas- arXiv preprint 2018. tic tensor optimization. arXiv:1802.09568. David Ha, Andrew Dai, and Quoc V Le. arXiv preprint 2016a. arXiv:1609.09106. Hypernetworks. and Alexander Waibel. 2016b. Toward multilingual neural ma- chine translation with universal encoder and de- coder. arXiv preprint arXiv:1611.04798. Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016c. Toward multilingual neural ma- chine translation with universal encoder and de- coder. CoRR, abs/1611.04798. Aaron Harlap, Deepak Narayanan, Amar Phan- ishayee, Vivek Seshadri, Gregory R Ganger, and Phillip B Gibbons. Pipedream: Pipeline parallelism for dnn training. Hany Hassan, Anthony Aue, Chang Chen, et al. 2018. Achieving human parity on automatic arXiv chinese to english news translation. preprint arXiv:1803.05567. Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kian- inejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. 2017. Deep learning scal- ing is predictable, empirically. arXiv preprint arXiv:1712.00409. and Demian Gholipour. 2019. Evaluating the super- vised and zero-shot performance of multi- arXiv preprint lingual arXiv:1906.09675. Neil Houlsby, Andrei Giurgiu, Stanislaw Jas- trzebski, Bruna Morrone, Quentin de Larous- silhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient arXiv preprint transfer arXiv:1902.00751. Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention- based multimodal neural machine translation. In Proceedings of the First Conference on Ma- chine Translation: Volume 2, Shared Task Pa- pers, volume 2, pages 639–645. Yanping Huang, Yonglong Cheng, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, and Zhifeng Chen. 2018. Gpipe: Efficient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965. Sébastien Jean, Orhan Firat, and Melvin John- son. 2018. Adaptive scheduling for multi-task In Advances in Neural Information learning. Processing Systems, Workshop on Continual Learning, Montreal, Canada. Melvin Johnson, Mike Schuster, Quoc V Le, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot transla- tion. Transactions of the Association of Com- putational Linguistics, 5(1):339–351. Mahesh Joshi, Mark Dredze, William W. Cohen, and Carolyn Rose. 2012. Multi-domain learn- In Proceed- ing: When do domains matter? ings of the 2012 Joint Conference on Empiri- cal Methods in Natural Language Processing and Computational Natural Language Learn- ing, pages 1302–1312, Jeju Island, Korea. As- sociation for Computational Linguistics. Norman P. Jouppi, Cliff Young, Nishant Patil, In-datacenter performance analy- et al. 2017. sis of a tensor processing unit. Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. 2017. One model to learn them all. CoRR, abs/1706.05137. Nal Kalchbrenner and Phil Blunsom. 2013. Re- current continuous translation models. 
In Pro- ceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA. Association for Computational Linguistics. Zhuoliang Kang, Kristen Grauman, and Fei Sha. 2011. Learning with whom to share in multi- In Proceedings of the task feature learning. 28th International Conference on International Conference on Machine Learning, ICML’11, pages 521–528, USA. Omnipress. Eliyahu Kiperwasser and Miguel Ballesteros. 2018. Scheduled multi-task learning: From syntax to translation. Transactions of the Asso- ciation for Computational Linguistics, 6:225– 240. James Kirkpatrick, Razvan Pascanu, Neil Ra- binowitz, et al. 2017. Overcoming catas- trophic forgetting in neural networks. Pro- ceedings of the national academy of sciences, 114(13):3521–3526. Philipp Koehn. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Confer- ence Proceedings: the tenth Machine Transla- tion Summit, pages 79–86, Phuket, Thailand. AAMT, AAMT. Julia Kreutzer and Artem Sokolov. 2018. inputs for NMT fa- CoRR, Learning to segment vors character-level processing. abs/1810.01480. Taku Kudo and John Richardson. 2018. Senten- cePiece: A simple and language independent subword tokenizer and detokenizer for neural In Proceedings of the 2018 text processing. Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Gaurav Kumar, George Foster, Colin Cherry, and Maxim Krikun. 2019. Reinforcement learning based curriculum optimization for neural ma- chine translation. CoRR, abs/1903.00041. Brenden M. Lake, Ruslan R. Salakhutdinov, Ja- son Gross, and Joshua B. Tenenbaum. 2011. One shot learning of simple visual concepts. In CogSci. 22 Surafel Melaku Lakew, Aliia Erofeeva, Matteo Negri, Marcello Federico, and Marco Turchi. 2018. Transfer learning in multilingual neural machine translation with dynamic vocabulary. CoRR, abs/1811.01137. and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Giwoong Lee, Eunho Yang, and Sung Hwang. 2016. Asymmetric multi-task learning based In International on task relatedness and loss. Conference on Machine Learning, pages 230– 238. Jason Lee, Kyunghyun Cho, and Thomas Hof- mann. 2017. Fully character-level neural ma- chine translation without explicit segmentation. Transactions of the Association for Computa- tional Linguistics, 5:365–378. Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, and Caiming Xiong. 2019. Learn to grow: A continual structure learning frame- work for overcoming catastrophic forgetting. Wang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. 2015. Character-based neu- arXiv preprint ral machine translation. arXiv:1511.04586. David Lopez-Paz et al. 2017. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pages 6467–6476. Yichao Lu, Phillip Keung, Faisal Ladhak, Vikas Bhardwaj, Shaonan Zhang, and Jason Sun. interlingua for multilin- A neural 2018. arXiv preprint gual machine translation. arXiv:1804.08198. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi- arXiv task sequence to sequence learning. preprint arXiv:1511.06114. Minh-Thang Luong and Christopher D Manning. 2016. Achieving open vocabulary neural ma- chine translation with hybrid word-character models. arXiv preprint arXiv:1604.00788. 
23 Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H. Chi. 2018a. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proceedings of the 24th ACM SIGKDD International Con- ference on Knowledge Discovery &#38; Data Mining, KDD ’18, pages 1930–1939, New York, NY, USA. ACM. Qingsong Ma, Ondˇrej Bojar, and Yvette Graham. 2018b. Results of the wmt18 metrics shared task: Both characters and embeddings achieve good performance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671–688. and Bernardino Romera-Paredes. 2016. The benefit of multitask representation learning. Journal of Machine Learning Research, 17:81:1–81:32. Tom M. Mitchell. 1997. Machine learning. Mc- Graw Hill series in computer science. McGraw- Hill. Grégoire Montavon, Wojciech Samek, and Klaus- Robert Müller. 2018. Methods for interpreting and understanding deep neural networks. Digi- tal Signal Processing, 73:1–15. Hyeonseob Nam and Bohyung Han. 2015. Learning multi-domain convolutional neu- CoRR, ral networks for visual abs/1510.07945. Graham Neubig and Junjie Hu. 2018. Rapid adap- tation of neural machine translation to new lan- In Proceedings of the 2018 Confer- guages. ence on Empirical Methods in Natural Lan- guage Processing, pages 875–880. Association for Computational Linguistics. Toan Q. Nguyen and David Chiang. 2017. Trans- fer learning across low-resource, related lan- guages for neural machine translation. In Proc. IJCNLP, volume 2, pages 296–301. Alex Nichol, Joshua Achiam, and John Schulman. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. arXiv preprint arXiv:1806.00187. S. J. Pan and Q. Yang. 2010. A survey on trans- fer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine transla- In Proceedings of 40th Annual Meeting tion. of the Association for Computational Linguis- tics, pages 311–318, Philadelphia, Pennsylva- nia, USA. Association for Computational Lin- guistics. German Ignacio Parisi, Ronald Kemker, Jose L. Part, Christopher Kann, and Stefan Wermter. 2018. Continual lifelong learning with neural networks: A review. CoRR, abs/1802.07569. Anastasia Pentina, Viktoriia Sharmanska, and Christoph H. Lampert. 2015. Curriculum learn- ing of multiple tasks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). A. Phillips and M Davis. 2009. Tags for Identify- ing Languages. RFC 5646, RFC Editor. Emmanouil Antonios Platanios, Mrinmaya Sachan, Graham Neubig, and Tom Mitchell. 2018. Contextual parameter generation for arXiv universal neural machine translation. preprint arXiv:1808.08493. Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabás Póczos, and Tom M. Mitchell. 2019. Competence-based curriculum learning for neural machine translation. CoRR, abs/1903.09848. Maja Popovi´c. 2015. chrf: character n-gram f- score for automatic mt evaluation. In Proceed- ings of the Tenth Workshop on Statistical Ma- chine Translation, pages 392–395. Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771. Ye Qi, Sachan Devendra, Felix Matthieu, Pad- manabhan Sarguna, and Neubig Graham. 2018. When and why are pre-trained word embed- dings useful for neural machine translation. In HLT-NAACL. 
Alec Radford, Karthik Narasimhan, Tim Sali- mans, and Ilya Sutskever. Improving language understanding by generative pre-training. Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha Sohl Dickstein. 2017. On the expressive power of deep neural networks. In Proceedings of the 34th International Con- ference on Machine Learning-Volume 70, pages 2847–2854. JMLR. org. Prajit Ramachandran, Peter J Liu, and Quoc V Le. 2016. Unsupervised pretraining for se- quence to sequence learning. arXiv preprint arXiv:1611.02683. Bharath Ramsundar, Steven Kearnes, Patrick Ri- ley, Dale Webster, David Konerding, and Vijay Pande. 2015. Massively multitask networks for drug discovery. arXiv:1502.02072. Bernardino Romera-Paredes and Philip H. S. Torr. 2015. An embarrassingly simple approach to zero-shot learning. In ICML. Michael T Rosenstein, Zvika Marx, Leslie Pack Kaelbling, and Thomas G Dietterich. 2005. To transfer or not to transfer. In NIPS 2005 work- shop on transfer learning, volume 898, pages 1–4. Aurko Roy, Ashish Vaswani, Arvind Neelakan- tan, and Niki Parmar. 2018. Theory and experi- ments on vector quantized autoencoders. arXiv preprint arXiv:1805.11063. Sebastian Ruder. 2017. An overview of multi- task learning in deep neural networks. CoRR, abs/1706.05098. Sebastian Ruder, Joachim Bingel, Isabelle Au- genstein, and Anders Søgaard. 2017. Learn- ing what to share between loosely related tasks. arXiv preprint arXiv:1705.08142. Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671. 24 Devendra Singh Sachan and Graham Neubig. 2018. Parameter sharing methods for multilin- gual self-attentional translation models. Pro- ceedings of the Third Conference on Machine Translation. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine trans- lation models with monolingual data. arXiv preprint arXiv:1511.06709. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Noam Shazeer, Youlong Cheng, Niki Parmar, et al. 2018. Mesh-tensorflow: Deep learning for supercomputers. In Advances in Neural In- formation Processing Systems, pages 10414– 10423. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated arXiv preprint mixture-of-experts layer. arXiv:1701.06538. Noam Shazeer Adafactor: sublinear memory cost. arXiv:1804.04235. and Mitchell Stern. 2018. Adaptive learning rates with arXiv preprint Jonathan Shen, Patrick Nguyen, Yonghui Wu, et al. 2019. Lingvo: a modular and scalable framework for sequence-to-sequence modeling. arXiv preprint arXiv:1902.08295. Daniel L. Silver, Qiang Yang, and Lianghao Li. 2013. Lifelong machine learning systems: Be- yond learning algorithms. In AAAI Spring Sym- posium: Lifelong Machine Learning. Shagun Sodhani, Sarath Chandar, and Yoshua On training recurrent neu- CoRR, Bengio. 2018. ral networks for lifelong learning. abs/1811.07017. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450. Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates. 2017. 
Cold fusion: Training seq2seq models together with language models. arXiv preprint arXiv:1708.06426. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, and Ruslan Ilya Sutskever, Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Milos Stanojevic and Khalil Sima’an. 2014. In Pro- BEER: BEtter evaluation as ranking. ceedings of the Ninth Workshop on Statisti- cal Machine Translation, pages 414–419, Bal- timore, Maryland, USA. Association for Com- putational Linguistics. Emma Strubell, Ananya Ganesh, and Andrew Mc- Callum. 2019. Energy and policy considera- tions for deep learning in nlp. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neu- In Advances in neural informa- ral networks. tion processing systems, pages 3104–3112. Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2019. Multilingual neural ma- chine translation with knowledge distillation. arXiv preprint arXiv:1902.10461. Sebastian Thrun and Tom M. Mitchell. 1995. Robotics and Au- Lifelong robot learning. tonomous Systems, 15(1):25 – 46. The Biol- ogy and Technology of Intelligent Autonomous Agents. Sebastian Thrun and Lorien Pratt, editors. 1998. Learning to Learn. Kluwer Academic Publish- ers, Norwell, MA, USA. Jörg Tiedemann. 2012. Parallel data, tools and In Proceedings of the interfaces in OPUS. Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2214–2218, Istanbul, Turkey. European Lan- guage Resources Association (ELRA). 25 Emerging language spaces learned from massively multilingual cor- pora. arXiv preprint arXiv:1802.00273. Jakob Uszkoreit, Jay M Ponte, Ashok C Popat, and Moshe Dubiner. 2010. Large scale par- allel document mining for machine transla- the 23rd Interna- tion. tional Conference on Computational Linguis- tics, pages 1101–1109. Association for Compu- tational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. In Advances in Neural Information Processing Systems, pages 5998– 6008. Ricardo Vilalta and Youssef Drissi. 2002. A per- spective view and survey of meta-learning. Ar- tificial intelligence review, 18(2):77–95. Oriol Vinyals, Charles Blundell, Timothy Lilli- crap, Koray Kavukcuoglu, and Daan Wierstra. 2016. Matching networks for one shot learning. In Proceedings of the 30th International Con- ference on Neural Information Processing Sys- tems, NIPS’16, pages 3637–3645, USA. Curran Associates Inc. Wei Wang, Taro Watanabe, Macduff Hughes, Tet- suji Nakagawa, and Ciprian Chelba. 2018a. Denoising neural machine translation training with trusted data and online data selection. arXiv preprint arXiv:1809.00068. Jan-Thorsten Peter, Hendrik Rosendahl, and Hermann Ney. 2016. Char- acter: Translation edit rate on character level. In Proceedings of the First Conference on Ma- chine Translation: Volume 2, Shared Task Pa- pers, volume 2, pages 505–510. Yining Wang, Jiajun Zhang, Feifei Zhai, Jing- fang Xu, and Chengqing Zong. 2018b. Three strategies to improve one-to-many multilingual In Proceedings of the 2018 Con- translation. ference on Empirical Methods in Natural Lan- guage Processing, pages 2955–2960, Brussels, Belgium. Association for Computational Lin- guistics. 26 Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic data selection for neural machine translation. 
arXiv preprint arXiv:1708.00712.
Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. arXiv preprint arXiv:1901.10430.
Yonghui Wu, Mike Schuster, Zhifeng Chen, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Hongyi Zhang, Yann N. Dauphin, and Tengyu Ma. 2019. Fixup initialization: Residual learning without normalization. arXiv preprint arXiv:1901.09321.
Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535–1545.
Yu Zhang and Qiang Yang. 2017. A survey on multi-task learning. arXiv preprint arXiv:1707.08114.
Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep recurrent models with fast-forward connections for neural machine translation. Transactions of the Association for Computational Linguistics, 4:371–383.
Michał Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The United Nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA).
Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568–1575.

Table 8: List of BCP-47 language codes used throughout this paper (Phillips and Davis, 2009).
Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Samoan (sm), Scots_Gaelic (gd), Serbian (sr), Sesotho (st), Shona (sn), Sindhi (sd), Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Spanish (es), Sundanese (su), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Xhosa (xh), Yiddish (yi), Yoruba (yo), Zulu (zu)
{ "id": "1902.10461" }
1907.04840
Sparse Networks from Scratch: Faster Training without Losing Performance
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters suggesting that sparse momentum is robust and easy to use.
http://arxiv.org/pdf/1907.04840
Tim Dettmers, Luke Zettlemoyer
cs.LG, cs.NE, stat.ML
9 page NeurIPS 2019 submission
null
cs.LG
20190710
20190823
9 1 0 2 g u A 3 2 ] G L . s c [ 2 v 0 4 8 4 0 . 7 0 9 1 : v i X r a # Sparse Networks from Scratch: Faster Training without Losing Performance # Tim Dettmers & Luke Zettlemoyer University of Washington {dettmers, lsz}@cs.washington.edu # Abstract We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse mo- mentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum mag- nitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Fur- thermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters suggesting that sparse momentum is robust and easy to use. # Introduction Current state-of-the-art neural networks need extensive computational resources to be trained and can have capacities of close to one billion connections between neurons (Vaswani et al., 2017; Devlin et al., 2018; Child et al., 2019). One solution that nature found to improve neural network scaling is to use sparsity: the more neurons a brain has, the fewer connections neurons make with each other (Herculano-Houzel et al., 2010). Similarly, for deep neural networks, it has been shown that sparse weight configurations exist which train faster and achieve the same errors as dense networks (Frankle and Carbin, 2019). However, currently, these sparse configurations are found by starting from a dense network, which is pruned and re-trained repeatedly – an expensive procedure. In this work, we demonstrate the possibility of training sparse networks that rival the performance of their dense counterparts with a single training run – no re-training is required. We start with random initializations and maintain sparse weights throughout training while also speeding up the overall training time. We achieve this by developing sparse momentum, an algorithm which uses the exponentially smoothed gradient of network weights (momentum) as a measure of persistent errors to identify which layers are most efficient at reducing the error and which missing connections between neurons would reduce the error the most. Sparse momentum follows a cycle of (1) pruning weights with small magnitude, (2) redistributing weights across layers according to the mean momentum magnitude of existing weights, and (3) growing new weights to fill in missing connections which have the highest momentum magnitude. We compare the performance of sparse momentum to compression algorithms and recent methods that maintain sparse weights throughout training. We demonstrate state-of-the-art sparse performance on Preprint. Under review. MNIST, CIFAR-10, and ImageNet-1k. 
For CIFAR-10, we determine the percentage of weights needed to reach dense performance levels and find that AlexNet, VGG16, and Wide Residual Networks need between 35-50%, 5-10%, and 20-30% weights to reach dense performance levels. We also estimate the overall speedups of training our sparse convolutional networks to dense performance levels on CIFAR-10 for optimal sparse convolution algorithms and naive dense convolution algorithms compared to dense baselines. For sparse convolution, we estimate speedups between 2.74x and 5.61x and for dense convolution speedups between 1.07x and 1.36x. In your analysis, we show that our method is relatively robust to choices of prune rate and momentum hyperparameters. Furthermore, ablations demonstrate that the momentum redistribution and growth components are increasingly important as networks get deeper and larger in size – both are critical for good ImageNet performance. # 2 Related Work From Dense to Sparse Neural Networks: Work that focuses on creating sparse from dense neural networks has an extensive history. Earlier work focused on pruning via second-order derivatives (LeCun et al., 1989; Karnin, 1990; Hassibi and Stork, 1992) and heuristics which ensure efficient training of networks after pruning (Chauvin, 1988; Mozer and Smolensky, 1988; Ishikawa, 1996). Recent work is often motivated by the memory and computational benefits of sparse models that enable the deployment of deep neural networks on mobile and low-energy devices. A very influential paradigm has been the iterative (1) train-dense, (2) prune, (3) re-train cycle introduced by Han et al. (2015). Extensions to this work include: Compressing recurrent neural networks and other models (Narang et al., 2017; Zhu and Gupta, 2018; Dai et al., 2018), continuous pruning and re-training (Guo et al., 2016), joint loss/pruning-cost optimization (Carreira-Perpinán and Idelbayev, 2018), layer-by-layer pruning (Dong et al., 2017), fast-switching growth-pruning cycles (Dai et al., 2017), and soft weight-sharing (Ullrich et al., 2017). These approaches often involve re-training phases which increase the training time. However, since the main goal of this line of work is a compressed model for mobile devices, it is desirable but not an important main goal to reduce the run-time of these procedures. This is contrary to our motivation. Despite the difference in motivation, we include many of these dense-to-sparse compression methods in our comparisons. Other compression algorithms include L0 regularization (Louizos et al., 2018), and Bayesian methods (Louizos et al., 2017; Molchanov et al., 2017). For further details, see the survey of Gale et al. (2019). Interpretation and Analysis of Sparse Neural Networks: Frankle and Carbin (2019) show that “winning lottery tickets” exist for deep neural networks – sparse initializations which reach similar predictive performance as dense networks and train just as fast. However, finding these winning lottery tickets is computationally expensive and involves multiple prune and re-train cycles starting from a dense network. Followup work concentrated on finding these configurations faster (Frankle et al., 2019; Zhou et al., 2019). In contrast, we reach dense performance levels with a sparse network from random initialization with a single training run while accelerating training. Sparse Neural Networks Throughout Training: Methods that maintain sparse weights throughout training through a prune-redistribute-regrowth cycle are most closely related to our work. 
Bellec et al. (2018) introduce DEEP-R, which takes a Bayesian perspective and performs sampling for prune and regrowth decisions – sampling sparse network configurations from a posterior. While theoretically rigorous, this approach is computationally expensive and challenging to apply to large networks and datasets. Sparse evolutionary training (SET) (Mocanu et al., 2018) simplifies prune-regrowth cycles by using heuristics: (1) prune the smallest and most negative weights, (2) grow new weights in random locations. Unlike our work, where many convolutional channels are empty and can be excluded from computation, growing weights randomly fills most convolutional channels and makes it challenging to harness computational speedups during training without specialized sparse algorithms. SET also does not include the cross-layer redistribution of weights which we find to be critical for good performance, as shown in our ablation study. The most closely related work to ours is Dynamic Sparse Reparameterization (DSR) by Mostafa and Wang (2019), which includes the full prune-redistribute-regrowth cycle. However, DSR requires some specific layers to be dense. Our method works in a fully sparse setting and is thus more generally applicable. More distantly related is Single-shot Network Pruning (SNIP) (Lee et al., 2019), which aims to find the best sparse network from a single pruning decision. The goal of SNIP is simplicity, while our goal is maximizing predictive and run-time performance. In our experiments, we compare against all four methods: DEEP-R, SET, DSR, and SNIP.

# 3 Method

# 3.1 Sparse Learning

We define sparse learning to be the training of deep neural networks which maintain sparsity throughout training while matching the predictive performance of dense neural networks. To achieve this, intuitively, we want to find the weights that reduce the error most effectively. This is challenging since most deep neural networks can hold trillions of different combinations of sparse weights. Additionally, during training, as feature hierarchies are learned, efficient weights might change gradually from shallow to deep layers. How can we find good sparse configurations? In this work, we follow a divide-and-conquer strategy that is guided by computationally efficient heuristics. We divide sparse learning into the following sub-problems which can be tackled independently: (1) pruning weights, (2) redistribution of weights across layers, and (3) regrowing weights, as defined in more detail below.

Figure 1: Sparse Momentum is applied at the end of each epoch: (1) take the magnitude of the exponentially smoothed gradient (momentum) of each layer and normalize to 1; (2) for each layer, remove p = 50% of the weights with the smallest magnitude; (3) across layers, redistribute the removed weights by adding weights to each layer proportionate to the momentum of each layer; within a layer, add weights starting from those with the largest momentum magnitude. Decay p.
# 3.2 Sparse Momentum

We use the mean magnitude of momentum M_i of existing weights W_i in each layer i to estimate how efficient the average weight in each layer is at reducing the overall error. Intuitively, we want to take weights from less efficient layers and redistribute them to weight-efficient layers. The sparse momentum algorithm is depicted in Figure 1. In this section, we first describe the intuition behind sparse momentum and then present a more detailed description of the algorithm.

The gradient of the error with respect to a weight, ∂E/∂W, yields the direction which reduces the error at the highest rate. However, if we use stochastic gradient descent, most weights of ∂E/∂W oscillate between small/large and negative/positive gradients with each mini-batch (Qian, 1999) – a good change for one mini-batch might be a bad change for another. We can reduce oscillations if we take the average gradient over time, thereby finding weights which reduce the error consistently. However, we want to value recent gradients, which are closer to the local minimum, more highly than the distant past. This can be achieved by exponentially smoothing ∂E/∂W – the momentum M_i:

M_i^{t+1} = α M_i^t + (1 − α) ∂E/∂W_i,

where α is a smoothing factor, M_i is the momentum for the weight W_i in layer i, and M_i is initialized with 0.

Momentum is efficient at accelerating the optimization of deep neural networks by identifying weights which reduce the error consistently. Similarly, the aggregated momentum of weights in each layer should reflect how good each layer is at reducing the error consistently. Additionally, the momentum of zero-valued weights – equivalent to missing weights in sparse networks – can be used to estimate how quickly the error would change if these weights were included in a sparse network.

The details of the full training procedure of our algorithm are shown in Algorithm 1. See Algorithm 2 in the Appendix for a more detailed, source-code-like description of sparse momentum. Before training, we initialize the network with a certain sparsity s: we initialize the network as usual and then remove a fraction s of the weights for each layer. We train the network normally and mask the weights after each gradient update to enforce sparsity. We apply sparse momentum after each epoch. We can break sparse momentum into three major parts: (a) redistribution of weights, (b) pruning weights, (c) regrowing weights. In step (a), we take the mean of the element-wise momentum magnitude m_i of all nonzero weights in each layer i and normalize it by the total momentum magnitude of all layers, Σ_i m_i. The resulting proportion is the momentum magnitude contribution of each layer. The number of weights to be regrown in each layer is the total number of removed weights multiplied by each layer's momentum contribution: Regrow_i = TotalRemoved · m_i. In step (b), we prune a proportion p (the prune rate) of the weights with the lowest magnitude for each layer. In step (c), we regrow weights by enabling the gradient flow of those zero-valued (missing) weights which have the largest momentum magnitude.
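Concretely, the momentum update and the per-layer contributions of step (a) can be written in a few lines of NumPy; the function names below are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def update_momentum(momentum, grad, alpha=0.9):
    """Exponentially smoothed gradient: M^{t+1} = alpha * M^t + (1 - alpha) * dE/dW."""
    return alpha * momentum + (1.0 - alpha) * grad

def layer_contributions(weights, momenta):
    """Mean momentum magnitude of the nonzero weights in each layer,
    normalized so that the contributions sum to one (step (a))."""
    means = np.array([np.abs(m[w != 0]).mean() for w, m in zip(weights, momenta)])
    return means / means.sum()
```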
Additionally, there are two edge cases which we did not include in Algorithm 1 for clarity. (1) If we allocate more weights to be regrown than is possible for a specific layer, for example regrowing 100 weights for a layer with a maximum of 10 weights, we redistribute the excess number of weights equally among all other layers. (2) For some layers, our algorithm will converge in the sense that the average weight in layer i has a much larger momentum magnitude than weights in other layers, but at the same time this layer is dense and cannot grow further. We do not want to prune weights from such important layers. Thus, for these layers, we reduce the prune rate p_i proportionally to the sparsity: p_i = min(p, sparsity_i). After each epoch, we decay the prune rate in Algorithm 1 in the same way learning rates are decayed. We use a cosine decay schedule that anneals the prune rate to zero on the last epoch, but in our sensitivity analysis in Section 5.2 we find that cosine and linear schedules work similarly well and our algorithm is insensitive to the choice of the starting prune rate.

Algorithm 1: Sparse momentum algorithm.
Data: Layers i to k with momentum M_i, weight W_i, binary Mask_i; prune rate p_i, density d
for i ← 0 to k do
    W_i ← xavierInit(W_i)
    Mask_i ← createMaskForWeight(W_i, d)
    applyMask(W_i, Mask_i)
end
for epoch ← 0 to numEpochs do
    /* Normal training; mask after each mini-batch. */
    for j ← 0 to numBatches do
        batch ← getBatch(j)
        ∂E/∂W ← computeGradients(W, batch)
        updateMomentum(∂E/∂W)
        updateWeights(M)
        for i ← 0 to k do
            applyMask(W_i, Mask_i)
        end
    end
    /* Determine momentum contribution, prune weights, then regrow them. */
    totalMomentum ← getTotalMomentum(M)
    totalPruned ← getTotalPrunedWeights(W, p)
    for i ← 0 to k do
        m_i ← getMomentumContribution(M_i, Mask_i, totalMomentum)
        magnitudePruneWeights(W_i, Mask_i, p_i)
        regrowWeights(W_i, Mask_i, m_i · totalPruned)
        p_i ← decayPruneRate(p_i)
        applyMask(W_i, Mask_i)
    end
end

# 3.3 Experimental Setup

For comparison, we follow three different experimental settings, one from Lee et al. (2019) and two settings that follow Mostafa and Wang (2019). For MNIST (LeCun, 1998), we use a batch size of 100 and decay the learning rate by a factor of 0.1 every 25000 mini-batches. For CIFAR-10 (Krizhevsky and Hinton, 2009), we use standard data augmentations (horizontal flip, and random crop with reflective padding), a batch size of 128, and decay the learning rate every 30000 mini-batches. We train for 100 and 250 epochs on MNIST and CIFAR-10, use a learning rate of 0.1, stochastic gradient descent with Nesterov momentum of α = 0.9, and a weight decay of 0.0005. We use a fixed 10% of the training data as the validation set and train on the remaining 90%. We evaluate the test set performance of our models on the last epoch. For all experiments on MNIST and CIFAR-10, we report standard errors. Our sample size is generally between 10 and 12 experiments per method/architecture/sparsity level with different random seeds for each experiment. We use the modified network architectures of AlexNet, VGG16, and LeNet-5 as introduced by Lee et al. (2019).

We consider two different variations of the experimental setup of Mostafa and Wang (2019) for ImageNet and CIFAR-10. The first follows their procedure closely, in that we run the networks in a partially dense setting where the first convolutional layer and downsampling convolutional layers are dense. Additionally, for CIFAR-10 the last fully connected layer is dense. In the second setting, we compare in a fully sparse setting – no layer is dense at the beginning of training. For the fully sparse setting we increase the overall number of weights according to the extra parameters in the dense layers and distribute them equally among the network. The parameters in the dense layers make up 5.63% of the weights of the ResNet-50 network. We refer to these two settings as the partially dense and fully sparse settings.
On ImageNet (Deng et al., 2009), we use ResNet-50 (He et al., 2016) with a stride of 2 for the 3x3 convolution in the bottleneck layers. We use a batch size of 256, input size of 224, momentum of α = 0.9, and weight decay of 10^−4. We train for 100 epochs and report validation set performance after the last epoch. We report results for the fully sparse and the partially dense setting.

For all experiments, we keep biases and batch normalization weights dense. We tuned the prune rate p and momentum rate α by searching the parameter space {0.2, 0.3, 0.4, 0.5, 0.6, 0.7} and {0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99} on MNIST and CIFAR-10 and found that p = 0.2 and α = 0.9 work well for most architectures. We use this prune and momentum rate throughout all experiments. In our sensitivity analysis in Section 5.2 we find that all prune rates between 0.2 and 0.5 and momentum rates between 0.7 and 0.9 work equally well. ImageNet experiments were run on 4x RTX 2080 Ti and all other experiments on individual GPUs. Our software builds on PyTorch (Paszke et al., 2017) and is a wrapper for PyTorch neural networks with a modular architecture for growth, redistribution, and pruning algorithms. Currently, no GPU-accelerated libraries that utilize sparse tensors exist, and as such we use masked weights to simulate sparse neural networks. Using our software, any PyTorch neural network can be adapted to be a sparse momentum network with less than 10 lines of code. We will open-source our software along with trained models and individual experimental results.¹

¹ https://github.com/TimDettmers/sparse_learning
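Because no GPU-accelerated sparse-tensor kernels are used, sparsity is simulated with binary masks that are re-applied after every update. The snippet below is a minimal, hypothetical illustration of that idea in PyTorch; the class and method names are ours and do not reflect the interface of the released sparselearning library.

```python
import torch

class MaskedSparseWrapper:
    """Keep a binary mask per weight tensor and re-apply it after every optimizer step."""

    def __init__(self, model, density=0.1):
        self.model = model
        self.masks = {}
        for name, p in model.named_parameters():
            if p.dim() > 1:  # leave biases and batch-norm parameters dense
                mask = (torch.rand_like(p) < density).float()
                self.masks[name] = mask
                p.data.mul_(mask)  # start from the desired sparsity

    @torch.no_grad()
    def apply_masks(self):
        for name, p in self.model.named_parameters():
            if name in self.masks:
                p.mul_(self.masks[name])  # pruned weights stay at zero

# usage: after optimizer.step(), call wrapper.apply_masks(); the per-epoch
# prune/redistribute/regrow step of Algorithm 1 then only edits wrapper.masks.
```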
# 4 Results

Results in Figure 2 and Table 1 show a comparison with model compression methods. On MNIST, sparse momentum is the only method that provides consistent strong performance across both the LeNet 300-100 and LeNet-5 Caffe models. Soft-weight sharing (Ullrich et al., 2017) and Layer-wise Brain Damage (Dong et al., 2017) are competitive with sparse momentum for one model, but underperform for the other model. For 1-2% of weights, variational dropout is more effective – but this method also uses dropout for further regularization while we only use weight decay. We can see that sparse momentum achieves equal performance to the LeNet-5 Caffe dense baseline with 8% weights.

Figure 2: Parameter sensitivity analysis for prune rate and momentum with 95% confidence intervals.

On CIFAR-10 in Table 1, we can see that sparse momentum outperforms Single-shot Network Pruning (SNIP) for all models and can achieve the same performance level as a dense model for VGG16-D with just 5% of weights. Figure 3 and Table 2 show comparisons of sparse learning methods on MNIST and CIFAR that follow the experimental procedure of Mostafa and Wang (2019), where some selected layers are dense. For LeNet 300-100 on MNIST, we can see that sparse momentum outperforms all other methods. For CIFAR-10, sparse momentum is better than dynamic sparse in 4 out of 5 cases. However, in general, the confidence intervals for most methods overlap – this particular setup for CIFAR-10 with specifically selected dense layers seems to be too easy to determine differences in performance between methods, and we do not recommend this setup for future work. Table 2 shows that sparse momentum outperforms all other methods on ImageNet (ILSVRC2012) for the Top-1 accuracy measure. Dynamic sparse is better for the Top-5 accuracy with 20% weights. In the fully sparse setting, sparse momentum remains competitive and seems to find a weight distribution which works equally well for the 10% weights case. For 20% weights, the performance decreases slightly.

Table 1: CIFAR-10 test set error (±standard error) for dense baselines, Sparse Momentum, and SNIP.
Model | Dense Error (%) | SNIP Error (%) | Sparse Momentum Error (%) | Weights (%)
AlexNet-s | 12.95±0.056 | 14.99 | 14.27±0.123 | 10
AlexNet-b | 12.85±0.068 | 14.50 | 13.56±0.094 | 10
VGG16-C | 6.49±0.038 | 7.27 | 7.00±0.054 | 5
VGG16-D | 6.59±0.050 | 7.09 | 6.69±0.049* | 5
VGG16-like | 6.50±0.054 | 8.00 | 7.00±0.077 | 3
WRN-16-8 | 4.57±0.022 | 6.63 | 5.62±0.056 | 5
WRN-16-10 | 4.45±0.040 | 6.43 | 5.24±0.052 | 5
WRN-22-8 | 4.26±0.032 | 5.85 | 4.93±0.056 | 5
* 95% confidence intervals overlap with dense model.

Table 2: Results for ResNet-50 on ImageNet. Accuracy (%) at 10% and 20% weights (Top-1 / Top-5).
Model | 10% weights | 20% weights
Dense ResNet-50 (He et al., 2016) | 74.9 / 92.4 | 74.9 / 92.4
DeepR (Bellec et al., 2018) | 70.2 / 90.0 | 71.7 / 90.6
SET (Mocanu et al., 2018) | 70.4 / 90.1 | 72.6 / 91.2
Dynamic Sparse (Mostafa and Wang, 2019) | 71.6 / 90.5 | 73.3 / 92.4
Sparse momentum | 72.3 / 91.0 | 74.2 / 91.9
Sparse momentum (fully sparse) | 72.3 / 91.0 | 73.8 / 91.8

Figure 3: Test set accuracy with 95% confidence intervals on MNIST and CIFAR at varying sparsity levels for LeNet 300-100 and WRN 28-2.

# 4.1 Speedups and Weights Needed for Dense Performance Levels

We analyzed how many weights are needed to achieve dense performance for our networks on CIFAR-10 and how much faster we would be able to train such a sparse network compared to a dense one. We do this analysis by increasing the number of weights by 5% until the sparse network trained with sparse momentum reaches a performance level that overlaps with a 95% confidence interval of the dense performance. We then measure the speedup of the model. For each network-density combination we perform ten training runs with different random seeds to calculate the mean test error and its standard error. To estimate the speedups that could be obtained using sparse momentum for these dense networks we follow two approaches: theoretical speedups for sparse convolution algorithms, which are proportional to reductions in FLOPS, and practical speedups using dense convolution algorithms, which are proportional to empty convolutional channels. For our sparse convolution estimates, we calculate the FLOPS saved for each convolution operation throughout training as well as the runtime for each convolution. To receive the maximum speedups for sparse convolution, we then scale the runtime for each convolution operation by the FLOPS saved.
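As a rough illustration of the two estimates described above, the sketch below derives both speedups from per-layer runtimes, densities, and empty-channel fractions; the dictionary keys and the simple proportional model are assumptions for exposition, not the authors' measurement code.

```python
def estimate_speedups(layers):
    """layers: list of dicts with 'runtime' (seconds), 'density' (fraction of
    nonzero weights), and 'empty_channel_frac' (fraction of convolutional
    channels that are entirely zero)."""
    dense_time = sum(l["runtime"] for l in layers)

    # Theoretical sparse-convolution estimate: runtime scales with FLOPS,
    # and FLOPS scale with the fraction of nonzero weights.
    sparse_conv_time = sum(l["runtime"] * l["density"] for l in layers)

    # Practical dense-convolution estimate: only entirely empty channels
    # can be skipped by a standard dense convolution.
    dense_conv_time = sum(l["runtime"] * (1.0 - l["empty_channel_frac"]) for l in layers)

    return dense_time / sparse_conv_time, dense_time / dense_conv_time

# Example: one layer at 10% density with 40% empty channels.
print(estimate_speedups([{"runtime": 1.0, "density": 0.1, "empty_channel_frac": 0.4}]))
# -> (10.0, 1.666...)
```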
While a fast sparse convolution algorithm for coarse block structures exist for GPUs (Gray et al., 2017), optimal sparse convolution algorithms for fine-grained patterns do not and need to be developed to enable these speedups. The second method measures practical speedups that can be obtained with naive, dense convolution algorithms which are available today. Dense convolution is unsuitable for the training of sparse networks but we include this measurement to highlight the algorithmic gap that exists to efficiently train sparse networks. For dense convolution algorithms, we estimate speedups as follows: If a convolutional channel consists entirely of zero-valued weights we can remove these channels from the computation without changing the outputs and obtain speedups. To receive the speedups for dense convolution we scale each convolution operation by the proportion of empty channels. Using these measures, we estimated the speedups for our models on CIFAR-10. The resulting speedups and dense performance levels can be seen in Table 3. We see that VGG16 networks can achieve dense performance with relatively few weights while AlexNet requires the most weights. Wide Residual Networks need an intermediate level of weights. Despite the large number of weights for AlexNet, sparse momentum still yields large speedups around 3.0x for sparse convolution. Sparse convolution speedups are particularly pronounced for Wide Residual Networks (WRN) with speedups as high as 5.61x. Dense convolution speedups are much lower and are mostly dependent on width, with wider networks receiving larger speedups. These results highlight the importance to develop optimized algorithms for sparse convolution. 7 Beyond speedups, we also measured the overhead of our sparse momentum procedure to be equivalent of a slowdown to 0.973x±0.029x compared to a dense baseline. Table 3: Dense performance equivalents and speedups for sparse networks on CIFAR-10. Speedups Model Weights (%) Error(%) Dense Convolution (Empty Channels) AlexNet-s AlexNet-b 50 35 13.15±0.065 13.00±0.065 1.31x 1.21x 3.01x 2.74x VGG16-C VGG16-D VGG16-like 10 5 5 6.64±0.040 6.49±0.045 6.46±0.036 1.32x 1.36x 1.32x 3.85x 3.51x 3.48x WRN 16-8 WRN 16-10 WRN 22-8 30 25 20 4.72±0.051 4.56±0.037 4.40±0.037 1.07x 1.07x 1.21x 4.59x 4.41x 5.61x # 5 Analysis # 5.1 Ablation Analysis Our method differs from previous methods like Sparse Evolutionary Training and Dynamic Sparse Reparameterization in two ways: (1) redistribution of weights and (2) growth of weights. To better understand how these components contribute to the overall performance, we ablate these components on CIFAR-10 for VGG16-D and MNIST for LeNet 300-100 and LeNet-5 Caffe with 5% weights for all experiments. The ablations on ImageNet are for ResNet-50 with 10% weights in the fully sparse setting. The results can be seen in Table 4. Redistribution: Redistributing weights according to the momentum magnitude becomes increasingly important the larger a network is as can be seen from the steady increases in error from the small LeNet 300-100 to the large ResNet-50 when no momentum redistribution is used. Increased test error is particularly pronounced for ImageNet where the Top-1 error increases by 3.42% to 9.71% if no redistribution is used. Momentum growth: Momentum growth improves performance over random growth by a large margin for ResNet-50 on ImageNet, but for smaller networks the combination of redistribution and random growth seems to be sufficient to find good weights. 
Random growth without redistribution, however, cannot find good weights. These results suggest that with increasing network size a random search strategy becomes inefficient and smarter growth algorithms are required for good performance. Table 4: Ablation analysis for different growth and redistribution algorithm combinations for LeNet 300-100 and LeNet-5 Caffe on MNIST, VGG16-D on CIFAR-10, and ResNet-50 on ImageNet. Test error in % Redistribution Growth LeNet 300-100 LeNet-5 Caffe VGG16-D ResNet-50 momentum momentum 1.53±0.020 0.69±0.021 6.69±0.049 27.07 momentum None None +0.07±0.022 random momentum +0.01±0.018 +0.11±0.020 random −0.05±0.011 −0.19±0.040 +0.32±0.071 +1.54±0.101 +0.13±0.013 +1.49±0.147 +7.29 +3.42 +9.71 # 5.2 Sensitivity Analysis Sparse momentum depends on two hyperparameters: Prune rate and momentum. In this section, we study the sensitivity of the accuracy of our models as we vary the prune rate and momentum. Since momentum parameter has an additional effect on the optimization procedure, we run control 8 experiments for fully dense networks thus disentangling the difference in accuracy accounted by our sparse momentum procedure. We run experiments for VGG-D and AlexNet-s with 5% and 10% weights on CIFAR-10. Results can be seen in Figure 4. We see that sparse momentum is highly robust to the choice of prune rate with results barely deviating when the prune rate is in the interval between 0.2 to 0.4. However, we can see a gradual linear trend that indicates that smaller prune rates work slightly better than larger ones. Cosine and linear prune rate annealing schedules do equally well. For momentum, confidence intervals for values between 0.7 and 0.9 overlap indicating that our procedure is robust to the choice of the momentum parameter. Sparse momentum is more sensitive to low momentum values (≤0.6) while it is less sensitive for large momentum values (0.95) compared to a dense control. Additionally, we test the null hypothesis that sparse momentum is equally sensitive to deviations from a momentum parameter value of 0.9 as a dense control. The normality assumption was violated and data transformations did not help. Thus we use the non-parametric Wilcoxon Signed-rank Test. We find no evidence that sparse momentum is more sensitive to the momentum parameter than a dense control, W (16) = 22.0, p = 0.58. Overall, we conclude that sparse momentum is highly robust to deviations of the pruning schedule and the momentum and prune rate parameters. # Test Error Prune Rate Parameter Sensitivity Momentum Parameter Sensitivity 7.8 —— VGG Sparse momentum poe eS — VGG Dense control 14 — Cosine annealing — Linear annealing Test Error 19 AlexNet-s VGG16-D et 0.2 03 0.4 os 0.6 07 0.8 0.50 0.60 0.70 0.80 0.90 0.95 Prune Rate Momentum Figure 4: Parameter sensitivity analysis for prune rate and momentum with 95% confidence intervals. # 6 Conclusion and Future Work We presented our sparse learning algorithm, sparse momentum, which uses the mean magnitude of momentum to grow and redistribute weights. We showed that sparse momentum outperforms other sparse algorithms on MNIST, CIFAR-10, and ImageNet. Additionally, sparse momentum can rival dense neural network performance while yielding speedups during training. In our analysis, we show that our algorithm is robust to the choice of its hyperparameters which makes it easy to use. 
Our analysis of speedups for dense and sparse convolution highlights that an important future research goal would be to develop specialized sparse convolution and sparse matrix multiplication algorithms to enable the benefits of sparse networks. # 7 Acknowledgements This work was funded by a Jeff Dean – Heidi Hopper Endowed Regental Fellowship. We thank Ofir Press, Jungo Kasai, Omer Levy, Sebastian Riedel, Yejin Choi, Judit Acs, Zoey Chen, Ethan Perez, and Mohit Shridhar for helpful discussions and for their helpful reviews and comments. 9 # References Bellec, G., Kappel, D., Maass, W., and Legenstein, R. A. (2018). Deep rewiring: Training very sparse deep networks. CoRR, abs/1711.05136. Carreira-Perpinán, M. A. and Idelbayev, Y. (2018). “learning-compression” algorithms for neural net pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8532–8541. Chauvin, Y. (1988). A back-propagation algorithm with optimal use of hidden units. In NIPS. Child, R., Gray, S., Radford, A., and Sutskever, I. (2019). Generating long sequences with sparse transformers. CoRR, abs/1904.10509. Dai, X., Yin, H., and Jha, N. K. (2017). Nest: A neural network synthesis tool based on a grow-and- prune paradigm. CoRR, abs/1711.02017. Dai, X., Yin, H., and Jha, N. K. (2018). Grow and prune compact, fast, and accurate lstms. CoRR, abs/1805.11797. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Dong, X., Chen, S., and Pan, S. J. (2017). Learning to prune deep neural networks via layer-wise optimal brain surgeon. In NIPS. Frankle, J. and Carbin, M. (2019). The lottery ticket hypothesis: Finding sparse, trainable neural networks. In ICLR 2019. Frankle, J., Dziugaite, G. K., Roy, D. M., and Carbin, M. (2019). The lottery ticket hypothesis at scale. CoRR, abs/1903.01611. Gale, T., Elsen, E., and Hooker, S. (2019). The state of sparsity in deep neural networks. CoRR, abs/1902.09574. Gray, S., Radford, A., and Kingma, D. P. (2017). Gpu kernels for block-sparse weights. Guo, Y., Yao, A., and Chen, Y. (2016). Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pages 1379–1387. Han, S., Pool, J., Tran, J., and Dally, W. (2015). Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135–1143. Hassibi, B. and Stork, D. G. (1992). Second order derivatives for network pruning: Optimal brain surgeon. In NIPS. He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778. Herculano-Houzel, S., Mota, B., Wong, P., and Kaas, J. H. (2010). Connectivity-driven white matter scaling and folding in primate cerebral cortex. Proceedings of the National Academy of Sciences of the United States of America, 107 44:19008–13. Ishikawa, M. (1996). Structural learning with forgetting. Neural Networks, 9:509–521. Karnin, E. D. (1990). A simple procedure for pruning back-propagation trained neural networks. IEEE transactions on neural networks, 1 2:239–42. Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. 
Technical report, Citeseer. 10 LeCun, Y. (1998). Gradient-based learning applied to document recognition. LeCun, Y., Denker, J. S., and Solla, S. A. (1989). Optimal brain damage. In NIPS. Lee, N., Ajanthan, T., and Torr, P. H. S. (2019). Snip: Single-shot network pruning based on connection sensitivity. In ICLR 2019. Louizos, C., Ullrich, K., and Welling, M. (2017). Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pages 3288–3298. Louizos, C., Welling, M., and Kingma, D. P. (2018). Learning sparse neural networks through l0 regularization. CoRR, abs/1712.01312. Mocanu, D. C., Mocanu, E., Stone, P., Nguyen, P. H., Gibescu, M., and Liotta, A. (2018). Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature communications, 9(1):2383. Molchanov, D., Ashukha, A., and Vetrov, D. P. (2017). Variational dropout sparsifies deep neural networks. In International Conference on MachineLearning (ICML). Mostafa, H. and Wang, X. (2019). Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. In International Conference on Machine Learning (ICML). Mozer, M. C. and Smolensky, P. (1988). Skeletonization: A technique for trimming the fat from a network via relevance assessment. In NIPS. Narang, S., Diamos, G. F., Sengupta, S., and Elsen, E. (2017). Exploring sparsity in recurrent neural networks. CoRR, abs/1704.05119. Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in pytorch. Qian, N. (1999). On the momentum term in gradient descent learning algorithms. Neural networks : the official journal of the International Neural Network Society, 12 1:145–151. Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep inside convolutional networks: Visualis- ing image classification models and saliency maps. CoRR, abs/1312.6034. Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. A. (2014). Striving for simplicity: The all convolutional net. CoRR, abs/1412.6806. Ullrich, K., Meeds, E., and Welling, M. (2017). Soft weight-sharing for neural network compression. CoRR, abs/1702.04008. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Zeiler, M. D. and Fergus, R. (2014). Visualizing and understanding convolutional networks. In ECCV. Zhou, H., Lan, J., Liu, R., and Yosinski, J. (2019). Deconstructing lottery tickets: Zeros, signs, and the supermask. arXiv preprint arXiv:1905.01067. Zhu, M. and Gupta, S. (2018). To prune, or not to prune: Exploring the efficacy of pruning for model compression. CoRR, abs/1710.01878. # Appendices # A Updates to This Paper • 2019-07-10: First draft of the paper was uploaded. 11 • 2019-08-23: General overhaul of the work. – For all networks, we added results at which % of weight level sparse networks research dense performance. – Added sensitivity analysis for momentum and prune rate parameters, as well as the prune rate schedule. – We corrected an error where we reported a multi-crop accuracy for the baseline ResNet- 50 model. – We included ImageNet experiments for both the fully sparse setting and the partially dense setting. 
– Algorithm 1 now includes details of the full training procedure and the more detailed algorithm of sparse momentum was moved to the appendix. – The sparse vs dense feature analysis now includes statistical tests. It was also moved to the appendix, and is no longer considered a main result of our work. # B Additional Analysis # B.1 Dense vs Sparse Features Are there differences between feature representations learned by dense and sparse networks? The answer to this question can help with the design of sparse learning algorithms and sparse architectures. In this section, we look at the features of dense and sparse networks and how specialized these features are for certain classes. We test difference between sparse and dense network features statistically. For feature visualization, it is common to backpropagate activity to the inputs to be able to visualize what these activities represent (Simonyan et al., 2013; Zeiler and Fergus, 2014; Springenberg et al., 2014). However, in our case, we are more interested in the overall distribution of features for each layer within our network, and as such we want to look at the magnitude of the activity in a channel since – unlike feature visualization – we are not just interested in feature detectors but also discriminators. For example, a face detector would induce positive activity for a ‘person’ class but might produce negative activity for a ‘mushroom’ class. Both kinds of activity are useful. With this reasoning, we develop the following convolutional channel-activation analysis: (1) pass the entire training set through the network and aggregate the magnitude of the activation in each convolutional channel separately for each class; (2) normalize across classes to receive for each channel the proportion of activation which is due to each class; (3) look at the maximum proportion of each channel as a measure of class specialization: a maximum proportion of 1/Nc where Nc is the number of classes indicates that the channel is equally active for all classes in the training set. The higher the proportion deviates from this value, the more is a channel specialized for a particular class. We obtain results for AlexNet-s, VGG16-D, and WRN 28-2 on CIFAR-10 and use as many weights as needed to reach dense performance levels. We then test the null hypothesis, that there are no differences in class specialization between features from sparse networks and dense networks. Equal variance assumptions was violated for VGG-D and normality was violated for WRN-28-2, while all assumptions hold for AlexNet-s. For consistency reasons we perform non-parametric Kruskal-Wallis one-way analysis of variance tests for all networks. For AlexNet-s, we find some evidence that features of sparse networks have lower class specialization compared to dense networks χ2(5) = 4.43, p = 0.035, for VGG-D and WRN-28-2 we find strong evidence that features of sparse networks have lower class specialization than dense networks χ2(13) = 28.1, p < 0.001, χ2(12) = 36.2, p < 0.001. Thus we reject the null hypothesis. These results increase our confidence that sparse networks learn features which have lower class specialization than dense networks. Plots of the distributions of sparse vs. dense features for AlexNet-s, VGG16-D, and WRN 28-2 on CIFAR-10 in Figure 5. These plots were selected to highlight the difference in distribution in the first layers and last layers of each network. 
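The channel-activation analysis of steps (1)-(3) can be summarized for a single convolutional layer as follows; this is our paraphrase with hypothetical variable names, not the authors' analysis script.

```python
import numpy as np

def class_specialization(activations, labels, num_classes):
    """activations: array of shape (num_examples, num_channels) holding the
    aggregated activation magnitude of each channel per example; labels: class
    id per example. Returns the maximum per-class proportion for each channel;
    a value near 1/num_classes means the channel is not class-specialized."""
    per_class = np.zeros((num_classes, activations.shape[1]))
    for c in range(num_classes):
        per_class[c] = np.abs(activations[labels == c]).sum(axis=0)   # step (1)
    proportions = per_class / per_class.sum(axis=0, keepdims=True)    # step (2)
    return proportions.max(axis=0)                                    # step (3)
```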
We see that the convolutional channels in sparse networks have lower class specialization, indicating that they learn features which are useful for a broader range of classes compared to dense networks. This trend intensifies with depth. Overall, we conclude that sparse networks might be able to rival dense networks by learning more general features that have lower class specialization.

Figure 5: Dense vs sparse histograms of class-specialization for convolutional channels on CIFAR-10. A class-specialization of 0.5 indicates that 50% of the overall activity comes from a single class.

# C Further Results

# C.1 Tuned ResNet-50 on ImageNet

We also tried a better version of the ResNet-50 in the fully sparse setting, for which we use a cosine learning rate schedule, label smoothing of 0.9, and a learning rate warmup. The results can be seen in Table 5.

Table 5: Fully sparse ImageNet results. Accuracy (%)
Model | Weights (%) | Top-1 | Top-5
Tuned ResNet-50 | 100 | 77.0 | 93.5
Sparse momentum | 10 | 72.9 | 91.5
Sparse momentum | 20 | 74.9 | 92.5
Sparse momentum | 30 | 75.9 | 92.9

# D Detailed Sparse Momentum Algorithm

For a detailed NumPy-style algorithmic description of sparse momentum, see Algorithm 2.
Algorithm 2: Sparse momentum algorithm in NumPy notation.
Data: Layers i to k with momentum M_i, weight W_i, binary Mask_i; prune rate p
TotalMomentum ← 0, TotalNonzero ← 0
/* (a) Calculate mean momentum contributions of all layers. */
for i ← 0 to k do
    MeanMomentum_i ← mean(abs(M_i[W_i != 0]))
    TotalMomentum ← TotalMomentum + MeanMomentum_i
    NonZero_i ← sum(W_i != 0)
    TotalNonzero ← TotalNonzero + NonZero_i
end
for i ← 0 to k do
    LayerContribution_i ← MeanMomentum_i / TotalMomentum
    p_i ← getPruneRate(W_i, p)
end
/* (b) Prune weights by finding the NumRemove_i-th smallest weight. */
for i ← 0 to k do
    NumRemove_i ← NonZero_i · p
    PruneThreshold ← sort(abs(W_i[W_i != 0]))[NumRemove_i]
    Mask_i[W_i < PruneThreshold] ← 0    // Stop gradient flow.
    W_i[W_i < PruneThreshold] ← 0
end
/* (c) Enable gradient flow of weights with largest momentum magnitude. */
for i ← 0 to k do
    Z_i ← M_i · (W_i == 0)    // Only consider the momentum of missing weights.
    RegrowthThreshold_i ← sort(abs(M_i[W_i == 0]))[NumRegrowth_i]
    Mask_i ← Mask_i | (Z_i > RegrowthThreshold_i)    // | is the boolean OR operator.
end
p ← decayPruneRate(p)
applyMask()
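For readers who prefer executable code over pseudocode, the following is a compact NumPy transcription of the per-epoch update sketched in Algorithm 2; it keeps the same structure, but the function signature and the handling of ties and edge cases are our own simplifications.

```python
import numpy as np

def sparse_momentum_epoch_step(weights, momenta, masks, prune_rate):
    """One prune-redistribute-regrow cycle over a list of layers.

    weights, momenta, masks are lists of equally shaped arrays, one per layer.
    Ties and the edge cases discussed in Section 3.2 are ignored for brevity.
    """
    # (a) per-layer momentum contributions
    mean_mom = np.array([np.abs(m[w != 0]).mean() for w, m in zip(weights, momenta)])
    contrib = mean_mom / mean_mom.sum()

    # (b) prune the prune_rate fraction of smallest-magnitude nonzero weights per layer
    total_removed = 0
    for w, mask in zip(weights, masks):
        magnitudes = np.abs(w[w != 0])
        num_remove = int(prune_rate * magnitudes.size)
        if num_remove == 0:
            continue
        threshold = np.sort(magnitudes)[num_remove]
        prune = (np.abs(w) < threshold) & (w != 0)
        mask[prune] = 0           # stop gradient flow
        w[prune] = 0.0
        total_removed += prune.sum()

    # (c) regrow: re-enable the missing weights with the largest momentum magnitude,
    # distributing the removed weights across layers according to `contrib`
    for w, m, mask, c in zip(weights, momenta, masks, contrib):
        num_regrow = int(round(c * total_removed))
        if num_regrow == 0:
            continue
        missing = np.abs(m) * (w == 0)
        top_idx = np.argsort(missing, axis=None)[-num_regrow:]
        mask.flat[top_idx] = 1    # regrown weights start at zero and receive gradients again
    return weights, momenta, masks
```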
{ "id": "1905.01067" }
1907.04448
Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning
We present a multispeaker, multilingual text-to-speech (TTS) synthesis model based on Tacotron that is able to produce high quality speech in multiple languages. Moreover, the model is able to transfer voices across languages, e.g. synthesize fluent Spanish speech using an English speaker's voice, without training on any bilingual or parallel examples. Such transfer works across distantly related languages, e.g. English and Mandarin. Critical to achieving this result are: 1. using a phonemic input representation to encourage sharing of model capacity across languages, and 2. incorporating an adversarial loss term to encourage the model to disentangle its representation of speaker identity (which is perfectly correlated with language in the training data) from the speech content. Further scaling up the model by training on multiple speakers of each language, and incorporating an autoencoding input to help stabilize attention during training, results in a model which can be used to consistently synthesize intelligible speech for training speakers in all languages seen during training, and in native or foreign accents.
http://arxiv.org/pdf/1907.04448
Yu Zhang, Ron J. Weiss, Heiga Zen, Yonghui Wu, Zhifeng Chen, RJ Skerry-Ryan, Ye Jia, Andrew Rosenberg, Bhuvana Ramabhadran
cs.CL, cs.SD, eess.AS
5 pages, submitted to Interspeech 2019
null
cs.CL
20190709
20190724
# Learning to Speak Fluently in a Foreign Language: Multilingual Speech Synthesis and Cross-Language Voice Cloning

Yu Zhang, Ron J. Weiss, Heiga Zen, Yonghui Wu, Zhifeng Chen, RJ Skerry-Ryan, Ye Jia, Andrew Rosenberg, Bhuvana Ramabhadran

# Google {ngyuzh, ronw}@google.com

Abstract

We present a multispeaker, multilingual text-to-speech (TTS) synthesis model based on Tacotron that is able to produce high quality speech in multiple languages. Moreover, the model is able to transfer voices across languages, e.g. synthesize fluent Spanish speech using an English speaker's voice, without training on any bilingual or parallel examples. Such transfer works across distantly related languages, e.g. English and Mandarin. Critical to achieving this result are: 1. using a phonemic input representation to encourage sharing of model capacity across languages, and 2. incorporating an adversarial loss term to encourage the model to disentangle its representation of speaker identity (which is perfectly correlated with language in the training data) from the speech content. Further scaling up the model by training on multiple speakers of each language, and incorporating an autoencoding input to help stabilize attention during training, results in a model which can be used to consistently synthesize intelligible speech for training speakers in all languages seen during training, and in native or foreign accents.

Index Terms: speech synthesis, end-to-end, adversarial loss

Figure 1: Overview of the components of the proposed model. Dashed lines denote sampling via reparameterization [21] during training. The prior mean is always used during inference.

in both languages using the same voice. [16] studied learning pronunciation from a bilingual TTS model. Most recently, [17] presented a multilingual neural TTS model which supports voice cloning across English, Spanish, and German. It used language-specific text and speaker encoders, and incorporated a secondary fine-tuning step to optimize a speaker identity-preserving loss, ensuring that the model could output a consistent voice regardless of language. We also note that the sound quality is not on par with recent neural TTS systems, potentially because of its use of the WORLD vocoder [18] for waveform synthesis.

# 1. Introduction

Recent end-to-end neural TTS models [1–3] have been extended to enable control of speaker identity [4–7] as well as unlabelled speech attributes, e.g. prosody, by conditioning synthesis on latent representations [8–12] in addition to text. Extending such models to support multiple, unrelated languages is nontrivial when using language-dependent input representations or model components, especially when the amount of training data per language is imbalanced. For example, there is no overlap in the text representation between languages like Mandarin and English. Furthermore, recordings from bilingual speakers are expensive to collect. It is therefore most common for each speaker in the training set to speak only one language, so speaker identity is perfectly correlated with language.
This makes it difficult to transfer voices across different languages, a desirable feature when the number of available training voices for a particular language is small. Moreover, for languages with borrowed or shared words, such as proper nouns in Spanish (ES) and English (EN), pronunciations of the same text might be different. This adds more ambiguity when a naively trained model sometimes generates accented speech for a particular speaker. Our work is most similar to [19], which describes a mul- tilingual TTS model based on Tacotron 2 [20] which uses a Unicode encoding “byte” input representation to train a model on one speaker of each of English, Spanish, and Mandarin. In this paper, we evaluate different input representations, scale up the number of training speakers for each language, and extend the model to support cross-lingual voice cloning. The model is trained in a single stage, with no language-specific compo- nents, and obtains naturalness on par with baseline monolingual models. Our contributions include: (1) Evaluating the effect of using different text input representations in a multilingual TTS model. (2) Introducing a per-input token speaker-adversarial loss to enable cross-lingual voice transfer when only one train- ing speaker is available for each language. (3) Incorporating an explicit language embedding to the input, which enables mod- erate control of speech accent, independent of speaker identity, when the training data contains multiple speakers per language. We evaluate the contribution of each component, and demonstrate the proposed model’s ability to disentangle speak- ers from languages and consistently synthesize high quality speech for all speakers, despite the perfect correlation to the original language in the training data. Zen et al. proposed a speaker and language factorization for HMM-based parametric TTS system [13], aiming to transfer a voice from one language to others. [14] proposed a multilingual parametric neural TTS system, which used a unified input repre- sentation and shared parameters across languages, however the voices used for each language were disjoint. [15] described a sim- ilar bilingual Chinese and English neural TTS system trained on speech from a bilingual speaker, allowing it to synthesize speech # 2. Model Structure We base our multilingual TTS model on Tacotron 2 [20], which uses an attention-based sequence-to-sequence model to gener- ate a sequence of log-mel spectrogram frames based on an input It text sequence. The architecture is illustrated in Figure 1. augments the base Tacotron 2 model with additional speaker and, optionally, language embedding inputs (bottom right), an adversarially-trained speaker classifier (top right), and a varia- tional autoencoder-style residual encoder (top left) which con- ditions the decoder on a latent embedding computed from the target spectrogram during training (top left). Finally, similar to Tacotron 2, we separately train a WaveRNN [22] neural vocoder. # 2.1. Input representations End-to-end TTS models have typically used character [2] or phoneme [8, 23] input representations, or hybrids between them [24, 25]. Recently, [19] proposed using inputs derived from the UTF-8 byte encoding in multilingual settings. We evaluate the effects of using these representations for multilingual TTS. # 2.1.1. 
Characters / Graphemes Embeddings corresponding to each character or grapheme are the default inputs for end-to-end TTS models [2, 20, 23], requir- ing the model to implicitly learn how to pronounce input words (i.e. grapheme-to-phoneme conversion [26]) as part of the syn- thesis task. Extending a grapheme-based input vocabulary to a multilingual setting is straightforward, by simply concatenating grapheme sets in the training corpus for each language. This can grow quickly for languages with large alphabets, e.g. our Man- darin vocabulary contains over 4.5k tokens. We simply concate- nate all graphemes appearing in the training corpus, leading to a total of 4,619 tokens. Equivalent graphemes are shared across languages. During inference all previously unseen characters are mapped to a special out-of-vocabulary (OOV) symbol. # 2.1.2. UTF-8 Encoded Bytes Following [19] we experiment with an input representation based on the UTF-8 text encoding, which uses 256 possible values as each input token where the mapping from graphemes to bytes is language-dependent. For languages with single-byte characters (e.g., English), this representation is equivalent to the grapheme representation. However, for languages with multi-byte char- acters (such as Mandarin) the TTS model must learn to attend to a consistent sequence of bytes to correctly generate the cor- responding speech. On the other hand, using a UTF-8 byte representation may promote sharing of representations between languages due to the smaller number of input tokens. # 2.1.3. Phonemes Using phoneme inputs simplifies the TTS task, as the model no longer needs to learn complicated pronunciation rules for lan- guages such as English. Similar to our grapheme-based model, equivalent phonemes are shared across languages. We concate- nate all possible phoneme symbols, for a total of 88 tokens. To support Mandarin, we include tone information by learn- ing phoneme-independent embeddings for each of the 4 possible tones, and broadcast each tone embedding to all phoneme em- beddings inside the corresponding syllable. For English and Spanish, tone embeddings are replaced by stress embeddings which include primary and secondary stresses. A special sym- bol is used when there is no tone or stress. # 2.2. Residual encoder Following [12], we augment the TTS model by incorporating a variational autoencoder-like residual encoder which encodes the latent factors in the training audio, e.g. prosody or background noise, which is not well-explained by the conditioning inputs: the text representation, speaker, and language embeddings. We follow the structure from [12], except we use a standard single Gaussian prior distribution and reduce the latent dimension to 16. In our experiments, we observe that feeding in the prior mean (all zeros) during inference, significantly improves stability of cross-lingual speaker transfer and leads to improved naturalness as shown by MOS evaluations in Section 3.4. # 2.3. Adversarial training One of the challenges for multilingual TTS is data sparsity, where some languages may only have training data for a few speakers. In the extreme case where there is only one speaker per language in the training data, the speaker identity is essentially the same as the language ID. To encourage the model to learn disentangled representations of the text and speaker identity, we proactively discourage the text encoding ts from also capturing speaker information. 
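The mechanism described in the next paragraph — a speaker classifier attached to the text encoding through a gradient reversal layer — is straightforward to implement. Below is a minimal PyTorch-style sketch for illustration only; the classifier module, tensor shapes, and the scale factor lambda are assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed (and scaled) gradient flows back into the text encoder,
        # pushing it toward speaker-independent encodings.
        return -ctx.lambd * grad_output, None

def speaker_adversarial_loss(text_encoding, speaker_labels, speaker_classifier, lambd=1.0):
    # text_encoding: (batch, time, dim) per-token encodings t_i
    # speaker_labels: (batch,) integer speaker ids
    reversed_enc = GradientReversal.apply(text_encoding, lambd)
    logits = speaker_classifier(reversed_enc)          # (batch, time, n_speakers)
    # The loss is imposed separately on every element of the encoded text sequence.
    return F.cross_entropy(
        logits.flatten(0, 1),
        speaker_labels.repeat_interleave(logits.size(1)),
    )
```

As noted in the following paragraph, the gradient computed at the reversal layer is additionally clipped in the authors' setup, because some input tokens are highly language-dependent and can otherwise destabilize the adversarial classifier gradients.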
We employ domain adversarial training [27] to encourage t_i to encode text in a speaker-independent manner, by introducing a speaker classifier based on the text encoding and a gradient reversal layer. Note that the speaker classifier is optimized with a different objective than the rest of the model: L_speaker(w_s; t_i) = Σ_i^N log p(s_i | t_i), where s_i is the speaker label and w_s are the parameters of the speaker classifier. To train the full model, we insert a gradient reversal layer prior to this speaker classifier, which scales the gradient by −λ. Following [28], we also explore inserting another adversarial layer on top of the variational autoencoder to encourage it to learn speaker-independent representations; however, we found that this layer has no effect after decreasing the latent space dimension. We impose this adversarial loss separately on each element of the encoded text sequence, in order to encourage the model to learn a speaker- and language-independent text embedding space. In contrast to [28], which disentangled speaker identity from background noise, some input tokens are highly language-dependent, which can lead to unstable adversarial classifier gradients. We address this by clipping gradients computed at the reversal layer to limit the impact of such outliers.

# 3. Experiments

We train models using a proprietary dataset composed of high quality speech in three languages: (1) 385 hours of English (EN) from 84 professional voice actors with accents from the United States, Great Britain, Australia, and Singapore; (2) 97 hours of Spanish (ES) from 3 female speakers, including Castilian and US Spanish; (3) 68 hours of Mandarin (CN) from 5 speakers.

# 3.1. Model and training setup

The synthesizer network uses the Tacotron 2 architecture [20], with additional inputs consisting of learned speaker (64-dim) and language (3-dim) embeddings, concatenated and passed to the decoder at each step. The generated speech is represented as a sequence of 128-dim log-mel spectrogram frames, computed from 50ms windows shifted by 12.5ms. The variational residual encoder architecture closely follows the attribute encoder in [12]. It maps a variable-length mel spectrogram to two vectors parameterizing the mean and log variance of the Gaussian posterior. The speaker classifiers are fully-connected networks with one 256-unit hidden layer followed by a softmax predicting the speaker identity. The synthesizer and speaker classifier are trained with weights 1.0 and 0.02, respectively. As described in the previous section, we apply gradient clipping with factor 0.5 to the gradient reversal layer. The entire model is trained jointly with a batch size of 256, using the Adam optimizer configured with an initial learning rate of 10−3, and an exponential decay that halves the learning rate every 12.5k steps, starting at 50k steps. Waveforms are synthesized using a WaveRNN [22] vocoder, which generates 16-bit signals sampled at 24 kHz conditioned on spectrograms predicted by the TTS model. We synthesize 100 samples per model, and have each one rated by 6 raters.

Table 1: Speaker similarity Mean Opinion Score (MOS) comparing ground truth audio from speakers of different languages. Raters are native speakers of the target language.

Source \ Target    EN           ES           CN
EN                 4.40±0.07    1.49±0.06    1.32±0.06
ES                 1.72±0.15    4.39±0.06    2.06±0.09
CN                 1.80±0.08    2.14±0.09    3.51±0.12

# 3.2. Evaluation

To evaluate synthesized speech, we rely on crowdsourced Mean Opinion Score (MOS) evaluations of speech naturalness via subjective listening tests.
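For reference, MOS figures such as those in the tables that follow are reported as a mean with an uncertainty interval. A minimal aggregation sketch is given below; the exact interval procedure is not stated in the paper, so the normal-approximation 95% interval used here is an assumption:

```python
import math

def mos_with_interval(ratings, z=1.96):
    """Mean opinion score with a normal-approximation 95% confidence half-width."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    return mean, z * math.sqrt(var / n)

# Example: mos_with_interval([4.5, 4.0, 5.0, 3.5, 4.5]) -> (4.3, ~0.50)
```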
Ratings follow the Absolute Category Rating scale, with scores from 1 to 5 in 0.5 point increments. For cross-language voice cloning, we also evaluate whether the synthesized speech resembles the identity of the reference speaker by pairing each synthesized utterance with a reference utterance from the same speaker for subjective MOS evaluation of speaker similarity, as in [5]. Although rater instructions explicitly asked for the content to be ignored, note that this similarity evaluation is more challenging than the one in [5] because the reference and target examples are spoken in different languages, and raters are not bilingual. We found that low fidelity audio tended to result in high variance similarity MOS so we always use WaveRNN outputs.1 For each language, we chose one speaker to use for similarity tests. As shown in Table 1, the EN speaker is found to be dissimilar to the ES and CN speakers (MOS below 2.0), while the ES and CN speakers are slightly similar (MOS around 2.0). The CN speaker has more natural variability compared to EN and ES, leading to a lower self similarity. The scores are consistent when EN and CN raters evaluate the same EN and CN test set. The observation is consistent with [29]: raters are able to discriminate between speakers across languages. However, when rating synthetic speech, we observed that English speaking raters often considered “heavy accented” synthetic CN speech to sound more similar to the target EN speaker, compared to more fluent speech from the same speaker. This indicates that accent and speaker identity are not fully disentangled. We encourage readers to listen to samples on the companion webpage.2 # 3.3. Comparing input representations We first build and evaluate models comparing the performance of different text input representations. For all three languages, byte-based models always use a 256-dim softmax output. Mono- lingual character and phoneme models each use a different input 1Some raters gave low fidelity audio lower scores, treating "blurri- ness" as a property of the speaker. Others gave higher scores because they recognized such audio as synthetic and had lower expectations. # 2http://google.github.io/tacotron/publications/multilingual Table 2: Naturalness MOS of monolingual and multilingual models synthesizing speech of in different languages. Language Model Input EN ES CN Ground truth 4.60±0.05 4.37±0.06 4.42±0.06 Monolingual 4.24±0.12 4.21±0.11 3.48±0.11 char phone 4.59±0.06 4.39±0.04 4.16±0.08 4.23±0.14 4.23±0.10 3.42±0.12 3.94±0.15 4.33±0.09 3.63±0.10 phone 4.34±0.09 4.41±0.05 4.06±0.10 byte Multilingual 1EN 1ES 1CN char 4.11±0.14 4.21±0.12 3.67±0.12 4.26±0.13 4.23±0.11 3.46±0.11 phone 4.37±0.12 4.37±0.04 4.09±0.10 Multilingual byte 84EN 3ES 5CN char Table 3: Naturalness and speaker similarity MOS of cross- language voice cloning of an EN source speaker. Models which use different input representations are compared, with and with- out the speaker-adversarial loss. fail: raters complained that too many utterances were spoken in the wrong language. ES target CN target Input Naturalness Similarity Naturalness Similarity char byte 2.62±0.10 2.62±0.15 4.25±0.09 3.96±0.10 N/A N/A N/A N/A with adversarial loss 2.34±0.10 byte 3.20±0.09 phone 4.23±0.09 4.15±0.10 fail 2.75±0.12 3.85±0.11 3.60±0.09 vocabulary corresponding to the training language. Table 2 compares monolingual and multilingual model per- formance using different input representations. 
For Mandarin, the phoneme-based model performs significantly better than char- or byte-based variants due to rare and OOV words. Com- pared to the monolingual system, multilingual phoneme-based systems have similar performance on ES and CN but are slightly worse on EN. CN has a larger gap to ground truth (top) due to unseen word segmentation (for simplicity, we didn’t add word boundary during training). The multispeaker model (bottom) performs about the same as the single speaker per-language variant (middle). Overall, when using phoneme inputs all the languages obtain MOS scores above 4.0. # 3.4. Cross-language voice cloning We evaluate how well the multispeaker models can be used to clone a speaker’s voice into a new language by simply passing in speaker embeddings corresponding to a different language from the input text. Table 3 shows voice cloning performance from an EN speaker in the most data-poor scenario (129 hours), where only a single speaker is available for each training language (1EN 1ES 1CN) without using the speaker-adversarial loss. Us- ing byte inputs 3 it was possible to clone the EN speaker to ES with high similarity MOS, albeit with significantly reduced naturalness. However, cloning the EN voice to CN failed4, as did cloning to ES and CN using phoneme inputs. 3Using character or byte inputs led to similar results. 4We didn’t run listening tests because it was clear that synthesizing EN text using the CN speaker embedding didn’t affect the model output. Table 4: Naturalness and speaker similarity MOS of cross-language voice cloning of the full multilingual model using phoneme inputs. Source Language Model EN target ES target CN target Naturalness Similarity Naturalness Similarity Naturalness Similarity - Ground truth (self-similarity) 4.60±0.05 4.40±0.07 4.37±0.06 4.39±0.06 4.42±0.06 3.51±0.12 EN 84EN 3ES 5CN language ID fixed to EN 4.37±0.12 - 4.63±0.06 - 4.20±0.07 3.68±0.07 3.50±0.12 4.06±0.09 3.94±0.09 3.09±0.09 3.03±0.10 3.20±0.09 ES 84EN 3ES 5CN 4.28±0.10 3.24±0.09 4.37±0.04 4.01±0.07 3.85±0.09 2.93±0.12 CN 84EN 3ES 5CN 4.49±0.08 2.46±0.10 4.56±0.08 2.48±0.09 4.09±0.10 3.45±0.12 Adding the adversarial speaker classifier enabled cross- language cloning of the EN speaker to CN with very high simi- larity MOS for both byte and phoneme models. However, natu- ralness MOS remains much lower than using the native speaker identity, with the naturalness listening test failing entirely in the CN case with byte inputs as a result of rater comments that the speech sounded like a foreign language. According to rater comments on the phoneme system, most of the degradation came from mismatched accent and pronunciation, not fidelity. CN raters commented that it sounded like “a foreigner speaking Chinese”. More interestingly, few ES raters commented that “The voice does not sound robotic but instead sounds like an English native speaker who is learning to pronounce the words in Spanish.” Based on these results, we only use phoneme inputs in the following experiments since this guarantees that pronun- ciations are correct and results in more fluent speech. Table 4 evaluates voice cloning performance of the full mul- tilingual model (84EN 3ES 5CN), which is trained on the full dataset with increased speaker coverage, and uses the speaker- adversarial loss and speaker/language embeddings. Incorporat- ing the adversarial loss forces the text representation to be less language-specific, instead relying on the language embedding to capture language-dependent information. 
Across all language pairs, the model synthesizes speech in all voices with natural- ness MOS above 3.85, demonstrating that increasing training speaker diversity improves generalization. In most cases syn- thesizing EN and ES speech (except EN-to-ES) approaches the ground truth scores. In contrast, naturalness of CN speech is consistently lower than the ground truth. Table 5: Effect of EN speaker cloning with no residual encoder. Target Language Model EN ES CN 84EN 3ES 5CN - residual encoder 4.37±0.12 4.20±0.07 3.94±0.09 4.38±0.10 4.11±0.06 3.52±0.11 08 eee Native - Fluent 4¢@ Native - Accented 444 Cloned - Accented mmm Cloned - Fluent Speaker /Text / Lang e CN/CN/CN @ CN/CN/EN 4 CN/EN/CN CN/EN/EN EN/CN/CN a EN/CN/EN @ EN/EN/CN EN/EN/EN -0.4 08 06 04 -0.2 0.0 0.2 0.4 0.6 08 Figure 2: Visualizing the effect of voice cloning and accent con- trol, using 2D PCA of speaker embeddings [30] computed from speech synthesized with different speaker, text, and language ID combinations. Embeddings cluster together (bottom left and right), implying high similarity, when the speaker’s original lan- guage matches the language embedding, regardless of the text language. However, using language ID from the text (squares), modifying the speaker’s accent to speak fluently, hurts similarity compared to the native language and accent (circles). The high naturalness and similarity MOS scores in the top row of Table 4 indicate that the model is able to successfully transfer the EN voice to both ES and CN almost without accent. When consistently conditioning on the EN language embedding regardless of the target language (second row), the model pro- duces more English accented ES and CN speech, which leads to lower naturalness but higher similarity MOS scores. Also see Figure 2 and the demo for accent transfer audio examples. We see that cloning the CN voice to other languages (bottom row) has the lowest similarity MOS, although the scores are still much higher than different-speaker similarity MOS in the off- diagonals of Table 1 indicating that there is some degree of transfer. This is a consequence of the low speaker coverage of CN compared to EN in the training data, as well as the large distance between CN and other languages. unnatural pauses in the output speech. This indicates the VAE prior learns a mode which helps stabilize attention. # 4. Conclusions We describe extensions to the Tacotron 2 neural TTS model which allow training of a multilingual model trained only on monolingual speakers, which is able to synthesize high quality speech in three languages, and transfer training voices across languages. Furthermore, the model learns to speak foreign lan- guages with moderate control of accent, and, as demonstrated on the companion webpage, has rudimentary support for code switching. In future work we plan to investigate methods for scaling up to leverage large amounts of low quality training data, and support many more speakers and languages. Finally, Table 5 demonstrates the importance of training us- ing a variational residual encoder to stabilize the model output. Naturalness MOS decreases by 0.4 points for EN-to-CN cloning without the residual encoder (bottom row). In informal compar- isons of the outputs of the two models we find that the model without the residual encoder tends to skip rare words or inserts # 5. 
Acknowledgements We thank Ami Patel, Amanda Ritchart-Scott, Ryan Li, Siamak Tazari, Yutian Chen, Paul McCartney, Eric Battenberg, Toby Hawker, and Rob Clark for discussions and helpful feedback. 6. References [1] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “WaveNet: A generative model for raw audio,” CoRR abs/1609.03499, 2016. [2] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio et al., “Tacotron: A fully end-to-end text-to-speech synthesis model,” arXiv preprint, 2017. [3] S. Arik, G. Diamos, A. Gibiansky, J. Miller, K. Peng, W. Ping, J. Raiman, and Y. Zhou, “Deep Voice 2: Multi-speaker neural text- to-speech,” in Advances in Neural Information Processing Systems (NIPS), 2017. [4] S. O. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou, “Neural voice cloning with a few samples,” in Advances in Neural Information Processing Systems, 2018. [5] Y. Jia, Y. Zhang, R. J. Weiss, Q. Wang, J. Shen, F. Ren, Z. Chen, P. Nguyen, R. Pang, I. L. Moreno, and Y. Wu, “Transfer learn- ing from speaker verification to multispeaker text-to-speech syn- thesis,” in Advances in Neural Information Processing Systems, 2018. [6] E. Nachmani, A. Polyak, Y. Taigman, and L. Wolf, “Fitting new speakers based on a short untranscribed sample,” in International Conference on Machine Learning (ICML), 2018. [7] Y. Chen, Y. Assael, B. Shillingford, D. Budden, S. Reed, H. Zen, Q. Wang, L. C. Cobo, A. Trask, B. Laurie et al., “Sample efficient adaptive text-to-speech,” arXiv preprint arXiv:1809.10460, 2018. [8] Y. Wang, D. Stanton, Y. Zhang, R. Skerry-Ryan, E. Battenberg, J. Shor, Y. Xiao, F. Ren, Y. Jia, and R. A. Saurous, “Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis,” in International Conference on Machine Learn- ing (ICML), 2018. [9] R. Skerry-Ryan, E. Battenberg, Y. Xiao, Y. Wang, D. Stanton, J. Shor, R. J. Weiss, R. Clark, and R. A. Saurous, “Towards end- to-end prosody transfer for expressive speech synthesis with Taco- tron,” in International Conference on Machine Learning (ICML), 2018. [10] K. Akuzawa, Y. Iwasawa, and Y. Matsuo, “Expressive speech synthesis via modeling expressions with variational autoencoder,” in Interspeech, 2018. [11] G. E. Henter, J. Lorenzo-Trueba, X. Wang, and J. Yamagishi, “Deep encoder-decoder models for unsupervised learning of con- trollable speech synthesis,” arXiv preprint arXiv:1807.11470, 2018. [12] W.-N. Hsu, Y. Zhang, R. J. Weiss, H. Zen, Y. Wu, Y. Wang, Y. Cao, Y. Jia, Z. Chen, J. Shen, P. Nguyen, and R. Pang, “Hierarchical generative modeling for controllable speech synthesis,” in ICLR, 2019. [13] H. Zen, N. Braunschweiler, S. Buchholz, M. Gales, K. Knill, S. Krstulović, and J. Latorre, “Statistical parametric speech syn- thesis based on speaker and language factorization,” IEEE Trans. Audio, Speech, Lang. Process., vol. 20, no. 6, pp. 1713–1724, 2012. [14] B. Li and H. Zen, “Multi-language multi-speaker acoustic model- ing for LSTM-RNN based statistical parametric speech synthesis,” in Proc. Interspeech, 2016, pp. 2468–2472. [15] H. Ming, Y. Lu, Z. Zhang, and M. Dong, “A light-weight method of building an LSTM-RNN-based bilingual TTS system,” in In- ternational Conference on Asian Language Processing, 2017, pp. 201–205. [16] Y. Lee and T. Kim, “Learning pronunciation from a for- eign language in speech synthesis networks,” arXiv preprint arXiv:1811.09364, 2018. [17] E. Nachmani and L. 
Wolf, “Unsupervised polyglot text to speech,” in ICASSP, 2019. [18] M. Morise, F. Yokomori, and K. Ozawa, “WORLD: a vocoder- based high-quality speech synthesis system for real-time applica- tions,” IEICE Transactions on Information and Systems, vol. 99, no. 7, pp. 1877–1884, 2016. [19] B. Li, Y. Zhang, T. Sainath, Y. Wu, and W. Chan, “Bytes are all you need: End-to-end multilingual speech recognition and synthesis with bytes,” in ICASSP, 2018. [20] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerry-Ryan et al., “Natural TTS synthesis by conditioning WaveNet on mel spectrogram predic- tions,” in ICASSP, 2018. [21] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in International Conference on Learning Representations (ICLR), 2014. [22] N. Kalchbrenner, E. Elsen, K. Simonyan, S. Noury, N. Casagrande, E. Lockhart, F. Stimberg, A. van den Oord, S. Dieleman, and K. Kavukcuoglu, “Efficient neural audio synthesis,” in ICML, 2018. [23] J. Sotelo, S. Mehri, K. Kumar, J. F. Santos, K. Kastner, A. Courville, and Y. Bengio, “Char2wav: End-to-end speech syn- thesis,” in ICLR: Workshop, 2017. [24] W. Ping, K. Peng, A. Gibiansky, S. O. Arik, A. Kannan, S. Narang, J. Raiman, and J. Miller, “Deep Voice 3: Scaling text-to-speech with convolutional sequence learning,” in International Confer- ence on Learning Representations (ICLR), 2018. [25] K. Kastner, J. F. Santos, Y. Bengio, and A. C. Courville, “Repre- sentation mixing for TTS synthesis,” arXiv:1811.07240, 2018. [26] A. Van Den Bosch and W. Daelemans, “Data-oriented methods for grapheme-to-phoneme conversion,” in Proc. Association for Computational Linguistics, 1993, pp. 45–53. [27] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain- adversarial training of neural networks,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096–2030, 2016. [28] W.-N. Hsu, Y. Zhang, R. J. Weiss, Y. an Chung, Y. Wang, Y. Wu, and J. Glass, “Disentangling correlated speaker and noise for speech synthesis via data augmentation and adversarial factor- ization,” in ICASSP, 2019. [29] M. Wester and H. Liang, “Cross-lingual speaker discrimination using natural and synthetic speech,” in Twelfth Annual Conference of the International Speech Communication Association, 2011. [30] L. Wan, Q. Wang, A. Papir, and I. L. Moreno, “Generalized end- to-end loss for speaker verification,” in Proc. ICASSP, 2018.
{ "id": "1809.10460" }
1907.03693
Incorporating Query Term Independence Assumption for Efficient Retrieval and Ranking using Deep Neural Networks
Classical information retrieval (IR) methods, such as query likelihood and BM25, score documents independently w.r.t. each query term, and then accumulate the scores. Assuming query term independence allows precomputing term-document scores using these models---which can be combined with specialized data structures, such as inverted index, for efficient retrieval. Deep neural IR models, in contrast, compare the whole query to the document and are, therefore, typically employed only for late stage re-ranking. We incorporate query term independence assumption into three state-of-the-art neural IR models: BERT, Duet, and CKNRM---and evaluate their performance on a passage ranking task. Surprisingly, we observe no significant loss in result quality for Duet and CKNRM---and a small degradation in the case of BERT. However, by operating on each query term independently, these otherwise computationally intensive models become amenable to offline precomputation---dramatically reducing the cost of query evaluations employing state-of-the-art neural ranking models. This strategy makes it practical to use deep models for retrieval from large collections---and not restrict their usage to late stage re-ranking.
http://arxiv.org/pdf/1907.03693
Bhaskar Mitra, Corby Rosset, David Hawking, Nick Craswell, Fernando Diaz, Emine Yilmaz
cs.IR, cs.LG
null
null
cs.IR
20190708
20190708
2019 9 1 0 2 # l u J 8 arXiv:1907.03693v1 [cs.IR] ] # R # I . s c [ 1 v 3 9 6 3 0 . 7 0 9 1 : v i X r a # INCORPORATING QUERY TERM INDEPENDENCE ASSUMPTION FOR EFFICIENT RETRIEVAL AND RANKING USING DEEP NEURAL NETWORKS A PREPRINT # Bhaskar Mitra∗ Microsoft AI & Research [email protected] Corby Rosset∗ Microsoft AI & Research [email protected] # David Hawking [email protected] Nick Craswell Microsoft AI & Research [email protected] Fernando Diaz Microsoft AI & Research [email protected] Emine Yilmaz Microsoft AI & Research [email protected] October 31, 2021 # ABSTRACT Classical information retrieval (IR) methods, such as query likelihood and BM25, score documents independently w.r.t. each query term, and then accumulate the scores. Assuming query term in- dependence allows precomputing term-document scores using these models—which can be com- bined with specialized data structures, such as inverted index, for efficient retrieval. Deep neural IR models, in contrast, compare the whole query to the document and are, therefore, typically employed only for late stage re-ranking. We incorporate query term independence assumption into three state-of-the-art neural IR models: BERT, Duet, and CKNRM—and evaluate their per- formance on a passage ranking task. Surprisingly, we observe no significant loss in result quality for Duet and CKNRM—and a small degradation in the case of BERT. However, by operating on each query term independently, these otherwise computationally intensive models become amenable to offline precomputation—dramatically reducing the cost of query evaluations employing state-of- the-art neural ranking models. This strategy makes it practical to use deep models for retrieval from large collections—and not restrict their usage to late stage re-ranking. # Keywords Deep learning · Information retrieval · Indexing · Query evaluation # 1 Introduction Many traditional information retrieval (IR) ranking functions—e.g., [Robertson et al., 2009, Ponte and Croft, 1998]— manifest the query-term independence property—i.e., the documents can be scored independently w.r.t. each query term, and then the scores accumulated. Given a document collection, these term-document scores can be precomputed and combined with specialized IR data structures, such as inverted indexes [Zobel and Moffat, 2006], and clever organization strategies (e.g., impact-ordering [Anh et al., 2001]) to aggressively prune the set of documents that need to be assessed per query. This dramatically speeds up query evaluations enabling fast retrieval from large collections, containing billions of documents. Recent deep neural architectures—such as BERT [Nogueira and Cho, 2019], Duet [Mitra et al., 2017], and CKNRM [Dai et al., 2018]—have demonstrated state-of-the-art performance on several IR tasks. However, the superior retrieval effectiveness comes at the cost of evaluating deep models with tens of millions to hundreds of millions of parameters at query evaluation time. In practice, this limits the scope of these models to late stage re-ranking. Like traditional IR ∗Both authors contributed equally to this research. A PREPRINT - OCTOBER 31, 2021 models, we can incorporate the query term independence assumption into the design of the deep neural model—which would allow offline precomputation of all term-document scores. The query evaluation then involves only their linear combination—alleviating the need to run the computation intensive deep model at query evaluation time. 
We can further combine these precomputed machine-learned relevance estimates with an inverted index, to retrieve from the full collection. This significantly increases the scope of potential impact of neural methods in the retrieval process. We study this approach in this work. Of course, by operating independently per query term, the ranking model has access to less information compared to if it has the context of the full query. Therefore, we expect the ranking model to show some loss in retrieval effectiveness under this assumption. However, we trade this off with the expected gains in efficiency of query evaluations and the ability to retrieve, and not just re-rank, using these state-of-the-art deep neural models. In this preliminary study, we incorporate the query term independence assumption into three state-of-the-art neural ranking models—BERT [Nogueira and Cho, 2019], Duet [Mitra et al., 2017], and CKNRM [Dai et al., 2018]—and evaluate their effectiveness on the MS MARCO passage ranking task [Bajaj et al., 2016]. We surprisingly find that the two of the models suffer no statistically significant adverse affect w.r.t. ranking effectiveness on this task under the query term independence assumption. While the performance of BERT degrades under the strong query term independence assumption—the drop in MRR is reasonably small and the model maintains a significant performance gap compared to other non-BERT based approaches. We conclude that at least for a certain class of existing neural IR models, incorporating query term independence assumption may result in significant efficiency gains in query evaluation at minimal (or no) cost to retrieval effectiveness. # 2 Related work Several neural IR methods—e.g., [Ganguly et al., 2015, Kenter and De Rijke, 2015, Nalisnick et al., 2016, Guo et al., 2016]—already operate under query term independence assumption. However, recent performance breakthroughs on many IR tasks have been achieved by neural models [Hu et al., 2014, Pang et al., 2016, Mitra et al., 2017, Dai et al., 2018, Nogueira and Cho, 2019] that learn latent representations of the query or inspect interaction patterns between query and document terms. In this work, we demonstrate the potential to incorporate query term independence as- sumption in these recent representation learning and interaction focused models. Some neural IR models [Huang et al., 2013, Gao et al., 2011] learn low dimensional dense vector representations of query and document that can be computed independently during inference. These models are also amenable to precom- putation of document representations—and fast retrieval using approximate nearest neighbor search [Aumüller et al., 2017, Boytsov et al., 2016]. An alternative involves learning higher dimensional but sparse representations of query and document [Salakhutdinov and Hinton, 2009, Zamani et al., 2018a] that can also be employed for fast lookup. However, these approaches—where the document representation is computed independently of the query—do not allow for interactions between the query term and document representations. Early interaction between query and document representation is important to many neural architectures [Hu et al., 2014, Pang et al., 2016, Mitra et al., 2017, Dai et al., 2018, Nogueira and Cho, 2019]. The approach proposed in this study allows for interactions between individual query terms and documents. Finally, we refer the reader to [Mitra and Craswell, 2018] for a more general survey of neural methods for IR tasks. 
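As a concrete reference point for the classical case, a BM25-style score (Equation 2 in the next section) depends only on the individual term and the document, so it can be computed — and precomputed — one term at a time. A minimal sketch follows, using one common idf variant and illustrative parameter values; it is not tied to any particular IR library:

```python
import math

def bm25_term_score(tf, df, doc_len, avg_doc_len, num_docs, k1=1.2, b=0.75):
    """Per-term contribution s_{t,d}; one common idf variant is used here."""
    idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
    return idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))

def bm25_score(query_terms, doc_tf, doc_len, avg_doc_len, df, num_docs):
    # Query term independence: the document score is a sum of per-term scores.
    return sum(
        bm25_term_score(doc_tf[t], df.get(t, 0), doc_len, avg_doc_len, num_docs)
        for t in query_terms if t in doc_tf
    )
```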
# 3 Neural Ranking Models with Query Term Independence Assumption

IR functions that assume query term independence observe the following general form:

Sq,d = Σ_{t ∈ q} st,d    (1)

where s ∈ R^{|V|×|C|} is the set of positive real-valued scores as estimated by the relevance model corresponding to documents d ∈ C in collection C w.r.t. terms t ∈ V in vocabulary V — and Sq,d denotes the aggregated score of document d w.r.t. query q. For example, in the case of BM25 [Robertson et al., 2009]:

st,d = idft · tft,d · (k1 + 1) / (tft,d + k1 · (1 − b + b · |d| / avgdl))    (2)

where tf and idf denote term-frequency and inverse document frequency, respectively, |d| and avgdl denote the document length and the average document length in the collection, and k1 and b are the free parameters of the BM25 model.

Deep neural models for ranking, in contrast, do not typically assume query term independence. Instead, they learn complex matching functions to compare the candidate document to the full query. The parameters of such a model φ are typically learned discriminatively by minimizing a loss function of the following form:

L = E_{q∼θq, d+∼θd+, d−∼θd−} [ℓ(∆q,d+,d−)]    (3)
where ∆q,d+,d− = φq,d+ − φq,d−    (4)

We use d+ and d− to denote a pair of relevant and non-relevant documents, respectively, w.r.t. query q. The instance loss ℓ in Equation 3 can take different forms — e.g., the ranknet [Burges et al., 2005] or hinge [Herbrich et al., 2000] loss:

ℓranknet(∆q,d+,d−) = log(1 + e^(−σ·∆q,d+,d−))    (5)
ℓhinge(∆q,d+,d−) = max{0, ǫ − ∆q,d+,d−}    (6)

Given a neural ranking model φ, we define Φ — the corresponding model under the query term independence assumption — as:

Φq,d = Σ_{t ∈ q} φt,d    (7)

The new model Φ preserves the same architecture as φ but estimates the relevance of a document independently w.r.t. each query term. The parameters of Φ are learned using the modified loss:

L = E_{q∼θq, d+∼θd+, d−∼θd−} [ℓ(δq,d+,d−)]    (8)
where δq,d+,d− = Σ_{t ∈ q} (φt,d+ − φt,d−)    (9)

Given collection C and vocabulary V, we precompute φt,d for all t ∈ V and d ∈ C. In practice, the total number of combinations of t and d may be large, but we can enforce additional constraints on which (t, d) pairs to evaluate, and assume no contributions from the remaining pairs. During query evaluation, we can look up the precomputed score φt,d without dedicating any additional time and resources to evaluating the deep ranking model. We employ an inverted index, in combination with the precomputed scores, to perform retrieval from the full collection using the learned relevance function Φ. We note that several IR data structures assume that φt,d is always positive, which may not hold for an arbitrary neural architecture. This can be addressed by applying a rectified linear unit activation to the model's output. The remainder of this paper describes our empirical study and summarizes our findings.

# 4 Experiments

# 4.1 Task description

We study the effect of the query term independence assumption on deep neural IR models in the context of the MS MARCO passage ranking task [Bajaj et al., 2016]. We find this ranking task to be suitable for this study for several reasons. Firstly, with one million question queries sampled from Bing's search logs, 8.8 million passages extracted from web documents, and 400,000 positively labeled query-passage pairs for training, it is one of the few large datasets available today for benchmarking deep neural IR methods.
Secondly, the challenge leaderboard2—with 18 entries as of March 3, 2019—is a useful catalog of approaches that show state-of-the-art performance on this task. Conveniently, several of these high-performing models include public implementations for the ease of reproducibility. The MS MARCO passage ranking task comprises of one thousand passages per query that the IR model, being evaluated, should re-rank. Corresponding to every query, one or few passages have been annotated by human editors as 2http://www.msmarco.org/leaders.aspx 3 (5) (6) A PREPRINT - OCTOBER 31, 2021 Table 1: Comparing ranking effectiveness of BERT, Duet, and CKNRM with the query independence assumption (denoted as “Term ind.”) with their original counterparts (denoted as “Full”). The difference between the median MRR for “full” and “term ind.” models are not statistically significant based on a student’s t-test (p < 0.05) for Duet and CKNRM. The difference in MRR is statistically significant based on a student’s t-test (p < 0.05) for BERT (single run). The BM25 baseline (single run) is included for reference. MRR@10 (± Std. dev) Model Mean Median BERT Full Term ind. Duet Full Term ind. CKNRM Full Term ind. BM25 0.356 0.333 0.356 0.333 0.239 0.244 (±0.002) (±0.002) 0.240 0.244 0.223 0.222 0.167 (±0.004) (±0.005) 0.224 0.221 0.167 containing the answer relevant to the query. The rank list produced by the model is evaluated using the mean reciprocal rank (MRR) metric against the ground truth annotations. We use the MS MARCO training dataset to train all baseline and treatment models, and report their performance on the publicly available development set which we consider—and hereafter refer to—as the test set for our experiments. This test set contains about seven thousand queries which we posit is sufficient for reliable hypothesis testing. Note that the thousand passages per query were originally retrieved using BM25 from a collection that is provided as part of the MS MARCO dataset. This allows us to also use this dataset in a retrieval setting—in addition to the re-ranking setting used for the official challenge. We take advantage of this in our study. # 4.2 Baseline models We begin by identifying models listed on the MS MARCO leaderboard that can serve as baselines for our work. We only consider the models with public implementations. We find that a number of top performing entries—e.g., [Nogueira and Cho, 2019]—are based on recently released large scale language model called BERT [Devlin et al., 2018]. The BERT based entries are followed in ranking by the Duet [Mitra et al., 2017] and the Convolutional Kernel- based Neural Ranking Model (CKNRM) [Dai et al., 2018]. Therefore, we limit this study to BERT, Duet, and CK- NRM. BERT Nogueira and Cho [2019] report state-of-the-art retrieval performance on the MS MARCO passage re- ranking task by fine tuning BERT [Devlin et al., 2018] pretrained models. In this study, we reproduce the results from their paper corresponding to the BERT Base model and use it as our baseline. Under the term independence assumption, we evaluate the BERT model once per query term—wherein we input the query term as sentence A and the passage as sentence B. Duet The Duet [Mitra et al., 2017] model estimates the relevance of a passage to a query by a combination of (i) examining the patterns of exact matches of query terms in the passage, and (ii) computing similarity between learned latent representations of query and passage. 
Duet has previously demonstrated state-of-the-art performance on TREC CAR [Nanni et al., 2017] and is an official baseline for the MS MARCO challenge. The particular implementation of Duet listed on the leaderboard includes modifications3 to the original model [Mitra and Craswell, 2019]. We use this provided implementation for our study. Besides evaluating the model once per query term, no additional changes were necessary to its architecture under the query term independence assumption. CKNRM The CKNRM model combines kernel pooling based soft matching [Xiong et al., 2017] with a convolu- tional architecture for comparing n-grams. CKNRM uses kernel pooling to extract ranking signals from interaction matrices of query and passage n-grams. Under the query term independence assumption, the model considers one # 3https://github.com/dfcf93/MSMARCO/blob/master/Ranking/Baselines/Duet.ipynb 4 A PREPRINT - OCTOBER 31, 2021 Table 2: Comparing Duet (with query term independence assumption) and BM25 under the full retrieval settings on a subset of MS MARCO dev queries. The differences in recall and MRR between Duet (term ind.) and BM25 are statistically significant according to student’s t-test (p < 0.01). Model BM25 Duet (term ind.) Recall@1000 MRR@10 0.80 0.85 0.169 0.218 query term at a time—and therefore we only consider the interactions between the query unigrams and passage n- grams. We base our study on the public implementation4 of this model. For all models we re-use the published hyperparameter values and other settings from the MS MARCO website. # 5 Results Table 1 compares the BERT, the Duet, and the CKNRM models trained under the query term independence assumption to their original counterparts on the passage re-ranking task. We train and evaluate the Duet and the CKNRM based models five and eight times, respectively, using different random seeds—and report mean and median MRR. For the BERT based models, due to long training time we only report results based on a single training and evaluation run. As table 1 shows, we observe no statistically significant difference in effectiveness from incorporating the query term independence assumptions in either Duet or CKNRM. The query term independent BERT model performs slightly worse than its original counterpart on MRR but the performance is still superior to other non-BERT based approaches listed on the public leaderboard. We posit that models with query term independence assumption—even when slightly less effective compared to their full counterparts—are likely to retrieve better candidate sets for re-ranking. To substantiate this claim, we conduct a small-scale retrieval experiment based on a random sample of 395 queries from the test set. We use the Duet model with the query term independence assumption to precompute the term-passage scores constrained to (i) the term appears at least once in the passage, and (ii) the term does not appear in more than 5% of the passage collection. Table 2 compares Duet and BM25 on their effectiveness as a first stage retrieval method in a potential telescoping setting [Matveeva et al., 2006]. We observe a 6.25% improvement in recall@1000 from Duet over the BM25 baseline. To perform similar retrieval from the full collection using the full Duet model, unlike its query-term-independent counterpart, is prohibitive because it involves evaluating the model on every passage in the collection against every incoming query. 
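The retrieval pipeline used in this experiment — score (term, passage) pairs offline under the two constraints above, store them in an inverted-index-like structure, and accumulate lookups at query time — can be sketched as follows. This is an illustration, not the authors' code: `model.score(term, passage)` stands in for any query-term-independent neural scorer, passages are assumed to be a dict of id to text, and whitespace tokenization is a simplification.

```python
from collections import defaultdict

def build_score_index(model, passages, max_df_frac=0.05):
    """Offline: precompute term -> [(passage_id, score)] posting lists."""
    df = defaultdict(int)
    for text in passages.values():
        for term in set(text.split()):
            df[term] += 1
    max_df = max_df_frac * len(passages)

    index = defaultdict(list)
    for pid, text in passages.items():
        for term in set(text.split()):
            # Constraint (i): the term occurs in the passage (guaranteed by the loop).
            # Constraint (ii): skip terms appearing in more than 5% of the collection.
            if df[term] <= max_df:
                index[term].append((pid, model.score(term, text)))  # deep model runs offline only
    return index

def retrieve(index, query, k=1000):
    """Online: no neural inference; just accumulate precomputed per-term scores."""
    scores = defaultdict(float)
    for term in set(query.split()):
        for pid, score in index.get(term, []):
            scores[pid] += score
    return sorted(scores.items(), key=lambda item: -item[1])[:k]
```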
# 6 Discussion and conclusion The emergence of compute intensive ranking models, such as BERT, motivates rethinking how these models should be evaluated in large scale IR systems. The approach proposed in this paper moves the burden of model evaluation from the query evaluation stage to the document indexing stage. This may have further consequences on computational efficiency by allowing batched model evaluation that more effectively leverages GPU (or TPU) parallelization. This preliminary study is based on three state-of-the-art deep neural models on a public passage ranking benchmark. The original design of all three models—BERT, Duet, and CKNRM—emphasize on early interactions between query and passage representations. However, we observe that limiting the interactions to passage and individual query terms has reasonably small impact on their effectiveness. These results are promising as they support the possibility of dramatically speeding up query evaluation for some deep neural models, and even employing them to retrieve from the full collection. The ability to retrieve—and not just re-rank—using deep models has significant implications for neural IR research. Any loss in retrieval effectiveness due to incorporating strong query term independence assumptions may be further recovered by additional stages of re-ranking in a telescoping approach [Matveeva et al., 2006]. This study is focused on the passage ranking task. The trade-off between effectiveness and efficiency may be different for document retrieval and other IR tasks. Traditional IR methods in more complex retrieval settings—e.g., when the document is represented by multiple fields [Robertson et al., 2004]—also observe the query term independence assumption. So, studying the query term independence assumption in the context of corresponding neural models— e.g., [Zamani et al., 2018b]—may also be appropriate. We note these as important future directions for our research. # 4https://github.com/thunlp/Kernel-Based-Neural-Ranking-Models 5 A PREPRINT - OCTOBER 31, 2021 The findings from this study may also be interpreted as pointing to a gap in our current state-of-the-art neural IR models that do not take adequate advantage of term proximity signals for matching. This is another finding that may hold interesting clues for IR researchers who want to extract more retrieval effectiveness from deep neural methods. # References In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 35–42. ACM, 2001. Martin Aumüller, Erik Bernhardsson, and Alexander Faithfull. Ann-benchmarks: A benchmarking tool for approxi- mate nearest neighbor algorithms. In International Conference on Similarity Search and Applications, pages 34–49. Springer, 2017. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016. Leonid Boytsov, David Novak, Yury Malkov, and Eric Nyberg. Off the beaten path: Let’s replace term-based retrieval with k-nn search. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 1099–1108. ACM, 2016. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. 
In Proceedings of the 22nd international conference on Machine learning, pages 89–96. ACM, 2005. Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. Convolutional neural networks for soft-matching n- In Proceedings of the eleventh ACM international conference on web search and data grams in ad-hoc search. mining, pages 126–134. ACM, 2018. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Debasis Ganguly, Dwaipayan Roy, Mandar Mitra, and Gareth JF Jones. Word embedding based generalized language model for information retrieval. In Proc. SIGIR, pages 795–798. ACM, 2015. Jianfeng Gao, Kristina Toutanova, and Wen-tau Yih. Clickthrough-based latent semantic models for web search. In Proc. SIGIR, pages 675–684. ACM, 2011. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. A deep relevance matching model for ad-hoc retrieval. In Proc. CIKM, pages 55–64. ACM, 2016. Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Large margin rank boundaries for ordinal regression. Advances in large margin classifiers, 2000. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architectures for matching natural language sentences. In Proc. NIPS, pages 2042–2050, 2014. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using clickthrough data. In Proc. CIKM, pages 2333–2338. ACM, 2013. Tom Kenter and Maarten De Rijke. Short text similarity with word embeddings. In Proceedings of the 24th ACM international on conference on information and knowledge management, pages 1411–1420. ACM, 2015. Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 437–444. ACM, 2006. Bhaskar Mitra and Nick Craswell. An introduction to neural information retrieval. Foundations and Trends®) in Information Retrieval (to appear), 2018. Bhaskar Mitra and Nick Craswell. An updated duet model for passage re-ranking. arXiv preprint arXiv:1903.07666, 2019. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In Proc. WWW, pages 1291–1299, 2017. Eric Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana. Improving document ranking with dual word embeddings. In Proc. WWW, 2016. Federico Nanni, Bhaskar Mitra, Matt Magnusson, and Laura Dietz. Benchmark for complex answer retrieval. In Proc. ICTIR. ACM, 2017. 6 A PREPRINT - OCTOBER 31, 2021 Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085, 2019. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. Text matching as image recogni- tion. In Proc. AAAI, 2016. Jay M Ponte and W Bruce Croft. A language modeling approach to information retrieval. In Proc. SIGIR, pages 275–281. ACM, 1998. In Proceedings of the thirteenth ACM international conference on Information and knowledge management, pages 42–49. ACM, 2004. Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389, 2009. Ruslan Salakhutdinov and Geoffrey Hinton. Semantic hashing. 
International Journal of Approximate Reasoning, 50 (7):969–978, 2009. Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR conference on research and development in information retrieval, pages 55–64. ACM, 2017. Hamed Zamani, Mostafa Dehghani, W Bruce Croft, Erik Learned-Miller, and Jaap Kamps. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In Proc. CIKM, pages 497–506. ACM, 2018a. Hamed Zamani, Bhaskar Mitra, Xia Song, Nick Craswell, and Saurabh Tiwary. Neural ranking models with multiple document fields. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 700–708. ACM, 2018b. Justin Zobel and Alistair Moffat. Inverted files for text search engines. ACM computing surveys (CSUR), 38(2):6, 2006. 7
{ "id": "1810.04805" }
1907.03670
From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network
3D object detection from LiDAR point cloud is a challenging problem in 3D scene understanding and has many practical applications. In this paper, we extend our preliminary work PointRCNN to a novel and strong point-cloud-based 3D object detection framework, the part-aware and aggregation neural network (Part-$A^2$ net). The whole framework consists of the part-aware stage and the part-aggregation stage. Firstly, the part-aware stage for the first time fully utilizes free-of-charge part supervisions derived from 3D ground-truth boxes to simultaneously predict high quality 3D proposals and accurate intra-object part locations. The predicted intra-object part locations within the same proposal are grouped by our new-designed RoI-aware point cloud pooling module, which results in an effective representation to encode the geometry-specific features of each 3D proposal. Then the part-aggregation stage learns to re-score the box and refine the box location by exploring the spatial relationship of the pooled intra-object part locations. Extensive experiments are conducted to demonstrate the performance improvements from each component of our proposed framework. Our Part-$A^2$ net outperforms all existing 3D detection methods and achieves new state-of-the-art on KITTI 3D object detection dataset by utilizing only the LiDAR point cloud data. Code is available at https://github.com/sshaoshuai/PointCloudDet3D.
http://arxiv.org/pdf/1907.03670
Shaoshuai Shi, Zhe Wang, Jianping Shi, Xiaogang Wang, Hongsheng Li
cs.CV
Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence 2020, code is available at https://github.com/sshaoshuai/PointCloudDet3D
null
cs.CV
20190708
20200316
0 2 0 2 r a M 6 1 ] V C . s c [ 3 v 0 7 6 3 0 . 7 0 9 1 : v i X r a 1 # From Points to Parts: 3D Object Detection from Point Cloud with Part-aware and Part-aggregation Network Shaoshuai Shi, Zhe Wang, Jianping Shi, Xiaogang Wang, Hongsheng Li Abstract—3D object detection from LiDAR point cloud is a challenging problem in 3D scene understanding and has many practical applications. In this paper, we extend our preliminary work PointRCNN to a novel and strong point-cloud-based 3D object detection framework, the part-aware and aggregation neural network (Part-A2 net). The whole framework consists of the part-aware stage and the part-aggregation stage. Firstly, the part-aware stage for the first time fully utilizes free-of-charge part supervisions derived from 3D ground-truth boxes to simultaneously predict high quality 3D proposals and accurate intra-object part locations. The predicted intra-object part locations within the same proposal are grouped by our new-designed RoI-aware point cloud pooling module, which results in an effective representation to encode the geometry-specific features of each 3D proposal. Then the part-aggregation stage learns to re-score the box and refine the box location by exploring the spatial relationship of the pooled intra-object part locations. Extensive experiments are conducted to demonstrate the performance improvements from each component of our proposed framework. Our Part-A2 net outperforms all existing 3D detection methods and achieves new state-of-the-art on KITTI 3D object detection dataset by utilizing only the LiDAR point cloud data. Code is available at https://github.com/sshaoshuai/PointCloudDet3D. Index Terms—3D object detection, point cloud, part location, LiDAR, convolutional neural network, autonomous driving. # 1 INTRODUCTION robotics, increasing attention has been paid to 3D object detection [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. Though significant achievements have been made in 2D object detection from images [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], directly extending these 2D detection methods to 3D detection might lead to inferior performance, since the point cloud data of 3D scenes has irregular data format and 3D detection with point clouds faces great challenges from the irregular data format and large search space of 6 Degrees-of- Freedom (DoF) of 3D objects. Existing 3D object detection methods have explored several ways to tackle these challenges. Some works [6], [25], [26] utilize 2D detectors to detect 2D boxes from the image, and then adopt PointNet [27], [28] to the cropped point cloud to directly regress the parameters of 3D boxes from raw point cloud. However, these methods heavily depend on the performance of 2D object detectors and cannot take the advantages of 3D information for generating robust bounding box proposals. Some other works [1], [4], [5], [10], [11] project the point cloud from the bird view to create a 2D bird-view point density map and apply 2D Convolutional Neural Networks (CNN) to these feature maps for 3D object detection, but the hand-crafted features cannot fully exploit the 3D information of raw point cloud and may not be optimal. There are also some one-stage 3D object detectors [7], [9], [29] that divide the 3D space into regular 3D voxels and apply 3D CNN or 3D sparse convolution [30], [31] to extract 3D features and finally compress = = S. Shi, X. Wang and H. 
Li are with the Department of Electrical Engineer- ing, The Chinese University of Hong Kong, Hong Kong, China. E-mail: {ssshi, xgwang, hsli}@ee.cuhk.edu.hk Z. Wang and J. Shi are with SenseTime Research. E-mail: {wangzhe, shijianping}@sensetime.com + _ Head @ lower left e@ lower right e upper left e upper right Tail (©) towerten — @ towersign upper) upper rit Fig. 1. Our proposed part-aware and aggregation network can accu- rately predict intra-object part locations even when objects are partially occluded. Such part locations can assist accurate 3D object detection. The predicted intra-object part locations by our proposed method are visualized by interpolated colors of eight corners. Best viewed in colors. to bird-view feature map for 3D object detection. These works do not fully exploit all available information from 3D box annotations for improving the performance of 3D detection. For instance, the 3D box annotations also imply the point distributions within each 3D objects, which are beneficial for learning more discriminative features to improve the performance of 3D object detection. Also, these works are all one-stage detection frameworks which cannot utilize the RoI-pooling scheme to pool specific features of each proposal for the box refinement in a second stage. In contrast, we propose a novel two-stage 3D object detec- tion framework, the part-aware and aggregation neural network (i.e. Part-A2 net), which directly operates on 3D point cloud and achieves state-of-the-art 3D detection performance by fully exploring the informative 3D box annotations from the training data. Our key observation is that, unlike object detection from 2D images, 3D objects in autonomous driving scenes are naturally and well separated by annotated 3D bounding boxes, which means the training data with 3D box annotations automatically provides free-of-charge semantic masks and even the relative location of each foreground point within the 3D ground truth bounding boxes (see Fig. 1 for illustration). In the remaining parts of this paper, the relative location of each foreground point w.r.t. the object box that it belongs to is denoted as the intra-object part locations. This is totally different from the box annotations in 2D images, since some parts of objects in the 2D images may be occluded. Using the ground-truth 2D bounding boxes would generate inaccurate and noisy intra-object part locations for each pixel within objects. These 3D intra-object part locations imply the 3D point distributions of 3D objects. Such 3D intra-object part locations are informative and can be obtained for free, but were never explored in 3D object detection. Motivated by this observation, our proposed Part-A2 net is designed as a novel two-stage 3D detection framework, which con- sists of the part-aware stage (Stage-I) for predicting accurate intra- object part locations and learning point-wise features, and the part- aggregation stage (Stage-II) for aggregating the part information to improve the quality of predicted boxes. Our approach produces 3D bounding boxes parameterized with (x, y, z, h, w, l, θ), where (x, y, z) are the box center coordinates, (h, w, l) are the height, width and length of each box respectively, and θ is the orientation angle of each box from the bird’s eye view. Specifically, in the part-aware stage-I, the network learns to segment the foreground points and estimate the intra-object part locations for all the foreground points (see Fig. 
1), where the segmentation masks and ground-truth part location annotations are directly generated from the ground-truth 3D box annotations.In addition, it also generates 3D proposals from the raw point cloud simultaneously with foreground segmentation and part estimation. We investigate two strategies, i.e., anchor-free v.s. anchor-based strategies, for 3D proposal generation to handle to different scenar- ios. The anchor-free strategy is relatively light-weight and is more memory efficient, while the anchor-based strategy achieves higher recall rates with more memory and calculation costs. For the anchor-free strategy, we propose to directly generate 3D bounding box proposals in a bottom-up scheme by segmenting foreground points and generating the 3D proposals from the predicted fore- ground points simultaneously. Since it avoids using the large number of 3D anchor boxes in the whole 3D space as previous methods [4], [29] do, it saves much memory. For the anchor-based strategy, it generates 3D proposals from downsampled bird-view feature maps with pre-defined 3D anchor boxes at each spatial location. Since it needs to place multiple 3D anchors with different orientations and classes at each location, it needs more memory but can achieve higher object recall. In the second stage of existing two-stage detection methods, information within 3D proposals needs to be aggregated by certain pooling operations for the following box re-scoring and location refinement. However, the previous point cloud pooling strategy (as used in our preliminary PointRCNN [32]) result in ambiguous representations, since different proposals might end up pooling the same group of points, which lose the abilities to encode the geometric information of the proposals. To tackle this problem, we propose a novel differentiable RoI-aware point cloud pooling operation, which keeps all information from both non-empty and empty voxels within the proposals, to eliminate the ambiguity of previous point cloud pooling strategy. This is vital to obtain an effective representation for box scoring and location refinement, as the empty voxels also encode the box’s geometry information. The stage-II aims to aggregate the pooled part features from stage-I by the proposed RoI-aware pooling for improving the quality of the proposals. Our stage-II network adopts the sparse convolution and sparse pooling operations to gradually aggre- gate the pooled part features of each 3D proposal for accurate confidence prediction and box refinement. The experiments show that the aggregated part features could improve the quality of the proposals remarkably and our overall framework achieves state- of-the-art performance on KITTI 3D detection benchmark. Our primary contributions could be summarized into four- fold. (1) We proposed the Part-A2 net framework for 3D object detection from point cloud, which boosts the 3D detection perfor- mance by using the free-of-charge intra-object part information to learning discriminative 3D features and by effectively aggregating the part features with RoI-aware pooling and sparse convolutions. (2) We present two strategies for 3D proposal generation to handle different scenarios. The anchor-free strategy is more memory efficient while the anchor-based strategy results in higher object recall. (3) We propose a differentiable RoI-aware point cloud region pooling operation to eliminate the ambiguity in existing point cloud region pooling operations. 
The experiments show that the pooled feature representation benefits box refinement stage significantly. (4) Our proposed Part-A2 net outperforms all published methods with remarkable margins and ranks 1st with 14 FPS inference speed on the challenging KITTI 3D detection benchmark [33] as of August 15, 2019, which demonstrates the effectiveness of our method. # 2 RELATED WORK 3D object detection from 2D images. There are several existing works on estimating the 3D bounding box from images. [34], [35] leveraged the geometry constraints between 3D and 2D bounding box to recover the 3D object pose. [36], [37], [38], [39] exploited the similarity between 3D objects and the CAD models. Chen et al. [40], [41] formulated the 3D geometric information of objects as an energy function to score the predefined 3D boxes. Ku et al. [42] proposed the aggregate losses to improve the 3D localization accuracy from monocular image. Recently [43], [44] explored the stereo pair of images to improve the 3D detection performance from stereo cameras. These works can only generate coarse 3D detection results due to the lack of accurate depth information and can be substantially affected by appearance variations. 3D object detection from multiple sensors. Several existing methods have worked on fusing the information from multiple sensors (e.g., LiDAR and camera) to help 3D object detection. [1], [4] projected the point cloud to the bird view and extracted features from bird-view maps and images separately, which are then cropped and fused by projecting 3D proposals to the correspond- ing 2D feature maps for 3D object detection. [5] further explored the feature fusion strategy by proposing continuous fusion layer to fuse image feature to bird-view features. Different from projecting point cloud to bird-view map, [6], [25] utilized off-the-shelf 2D object detectors to detect 2D boxes first for cropping the point 2 3 Fig. 2. The overall framework of our part-aware and aggregation neural network for 3D object detection. It consists of two stages: (a) the part-aware stage-I for the first time predicts intra-object part locations and generates 3D proposals by feeding the point cloud to our encoder-decoder network. (b) The part-aggregation stage-II conducts the proposed RoI-aware point cloud pooling operation to aggregate the part information from each 3D proposal, then the part-aggregation network is utilized to score boxes and refine locations based on the part features and information from stage-I. cloud and then applied PointNet [27], [28] to extract features from the cropped point clouds for 3D box estimation. These methods may suffers from the time synchronization problem of multiple sensors in the practical applications. Unlike these sensor fusion methods, our proposed 3D detection frameworks Part-A2 net could achieve comparable or even better 3D detection results by using only point cloud as input. discriminative point-wise features, which is based on the 3D sparse convolution and 3D sparse deconvolution operations since they are more efficient and effective than the point-based backbone like PointNet++ [28]. The point-based backbone and voxel-based backbone are experimented and discussed in Sec. 4.1.1. 3D/2D instance segmentation. These approaches are often based on point-cloud-based 3D detection methods. Several approaches of 3D instance segmentation are based on the 3D detection bounding boxes with an extra mask branch for predicting the object mask. Yi et al. 
[46] proposed an analysis-by-synthesis strategy to generate 3D proposals for 3D instance segmentation. Hou et al. [47] combined the multi-view RGB images and 3D point cloud to better generate proposals and predict object instance masks in and end- to-end manner. 3D object detection from point clouds only. Zhou et al. [29] for the first time proposed VoxelNet architecture to learn dis- criminative features from point cloud and detect 3D object with only point cloud. [7] improved VoxelNet by introducing sparse convolution [30], [31] for efficient voxel feature extraction. [9], [10], [11] projected the point cloud to bird-view maps and applied 2D CNN on these maps for 3D detection. These methods do not fully exploit all available information from the informative 3D box annotations and are all one-stage 3D detection methods. In contrast, our proposed two-stage 3D detection framework Part-A2 net explores the abundant information provided by 3D box anno- tations and learns to predict accurate intra-object part locations to learn the point distribution of 3D objects, the predicted intra-object part locations are aggregated in the second stage for refining the 3D proposals, which significantly improves the performance of 3D object detection. Some other approaches first estimate the semantic segmenta- tion labels and then group the points into instances based on the learned point-wise embeddings. Wang et al. [48] calculated the similarities between points for grouping foreground points of each instance. Wang et al. [49] proposed a semantic-aware point-level instance embedding strategy to learn better features for both the semantic and instance point cloud segmentation. Lahoud et al. [50] proposed a mask-task learning framework to learn the feature embedding and the directional information of the instance’s center for better clustering the points into instances. However, they did not utilize the free-of-charge intra-object part locations as extra supervisions as our proposed method does. Point cloud feature learning for 3D object detection. There are generally three ways of learning features from point cloud for 3D detection. (1) [1], [4], [5], [10], [11] projected point cloud to bird-view map and utilized 2D CNN for feature extraction. (2) [6], [25] conducted PointNet [27], [28] to learn the point cloud features directly from raw point cloud. (3) [29] proposed VoxelNet and [7] applied sparse convolution [30], [31] to speed up the VoxelNet for feature learning. Only the second and third methods have the potential to extract point-wise features for segmenting the foreground points and predicting the intra-object part locations in our framework. Here we design an encoder-decoder point cloud backbone network similarly with UNet [45] to extract There are also some anchor-free approaches for 2D instance segmentation by clustering the pixels into instances. Brabandere et al. [51] adopted a discriminative loss function to cluster the pixels of the same instance in a feature space while Bai et al. [52] proposed to estimate a modified watershed energy landscape to separate the pixels of different instances. However, those methods only group foreground pixels/points into different instances and did not estimate the 3D bounding boxes. Different with the above methods, our proposed anchor-free approach estimates intra-object 4 part locations and directly generates 3D bounding box proposals from individual 3D points for achieving 3D object detection. Part models for object detection. 
Deformable Part-based Models (DPM) [53] achieved great success on 2D object detection before the deep learning models are utilized. [54], [55], [56] extended the DPM to 3D world to reason the parts in 3D and estimate the object poses, where [54] modeled the object as a 3D cuboid with both deformable faces and deformable parts, [55] proposed a 3D DPM that generates a full 3D object model with continuous appearance representation, and [56] presented the notion of 3D part sharing with 3D CAD models to estimate the fine poses. These DPM based approaches generally adopt several part templates trained with hand-crafted features to localize the objects and estimate the object pose. In contrast, we formulate the object parts as point-wise intra-object part locations in the context of point cloud, where the training labels of part locations could be directly generated from 3D box annotations and they implicitly encode the part distribution of 3D objects. Moreover, both the estimation and aggregation of intra-object part locations are learned by the more robust deep learning networks instead of the previous hand-crafted schemes. Fig. 3. Comparison of voxelized point cloud and raw point cloud in autonomous driving scenarios. The center of each non-empty voxel is considered as a point to form the voxelized point cloud. The voxelized point cloud is approximately equivalent to the raw point cloud and 3D shapes of 3D objects are well kept for 3D object detection. # 3.1 Stage-I: Part-aware 3D proposal generation The part-aware network aims to extract discriminative features from the point cloud by learning to estimate the intra-object part locations of foreground points, since these part locations implicitly encode the 3D object’s shapes by indicating the relative locations of surface points of 3D objects. Also, the part-aware stage learns to estimate the intra-object part locations of foreground points and to generate 3D proposals simultaneously. Two strategies for 3D proposal generation from point clouds, anchor-free and anchor- based schemes, are proposed to handle different scenarios. # 3 PART-A2 NET: 3D PART-AWARE AND AGGRE- GATION FOR 3D DETECTION FROM POINT CLOUD A preliminary version of this work was presented in [32], where we proposed PointRCNN for 3D object detection from raw point cloud. To make the framework more general and effective, in this paper, we extend PointRCNN to a new end-to-end 3D detection framework, the part-aware and aggregation neural network, i.e. Part-A2 net, to further boost the performance of 3D object detec- tion from point cloud. 3.1.1 Point-wise feature learning via sparse convolution For segmenting the foreground points and estimating their intra- object part locations, we first need to learn discriminative point- wise features for describing the raw point clouds. Instead of using point-based methods like [27], [28], [57], [58], [59], [60], [61] for extracting point-wise features from the point cloud, as show in the left part of Fig. 2, we propose to utilize an encoder-decoder network with sparse convolution and deconvolution [30], [31] to learn discriminative point-wise features for foreground point seg- mentation and intra-object part location estimation, which is more efficient and effective than the previous PointNet++ backbone as used in our preliminary work [32]. 
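As a concrete illustration of the voxelized input fed to this sparse convolution backbone, the sketch below groups raw LiDAR points into non-empty voxels and uses the mean point coordinate of each voxel as its initial feature, following the description given below and in Fig. 3. Function names, the scene crop and the 5cm×5cm×10cm voxel size are illustrative choices consistent with the text (roughly a 70m×80m×4m space); the released code presumably relies on optimized GPU utilities instead of this NumPy reference.

```python
import numpy as np

def voxelize_point_cloud(points, voxel_size=(0.05, 0.05, 0.1),
                         pc_range=((0.0, -40.0, -3.0), (70.4, 40.0, 1.0))):
    """Group raw LiDAR points into non-empty voxels.

    points:     (N, 3) array of x, y, z in the LiDAR frame.
    voxel_size: edge lengths of one voxel in meters (5cm x 5cm x 10cm here).
    pc_range:   ((x_min, y_min, z_min), (x_max, y_max, z_max)) scene crop
                (an illustrative ~70m x 80m x 4m region).

    Returns the integer indices of the non-empty voxels and, for each one,
    the mean coordinate of its points, which serves as the initial voxel feature.
    """
    (x_min, y_min, z_min), (x_max, y_max, z_max) = pc_range
    mask = ((points[:, 0] >= x_min) & (points[:, 0] < x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] < y_max) &
            (points[:, 2] >= z_min) & (points[:, 2] < z_max))
    pts = points[mask]

    # integer voxel coordinates of every point
    coords = np.floor((pts - np.array([x_min, y_min, z_min])) /
                      np.array(voxel_size)).astype(np.int64)

    # merge points that fall into the same voxel and average their coordinates
    unique_coords, inverse = np.unique(coords, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)          # guard against NumPy version differences
    sums = np.zeros((len(unique_coords), 3), dtype=np.float64)
    counts = np.zeros(len(unique_coords), dtype=np.int64)
    np.add.at(sums, inverse, pts)
    np.add.at(counts, inverse, 1)
    mean_xyz = sums / counts[:, None]      # initial feature of each non-empty voxel

    return unique_coords, mean_xyz
```

The centers of these non-empty voxels form the "voxelized point cloud" of Fig. 3, which is then processed by the sparse convolution based encoder-decoder.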
The key observation is that the ground-truth boxes of 3D object detection not only automatically provide accurate segmentation masks, because 3D objects are naturally separated in 3D scenes, but also imply the relative location of each foreground 3D point within the ground-truth boxes. This is very different from 2D object detection, where 2D object boxes might contain only a portion of an object due to occlusion and thus cannot provide accurate relative locations for each 2D pixel. These relative locations of foreground points encode valuable information about 3D objects and are beneficial for 3D object detection, because foreground objects of the same class (e.g., the car class) generally have similar 3D shapes and point distributions. The relative locations of foreground points therefore provide strong cues for box scoring and localization. We name the relative locations of the 3D foreground points w.r.t. their corresponding boxes the intra-object part locations.

Specifically, we voxelize the 3D space into regular voxels and extract the voxel-wise features of each non-empty voxel by stacking sparse convolutions and sparse deconvolutions, where the initial feature of each voxel is simply calculated as the mean of the point coordinates within that voxel in the LiDAR coordinate system. The center of each non-empty voxel is considered as a point to form a new point cloud with the point-wise features (i.e., the voxel-wise features), which is approximately equivalent to the raw point cloud as shown in Fig. 3, since the voxel size is much smaller (e.g., 5cm×5cm×10cm in our method) compared to the whole 3D space (∼70m×80m×4m). For each 3D scene in the KITTI dataset [33], there are generally about 16,000 non-empty voxels. The voxelized point cloud can not only be processed by the more efficient sparse convolution based backbone, but also remains approximately equivalent to the raw point cloud for 3D object detection.

Those intra-object part locations provide rich information for learning discriminative 3D features from point clouds, but were never explored in previous 3D object detection methods. With such rich supervision, we propose a novel part-aware and aggregation 3D object detector, Part-A2 net, for 3D object detection from point cloud. Specifically, we propose to use the free-of-charge 3D intra-object part location labels and segmentation labels as extra supervision to learn better 3D features in the first stage. The predicted 3D intra-object part locations and point-wise 3D features within each 3D proposal are then aggregated in the second stage to score the boxes and refine their locations. The overall framework is illustrated in Fig. 2.

Our sparse convolution based backbone is designed based on an encoder-decoder architecture. The spatial resolution of the input feature volume is downsampled 8 times by a series of sparse convolution layers with stride 2, and is then gradually upsampled to the original resolution by sparse deconvolutions for voxel-wise feature learning. The detailed network structure is illustrated in Sec. 3.5 and Fig. 7. Our newly designed 3D sparse convolution based backbone results in better 3D box recall than the PointNet++ based backbone of our preliminary PointRCNN framework [32] (as shown by the experimental results in Table 1), which demonstrates the effectiveness of this new backbone for point-wise feature learning.

3.1.2 Estimation of foreground points and intra-object part locations

The segmentation masks help the network to distinguish foreground points from the background, while the intra-object part locations provide rich information for the neural network to recognize and detect 3D objects. For instance, the side of a vehicle is usually a plane parallel to a side of its corresponding bounding box. By learning to estimate not only the foreground segmentation mask but also the intra-object part location of each point, the neural network develops the ability to infer the shape and pose of the objects, which is crucial for 3D object detection.

Formulation of intra-object part location. As shown in Fig. 4, we formulate the intra-object part location of each foreground point as its relative location in the 3D ground-truth bounding box that it belongs to. We denote three continuous values (x(part), y(part), z(part)) as the target intra-object part location of the foreground point (x(p), y(p), z(p)), which can be calculated as

x̃(p) = (x(p) − x(c)) cos θ + (y(p) − y(c)) sin θ,
ỹ(p) = −(x(p) − x(c)) sin θ + (y(p) − y(c)) cos θ,   (1)
x(part) = x̃(p) / l + 0.5,   y(part) = ỹ(p) / w + 0.5,   z(part) = (z(p) − z(c)) / h + 0.5,

where (x(c), y(c), z(c)) is the box center, (h, w, l) is the box size (height, width, length), and θ is the box orientation in bird-view. The relative part location of a foreground point satisfies x(part), y(part), z(part) ∈ [0, 1], and the part location of the object center is therefore (0.5, 0.5, 0.5). Note that the intra-object part location coordinate system follows a similar definition to KITTI's global coordinate system, where the direction of z is perpendicular to the ground, and x and y are parallel to the horizontal plane.

Fig. 4. Illustration of intra-object part locations for foreground points. Here we use interpolated colors to indicate the intra-object part location of each point. Best viewed in colors.

Learning foreground segmentation and intra-object part location estimation. As shown in Fig. 2, given the above sparse convolution based backbone, two branches are appended to the output features of the encoder-decoder backbone for segmenting the foreground points and predicting their intra-object part locations. Both branches utilize the sigmoid function as the last non-linearity for generating their outputs. The segmentation scores of foreground points indicate the confidence of the predicted intra-object part locations, since the intra-object part locations are defined and learned on foreground points only during training. Since the number of foreground points is generally much smaller than that of the background points in large-scale outdoor scenes, we adopt the focal loss [21] for the point segmentation loss Lseg to handle the class imbalance issue

Lseg(pt) = −αt (1 − pt)^γ log(pt),   with pt = p for foreground points and pt = 1 − p otherwise,   (2)

where p is the predicted foreground probability of a single 3D point and we keep αt = 0.25 and γ = 2 as in the original paper. All points inside the ground-truth boxes are utilized as positive points and the others are considered as negative points for training.

For estimating the intra-object part location (denoted as (x(part), y(part), z(part))) of each foreground point, since the targets are bounded between [0, 1], we apply binary cross entropy losses to each foreground point as follows

Lpart(u(part)) = −u(part) log(˜u(part)) − (1 − u(part)) log(1 − ˜u(part))   for u ∈ {x, y, z},   (3)

where ˜u(part) is the predicted intra-object part location from the network output, and u(part) is the corresponding ground-truth intra-object part location. Note that the part location estimation is only conducted for foreground points.
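As a concrete illustration of Eqs. (1) and (3), the following sketch computes the intra-object part location targets of the foreground points of one ground-truth box and the corresponding binary cross entropy loss. Function names are illustrative and the focal segmentation loss of Eq. (2) is omitted for brevity; this is a simplified reference, not the released implementation.

```python
import numpy as np

def intra_object_part_targets(points, box):
    """Intra-object part locations (Eq. 1) for foreground points of one GT box.

    points: (N, 3) foreground points (x, y, z) belonging to this box.
    box:    (cx, cy, cz, h, w, l, theta): center, size and bird-view orientation.

    Returns (N, 3) targets (x_part, y_part, z_part), each in [0, 1],
    with (0.5, 0.5, 0.5) at the object center.
    """
    cx, cy, cz, h, w, l, theta = box
    dx, dy = points[:, 0] - cx, points[:, 1] - cy
    # rotate the offsets into the box's local frame (x' along length, y' along width)
    local_x = dx * np.cos(theta) + dy * np.sin(theta)
    local_y = -dx * np.sin(theta) + dy * np.cos(theta)
    x_part = local_x / l + 0.5
    y_part = local_y / w + 0.5
    z_part = (points[:, 2] - cz) / h + 0.5
    return np.stack([x_part, y_part, z_part], axis=1)

def part_location_loss(pred_part, gt_part, eps=1e-6):
    """Binary cross entropy of Eq. (3), averaged over foreground points and axes."""
    pred = np.clip(pred_part, eps, 1.0 - eps)
    bce = -(gt_part * np.log(pred) + (1.0 - gt_part) * np.log(1.0 - pred))
    return bce.mean()
```

Since the ground-truth boxes fully enclose their objects in 3D, the targets returned by the first function are always inside [0, 1], which is why the bounded sigmoid outputs and the binary cross entropy loss are a natural fit.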
3.1.3 3D proposal generation from point cloud

To aggregate the predicted intra-object part locations and the learned point-wise 3D features for improving 3D object detection in the second stage, we need to generate 3D proposals to group the foreground points that belong to the same object. Here we investigate two strategies for 3D proposal generation from point cloud, the anchor-free scheme and the anchor-based scheme, to handle different scenarios. The anchor-free strategy is more memory efficient, while the anchor-based strategy achieves higher recall with more memory cost.

Anchor-free 3D proposal generation. Our model with this strategy is denoted as Part-A2-free. We propose a novel scheme, similar to our preliminary PointRCNN [32], to generate 3D proposals in a bottom-up manner. As shown in the left part of Fig. 2, we append an extra branch to the decoder of our sparse convolution backbone to generate a 3D proposal from each point that is predicted as foreground.

However, if the algorithm directly estimates the object's center location from each foreground point, the regression targets vary over a large range. For instance, for a foreground point at the corner of an object, its relative offset to the object center is much larger than that of a foreground point on the side of the object. If the relative offsets w.r.t. each foreground point were directly predicted with conventional regression losses (e.g., L1 or L2 losses), the loss would be dominated by the errors of the corner foreground points.

Fig. 5. Illustration of bin-based center localization. The surrounding area along the X and Y axes of each foreground point is split into a series of bins to locate the object center.

To solve the issue of the large varying range of the regression targets, we propose the bin-based center regression loss. As shown in Fig. 5, we split the surrounding bird-view area of each foreground point into a series of discrete bins along the X and Y axes by dividing a search range S on each axis into bins of uniform length δ, which represent different object centers (x, y) on the X-Y plane. We observe that conducting bin-based classification with a cross-entropy loss for the X and Y axes, instead of direct regression with a smooth-L1 loss [14], results in more accurate and robust center localization. To refine the localization after the assignment to an X-Y bin, a small residual is also estimated. The overall regression loss for the X or Y axis therefore consists of a bin classification loss and a residual regression loss within the classified bin. For the center location z along the vertical Z axis, we directly utilize the smooth-L1 loss for regression, since most objects' z values are within a very small range.
The object center regression targets could therefore be formulated as

bin_x(p) = ⌊(x(c) − x(p) + S) / δ⌋,   bin_y(p) = ⌊(y(c) − y(p) + S) / δ⌋,
res_u(p) = (1/δ) · ( u(c) − u(p) + S − (bin_u(p) · δ + δ/2) )   for u ∈ {x, y},   (4)
res_z(p) = z(c) − z(p),

where δ is the bin size, S is the search range, (x(p), y(p), z(p)) are the coordinates of a foreground point of interest, (x(c), y(c), z(c)) are the center coordinates of its corresponding object, bin_x(p) and bin_y(p) are the ground-truth bin assignments along the X and Y axes, and res_x(p) and res_y(p) are the ground-truth residuals for further location refinement within the assigned bins.

Since our proposed bottom-up proposal generation strategy is anchor-free, it does not have an initial value for the box orientation. Hence we directly divide the orientation range 2π into discrete bins of bin size ω, and calculate the bin classification target bin_θ(p) and residual regression target res_θ(p) as

bin_θ(p) = ⌊((θ + π) mod 2π) / ω⌋,
res_θ(p) = (2/ω) · ( ((θ + π) mod 2π) − (bin_θ(p) · ω + ω/2) ).   (5)

Thus the overall 3D bounding box regression loss Lbox could be formulated as

L_bin(p) = Σ_{u ∈ {x, y, θ}} ( Lce(ˆbin_u(p), bin_u(p)) + Lsmooth-L1(ˆres_u(p), res_u(p)) ),
L_res(p) = Σ_{v ∈ {z, h, w, l}} Lsmooth-L1(ˆres_v(p), res_v(p)),   (6)
Lbox = L_bin(p) + L_res(p),

where ˆbin_u(p) and ˆres_u(p) are the predicted bin assignments and residuals of the foreground point p, bin_u(p) and res_u(p) are the ground-truth targets calculated as above, ˆres_v(p) is the predicted center residual along the vertical axis or the size residual with respect to the average object size of each class in the entire training set, res_v(p) is the corresponding ground-truth target of ˆres_v(p), Lce denotes the cross-entropy classification loss, and Lsmooth-L1 denotes the smooth-L1 loss. Note that the box regression loss Lbox is only applied to the foreground points.

In the inference stage, the regressed x, y and θ are obtained by first choosing the bin center with the highest predicted confidence and then adding the predicted residuals. Based on this anchor-free strategy, our method not only fully explores the 3D information from point cloud for 3D proposal generation, but also avoids using a large set of predefined 3D anchor boxes in the 3D space by constraining the 3D proposals to be only generated by foreground points.
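A minimal sketch of the bin-based target encoding of Eqs. (4)-(5) for a single foreground point is given below. The search range S, the bin size δ and the number of orientation bins are illustrative defaults, since their exact values are not specified in this section, and the function name is hypothetical.

```python
import numpy as np

def encode_bin_targets(fg_point, center, theta, search_range=3.0,
                       bin_size=0.5, num_angle_bins=12):
    """Bin-based localization targets of Eqs. (4)-(5) for one foreground point.

    fg_point: (x_p, y_p, z_p) foreground point.
    center:   (x_c, y_c, z_c) center of its ground-truth box.
    theta:    box orientation in [-pi, pi).
    search_range (S) and bin_size (delta) define the X/Y bins; the angle
    bin width is omega = 2*pi / num_angle_bins.
    """
    x_p, y_p, z_p = fg_point
    x_c, y_c, z_c = center

    targets = {}
    for axis, (u_c, u_p) in (('x', (x_c, x_p)), ('y', (y_c, y_p))):
        shifted = u_c - u_p + search_range
        bin_id = int(np.floor(shifted / bin_size))
        # residual inside the assigned bin, normalized by the bin size
        residual = (shifted - (bin_id * bin_size + bin_size / 2.0)) / bin_size
        targets['bin_' + axis] = bin_id
        targets['res_' + axis] = residual

    targets['res_z'] = z_c - z_p                 # plain regression along Z (smooth-L1)

    omega = 2.0 * np.pi / num_angle_bins
    angle = (theta + np.pi) % (2.0 * np.pi)
    bin_theta = int(np.floor(angle / omega))
    targets['bin_theta'] = bin_theta
    targets['res_theta'] = (angle - (bin_theta * omega + omega / 2.0)) * (2.0 / omega)
    return targets
```

During inference the decoding simply reverses this: pick the bin with the highest classification score and add back its (denormalized) residual, as described above.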
Anchor-based 3D proposal generation. Our model with this strategy is denoted as Part-A2-anchor. As illustrated in Fig. 2, in stage-I the sparse convolution based encoder takes a voxelized point cloud of spatial shape M × N × H and produces an 8× downsampled M/8 × N/8 2D bird-view feature map with H/16 × D feature channels, where H/16 indicates that the feature volume is downsampled 16 times along the Z axis, D is the feature dimension of each encoded feature voxel, and ×D denotes concatenating the features of different heights at each x-y bird-view location to obtain a 1D feature vector. We then append a Region Proposal Network (RPN) head similar to [7] to this bird-view feature map for 3D proposal generation with predefined 3D anchors. Each class has 2 × M/8 × N/8 predefined anchors of the class-specific anchor size, where each pixel of the bird-view feature map has one anchor parallel to the X axis and one anchor parallel to the Y axis. Each class has its own predefined anchors, since the object sizes of different classes vary significantly. For instance, we use (l = 3.9, w = 1.6, h = 1.56) meters for cars, (l = 0.8, w = 0.6, h = 1.7) meters for pedestrians and (l = 1.7, w = 0.6, h = 1.7) meters for cyclists on the KITTI dataset.

The anchors are associated with the ground-truth boxes by calculating the 2D bird-view Intersection-over-Union (IoU), where the positive IoU thresholds are empirically set as 0.6, 0.5, 0.5, and the negative IoU thresholds are 0.45, 0.35, 0.35 for cars, pedestrians and cyclists, respectively. We append two convolution layers with kernel size 1 × 1 × 1 to the bird-view feature map for proposal classification and box regression. We use a focal loss similar to Eq. (2) for anchor scoring, and directly use a residual-based regression loss for the positive anchors. Here we adopt the commonly used smooth-L1 loss for regression, since the center distances between anchors and their corresponding ground-truth boxes are generally within a smaller range than in the anchor-free strategy due to the IoU thresholds. Given a candidate anchor (x(a), y(a), z(a), h(a), w(a), l(a), θ(a)) and its target ground-truth box (x(gt), y(gt), z(gt), h(gt), w(gt), l(gt), θ(gt)), the residual-based box regression targets for center, angle and size are defined as

Δx(a) = (x(gt) − x(a)) / d(a),   Δy(a) = (y(gt) − y(a)) / d(a),   Δz(a) = (z(gt) − z(a)) / h(a),
Δl(a) = log(l(gt) / l(a)),   Δw(a) = log(w(gt) / w(a)),   Δh(a) = log(h(gt) / h(a)),   (7)
Δθ(a) = sin(θ(gt) − θ(a)),   where d(a) = sqrt( (l(a))^2 + (w(a))^2 ),

where the orientation target is encoded as sin(θ(gt) − θ(a)) to eliminate the ambiguity of the cyclic values of the orientation. However, this encoding maps two opposite directions to the same value, so we adopt an extra convolution layer with kernel size 1 × 1 × 1 on the bird-view feature map, as in [7], for classifying the two opposite orientation directions, where the direction target is calculated as follows: if θ(gt) is positive, the direction target is one, otherwise it is zero (note that θ(gt) ∈ [−π, π)). We use a cross entropy loss similar to Eq. (3) for this binary classification of the orientation direction, which is denoted as Ldir. Then the overall 3D bounding box regression loss Lbox could be formulated as

Lbox = Σ_{res ∈ {x, y, z, l, h, w, θ}} Lsmooth-L1(Δˆres(a), Δres(a)) + β Ldir,   (8)

where Δˆres(a) is the predicted residual for the candidate anchor, Δres(a) is the corresponding ground-truth target calculated as in Eq. (7), and the loss weight β = 0.1. Note that the box regression loss Lbox is only applied to the positive anchors.
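The anchor-based residual encoding of Eq. (7), together with the binary direction target described above, can be sketched as follows. This is an illustrative reference assuming boxes are given as (x, y, z, h, w, l, θ) in the LiDAR frame; it is not the released implementation.

```python
import numpy as np

def encode_anchor_residuals(anchor, gt_box):
    """Residual regression targets of Eq. (7) for one positive anchor.

    anchor, gt_box: (x, y, z, h, w, l, theta) in LiDAR coordinates.
    Returns the seven regression targets plus the binary direction target
    used by the auxiliary orientation classifier.
    """
    xa, ya, za, ha, wa, la, ta = anchor
    xg, yg, zg, hg, wg, lg, tg = gt_box

    d_a = np.sqrt(la ** 2 + wa ** 2)          # bird-view diagonal of the anchor
    dx = (xg - xa) / d_a
    dy = (yg - ya) / d_a
    dz = (zg - za) / ha
    dl = np.log(lg / la)
    dw = np.log(wg / wa)
    dh = np.log(hg / ha)
    dtheta = np.sin(tg - ta)                  # sin encoding removes the cyclic ambiguity

    # the sin encoding maps opposite headings to the same value, so a separate
    # binary direction target disambiguates them (theta_gt in [-pi, pi))
    dir_target = 1 if tg > 0 else 0
    return np.array([dx, dy, dz, dh, dw, dl, dtheta]), dir_target
```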
Discussion of the two 3D proposal generation strategies. Both 3D proposal generation strategies have their advantages and limitations. The proposed anchor-free strategy is generally light-weight and memory efficient, because it does not require evaluating a large number of anchors at each spatial location of the 3D space. The efficiency is more pronounced for multi-class object detection, since different classes generally require different anchor boxes, whereas the anchor-free scheme can share the point-wise features for generating proposals for multiple classes. The anchor-based proposal generation strategy achieves slightly higher recall by covering the whole bird-view feature map with its predefined anchors for each class, but it has more parameters and requires more GPU memory. The detailed experiments and comparison are discussed in Sec. 4.1.4.

# 3.2 RoI-aware point cloud feature pooling

Given the predicted intra-object part locations and the 3D proposals, we aim to conduct box scoring and proposal refinement by aggregating the part information and the learned point-wise features of all points within the same proposal. In this subsection, we first introduce the canonical transformation to reduce the effects of the rotation and location variations of different 3D proposals, and then propose the RoI-aware point cloud feature pooling module to eliminate the ambiguity of previous point cloud pooling operations and to encode the position-specific features of 3D proposals for box refinement.

Canonical transformation. We observe that the box refinement targets can be better estimated by the following box refinement stage if they are normalized in a canonical coordinate system. We transform the pooled points belonging to each proposal into the individual canonical coordinate system of the corresponding 3D proposal. The canonical coordinate system of one 3D proposal is defined such that (1) the origin is located at the center of the box proposal; (2) the local X′ and Y′ axes are approximately parallel to the ground plane, with X′ pointing towards the head direction of the proposal and Y′ perpendicular to X′; and (3) the Z′ axis remains the same as that of the global coordinate system. All pooled points' coordinates p of the box proposal are transformed to the canonical coordinate system as p̃ by the proper rotation and translation. The positive 3D proposals and their corresponding ground-truth 3D boxes are also transformed to the canonical coordinate system to calculate the residual regression targets for box refinement. The proposed canonical coordinate system substantially eliminates the rotation and location variations of different 3D proposals and improves the efficiency of feature learning for the later box location refinement.

RoI-aware point cloud feature pooling. The point cloud pooling operation of our preliminary work PointRCNN [32] simply pools the point-wise features of the points whose locations are inside the 3D proposal. All the inside points' features are aggregated by a PointNet++ encoder for refining the proposal in the second stage. However, we observe that this operation loses much 3D geometric information and introduces ambiguity between different 3D proposals. This phenomenon is illustrated in Fig. 6, where different proposals result in the same pooled points. The identical pooled features introduce adverse effects to the following refinement stage.

Fig. 6. Illustration of the proposed RoI-aware point cloud feature pooling. The previous point cloud pooling approach could not effectively encode the proposal's geometric information (blue dashed boxes). Our proposed RoI-aware point cloud pooling method encodes the box's geometric information (the green box) by keeping the empty voxels, which can be efficiently processed by the following sparse convolutions.

Therefore, we propose the RoI-aware point cloud pooling module to evenly divide each 3D proposal into regular voxels with a fixed spatial shape (Lx × Ly × Lz), where Lx, Ly, Lz are integer hyperparameters of the pooling resolution in each dimension of the 3D proposals (e.g., 14 × 14 × 14 is adopted in our framework) and are independent of the 3D proposal sizes. Let F = {fi ∈ R^C0, i = 1, · · · , n} denote the point-wise features of all the inside points of a 3D proposal b, which are scattered into the divided voxels of the 3D proposal according to their local canonical coordinates X = {(x(ct)_i, y(ct)_i, z(ct)_i) ∈ R^3, i = 1, · · · , n}, where n is the number of inside points. Then the RoI-aware voxel-wise max pooling and average pooling operations could be denoted as

Q = RoIAwareMaxPool(X, F, b),   Q ∈ R^{Lx×Ly×Lz×C0},
Q = RoIAwareAvgPool(X, F, b),   Q ∈ R^{Lx×Ly×Lz×C0},   (10)

where Q is the pooled 3D feature volume of proposal b.
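The following is a dense, loop-based reference sketch of the canonical transformation and of the RoI-aware pooling of Eq. (10) (its per-voxel definition is detailed in Eq. (11) below). It is meant to clarify the semantics rather than to reproduce the released CUDA implementation; the zero initialization of the max-pooled volume assumes non-negative (post-ReLU) features, and the function names are illustrative.

```python
import torch

def canonical_transform(points, roi):
    """Transform points into the canonical frame of one 3D proposal.

    points: (N, 3) tensor of global x, y, z.
    roi:    (7,) tensor (cx, cy, cz, h, w, l, theta).
    """
    cx, cy, cz, h, w, l, theta = roi
    shifted = points - torch.stack([cx, cy, cz])
    cos_t, sin_t = torch.cos(theta), torch.sin(theta)
    local_x = shifted[:, 0] * cos_t + shifted[:, 1] * sin_t
    local_y = -shifted[:, 0] * sin_t + shifted[:, 1] * cos_t
    return torch.stack([local_x, local_y, shifted[:, 2]], dim=1)

def roi_aware_pool(points, feats, roi, resolution=14, mode='max'):
    """RoI-aware pooling of Eqs. (10)-(11) for one proposal (slow reference version).

    Returns an (L, L, L, C) volume; empty voxels stay zero, which is exactly the
    information that encodes the proposal's geometry.
    """
    h, w, l = roi[3], roi[4], roi[5]
    local = canonical_transform(points, roi)
    # voxel index of every point inside the proposal
    norm = (local / torch.stack([l, w, h]) + 0.5) * resolution
    idx = torch.floor(norm).long()
    inside = ((idx >= 0) & (idx < resolution)).all(dim=1)
    idx, feats = idx[inside], feats[inside]

    C = feats.shape[1]
    out = feats.new_zeros(resolution, resolution, resolution, C)
    count = feats.new_zeros(resolution, resolution, resolution, 1)
    for (ix, iy, iz), f in zip(idx.tolist(), feats):
        if mode == 'max':
            # assumes non-negative features, so zero is a valid identity for max
            out[ix, iy, iz] = torch.maximum(out[ix, iy, iz], f)
        else:                                   # average pooling
            out[ix, iy, iz] += f
            count[ix, iy, iz] += 1
    if mode != 'max':
        out = out / count.clamp(min=1)
    return out
```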
Specifically, the feature vector Qk at the k-th voxel for the voxel-wise max pooling and average pooling could be computed as

Qk = max{ fi : i ∈ Nk }  if |Nk| > 0,   and   Qk = 0  if |Nk| = 0   (max pooling),
Qk = (1 / |Nk|) Σ_{i ∈ Nk} fi  if |Nk| > 0,   and   Qk = 0  if |Nk| = 0   (average pooling),   (11)

where Nk is the set of points belonging to the k-th voxel and k ∈ {1, · · · , Lx × Ly × Lz}. Note that the features of empty voxels (|Nk| = 0) are set to zeros and marked as empty for the following sparse convolution based feature aggregation.

The proposed RoI-aware point cloud pooling module encodes different 3D proposals with the same local spatial coordinates, where each voxel encodes the features of a corresponding fixed grid of the 3D proposal box. This position-specific feature pooling better captures the geometry of the 3D proposal and results in an effective representation for the follow-up box scoring and location refinement. Moreover, the proposed RoI-aware pooling module is differentiable, which enables the whole framework to be end-to-end trainable.

# 3.3 Stage-II: Part location aggregation for confidence prediction and 3D box refinement

By considering the spatial distribution of the predicted intra-object part locations and the learned point-wise part features within a 3D box proposal from stage-I, it is reasonable to aggregate all the information within a proposal for box proposal scoring and refinement. Based on the pooled 3D features, we train a sub-network to robustly aggregate this information to score the box proposals and refine their locations.

Fusion of predicted part locations and semantic part features. As shown in the right part of Fig. 2, we adopt the proposed RoI-aware point cloud pooling module to obtain discriminative features of each 3D proposal. Let b denote a single 3D proposal, and for all of its inside points (with canonical coordinates X = {(x(ct)_i, y(ct)_i, z(ct)_i) ∈ R^3, i = 1, · · · , n}), let F1 = {(x(part)_i, y(part)_i, z(part)_i, si) ∈ R^4, i = 1, · · · , n} denote their predicted point-wise part locations and semantic scores from stage-I, and let F2 = {f(sem)_i ∈ R^C, i = 1, · · · , n} denote their point-wise semantic part features learned by the backbone network. Here n is the total number of inside points of proposal b. Then the part feature encoding of proposal b could be formulated as

Q(part) = RoIAwareAvgPool(X, F1, b),
Q(sem) = RoIAwareMaxPool(X, F2, b),   (12)
Q(roi)_k = [ G(Q(part)_k), Q(sem)_k ],

where G denotes a submanifold sparse convolution layer that transforms the pooled part locations to the same feature dimension C to match Q(sem), and [·, ·] denotes feature concatenation. Here Q(part), Q(sem) and Q(roi) have the same spatial shape (14 × 14 × 14 by default). The fused features Q(roi) encode both the geometric and the semantic information of the box proposal provided by the backbone network. Note that we use average pooling for the predicted intra-object part locations F1 to obtain a representative predicted part location for each voxel of the proposal, while we use max pooling for the semantic part features F2.
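A small sketch of the fusion in Eq. (12) is shown below; a dense 3D convolution stands in for the submanifold sparse convolution G, and the module name and channel sizes are illustrative assumptions rather than the released architecture.

```python
import torch
import torch.nn as nn

class PartFeatureFusion(nn.Module):
    """Fusion of Eq. (12): lift the 4-channel pooled part volume to C channels
    and concatenate it with the pooled semantic feature volume.
    Shapes follow the 14 x 14 x 14 pooling described above."""

    def __init__(self, part_channels=4, sem_channels=128):
        super().__init__()
        self.part_transform = nn.Sequential(
            nn.Conv3d(part_channels, sem_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(sem_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, q_part, q_sem):
        # q_part: (B, 4, 14, 14, 14) averaged part locations + segmentation score
        # q_sem:  (B, C, 14, 14, 14) max-pooled semantic features
        return torch.cat([self.part_transform(q_part), q_sem], dim=1)
```

In the actual framework the empty voxels are kept sparse, so a submanifold sparse convolution avoids propagating features into them, which a dense convolution like the stand-in above would not respect.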
Sparse convolution for part information aggregation. For each 3D proposal, the fused features Q(roi) from all spatial locations inside the proposal need to be aggregated for robust box scoring and refinement. As shown in the right part of Fig. 2, we stack several 3D sparse convolutional layers with kernel size 3 × 3 × 3 to aggregate all part features of a proposal as the receptive field increases. We also insert a sparse max-pooling with kernel size 2 × 2 × 2 and stride 2 between the sparse convolutional layers to downsample the feature volume to 7 × 7 × 7, which saves computation and parameters. Finally, we vectorize it into a feature vector (empty voxels are kept as zeros) and feed it into two branches for box scoring and location refinement.

Compared with the naive approach of directly vectorizing the pooled 3D feature volume into a feature vector, our sparse convolution based part aggregation strategy can learn the spatial distribution of the predicted part locations effectively by aggregating features from local to global scales. The sparse convolution strategy also enables the larger 14 × 14 × 14 pooling size while saving computation, parameters and GPU memory.

3D IoU guided box scoring. For the box scoring branch of stage-II, inspired by [35], [62], we normalize the 3D Intersection-over-Union (IoU) between a 3D proposal and its corresponding ground-truth box as the soft label for proposal quality evaluation. The proposal quality q(a) is defined as

q(a) = 1  if IoU > 0.75,   q(a) = 0  if IoU < 0.25,   q(a) = 2 IoU − 0.5  otherwise,   (13)

which is supervised by a binary cross entropy loss Lscore defined similarly to Eq. (3). Our experiments in Sec. 4.1.7 show that, compared with the traditional classification based box scoring, the IoU guided box scoring leads to slightly better performance.

# 3.4 Overall loss

Our whole network is end-to-end trainable, and the overall loss function consists of the part-aware loss and the part-aggregation loss.

Losses of part-aware stage-I. For the part-aware stage-I, the loss function consists of three terms, including the focal loss for foreground point segmentation, the binary cross entropy loss for the regression of part locations, and the smooth-L1 loss for 3D proposal generation,

Laware = Lseg + (1 / Npos) Σ Lpart + λ (1 / Mpos) Σ Lbox,   (14)

where the loss weight λ = 2.0, Npos is the total number of foreground points, Mpos = Npos for the Part-A2-free model, and Mpos is the total number of positive anchors for the Part-A2-anchor model. For the Lbox loss, as mentioned in Sec. 3.1.3, we adopt the bin-based box generation loss for Part-A2-free and the residual-based box regression loss for the Part-A2-anchor model.

Fig. 7. The sparse convolution based encoder-decoder backbone network of the part-aware stage-I of the Part-A2-anchor model.

Losses of part-aggregation stage-II. For the part-aggregation stage-II, the loss function includes a binary cross entropy loss term for box quality regression and a smooth-L1 loss term for 3D box proposal refinement,

Laggregation = Lscore + (1 / Tpos) Σ Lbox_refine,   (15)

where Tpos is the number of positive proposals, and we use the residual-based box regression loss for Lbox_refine as in Eq. (7), which includes the box center refinement loss, the size refinement loss and the angle refinement loss. Besides that, we also add the corner regularization loss Lcorner as used in [6], and the final box refinement loss is

Lbox_refine = Σ_{res ∈ {x, y, z, l, h, w, θ}} Lsmooth-L1(Δˆres(r), Δres(r)) + Lcorner,   (16)

where Δˆres(r) is the predicted residual for the 3D proposal, Δres(r) is the corresponding ground-truth target calculated similarly to Eq. (7), and all losses here have equal loss weights. Note that the angle refinement target is directly encoded as Δθ(r) = θ(gt) − θ(r), since the angle difference between a proposal and its corresponding ground-truth box is within a small range due to the IoU constraint for positive proposals.
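The soft box-scoring target of Eq. (13) used by the stage-II quality loss can be written compactly. A minimal sketch, assuming the 3D IoU with the matched ground-truth box has already been computed elsewhere:

```python
import torch

def box_quality_target(iou_with_gt):
    """Soft box-scoring target of Eq. (13) from the 3D IoU between a proposal
    and its ground-truth box; it is supervised with binary cross entropy."""
    iou = torch.as_tensor(iou_with_gt, dtype=torch.float32)
    q = 2.0 * iou - 0.5                          # linear ramp between the two thresholds
    q = torch.where(iou > 0.75, torch.ones_like(q), q)
    q = torch.where(iou < 0.25, torch.zeros_like(q), q)
    return q
```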
Overall loss. Hence the overall loss function of our Part-A2 net for end-to-end training is calculated as

Ltotal = Laware + Laggregation,   (17)

where the losses of the two stages have equal loss weights.

# 3.5 Implementation details

We design a UNet-like architecture [45] for learning point-wise feature representations with 3D sparse convolution and 3D sparse deconvolution on the obtained sparse voxels. The spatial resolution is downsampled 8 times by three sparse convolutions of stride 2, each of which is followed by several submanifold sparse convolutions. As illustrated in Fig. 7, we also design an up-sampling block similar to that of [63], based on sparse operations, for refining the fused features.

Network details. As shown in Fig. 7, for the part-aware stage-I of the Part-A2-anchor model, the spatial feature volumes have four scales with feature dimensions 16-32-64-64, and we use three 3D sparse convolution layers with kernel size 3 × 3 × 3 and stride 2 to downsample the spatial resolution by 8 times. We stack two submanifold convolution layers at each level with kernel size 3 × 3 × 3 and stride 1. There are four sparse up-sampling blocks (see Fig. 7 for the structure) in the decoder to gradually upsample the features, with feature dimensions 64-64-32-16. Note that the stride of the last up-sampling block is 1 and the stride of the other three up-sampling blocks is 2. For the Part-A2-free net, we increase the feature dimensions of the decoder to 128 at each scale and use simple concatenation to fuse the features from the same level of the encoder and the previous layer of the decoder, since the learned point-wise features of the decoder should encode more discriminative features for the bottom-up 3D proposal generation.

For the part-aggregation stage, as shown in Fig. 2, the pooling size of the RoI-aware point cloud pooling module is 14 × 14 × 14, which is downsampled to 7 × 7 × 7 after being processed by the sparse convolutions and max-pooling with feature dimension 128. We vectorize the downsampled feature volumes into a single feature vector for the final box scoring and location refinement.

Training and inference details. We train the entire network end-to-end with the ADAM optimizer and a batch size of 6 for 50 epochs. The cosine annealing learning rate strategy is used with an initial learning rate of 0.001. We randomly select 128 proposals from each scene for training stage-II, with a 1 : 1 ratio of positive and negative proposals, where the positive proposals for box refinement have a 3D IoU with their corresponding ground-truth box of at least 0.55 for all classes, otherwise they are negative proposals, and the scoring target is defined as in Eq. (13) for the confidence prediction.

We conduct common data augmentation during training, including random flipping, global scaling with a scaling factor uniformly sampled from [0.95, 1.05], and global rotation around the vertical axis by an angle uniformly sampled from [−π/4, π/4]. To simulate objects in various environments, as in [7], we also randomly "copy" several ground-truth boxes with their inside points from other scenes and "paste" them into the current training scene. The whole training process of our proposed part-aware and part-aggregation networks takes about 17 hours on a single NVIDIA Tesla V100 GPU.
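A minimal sketch of the optimization setup just described (ADAM, initial learning rate 0.001, cosine annealing over 50 epochs) is given below. The number of steps per epoch is an illustrative placeholder that depends on the dataset size and the batch size of 6, and the loss helper named in the comment is hypothetical.

```python
import torch

def build_optimization(model, num_epochs=50, steps_per_epoch=1000, lr=0.001):
    """Optimizer and schedule matching the training recipe above: ADAM with an
    initial learning rate of 0.001 and cosine annealing over the whole training."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=num_epochs * steps_per_epoch)
    return optimizer, scheduler

# sketch of one training step, where the total loss follows Eq. (17):
# for batch in dataloader:
#     loss = model_forward_and_loss(batch)      # L_aware + L_aggregation (hypothetical helper)
#     optimizer.zero_grad(); loss.backward(); optimizer.step(); scheduler.step()
```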
For inference, only 100 proposals are kept from part-aware stage-I with NMS threshold 0.85 for Part-A2-free and 0.7 for Part- A2-anchor, which are then scored and refined by the following part-aggregation stage-II. We finally apply the rotated NMS with threshold 0.01 to remove redundant boxes and generate the final 3D detection results. The overall inference time is about 70ms on a single Tesla V100 GPU card. # 3.6 Pros and cons. Our proposed 3D object detection framework has some advan- tages and disadvantages under different situations. Compared with the previous 3D object detection methods [1], [4], [5], [6], [7], [26], [29], [64], (1) our proposed method for the first time introduces the learning of intra-object part locations to improve the performance of 3D object detection from point cloud, where the predicted intra-object part locations effectively encode the point distribution of 3D objects to benefit the 3D object detection. (2) The proposed RoI-aware feature pooling module eliminates the ambiguity of previous point cloud pooling operations and transforms the sparse point-wise features to the regular voxel features to encode the position-specific geometry and 9 semantic features of 3D proposals, which effectively bridges the proposal generation network and the proposal refinement network, and results in high detection accuracy. (4) Besides, the learning process of part locations could also be adopted to other tasks for learning more discriminative point-wise features, such as the instance segmentation of point cloud. The proposed RoI-aware pooling module could also be flexibly utilized on transforming the point-wise features from point-based networks (such as Point- Net++) to the sparse voxel-wise features, that could be processed by more efficient sparse convolution networks. On the other hand, our method also has some limitations. Since our method aims at high performing 3D object detection in autonomous driving scenarios, some parts of our method could not be well applied for the 3D object detection of indoor scenes. This is because the 3D bounding boxes in indoor scenes may overlap with each other (such as chairs under the table), therefore the 3D bounding box annotations of indoor scenes could not provide the accurate point-wise segmentation labels. Also, there are some categories whose orientation is not well-defined (such as the round tables), hence we could not generate accurate labels of the proposed intra-object part locations. However, our proposed anchor-free proposal generation strategy still shows great potential on the 3D proposal generation of indoor scenes since the indoor objects do not always stay on the ground and our anchor-free strategy avoids to set 3D anchors in the whole 3D space. 4 EXPERIMENTS In this section, we evaluate our proposed method with exten- sive experiments on the challenging 3D detection benchmark of KITTI [33] dataset. In Sec. 4.1, we present extensive ablation studies and analysis to investigate individual components of our models. In Sec. 4.2, we demonstrate the main results of our methods by comparing with state-of-the-art 3D detection methods. Finally we visualize some qualitative results of our proposed 3D detection model in Sec. 4.3. Dataset. There are 7481 training samples and 7518 test samples in the dataset of KITTI 3D detection benchmark. The training samples are divided into the train split (3712 samples) and val split (3769 samples) as the frequently used partition of KITTI dataset. 
All models are only trained on the train split, and evaluated on the val and test splits. Models. There are three main models in our experiments, i.e., Part-A2-free model, Part-A2-anchor model and our preliminary PointRCNN model [32]. The network details of Part-A2-free and Part-A2-anchor models have been demonstrated in Sec. 3, and the whole framework is illustrated in Fig. 2. As discussed in Sec. 3.1.3, the key differences between these two versions of Part-A2 models is that Part-A2-free generates 3D proposals in the bottom-up (anchor-free) manner, while the Part-A2-anchor net generates 3D proposals with the proposed anchor-based scheme. PointRCNN is in the preliminary version of this work [32]. It utilizes PointNet++ [28] to extract point-wise features, which are used to generate 3D proposals in a bottom-up manner via segmenting the foreground points as demonstrated in Sec. 3.1.3. Furthermore, in the stage-II of PointRCNN, we pool the inside points and their point-wise features for each 3D proposals, which are then fed to a second PointNet++ encoder to extract features of each 3D proposal for proposal confidence prediction and 3D box proposal refinement. Method PointRCNN Part-A2-free Part-A2-anchor Backbone PointNet++ SparseConvUNet SparseConvUNet Proposal Scheme Anchor-free Anchor-free Anchors-based Recall 74.81 81.54 85.58 TABLE 1 Recall (with 100 proposals) of the proposal generation stage by different backbone network and different proposal generation strategy. The experiments are conducted on the car class at moderate difficulty of the val split of KITTI dataset, and the evaluation metric is the 3D rotated IoU with threshold 0.7. # 4.1 From points to parts: ablation studies for Part-A2 net In this section, we provide extensive ablation experiments and analysis to investigate the individual components of our proposed Part-A2 net models. 4.1.1 SparseConvUNet v.s. PointNet++ backbones for 3D point-wise feature learning As mentioned in Sec. 3.1, instead of utilizing PointNet++ as the backbone network, we design a sparse convolution based UNet (denoted as SparseConvUNet) for point-wise feature learning, and the network details are illustrated in Fig. 7. We first compare the PointRCNN with PointNet++ backbone and Part-A2-free with SparseConvUNet backbone with the same loss functions to test these two different backbone networks. Table 1 shows that our SparseConvUNet based Part-A2-free (2nd row) achieves 81.54% recall with 3D IoU threshold 0.7, which is 6.73% higher than the recall of PointNet++ based PointRCNN (1st row), and it demonstrates that our new designed SparseConvUNet could learn more discriminative point-wise fea- tures from the point cloud for the 3D proposal generation. As shown in Table 5, we also provide the recall values of different number of proposals for these two backbones. We could find the recall of the sparse convolution based backbone consistently outperforms the recall of the PointNet++ based backbone, which further validates that the sparse convolution based backbone is better than the PointNet++ based backbone for point-wise feature learning and 3D proposal generation. Table 1 and Table 5 also show that our Part-A2-anchor model achieves higher recall than the Part-A2-free model. Therefore, in our remaining experimental sections, we mainly adopt the Part- A2-anchor model for ablation studies and experimental compari- son unless specified otherwise. 
4.1.2 Ablation studies for RoI-aware point cloud pooling In this section, we designed ablation experiments to validate the effectiveness of our proposed RoI-aware point cloud pooling module with the Part-A2-anchor model, and we also explored more pooling sizes to investigate the trend of performance when increasing the RoI pooling size. Effects of RoI-aware point cloud region pooling. As discussed in Sec. 3.2, the proposed RoI-aware point cloud pooling module normalizes different 3D proposals to the same coordinate system to encode geometric information of proposals. It solves the am- biguous encoding by previous 3D point cloud pooling schemes as shown in Fig. 6. The 3D proposals are divided into regular voxels to encode the position-specific features for each 3D proposal. To validate the effects of the RoI-aware pooling module, we conduct the following comparison experiments. (a) We replace 10 Pooling Scheme RoI fixed-sized pool RoI-aware pool RoI-aware pool Stage-II sparse conv FCs sparse conv APEasy APM od. APHard 78.61 79.32 79.47 88.78 89.46 89.47 78.05 78.77 78.54 TABLE 2 Effects of RoI-aware point cloud pooling by replacing the RoI-aware pooling or the sparse convolution, and the pooling sizes of all the settings are 14 × 14 × 14. The results are the 3D detection performance of car class on the val split of KITTI dataset. RoI Pooling Size APEasy APM od. APHard 6 × 6 × 6 8 × 8 × 8 10 × 10 × 10 12 × 12 × 12 14 × 14 × 14 16 × 16 × 16 89.02 89.09 89.44 89.61 89.47 89.52 78.85 78.97 79.15 79.35 79.47 79.45 78.04 78.15 78.42 78.50 78.54 78.56 TABLE 3 Effects of using different RoI-aware pooling sizes in our part-aggregation stage. The results are the 3D detection performance of car class on the val split of KITTI dataset. RoI-aware pooling by fixed-sized RoI pooling, i.e. pooling all 3D proposals with the same fixed-size (l = 3.9, w = 1.6, h = 1.56 meters for car) 3D box calculated from the mean object size of the training set with 14 × 14 × 14 grids. The center and orientation of the 3D grid are set as its corresponding 3D proposal’s center and orientation, respectively. This is very similar to the pooling scheme used in PointRCNN, where not all geometric information is well preserved during pooling. (b) We replace sparse convolutions of stage-II with several FC layers. As shown in Table 2, removing RoI-aware pooling substantially decreases detection accuracy, while replacing sparse convolutions of stage-II with FC layers achieves similar performance, which proves the effectiveness of our proposed RoI-aware pooling but not the sparse convolution contributes to the main improvements. The 14 × 14 pooling size was Effects of RoI pooling size. very commonly chosen for 2D object detection, and we follow the same setting to use 14 × 14 × 14 as the 3D RoI-aware pooling size. We also test different RoI pooling sizes as shown in Table 3. The pooling size shows robust performance for different 3D objects. Similar performances can be observed if the pooling sizes are greater than 12 × 12 × 12. 4.1.3 Sparse convolution v.s. fully-connected layers for part aggregation. In our Part-A2 net, after applying the RoI-aware point cloud pooling module, there are several ways to implement the part- aggregation stage. The simplest strategy is to directly vectorize the pooled feature volumes to a feature vector followed by several fully-connected layers for box scoring and refinement. 
From the 1st row of Table 4, we can see that this naive strategy already achieves promising results, which benefit from the effective representation of our RoI-aware point cloud pooling, since each position of the feature vector encodes a specific intra-object position of the object of interest and thus helps to learn the shape of the box better. In the 2nd row of Table 4, we further investigate using sparse convolution with kernel size 3 × 3 × 3 to aggregate the features gradually from local to global scales, which achieves slightly better results with the same pooling size 7 × 7 × 7. The 3rd row shows that fully-connected layers with the larger pooling size 14 × 14 × 14 achieve improved performance, but this design consumes much more computation and GPU memory. As mentioned in Sec. 3.3, our proposed part-aggregation network adopts a large pooling size of 14 × 14 × 14 to capture details and then uses sparse max-pooling to downsample the feature volumes for feature encoding, which achieves the best performance in the easy and moderate difficulty levels, as shown in Table 4, with lower computation and GPU memory cost than fully-connected layers.

Pooling Size | Stage-II | Sparse max-pool | AP_Easy | AP_Mod. | AP_Hard
7 × 7 × 7 | FCs | | 89.17 | | 78.03
7 × 7 × 7 | sparse conv | | 89.24 | 79.21 | 78.11
14 × 14 × 14 | FCs | | 89.46 | 79.32 | 78.77
14 × 14 × 14 | sparse conv | ✓ | 89.47 | 79.47 | 78.54

TABLE 4: Comparison of several different part-aggregation network structures. The results are the 3D detection performance on the car class of the val split of the KITTI dataset.

# 4.1.4 Ablation studies for 3D proposal generation

We investigate the two strategies for 3D proposal generation from point clouds, the anchor-based strategy and our proposed anchor-free strategy, used in the Part-A2-anchor and Part-A2-free models, respectively. In this section, we first experiment with and discuss the two proposal generation strategies in detail to provide a reference for choosing the better strategy under different settings of 3D proposal generation. Then we compare the performance of different center regression losses for the two strategies.

Anchor-free vs. anchor-based 3D proposal generation. We validate the effectiveness of our proposal generation strategies against state-of-the-art two-stage 3D detection methods. As shown in Table 5, our preliminary PointRCNN with anchor-free proposal generation and a PointNet++ backbone already achieves significantly higher recall than previous methods. With only 50 proposals, PointRCNN obtains 96.01% recall at IoU threshold 0.5, which outperforms the 91% recall of AVOD [4] by 5.01% at the same number of proposals; note that the latter method uses both the 2D image and the point cloud for proposal generation while we only use the point cloud as input.

We also report the recall of the 3D bounding boxes at IoU threshold 0.7 for our anchor-free and anchor-based strategies in Table 5. The Part-A2-free model (with the anchor-free proposal generation strategy) achieves 77.12% recall at IoU threshold 0.7 with only 50 proposals, which is much higher than the recall of our preliminary work PointRCNN, since the Part-A2-free model adopts the better sparse-convolution-based backbone. Our Part-A2-anchor model (with the anchor-based proposal generation strategy) further improves the recall to 83.71% at IoU threshold 0.7 with 50 proposals. This is because the anchor-based strategy has a large number of anchors to cover the entire 3D space more comprehensively and thus achieves a higher recall. However, the improvement comes with sacrifices, as it needs different sets of anchors for different classes at each spatial location.
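To make that scaling concrete, here is a rough back-of-the-envelope comparison (shown below, not taken from the paper). The bird's-eye-view grid resolution and the two anchor orientations per location are assumptions chosen only so the totals line up with the anchor counts reported later in Table 9; the anchor-free side simply scales with the number of input points.

```python
# Rough comparison of how many candidate boxes each strategy has to score.
# The 176 x 200 bird's-eye-view grid and the 2 orientations per location per
# class are illustrative assumptions made for this sketch.
bev_h, bev_w = 176, 200          # assumed BEV feature-map resolution
orientations = 2                 # assumed anchor orientations per location per class
num_points = 16000               # roughly the ~16k sampled points per scene

for num_classes in (1, 3):
    anchor_based = bev_h * bev_w * orientations * num_classes
    anchor_free = num_points     # one proposal per (foreground) point, class-agnostic
    print(f"{num_classes} class(es): anchor-based {anchor_based / 1e3:.1f}k boxes, "
          f"anchor-free ~{anchor_free / 1e3:.0f}k proposals")
# -> 1 class:  70.4k vs ~16k;   3 classes: 211.2k vs ~16k
```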
For instance, the anchor size of pedestrians is (l = 0.8m, w = 0.6m, h = 1.7m) while the anchor size of cars is (l = 3.9m, w = 1.6m, h = 1.56m); they are unlikely to share the same anchor. In contrast, our anchor-free strategy generates a single 3D proposal from each segmented foreground point even for many classes, since we only need to calculate the 3D size residual with respect to the corresponding average object size based on its semantic label.

RoIs # | 10 | 20 | 30 | 40 | 50 | 100 | 200 | 300
MV3D [1] (recall, IoU=0.5) | – | – | – | – | – | – | – | 91.00
AVOD [4] (recall, IoU=0.5) | 86.00 | – | – | – | 91.00 | – | – | –
PointRCNN (recall, IoU=0.5) | 86.66 | 91.83 | 93.31 | 95.55 | 96.01 | 96.79 | 98.03 | 98.21
PointRCNN (recall, IoU=0.7) | 29.87 | 32.55 | 32.76 | 40.04 | 40.28 | 74.81 | 76.29 | 82.29
Part-A2-free (recall, IoU=0.7) | 66.31 | 74.46 | 76.47 | 76.88 | 77.12 | 81.54 | 84.93 | 86.03
Part-A2-anchor (recall, IoU=0.7) | 80.68 | 81.64 | 82.90 | 83.05 | 83.71 | 85.58 | 89.32 | 91.64

TABLE 5: Recall of generated proposals for the compared methods with different numbers of RoIs and 3D IoU thresholds, for the car class at moderate difficulty of the val split. Note that of the previous methods, only MV3D [1] and AVOD [4] reported the recall rates of their proposals.

Method | Class | IoU Thresh | AP_Easy | AP_Mod. | AP_Hard
Part-A2-free | Car | 0.7 | 88.48 | 78.96 | 78.36
Part-A2-anchor | Car | 0.7 | 89.47 | 79.47 | 78.54
Part-A2-free | Cyclist | 0.5 | 88.18 | 73.35 | 70.75
Part-A2-anchor | Cyclist | 0.5 | 88.31 | 73.07 | 70.20
Part-A2-free | Pedestrian | 0.5 | 70.73 | 64.13 | 57.45
Part-A2-anchor | Pedestrian | 0.5 | 70.37 | 63.84 | 57.48

TABLE 6: 3D object detection results of the Part-A2-free net and the Part-A2-anchor net on the KITTI val split.

The 3D detection results of the Part-A2-free and Part-A2-anchor models on cars, cyclists and pedestrians are reported in Table 6. We can see that the 3D detection results on cyclists and pedestrians by Part-A2-free are comparable to those by the Part-A2-anchor model, while the results on cars by Part-A2-free are lower than those by Part-A2-anchor on the moderate and easy difficulties. Hence the bottom-up Part-A2-free model has better potential for multi-class 3D detection of small objects (such as cyclists and pedestrians) with lower memory cost, while the anchor-based Part-A2-anchor model may achieve slightly better performance on 3D detection of large objects such as cars. This is because the predefined anchors are closer to the center locations of objects with large sizes, while the bottom-up proposal generation strategy suffers from the difficulty of regressing large residuals from object surface points to object centers.

Center regression losses for 3D bounding box generation. We compare different center regression losses on our Part-A2-free net and our Part-A2-anchor net, including the proposed bin-based regression loss (Eq. (4)) and the residual-based regression loss (first row of Eq. (7)). As shown in Fig. 8, for the Part-A2-free net with the anchor-free proposal generation strategy, the bin-based regression loss (solid blue line) converges faster than the residual-based regression loss (solid red line). In contrast, for the Part-A2-anchor net with the anchor-based proposal generation scheme, the residual-based regression loss (dashed red line) converges faster and better than the bin-based regression loss (dashed blue line).
This demonstrates that the proposed bin-based center regression loss is more suitable for the anchor-free proposal generation strategy, since the center regression targets of the anchor-free scheme (generally from a surface point to the object center) vary a lot, and the bin-based localization can better constrain the regression targets and make the convergence faster and more stable. Fig. 8 also shows that the Part-A2-anchor net achieves better recall with the residual-based center regression loss, so we adopt the residual-based center localization loss for the Part-A2-anchor net, as mentioned in Sec. 3.1.3.

Fig. 8. Recall vs. training iterations for different center regression losses of the Part-A2-free net and the Part-A2-anchor net. The results are generated from the proposals of the Part-A2-free net and the Part-A2-anchor net on the car class of the val split with IoU threshold 0.5.

4.1.5 Benefits of intra-object part location prediction for 3D object detection

To validate the effectiveness of utilizing the free-of-charge intra-object part locations for 3D detection, we test removing the part location supervision from our Part-A2-anchor model. In the backbone network, we only remove the branch for predicting intra-object part locations and keep the other modules unchanged. The point-wise part locations for RoI-aware pooling are replaced with the canonical coordinates of each point. As shown in Table 7, compared with the model trained without intra-object part location supervision (the 3rd vs. the 4th row), the model with part location supervision achieves better recall and average precision on all difficulty levels of the val split of the car class. The remarkable improvements in recall and precision indicate that the network learns better 3D features for scoring boxes and refining locations given the detailed and accurate supervision of the intra-object part locations.

4.1.6 One-stage vs. two-stage 3D object detection

Table 7 shows that without the stage-II for box scoring and refinement, the proposal recalls of our first proposal stage are comparable (80.90 vs. 80.99). However, the performance improves significantly (82.92 vs. 84.33) after the 100 proposals are refined by the part-aggregation stage. This demonstrates that the predicted intra-object part locations are beneficial for stage-II, and that our part-aggregation stage-II can effectively aggregate the predicted intra-object part locations to improve the quality of the predicted 3D boxes. The performance gaps between stage-I and stage-II (1st row vs. 3rd row, 2nd row vs. 4th row) also demonstrate that our stage-II improves the 3D detection performance significantly by re-scoring the box proposals and refining their locations.

4.1.7 Effects of IoU-guided box scoring

As mentioned in Sec. 3.3, we apply the normalized 3D IoU to estimate the quality of the predicted 3D boxes, which is used as the ranking score in the final NMS (non-maximum suppression) operation to remove redundant boxes. Table 8 shows that compared with the traditional classification score for NMS, our 3D IoU-guided scoring method increases the performance marginally in all difficulty levels, which validates the effectiveness of using the normalized 3D IoU to indicate the quality of the predicted 3D boxes.

Stage-I | Stage-II | Part location prediction | Recall | AP_Easy | AP_Mod. | AP_Hard
✓ | | | 80.90 | 88.48 | 77.97 | 75.84
✓ | | ✓ | 80.99 | 88.90 | 78.54 | 76.44
✓ | ✓ | | 82.92 | 89.23 | 79.00 | 77.66
✓ | ✓ | ✓ | 84.33 | 89.47 | 79.47 | 78.54

TABLE 7: Effects of intra-object part location supervision and the stage-II refinement module; the evaluation metrics are recall and average precision with 3D rotated IoU threshold 0.7. The results are the 3D detection performance on the car class of the val split of the KITTI dataset, and the detection results of stage-I are generated by directly applying NMS to the box proposals from stage-I.
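The IoU-guided ranking of Sec. 4.1.7 (evaluated in Table 8 below) amounts to swapping the score that NMS sorts by. The following is a minimal, self-contained sketch of that idea; the axis-aligned IoU helper is only a stand-in for the rotated 3D IoU actually used, and the suppression threshold is an illustrative value.

```python
import numpy as np

def axis_aligned_iou_3d(a, b):
    """Stand-in for rotated 3D IoU: boxes given as (x1, y1, z1, x2, y2, z2)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol = lambda x: np.prod(x[3:] - x[:3])
    return inter / (vol(a) + vol(b) - inter + 1e-9)

def nms_3d(boxes, quality_scores, iou_thresh=0.1):
    """Greedy NMS where 'quality_scores' are predicted (normalized) 3D IoUs
    rather than classification scores."""
    order = np.argsort(-quality_scores)
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        ious = np.array([axis_aligned_iou_3d(boxes[i], boxes[j]) for j in rest])
        order = rest[ious < iou_thresh]
    return keep

boxes = np.array([[0, 0, 0, 4, 2, 2], [0.2, 0, 0, 4.2, 2, 2], [10, 10, 0, 14, 12, 2]], dtype=float)
quality = np.array([0.9, 0.8, 0.7])
print(nms_3d(boxes, quality))   # the second box is suppressed by the first -> [0, 2]
```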
NMS Ranking Score | AP_Easy | AP_Mod. | AP_Hard
classification | 89.13 | 78.81 | 77.95
3D IoU guided scoring | 89.47 | 79.47 | 78.54

TABLE 8: Effects of 3D IoU guided box scoring for ranking the quality of the predicted 3D boxes. The results are the 3D detection performance on the car class of the val split of the KITTI dataset.

4.1.8 Memory cost of anchor-free and anchor-based proposal generation

As shown in Table 9, we compare the model complexity of the anchor-free and anchor-based proposal generation strategies by calculating the number of parameters and the number of generated boxes for different numbers of object classes. The Part-A2-free model (with the anchor-free proposal generation strategy) consistently generates ∼16k proposals (i.e., the number of points of the point cloud), which is independent of the number of classes, while the number of generated boxes (i.e., the predefined anchors) of the Part-A2-anchor model (with anchor-based proposal generation) increases linearly with the number of classes, since each class has its own anchors with class-specific object sizes. The number of anchors of the Part-A2-anchor model reaches 211.2k for detecting objects of 3 classes, which shows that our anchor-free proposal generation strategy is a relatively light-weight and memory-efficient strategy, especially for multiple classes.

Number of classes | 1 | 3
Part-A2-free: number of parameters | 1775269 | 1775527
Part-A2-free: number of generated boxes | ∼16k | ∼16k
Part-A2-anchor: number of parameters | 4648588 | 4662952
Part-A2-anchor: number of anchors | 70.4k | 211.2k

TABLE 9: The number of parameters of the proposal generation head and the number of generated boxes with different numbers of classes for the Part-A2-free and Part-A2-anchor models. The parameters of (5, 10, 100) are counted by setting fake numbers of classes, and the numbers of generated boxes are for the KITTI scene.

We also report the inference GPU memory cost for three-class detection (car, pedestrian and cyclist) on the KITTI [33] dataset. The inference is conducted with the PyTorch framework on a single NVIDIA TITAN Xp GPU card. For the inference of a single scene, the Part-A2-free model consumes about 1.16GB of GPU memory while the Part-A2-anchor model consumes 1.63GB. For the inference of six scenes simultaneously, the Part-A2-free model consumes about 3.42GB of GPU memory while the Part-A2-anchor model consumes 5.46GB. This demonstrates that the Part-A2-free model (with the anchor-free proposal generation strategy) is more memory efficient than the Part-A2-anchor model (with anchor-based proposal generation).

4.1.9 Analysis of false positive samples

Fig. 9 shows the ratios of false positives of our best-performing Part-A2-anchor model on the KITTI validation dataset with different score thresholds, which are caused by confusion with the background, poor localization, and confusion with objects from other categories.

Fig. 9. Ratios of high-scored false positives for the car class on the val split of the KITTI dataset that are due to poor localization, confusion with other objects, or confusion with the background or unlabeled objects (shown for score thresholds 0.5 and 0.9).
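Before discussing Fig. 9, here is a rough sketch of how a high-scoring false positive can be assigned to one of the three categories. The IoU thresholds used for the split are illustrative assumptions, not values taken from the paper.

```python
def categorize_false_positive(max_iou_same_class, max_iou_other_class,
                              loc_iou_thresh=0.7, bg_iou_thresh=0.1):
    """Assign one detection (already known to be a false positive) to a bucket.

    max_iou_same_class: best 3D IoU with any ground-truth box of the same class.
    max_iou_other_class: best 3D IoU with ground-truth boxes of other classes.
    The 0.7 / 0.1 thresholds are illustrative, not taken from the paper.
    """
    if bg_iou_thresh <= max_iou_same_class < loc_iou_thresh:
        return "poor localization"        # overlaps the right object, but not enough
    if max_iou_other_class >= bg_iou_thresh:
        return "other objects"            # overlaps an object of a different class
    return "background"                   # overlaps nothing that is annotated

print(categorize_false_positive(0.45, 0.0))   # -> poor localization
print(categorize_false_positive(0.02, 0.3))   # -> other objects
print(categorize_false_positive(0.0, 0.0))    # -> background
```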
It can be seen that the majority of false positives come from the background and from poor localization. The confusion with the background mainly comes from the fact that the sparse point cloud cannot provide enough semantic information for some background structures such as flower terraces. LiDAR-only 3D detection methods may mistakenly recognize them as foreground objects such as cars, since they have similar geometric shapes in the point cloud. The ratio of false positives from poor localization increases significantly with the increasing score threshold. This is because the 3D rotated IoU requirement for 3D detection is stricter than the evaluation metric of 2D detection.

# 4.2 Main results and comparison with state-of-the-art methods on the KITTI benchmark

In this section, we report the comparison with state-of-the-art 3D detection methods on the KITTI benchmark. We mainly report the performance of our Part-A2-anchor model, as it reaches higher accuracy in our ablation studies.

Comparison with state-of-the-art 3D detection methods. We evaluate our methods on the 3D detection benchmark and the bird's eye view detection benchmark of the KITTI test split, whose results are evaluated on KITTI's official test server. The results are shown in Table 10.

For the 3D object detection benchmark, by using only LiDAR point clouds, our proposed Part-A2 net outperforms all previous peer-reviewed LiDAR-only methods on all difficulty levels for all three classes, and outperforms all previous multi-sensor methods on the most important "moderate" difficulty level for both the car and cyclist classes. For the bird's eye view detection of cars, pedestrians and cyclists, our method outperforms previous state-of-the-art methods by large margins on almost all difficulty levels. As of August 15, 2019, our proposed Part-A2-anchor net ranks 1st among all methods on the most important car class of the 3D object detection leaderboard of the KITTI 3D Object Detection Benchmark [65], while our method also ranks 1st among all LiDAR-only methods on the cyclist class.

Method Modality Mod. Easy Hard Mod. Easy Hard Mod. Easy Hard Mod. Easy Hard Mod. Easy Hard Mod.
Easy Hard RGB + LiDAR 62.35 71.09 55.12 76.90 86.02 RGB + LiDAR 66.22 82.54 64.04 85.83 88.81 RGB + LiDAR 71.88 81.94 66.38 83.79 88.53 RGB + LiDAR 70.39 81.20 62.19 84.00 88.70 RGB + LiDAR 73.80 84.33 64.83 86.10 88.49 UberATG-MMF [64] RGB + LiDAR 76.75 86.81 68.41 87.47 89.49 65.11 77.47 57.73 79.26 89.35 73.66 83.13 66.20 79.37 88.07 74.99 79.05 68.30 86.10 88.35 75.76 85.94 68.32 85.68 89.47 77.86 85.94 72.00 84.76 89.52 MV3D [1] ContFuse [5] AVOD-FPN [4] F-PointNet [6] PC-CNN-V2 [8] VoxelNet [29] SECOND [7] PointPillars [9] PointRCNN (Ours) Part-A2-anchor (Ours) LiDAR only LiDAR only LiDAR only LiDAR only LiDAR only 68.49 77.33 77.90 75.33 77.26 79.10 77.39 77.95 79.83 79.10 81.47 - - 42.81 50.80 40.88 51.05 58.75 44.89 51.21 40.23 50.22 58.09 - - 33.69 39.48 31.51 40.74 46.13 42.56 51.07 37.29 46.27 55.10 43.53 52.08 41.49 50.23 58.66 41.78 49.43 38.63 47.53 55.92 44.50 54.49 42.36 51.12 59.72 - - - - - - - - - - - - - - - - - - 47.54 47.20 - - 38.11 44.76 47.19 44.67 48.04 - - 52.18 64.00 46.61 56.77 71.96 50.39 - - 48.36 61.22 44.37 53.85 70.51 40.90 59.07 75.78 52.92 59.60 73.93 53.59 62.73 78.58 57.74 - - - - - - - - - - 57.48 68.09 50.77 61.96 75.38 54.68 - - 54.76 66.70 50.55 56.04 73.67 48.78 62.25 79.14 56.00 66.77 81.52 60.78 68.12 81.91 61.92 - - - - - - - - TABLE 10 Performance evaluation on KITTI official test server (test split). The 3D object detection and bird’s eye view detection are evaluated by mean average precision with 11 recall positions. The rotated IoU threshold is 0.7 for car and 0.5 for pedestrian/cyclist. Method MV3D [1] ContFuse [5] AVOD-FPN [4] F-PointNet [6] VoxelNet [29] SECOND [7] PointRCNN (Ours) Part-A2-free (Ours) Part-A2-anchor (Ours) AP (IoU=0.7) Mod. Easy Hard CVPR 2017 RGB & LiDAR 62.68 71.29 56.56 ECCV 2018 RGB & LiDAR 73.25 86.32 67.81 IROS 2018 RGB & LiDAR 74.44 84.41 68.65 CVPR 2018 RGB & LiDAR 70.92 83.76 63.65 65.46 81.98 62.85 CVPR 2018 LiDAR only 76.48 87.43 69.10 Sensors 2018 LiDAR only 78.63 88.88 77.38 LiDAR only CVPR 2019 78.96 88.48 78.36 LiDAR only - 79.47 89.47 78.54 LiDAR only - Reference Modality TABLE 11 Performance comparison of 3D object detection on the car class of the KITTI val split set. Method MV3D [1] F-PointNet [6] ContFuse [5] VoxelNet [29] SECOND [7] PointRCNN (Ours) Part-A2-free (Ours) Part-A2-anchor (Ours) AP (IoU=0.7) Mod. Easy Hard CVPR 2017 RGB & LiDAR 78.10 86.55 76.67 CVPR 2018 RGB & LiDAR 84.02 88.16 76.44 ECCV 2018 RGB & LiDAR 87.34 95.44 82.43 84.81 89.60 78.57 CVPR 2018 LiDAR only 87.07 89.96 79.66 Sensors 2018 LiDAR only 87.89 90.21 85.51 LiDAR only CVPR 2019 88.05 90.23 85.85 LiDAR only - 88.61 90.42 87.31 LiDAR only - Reference Modality x-axis 0.468 y-axis 0.442 z-axis Overall 0.531 0.552 TABLE 15 Pearson correlation coefficient between the errors of the predicted intra-object part locations and the errors of the predicted 3D bounding boxes. 0.107 error_x lm error_y 0.08 error_z 0.06 0.04; 3.75% , 4, 3.60% 0.02 0.00 Easy Moderate TABLE 12 Performance comparison of bird-view object detection on the car class of the KITTI val split set. Fig. 10. Statistics of predicted intra-object part location errors for the car class on the val split of KITTI dataset. Method PointRCNN Part-A2-free Part-A2-anchor PointRCNN Part-A2-free Part-A2-anchor Class Cyclist Cyclist Cyclist Pedestrian Pedestrian Pedestrian AP (IoU=0.5) Easy 86.13 88.18 88.31 69.43 70.73 70.37 Mod. 69.70 73.35 73.07 63.70 64.13 63.84 Hard 65.40 70.75 70.20 58.13 57.45 57.48 Results on validation set. 
For the most important car category, our methods are compared with state-of-the-art methods on the KITTI val split for both 3D object detection (shown in Table 11) and 3D object localization (shown in Table 12). We can see that on the most important "moderate" difficulty level, our Part-A2 net outperforms state-of-the-art methods on both tasks by large margins while using only point clouds as input. In addition, our Part-A2 net achieves new state-of-the-art performance on all difficulty levels of the KITTI 3D object detection val split, which demonstrates the effectiveness of our proposed methods for 3D object detection.

TABLE 13: 3D object detection results for cyclists and pedestrians of different models on the KITTI val split.

 | mAbsError_x | mAbsError_y | mAbsError_z | mAbsError
Overall | 7.24% | 6.42% | 5.17% | 6.28%
False Positives | 12.97% | 12.09% | 7.71% | 10.92%

TABLE 14: Mean distance error of the intra-object part locations predicted by the part-aware stage for the car class of the KITTI val split. As shown in Fig. 4, x, y, z are along the width, length and height directions of the object, respectively. Note that "False Positives" here denotes the false-positive samples caused by inaccurate localization.

As shown in Table 13, we also report the performance of our methods for cyclists and pedestrians on the validation set for reference. Note that compared with PointRCNN, our latest method, the Part-A2-anchor net, improves the performance on cyclists significantly while achieving comparable results on pedestrians. The reason for the slightly inferior performance on pedestrians might be that the orientation of a pedestrian is hard to recognize from the sparse point cloud, which is harmful for the prediction of part locations in our Part-A2-anchor net. Multi-sensor methods that integrate RGB images would have advantages for detecting small objects like pedestrians.

Fig. 11. Qualitative results of the Part-A2-anchor net on the KITTI test split. The predicted 3D boxes are drawn as green 3D bounding boxes, and the estimated intra-object part locations are visualized with interpolated colors as in Fig. 4. Best viewed in color.

Evaluation of the Part-A2-anchor net for predicting intra-object part locations. The intra-object part locations predicted by our part-aware stage-I are crucial for the part-aggregation stage-II to accurately score the boxes and refine the box locations. Here we evaluate the accuracy of the predicted intra-object part locations by the following metric:

\mathrm{AbsError}_u = \frac{1}{|\mathcal{G}|} \sum_{i \in \mathcal{G}} \left| \tilde{u}^{(\mathrm{part})}_i - u^{(\mathrm{part})}_i \right|, \quad u \in \{x, y, z\}, \qquad (18)

where \tilde{u}^{(\mathrm{part})}_i is the predicted part location, u^{(\mathrm{part})}_i is the ground-truth part location, and \mathcal{G} is the set of foreground points of each sample. The final mAbsError_u is the mean value of AbsError_u over all samples.

As shown in Table 14, for the most important car category, the mean error of our predicted intra-object part locations is 6.28%, which shows that the part-aware network accurately predicts the intra-object part locations, since the average error is only ±6.28 cm per meter for cars. Based on these accurate intra-object part locations, our part-aggregation stage-II can better score the boxes and refine the box locations by utilizing the predicted geometric information. Here we also report the detailed error statistics of the predicted intra-object part locations on the different difficulty levels of the KITTI val split in Fig. 10 for reference.
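The metric of Eq. (18) is simple enough to restate in code. The sketch below assumes that the predicted and ground-truth part locations of the foreground points have already been gathered per sample; it is an illustration rather than the evaluation script used for Table 14.

```python
import numpy as np

def abs_error_per_axis(pred_parts: np.ndarray, gt_parts: np.ndarray) -> np.ndarray:
    """Eq. (18): mean |u_pred - u_gt| over foreground points, per axis u in {x, y, z}.

    pred_parts, gt_parts: (|G|, 3) intra-object part locations in [0, 1].
    Returns a length-3 vector (AbsError_x, AbsError_y, AbsError_z).
    """
    return np.abs(pred_parts - gt_parts).mean(axis=0)

def m_abs_error(per_sample_errors):
    """mAbsError_u: mean of AbsError_u over all samples."""
    return np.mean(np.stack(per_sample_errors, axis=0), axis=0)

# Toy usage with two fake samples of 4 and 3 foreground points each.
rng = np.random.default_rng(0)
samples = [(rng.random((4, 3)), rng.random((4, 3))),
           (rng.random((3, 3)), rng.random((3, 3)))]
print(m_abs_error([abs_error_per_axis(p, g) for p, g in samples]))
```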
We further analyze the correlation between the errors of the predicted intra-object part locations and the errors of the predicted 3D bounding boxes by calculating the Pearson correlation coefficient, which lies in [−1, 1], where 1 denotes a fully positive linear correlation and −1 a fully negative linear correlation. Here we utilize 1 − IoU to indicate the error of a predicted 3D bounding box, where IoU is the 3D Intersection-over-Union between the predicted 3D bounding box and its best-matched ground-truth box. As shown in Table 15, the errors of the intra-object part locations have an obviously positive correlation with the errors of the predicted 3D bounding boxes. The overall correlation coefficient is 0.531, and the most correlated axis is the z-axis in the height direction, where the correlation coefficient reaches 0.552; this demonstrates that accurate intra-object part locations are beneficial for predicting more accurate 3D bounding boxes. We also report the errors of the intra-object part locations on the false-positive samples caused by inaccurate localization (see row 2 of Table 14); the predicted part location errors increase significantly on all three axes, which indicates that inaccurately predicted intra-object part locations may lead to unsatisfactory 3D object localization and decrease the performance of 3D object detection.

# 4.3 Qualitative results

We present some representative results generated by our proposed Part-A2-anchor net on the test split of the KITTI dataset in Fig. 11. From the figure we can see that our proposed part-aware network estimates accurate intra-object part locations using only the point cloud as input, which are aggregated by our designed part-aggregation network to generate accurate 3D bounding boxes.

# 5 CONCLUSION

In this paper, we extend our preliminary work PointRCNN to a novel 3D detection framework, the part-aware and aggregation neural network (Part-A2 net), for detecting 3D objects from raw point clouds. Our part-aware stage-I learns to estimate accurate intra-object part locations by using the free-of-charge intra-object location labels and foreground labels from the ground-truth 3D box annotations. Meanwhile, the 3D proposals are generated by two alternative strategies, the anchor-free scheme and the anchor-based scheme. The predicted intra-object part locations of each object are pooled by the novel RoI-aware point cloud pooling scheme. The following part-aggregation stage-II can then better capture the geometric information of object parts to accurately score the boxes and refine their locations. Our approach significantly outperforms existing 3D detection methods and achieves new state-of-the-art performance on the challenging KITTI 3D detection benchmark. Extensive experiments were carefully designed and conducted to investigate the individual components of our proposed framework.

# REFERENCES

[1] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, "Multi-view 3d object detection network for autonomous driving," in CVPR, 2017.
[2] S. Song and J. Xiao, "Sliding shapes for 3d object detection in depth images," in European Conference on Computer Vision. Springer, 2014, pp. 634–651.
[3] ——, "Deep sliding shapes for amodal 3d object detection in rgb-d images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 808–816.
J. Ku, M. Mozifian, J. Lee, A. Harakeh, and S.
Waslander, “Joint 3d proposal generation and object detection from view aggregation,” IROS, 2018. [4] [5] M. Liang, B. Yang, S. Wang, and R. Urtasun, “Deep continuous fusion for multi-sensor 3d object detection,” in ECCV, 2018. [6] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, “Frustum pointnets for 3d object detection from rgb-d data,” arXiv preprint arXiv:1711.08488, 2017. [7] Y. Yan, Y. Mao, and B. Li, “Second: Sparsely embedded convolutional detection,” Sensors, vol. 18, no. 10, p. 3337, 2018. [8] X. Du, M. H. Ang, S. Karaman, and D. Rus, “A general pipeline for 3d detection of vehicles,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), May 2018, pp. 3194–3200. [9] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, “Pointpillars: Fast encoders for object detection from point clouds,” CVPR, 2019. [10] B. Yang, M. Liang, and R. Urtasun, “Hdnet: Exploiting hd maps for 3d object detection,” in 2nd Conference on Robot Learning (CoRL), 2018. [11] B. Yang, W. Luo, and R. Urtasun, “Pixor: Real-time 3d object detection from point clouds,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7652–7660. [12] W. Luo, B. Yang, and R. Urtasun, “Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3569–3577. [13] M. Simony, S. Milzy, K. Amendey, and H.-M. Gross, “Complex-yolo: an euler-region-proposal for real-time 3d object detection on point clouds,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 0–0. [14] R. Girshick, “Fast r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 1440–1448. [15] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99. [16] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in European conference on computer vision. Springer, 2016, pp. 21–37. [17] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779– 788. [18] J. Redmon and A. Farhadi, “Yolo9000: better, faster, stronger,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7263–7271. [19] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2117–2125. [20] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks,” in Proceedings of the IEEE international con- ference on computer vision, 2017, pp. 764–773. [21] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Doll´ar, “Focal loss for dense object detection,” IEEE transactions on pattern analysis and machine intelligence, 2018. [22] H. Law and J. Deng, “Cornernet: Detecting objects as paired keypoints,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 734–750. [23] K. He, G. Gkioxari, P. Doll´ar, and R. Girshick, “Mask r-cnn,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961–2969. [24] Z. Cai and N. 
Vasconcelos, “Cascade r-cnn: Delving into high quality object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6154–6162. [25] D. Xu, D. Anguelov, and A. Jain, “Pointfusion: Deep sensor fusion for 3d bounding box estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 244–253. [26] Z. Wang and K. Jia, “Frustum convnet: Sliding frustums to aggregate local point-wise features for amodal 3d object detection,” in IROS. IEEE, 2019. [27] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 652–660. [28] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” in Advances in Neural Information Processing Systems, 2017, pp. 5099–5108. [29] Y. Zhou and O. Tuzel, “Voxelnet: End-to-end learning for point cloud based 3d object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4490–4499. [30] B. Graham and L. van der Maaten, “Submanifold sparse convolutional networks,” arXiv preprint arXiv:1706.01307, 2017. [31] B. Graham, M. Engelcke, and L. van der Maaten, “3d semantic segmen- tation with submanifold sparse convolutional networks,” CVPR, 2018. [32] S. Shi, X. Wang, and H. Li, “Pointrcnn: 3d object proposal generation and detection from point cloud,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 770–779. [33] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [34] A. Mousavian, D. Anguelov, J. Flynn, and J. Koˇseck´a, “3d bounding box estimation using deep learning and geometry,” in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017, pp. 5632–5640. [35] B. Li, W. Ouyang, L. Sheng, X. Zeng, and X. Wang, “Gs3d: An efficient 3d object detection framework for autonomous driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 1019–1028. [36] F. Chabot, M. Chaouch, J. Rabarisoa, C. Teuli`ere, and T. Chateau, “Deep manta: A coarse-to-fine many-task network for joint 2d and 3d vehicle analysis from monocular image,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR), 2017, pp. 2040–2049. [37] M. Zhu, K. G. Derpanis, Y. Yang, S. Brahmbhatt, M. Zhang, C. Phillips, M. Lecce, and K. Daniilidis, “Single image 3d object detection and pose estimation for grasping,” in Robotics and Automation (ICRA), 2014 IEEE International Conference on. [38] R. Mottaghi, Y. Xiang, and S. Savarese, “A coarse-to-fine model for 3d pose estimation and sub-category recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 418–426. [39] F. Manhardt, W. Kehl, and A. Gaidon, “Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape,” in Computer Vision and Pattern Recognition (CVPR). [40] X. Chen, K. Kundu, Z. Zhang, H. Ma, S. Fidler, and R. Urtasun, “Monocular 3d object detection for autonomous driving,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2147–2156. 16 [41] X. Chen, K. Kundu, Y. Zhu, A. G. Berneshawi, H. Ma, S. Fidler, and R. 
Urtasun, “3d object proposals for accurate object class detection,” in Advances in Neural Information Processing Systems, 2015, pp. 424–432. [42] J. Ku*, A. D. Pon*, and S. L. Waslander, “Monocular 3d object detection leveraging accurate proposals and shape reconstruction,” in CVPR, 2019. [43] P. Li, X. Chen, and S. Shen, “Stereo r-cnn based 3d object detection for autonomous driving,” in CVPR, 2019. [44] Y. Wang, W.-L. Chao, D. Garg, B. Hariharan, M. Campbell, and K. Wein- berger, “Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving,” in CVPR, 2019. [45] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234–241. [46] L. Yi, W. Zhao, H. Wang, M. Sung, and L. J. Guibas, “Gspn: Generative shape proposal network for 3d instance segmentation in point cloud,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3947–3956. [47] J. Hou, A. Dai, and M. Nießner, “3d-sis: 3d semantic instance segmenta- tion of rgb-d scans,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4421–4430. [48] W. Wang, R. Yu, Q. Huang, and U. Neumann, “Sgpn: Similarity group proposal network for 3d point cloud instance segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2569–2578. [49] X. Wang, S. Liu, X. Shen, C. Shen, and J. Jia, “Associatively segmenting instances and semantics in point clouds,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4096–4105. [50] J. Lahoud, B. Ghanem, M. Pollefeys, and M. R. Oswald, “3d in- stance segmentation via multi-task metric learning,” arXiv preprint arXiv:1906.08650, 2019. [51] B. D. Brabandere, D. Neven, and L. V. Gool, “Semantic instance function,” CoRR, vol. segmentation with a discriminative abs/1708.02551, 2017. [Online]. Available: http://arxiv.org/abs/1708. 02551 [52] M. Bai and R. Urtasun, “Deep watershed transform for instance segmen- tation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5221–5229. [53] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE transactions on pattern analysis and machine intelligence, vol. 32, no. 9, pp. 1627–1645, 2009. [54] S. Fidler, S. Dickinson, and R. Urtasun, “3d object detection and viewpoint estimation with a deformable 3d cuboid model,” in Advances in neural information processing systems, 2012, pp. 611–619. [55] B. Pepik, P. Gehler, M. Stark, and B. Schiele, “3d 2 pm–3d deformable Springer, part models,” in European Conference on Computer Vision. 2012, pp. 356–370. [56] J. J. Lim, A. Khosla, and A. Torralba, “Fpm: Fine pose parts-based model with 3d cad models,” in European conference on computer vision. Springer, 2014, pp. 478–493. [57] Q. Huang, W. Wang, and U. Neumann, “Recurrent slice networks for 3d segmentation of point clouds,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2626–2635. [58] Y. Li, R. Bu, M. Sun, W. Wu, X. Di, and B. Chen, “Pointcnn: Convolution on x-transformed points,” in Advances in Neural Information Processing Systems, 2018, pp. 820–830. [59] S. Wang, S. Suo, W.-C. Ma, A. Pokrovsky, and R. 
Urtasun, “Deep parametric continuous convolutional neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2589–2597. [60] H. Zhao, L. Jiang, C.-W. Fu, and J. Jia, “Pointweb: Enhancing local neighborhood features for point cloud processing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 5565–5573. [61] W. Wu, Z. Qi, and L. Fuxin, “Pointconv: Deep convolutional networks on 3d point clouds,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 9621–9630. [62] B. Jiang, R. Luo, J. Mao, T. Xiao, and Y. Jiang, “Acquisition of localization confidence for accurate object detection,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 784– 799. [63] S. Sun, J. Pang, J. Shi, S. Yi, and W. Ouyang, “Fishnet: A versatile backbone for image, region, and pixel level prediction,” in Advances in Neural Information Processing Systems, 2018, pp. 762–772. [64] M. Liang*, B. Yang*, Y. Chen, R. Hu, and R. Urtasun, “Multi-task multi- sensor fusion for 3d object detection,” in CVPR, 2019. board, http://www.cvlibs.net/datasets/kitti/eval object.php?obj benchmark=3d, Accessed on 2019-08-15. Shaoshuai Shi received his B.S. degree in Computer Science and Technology from Harbin Institute of Technology, China in 2017. He is currently a Ph.D. student in the Department of Electronic Engineering at The Chinese Univer- sity of Hong Kong. His research interests include computer vision, deep learning and 3D scene understanding. Zhe Wang received his B.S. degree in Optical Engineering of Zhejiang University in 2012, and the Ph.D. degree in the Department of Electronic Engineering at The Chinese University of Hong Kong. He is current a Research Vice Director of SenseTime. His research interests include com- puter vision and deep learning. ! | | Jianping Shi received her B.S. degree in Com- puter Science and Engineering from Zhejiang University, China in 2011, and the Ph.D. degree from the Computer Science and Engineering Department at The Chinese University of Hong Kong in 2015. She is currently the Executive Re- search Director of SenseTime. Her research in- terests include computer vision and deep learn- ing. Xiaogang Wang received the B.S. degree from the University of Science and Technology of China in 2001, the MS degree from The Chinese University of Hong Kong in 2003, and the PhD degree from the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology in 2009. He is currently an asso- ciate professor in the Department of Electronic Engineering at The Chinese University of Hong Kong. His research interests include computer vision and machine learning. Hongsheng Li received the bachelors degree in automation from the East China University of Science and Technology, and the masters and doctorate degrees in computer science from Lehigh University, Pennsylvania, in 2006, 2010, and 2012, respectively. He is currently an assis- tant professor in the Department of Electronic Engineering at The Chinese University of Hong Kong. His research interests include computer image analysis, and machine vision, medical learning. 17
{ "id": "1711.08488" }
1907.04650
Hardware/Software Co-Exploration of Neural Architectures
We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS which assumes a fixed hardware design and explores the neural architecture search space only, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures in terms of hardware specifications, which significantly accelerates the NAS process. Then, the slow exploration trains candidates on a validation set and updates a controller using the reinforcement learning to maximize the expected accuracy together with the hardware efficiency. Experiments on ImageNet show that our co-exploration NAS can find the neural architectures and associated hardware design with the same accuracy, 35.24% higher throughput, 54.05% higher energy efficiency and 136x reduced search time, compared with the state-of-the-art hardware-aware NAS.
http://arxiv.org/pdf/1907.04650
Weiwen Jiang, Lei Yang, Edwin Sha, Qingfeng Zhuge, Shouzhen Gu, Sakyasingha Dasgupta, Yiyu Shi, Jingtong Hu
cs.LG, cs.NE
10 pages
null
cs.LG
20190706
20200111
# Hardware/Software Co-Exploration of Neural Architectures

Weiwen Jiang, Lei Yang, Edwin H.-M. Sha, Senior Member, IEEE, Qingfeng Zhuge, Shouzhen Gu, Sakyasingha Dasgupta, Member, IEEE, Yiyu Shi, Senior Member, IEEE, and Jingtong Hu, Member, IEEE

Abstract—We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS which assumes a fixed hardware design and explores the neural architecture search space only, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures in terms of hardware specifications, which significantly accelerates the NAS process. Then, the slow exploration trains candidates on a validation set and updates a controller using reinforcement learning to maximize the expected accuracy together with the hardware efficiency. In this paper, we demonstrate that the co-exploration framework can effectively expand the search space to incorporate models with high accuracy, and we theoretically show that the proposed two-level optimization can efficiently prune inferior solutions to better explore the search space. Experimental results on ImageNet show that the co-exploration NAS can find solutions with the same accuracy, 35.24% higher throughput, 54.05% higher energy efficiency, compared with the hardware-aware NAS.

Figure 1. Comparison between (a) hardware-aware NAS and (b) the proposed hardware/software co-exploration NAS, which jointly explores the architecture search space and the hardware design space. The red rectangles convey the metrics that can be optimized in the exploration.

Index Terms—Hardware-Software Co-Exploration, Neural Architecture Search, FPGA, Multi-Criteria Optimization

# I. INTRODUCTION

Neural architecture search (NAS) has achieved great success to liberate human labor in the design of neural architectures for various tasks including image classification, image segmentation and language modeling [1], [2], [3], [4], [5]. Most recently, targeting a fixed hardware platform, hardware-aware NAS [6], [7], [8] has been proposed to take into consideration the estimated timing performance (such as latency or throughput) in addition to accuracy (see Figure 1(a)).

All of the existing NAS frameworks explore the architecture search space only, without considering the hardware design freedom available in many cloud and edge computing applications. For instance, the cloud platforms (e.g.
Amazon AWS [9] and Microsoft Azure [10]) employ Field Programmable Gate W. Jiang, L. Yang and Y. Shi are with the Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556 (e-mail: [email protected]; [email protected] [email protected]). E. H.-M. Sha, Q. Zhuge, and S. Gu are with the School of Computer Science and Software Engineering, East China Normal University, 200062 China S. Dasgupta is with Edgecortix Inc., Tokyo, Japan, 1410031. J. Hu is with the Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15261 (e-mail: [email protected]). Array (FPGA) for neural network acceleration, while the edge computing platforms typically take the programmable FPGAs [11], [12] or Application-Specific Integrated Circuit (ASIC) [13], [14]. In addition to neural architecture design, those hardware platforms can also be programmed or even fully customized for the best performance, expanding a hardware design space. Interestingly, the hardware design space is tightly coupled with the architecture search space, i.e., the best neural ar- chitecture depends on the hardware (hardware-aware NAS), and the best hardware depends on the neural architecture. It is therefore best to jointly explore both spaces to push forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. This can be clearly seen from the example in Table I, where three designs on CIFAR-10 and Xilinx XC7Z015 FPGAs are presented: an op- timized neural architecture for a fixed FPGA implementation through hardware-aware NAS (design A), the hardware of which is then further optimized through FPGA optimization (design B) [15], and a jointly optimized neural architecture and hardware through our co-exploration (design C). From the table, we can see that further optimizing the hardware for the architecture from hardware-aware NAS can lead to 45.45% higher throughput, 38.24% higher energy efficiency with the same accuracy. On the other hand, compared with such a 1 2 IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS Table I ON CIFAR-10 AND XILINX XC7Z015 FPGA: COMPARISONS OF THREE NEURAL ARCHITECTURE AND HARDWARE DESIGN PAIRS IN ACCURACY, THROUGHPUT, AND ENERGY EFFICIENCY (E.-E): A) OPTIMAL ARCHITECTURE ON A FIXED HARDWARE IMPLEMENTATION THROUGH HARDWARE-AWARE NAS; B) THE SAME ARCHITECTURE BUT WITH FURTHER FPGA OPTIMIZATION, AND C) A JOINTLY OPTIMIZED NEURAL ARCHITECTURE AND FPGA IMPLEMENTATION THROUGH OUR CO-EXPLORATION. ID Approach Accuracy Throughput E.-E (FPS) (GOPS/W) A Hardware-Aware NAS 84.53% 16.2 0.84 B Sequential Optimization 84.53% 29.7 1.36 C Co-Exploration 85.19% 35.5 1.91 sequential optimization strategy, our co-exploration approach can identify an architecture with higher accuracy and its tailor- made hardware with 16.33% and 28.80% improvements in throughput and energy efficiency, respectively. # II. BACKGROUND AND PROBLEM DEFINITION # A. Neural Architecture Search Although the research on the automatic prediction of neural network architectures can trace back to the 1980s [25], after deep neural networks have achieved great success in AI domains, there have been growing interests in generating good neural architectures for the interested dataset recently. With the fact that the architectures are growing deeper, the search space expands exponentially, leading to more difficulties in exploring the search space. 
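To put the "exponentially expanding search space" mentioned above in concrete terms, here is a tiny back-of-the-envelope computation; the number of options per layer is an arbitrary illustrative value, not one tied to any particular NAS system.

```python
# If every layer independently chooses among `options_per_layer` hyperparameter
# combinations (filter count, kernel size, precision, ...), the number of distinct
# child networks grows exponentially with depth. The numbers are illustrative.
options_per_layer = 1000

for depth in (4, 8, 16):
    print(f"{depth:2d} layers -> {options_per_layer ** depth:.2e} candidate networks")
# e.g. 16 layers -> 1.00e+48, far beyond what can be trained exhaustively
```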
In the existing work, there are two mainstreams of architecture search: (1) employing rein- forcement learning [2], [16], [26], (2) applying evolutionary algorithms [3], [27], [28]. The basic idea is to iteratively update hyperparameters to generate better “child networks” in terms of accuracy. Specifically, our architecture search space and hardware de- sign space co-exploration framework is shown in Figure 1(b). The proposed co-exploration can be built on any existing NAS framework [16], [8], [17], [18] by expanding it to delve into the hardware design space, where a two-level (fast and slow) exploration is iteratively conducted. In the fast exploration, the best hardware design is identified for the sampled neural architectures without lengthy training. The architectures with inferior hardware efficiency will be quickly pruned, which significantly accelerates the search process. Thereafter, the superior candidates are trained in the slow exploration for controller update using policy gradient reinforcement learning to explore the coupled architecture search space. The optimiza- tion objectives in the hardware design space can be varied according to the design specifications, such as area, monetary cost, energy efficiency, reliability, resource utilization, etc. Figure 1(a), without the hardware-aware module, illustrates a typically used reinforcement learning based neural architec- ture search (NAS) [16] framework. As shown in this figure, the RNN controller in NAS iteratively predicts child networks from architecture search space. These child networks will be trained on a held-out dataset to obtain its accuracy. Then, accuracy will be used as reward to update the RNN controller. Existing work has demonstrated that the automatically re- sulting architectures can achieve close or even higher accuracy to the best human-invented architectures [2], [16]. However, there are two important problems in searching architectures. First, the search process is inefficient. [16] reported that 20,000 networks were trained across 500 P100 GPUs over 4 days to find the desired network. Second, since the search process is hardware oblivious, neither the time performance nor the hardware efficiency can be guaranteed. In order to illustrate our framework, we choose to use FPGA as a vehicle in this paper, as it has gradually become one of the most popular platforms to implement deep neural networks (DNNs) due to its programmability, high performance and energy efficiency, in particular for low-batch inferences [19], [20]. Our co-exploration concept and the general framework, however, can also be easily extended to other hardware plat- forms such as ASICs. Since timing performance on a single FPGA is limited by its restricted resource, it is prevalent to or- ganize multiple FPGAs in a pipelined fashion [21], [22], [23], [24] to provide high throughput (frame per second, FPS). In such a system, the pipeline efficiency is one of the most impor- tant metrics needing to be maximized, since it determines the hardware utilization as well as energy efficiency. As such, we use accuracy and pipeline efficiency to guide the exploration of the neural architecture space and hardware design space respectively, while satisfying a given throughput specifications (e.g., ≥30FPS for the ordinary camera). Experimental results show that the co-exploration approach can significantly push forward the Pareto frontier. 
On ImageNet, the proposed co-exploration framework can identify architecture and hardware pairs that achieve the same accuracy, 35.42% higher throughput, and 54.05% higher energy efficiency with reduced search time, compared with the hardware-aware NAS.

Recently, hardware-aware NAS [6], [7], [8] has been proposed to search architectures for a target hardware platform, as shown in Figure 1(a). These approaches always assume a fixed hardware design (e.g., mobile chips) and only explore the architecture search space. However, hardware design freedom is commonly available in many cloud and edge computing applications, like FPGAs in cloud platforms [9], [10] and ASICs in edge computing platforms [13], [14]. Ignoring the hardware design space leads to designs with inferior hardware efficiency, because the hardware design space and the architecture search space are tightly coupled.

Compared with the existing work, the main contribution of this work is to propose a framework to co-explore the architecture search space and the hardware design space, as shown in Figure 1(b). More specifically, this framework determines the best hardware during the search process, which is tailor-made for the candidate architectures. In this way, the framework can obtain a set of superior architecture and hardware design pairs on the Pareto frontier in terms of accuracy and hardware efficiency tradeoffs. In addition, the search time can be significantly reduced, since we can efficiently prune inferior architectures according to multiple design specifications compared with the hardware-aware NAS.

Figure 2. An overview of implementing a child network onto multiple FPGAs organized in a pipelined fashion (➀ child network, ➁ pipeline stages, ➂ FPGA pool, ➃ pipelined FPGAs).

# B. Implementation of DNNs on FPGAs

This paper employs the FPGA as a vehicle to study how to co-explore neural architectures and hardware designs. FPGAs have demonstrated an excellent ability to achieve high performance and energy efficiency for low-batch real-time inference [19], [20]. Hence, a large amount of work has been devoted to implementing neural networks on FPGAs, in which tools are developed to automatically design accelerators on FPGAs for a given network architecture. In the early stage, research efforts mainly focused on designing accelerators on a single FPGA [29], [30], [31], [32]. The authors in [33] target the edge FPGA, Xilinx PYNQ, and demonstrate the advantages of hardware-aware DNN search and update for a single FPGA. Most recently, implementations on multiple FPGAs have become the mainstream [23], [24], [15], [21], [19], [20], since the limited resources on a single FPGA become the performance bottleneck. To fully utilize the computation power provided by multiple FPGAs, a typical technique is to implement the neural network on multiple FPGAs in a pipelined fashion [23], [24], [15], [21]. Figure 2 demonstrates one such example, in which a 5-layer network is partitioned into 3 pipeline stages, and each pipeline stage is mapped to a certain FPGA in an available pool. Finally, those FPGAs are connected as a linear array to function in a pipelined fashion.
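As a minimal illustration of the pipelined multi-FPGA setup in Figure 2, the sketch below groups layers into contiguous stages, checks each stage against a throughput specification TS, and reports the per-stage utilization Lat_i × TS defined in the next subsection. The per-layer latencies are made-up numbers standing in for a real FPGA performance model.

```python
def evaluate_pipeline(layer_latency_ms, partition, ts_fps=30.0):
    """Check a pipeline partition against a throughput spec and report utilization.

    layer_latency_ms: per-layer latency (ms) of the candidate child network on its
    assigned FPGA (made-up numbers below; a real flow would query a performance
    model of each FPGA in the pool).
    partition: list of contiguous layer-index groups, one group per pipeline stage.
    """
    period_ms = 1000.0 / ts_fps                                 # each stage must finish within 1/TS
    stage_lat = [sum(layer_latency_ms[i] for i in stage) for stage in partition]
    meets_spec = all(lat <= period_ms for lat in stage_lat)
    utilization = [lat / period_ms for lat in stage_lat]        # equals Lat_i * TS
    return meets_spec, stage_lat, sum(utilization) / len(utilization)

# 5-layer child network split into 3 stages as in Figure 2: {l1}, {l2, l3}, {l4, l5}
lat = [20.0, 12.0, 15.0, 10.0, 9.0]
ok, stage_lat, avg_util = evaluate_pipeline(lat, [[0], [1, 2], [3, 4]], ts_fps=30.0)
print(ok, stage_lat, f"avg utilization = {avg_util:.2f}")
```

The co-exploration framework would compare such utilization numbers across candidate partitions and FPGA assignments, pruning those that miss the throughput specification.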
Definitions and Problem Statement The goal of the proposed framework is to find both the neural architectures with the highest test accuracy and hard- ware design with the guaranteed performance (e.g. timing requirement and hardware efficiency). In this paper, we will employ the conventional convolutional neural network (CNN) based on the multi-FPGA infrastructure as an example to illustrate such a framework, which is the base for other related problems. In the following, we will first present the relevant definitions. Then, we will formally define the problem. Finally, we will discuss the possible extension. The child network is the bridge between the architecture search space and the hardware design space. Specifically, in each iteration, the controller RNN will predict child networks from the architecture search space, and then determine their implementations in the hardware design space. We will intro- duce the hardware design space as follows. ➁ Partition Child Network to Pipeline Stages. Let P (C) be a set of partitions for the child network C. P (C) = {P1, P2, · · · , PM }, where Pi is a nonempty subset of set L. SPi∈P (C) = L; We have the following two properties: (1) and (2) ∀Pi, Pj ∈ P (C), if i 6= j, then Pi ∩ Pj = ∅. After the partitioning, each set in P (C) corresponds to a pipeline stage. For example, in Figure 2 ➁, we partition the given child network into 3 pipeline stages, P1 = {l1}, P2 = {l2, l3}, and P3 = {l4, l5}. ➂ Assign Pipeline Stages to FPGAs. Then, we can assign each pipeline stage to a specific FPGA in an available FPGA pool, as shown in Figure 2 ➂. An FPGA pool with n FPGAs can be represented by a set F = {f0, f1, · · · , fn}. Each FPGA, fi, has a set of attributes, including memory memi, DSP slices dspi, etc. These attributes will be utilized to model the timing performance for a child network. We define the assignment function α from the partition set P (C) to FPGA pool F . We have α(Pi) = fj to indicate the ith pipeline stage Pi is assigned to the jth FPGA fj to be implemented. After pipeline stages are assigned to FPGA pool according to α, each FPGA will process one or multiple layers. And all FPGAs work together in the pipelined fashion. ➃ Pipelined FPGAs. The pipelined executions of multiple FPGAs are illustrated in Figure 2 ➃. The system will contin- uously obtain inputs from the dataset with a fixed rate (frame per second), and generate output data from the last pipeline stage. The input rate of the system reflects the throughput the latency of each specification T S, which implies that pipeline stage should be no more than 1/T S. The latency of a pipeline stage under an assignment function can be easily captured with a performance model [29]. For FPGA fi, its latency is denoted as Lati. After obtaining the latency of each FPGA, we introduce pipeline efficiency, which is composed of the hardware utilization in each pipeline stage (corresponding to an FPGA). The utilization of FPGA fi is equal to Lati × T S. Higher utilization of an FPGA indicates the less idle time in processing and higher energy efficiency. Therefore, high average utilization of all FPGAs is always desired. Problem Statement. 
Problem Statement. Based on the above definitions, we formally define the problem of "hardware/software co-exploration of neural architectures" as follows: given a dataset, a pool of FPGAs F, and a throughput specification TS, we co-explore the architecture search space and the hardware design space to find a child network C:
• para: the parameters of all layers in the child network;
• P: the partition of the layer set L of the child network;
• α: the assignment of pipeline stages to the set F;
such that the accuracy of the child network C is maximized, the pipelined FPGA system meets the required throughput TS, and the average utilization of all FPGAs is maximized.

Extensions. The targeted problem is the basis for more general problems; therefore, the framework proposed in the next section can be applied to different scenarios with little or no modification. In the following, we discuss extensions from both the hardware and the software perspectives.

From the hardware perspective, the fundamental problem of mapping a child network onto multiple FPGAs is equivalent to that of mapping a child network onto multiple processing elements (PEs) in one FPGA, where each PE indicates a layer processor for one data tile (a.k.a. a processor in [30]). Splitting one FPGA into multiple PEs [30] is a promising solution when the single FPGA is large enough or the size of the neural architecture is relatively small. In this scenario, a PE can be regarded as an FPGA in the hardware pool in Figure 2. To apply the proposed technique, we only need to iteratively generate a PE pool (i.e., the number of PEs and the size of each PE) according to the FPGA resources, and conduct co-exploration to identify the best solution for each PE pool.

From the software perspective, first, the proposed framework can handle neural networks with residual connections by integrating the techniques in [34] to partition DAG-based child networks; second, it can explore different operations (e.g., group convolutions, depthwise separable convolutions, etc.) for each node in a child network by adding one additional parameter in parai to determine a specific operation for the node. Finally, throughput (frames per second, FPS) is set as a constraint in the above problem, but we can wrap a binary search procedure around the framework to maximize throughput together with the pipeline utilization. Kindly note that by replacing the FPS metric with operations per second (OPS), the proposed framework can also be applied to optimize other efficiency metrics, such as OPS/LUT or OPS/DSP.

In the remainder of this paper, we focus on determining the best neural architectures and hardware implementations for the conventional CNN structure in the multi-FPGA scenario, using throughput as a constraint and maximizing the hardware utilization.
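As a rough illustration of the throughput extension mentioned above, the sketch below shows only the binary-search wrapper; `is_feasible` is a placeholder standing in for a full co-exploration run under a given TS, and the FPS bounds are arbitrary example values.

```python
def max_feasible_throughput(is_feasible, lo_fps=1.0, hi_fps=240.0, tol=0.5):
    """Binary search for the highest throughput specification TS such that
    co-exploration still finds a design meeting it."""
    if not is_feasible(lo_fps):
        return None                      # even the loosest spec cannot be met
    while hi_fps - lo_fps > tol:
        mid = (lo_fps + hi_fps) / 2.0
        if is_feasible(mid):
            lo_fps = mid                 # spec met: try a tighter (higher) throughput
        else:
            hi_fps = mid
    return lo_fps

# Example with a toy feasibility oracle: designs exist up to roughly 72 FPS.
print(max_feasible_throughput(lambda ts: ts <= 72.0))
```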
Figure 3. An overview of the HW/SW co-exploration framework: the controller contains multiple reconfigurable RNN cells and predicts the hyperparameters of a child network; the fast exploration level prunes child networks with inferior hardware utilization; the slow exploration level updates the controller using the hardware utilization and the accuracy obtained by training child networks. (In the figure, Level 1, Fast Exploration (FE): (1) generate a pipelined FPGA configuration to satisfy the throughput; (2) iteratively train the controller to maximize the utilization of each FPGA. Level 2, Slow Exploration (SE): (1) train the child network from Level 1 to obtain its accuracy; (2) generate a reward in terms of accuracy and utilization.)

# III. HW/SW CO-EXPLORATION FRAMEWORK

In this section, we present the proposed framework. We use the NAS discussed in [16] as the backbone framework and the FPGA as the hardware platform to demonstrate our concept. The framework can be integrated with any existing NAS technique [16], [8], [17], [18] or extended to incorporate other hardware platforms.

# A. Framework Overview

Figure 3 shows the HW/SW co-exploration framework. The framework contains an RNN-based controller and two levels of exploration. Unlike that in [16], the controller has multiple RNN cells instead of one. More specifically, each layer in a child network has a corresponding RNN cell. During the exploration, cells are reorganized to support different optimization goals.

In the first level, a fast exploration is carried out in four steps: (1) it first predicts an architecture with probability p; (2) it then explores the design space to generate a pipelined FPGA system that meets the throughput requirement; (3) according to the pipeline structure, it reorganizes the RNN cells in the controller; and (4) it updates the controller using reinforcement learning to maximize the pipeline efficiency. This level explores the hardware design space without training child networks, and therefore performs efficiently.

In the second level, we train the child network obtained from the first level and evaluate it on the held-out validation set. After that, we generate a reward based on both the yielded accuracy and the pipeline efficiency, which is used to update the RNN controller. In case no child network can meet the required throughput specification in the first level, we generate a negative reward to update the controller. After this level, the controller predicts a new child network from the architecture search space for the fast exploration level.

The proposed controller, integrated with multiple RNNs and operated in the two levels of optimization shown in Figure 3, can make a better tradeoff between efficiency and accuracy. First, in Level 1, the RNNs operate independently to optimize a given architecture for each pipeline stage; as a result, the search space can be explored more efficiently. On the other hand, the RNNs work together in Level 2 to determine the backbone architecture and pipeline structure. Specifically, let Di = 10^3 be the size of the search space for pipeline stage pi. The proposed controller with multiple RNNs can optimize each pipeline stage independently, and therefore the design space is O(Σi Di) (i.e., O(10^3) in the example).
On the contrary, for the controller with only one RNN, it would jointly determine the sub-structure of all pipeline stages, leading the search space to be O(Πi Di) (i.e., O(10^9) in the example). Kindly note that a huge design space will not only significantly prolong the exploration time, but also make it difficult to find the best solution. The advantages of the proposed framework in both efficiency and effectiveness will be verified in the experimental results.

# B. Fast Exploration for High Resource Utilization

In the first level, namely Fast Exploration (FE), the objective is to maximize the pipeline efficiency under the throughput specification TS. FE takes three types of inputs: (1) a set of available FPGAs F; (2) the hyperparameters of a child network H; (3) a throughput specification TS. It generates a new child network whose throughput at the inference phase can meet TS using a subset of the FPGAs in F. In addition, the average hardware utilization of the FPGAs is maximized.

Figure 4. Fast Exploration (FE): organize RNN cells in the controller according to the partition into pipeline stages; independently update the multiple RNNs in the controller to predict the parameters of the layers assigned to each pipeline stage.

In FE, there are two challenges to be addressed: first, how to partition a given child network and assign each partition to a specific FPGA (Partition and Assignment); second, how to reorganize the RNN cells in the controller and then update them to generate child networks with higher pipeline efficiency (Reorganize and Update Controller).

Partition and Assignment. In the search process, a large number of candidate child networks need to go through the partition and assignment process. Consequently, an efficient automatic tool should be employed to avoid degrading the performance of the search process. In this paper, we employ the BLAST algorithm in [21]. BLAST takes the child network H, the FPGAs F, the throughput specification TS, and the attributes of each FPGA as inputs. It outputs a series of FPGAs, each of which implements one or multiple layers in a pipeline stage. The resultant system satisfies TS with the maximum pipeline efficiency. As shown in Figure 4, the layers in a child network are divided into M partitions, and each partition is assigned to one specific type of FPGA under the function α.

Reorganize and Update Controller. According to the generated pipeline structure, we then reorganize the controller and iteratively update it to generate child networks with higher hardware utilization. Our goal is to maximize the average hardware utilization, which is equivalent to maximizing the utilization of each piece of hardware. However, the design space for maximizing the average hardware utilization is exponentially larger than that for maximizing the utilization of each piece of hardware individually. To efficiently explore the design space, we choose to maximize the hardware utilization of the different pipeline stages independently. Therefore, we reorganize the RNN cells in the controller according to the determined pipeline structure.
More specifically, for multiple layers in one pipeline stage, their corresponding RNN cells are configured to form one RNN, and their weights and states are shared (e.g., RNN 2 in Figure 4). In consequence, there will be N RNNs for N pipeline stages. In this way, each RNN can be trained to maximize the hardware utilization of its own FPGA pipeline stage.

Figure 5. Slow Exploration (SE): configure the RNN cells in the controller to form one RNN; generate a reward based on accuracy and pipeline efficiency to update the controller RNN. (SE steps in the figure: 1. train C on the held-out dataset to obtain accuracy A; 2. obtain the average utilization U using BLAST(C, P, α); 3. compute the reward based on A and U.)

After we form the RNNs, we apply reinforcement learning to update the parameters of those N RNNs, and use these RNNs to predict the hyperparameters of child networks. In each iteration, we predict T child networks, which can be viewed as a list of actions a1:T. Correspondingly, the notation a^i_{1:T} represents the hyperparameters of the ith pipeline stage in these child networks. For each child network predicted by the controller, we can obtain the utilization of the ith pipeline stage (corresponding to one FPGA) using BLAST, denoted as Ui. Then, for RNN i, we utilize Ui to generate a reward Ri to update its parameters θi. The reward Ri is calculated using the following formula:

Ri = Ui if Ui ≤ 1;  Ri = 1 − Ui if 1 < Ui ≤ 2;  Ri = −1 if Ui > 2.   (1)

Here, Ui > 1 indicates that the required throughput cannot be satisfied, and we give a negative reward. For each RNN, our objective is to maximize the expected reward for the actions from time 1 to T, represented by J(θi) = E_{P(a^i_{1:T}; θi)}[Ri]. Since the reward is non-differentiable, we apply a policy gradient method to update θi. Specifically, the REINFORCE rule [35] is employed, as in [16], [8].

# C. Slow Exploration for High Accuracy

After obtaining a child network that meets the timing specification through the fast exploration level, we move to the second level. In this level, we aim to update the controller RNN to generate new child networks with higher accuracy and pipeline efficiency. We train the child network and evaluate it on the held-out validation set, and therefore the exploration speed is much slower than that of the first level. We call it Slow Exploration (SE).

As shown in Figure 5, SE takes the generated child network, the partition, and the assignment from FE as inputs. The child network is first trained to obtain its accuracy A. Then, the average pipeline efficiency U of the child network under the partition and assignment is calculated. Finally, we compute the reward to update the controller using the following formula:

Reward(A, U) = β × A + (1 − β) × U   (2)

where β is an adjustment parameter reflecting the bias between test accuracy and hardware utilization. The value of β ranges from 0 to 1. We will discuss how to scale β in Section V. After that, we update the controller using this reward by applying policy gradient reinforcement learning, in the same way as in the FE level. As shown in Figure 5, all RNN cells share the same weights and states in this level, since we have only one reward.
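As a concrete illustration of the two reward signals, here is a small Python sketch of the stage-level FE reward in Formula (1) and the combined SE reward in Formula (2). It only encodes the formulas themselves; the utilization U and accuracy A are assumed to come from BLAST and from training the child network, respectively.

```python
def fe_stage_reward(u: float) -> float:
    """Formula (1): reward for one pipeline stage from its utilization U = Lat * TS.
    U > 1 means the stage is too slow to meet the throughput specification."""
    if u <= 1.0:
        return u            # reward utilization directly when the spec is met
    if u <= 2.0:
        return 1.0 - u      # mildly negative when the spec is missed
    return -1.0             # clipped penalty for badly infeasible stages

def se_reward(accuracy: float, avg_utilization: float, beta: float = 0.5) -> float:
    """Formula (2): blend of test accuracy A and average pipeline efficiency U."""
    assert 0.0 <= beta <= 1.0
    return beta * accuracy + (1.0 - beta) * avg_utilization

# Example: a child network whose three stages have utilizations 0.95, 0.80, 1.10.
stage_utils = [0.95, 0.80, 1.10]
print([fe_stage_reward(u) for u in stage_utils])   # roughly [0.95, 0.80, -0.10]
print(se_reward(accuracy=0.85, avg_utilization=0.92))
```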
# D. Interface between Fast-Slow Explorations

Before updating the RNN cells in the controller in the fast exploration level, we take a snapshot Snap of all RNN cells. During the fast exploration level, we obtain the hardware design (i.e., the pipeline configuration) for the input child network. Based on the determined pipeline structure, the RNN cells are reorganized as introduced in Section III-B, and the reorganized cells are trained to generate better child networks for the previously obtained hardware design (i.e., pipeline configuration). Finally, the child network with the maximum hardware efficiency on the determined pipeline is sent to the slow exploration level.

After entering the slow exploration level, the RNN cells in the controller are recovered from the previously saved snapshot Snap. Then, SE trains the child network to obtain its accuracy, which is used to calculate the reward. Using this reward, we update the recovered RNN, and the updated RNN is then used to generate new child networks for the next iteration. In this way, the SE process keeps improving the accuracy achieved by the controller, while the FE process always generates the best hardware design in each iteration.
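The interaction between the two levels can be summarized by the following schematic Python sketch. All callables and the controller API are placeholders for the components described above (BLAST, the performance model, and child-network training), not actual implementations.

```python
import copy

def co_exploration_loop(controller, fpga_pool, ts, num_episodes,
                        blast_partition, stage_utilizations, train_child_network,
                        fe_stage_reward, se_reward, fe_trials=100):
    """Schematic two-level search: FE tunes the controller for hardware
    utilization starting from a frozen snapshot; SE restores the snapshot and
    updates the controller with the combined accuracy/utilization reward."""
    for _ in range(num_episodes):
        snap = copy.deepcopy(controller.state())          # snapshot Snap before FE

        # Level 1: Fast Exploration -- optimize utilization, no network training.
        child = controller.sample_child_network()
        partition, assignment = blast_partition(child, fpga_pool, ts)
        controller.reorganize(partition)                  # one RNN per pipeline stage
        for _ in range(fe_trials):
            child = controller.sample_child_network()
            utils = stage_utilizations(child, partition, assignment, ts)
            controller.update_per_stage([fe_stage_reward(u) for u in utils])

        # Level 2: Slow Exploration -- restore snapshot, reward = beta*A + (1-beta)*U.
        controller.load_state(snap)
        accuracy = train_child_network(child)             # held-out accuracy A
        avg_util = sum(stage_utilizations(child, partition, assignment, ts)) / len(partition)
        controller.update_joint(se_reward(accuracy, avg_util))
    return controller
```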
# IV. EXPERIMENTS

Datasets: We use the CIFAR-10 and ImageNet datasets to study the efficacy of our approach and compare it with the state-of-the-art. During the exploration of child networks, we only use the training images in these datasets, while the test images are used to test the accuracy of the resultant architectures. To evaluate accuracy during the search process, we randomly select 10% of the samples from the training set as a validation set. All images undergo the data preprocessing and augmentation procedure, including whitening, upsampling, random cropping, and random horizontal flip, which is common among the related work.

Architecture Search Space: For CIFAR-10, we use convolutional architectures as the backbone. For every convolutional layer, we first determine the filter size in [24, 36, 48, 64], the kernel size in [1, 3, 5, 7], and the strides. Two sets of experiments are carried out to determine the strides: (1) by exploring the child networks with a fixed stride of 1; (2) by allowing the controller to predict the strides in [1, 2]. After each layer, rectified linear units [36] and batch normalization [37] are appended. For ImageNet, the architecture repeats mobile inverted bottleneck convolution layers instead of ordinary convolutional ones, the same as in [8]. The controller explores architectures with various kernel sizes [3, 5, 7], strides [1, 2], and expansion ratios [3, 6].

Hardware Design Space: The hardware design space is composed of up to three Xilinx FPGAs (XC7Z015), each of which contains 74K logic cells, 4.9Mb of on-chip memory, and 150 DSP slices. One reason for this selection is that such an FPGA provides high-speed serial communication (up to 16.8Gbps of bandwidth), so that a high-speed hardware pipeline can be formed by multiple FPGAs. In the implementation, the child network is partitioned into pipeline stages, and each stage is mapped to one FPGA. Kindly note that our hardware exploration may not end up using all three FPGAs; it is possible to use fewer for higher hardware efficiency.

In the experiments, we use pipeline efficiency as the metric to measure hardware efficiency. As stated in Section I, pipeline efficiency is one of the most important metrics, since it is related to the hardware utilization, energy efficiency, and timing performance. The timing specifications are set according to the desired processing speed of the data at the inference phase, which is commonly decided by the data collector (e.g., a camera). For CIFAR-10, we set the throughput specification to 35FPS, which can satisfy most cameras; for ImageNet, due to the more complicated architectures and the limited resources, we set the specification to 10FPS. Finally, for both data and weights, we apply the commonly used 16-bit fixed-point data type, as in [38], [21], [29], [30].

For CIFAR-10, the training settings for both the RNN controller and the child networks are the same as in [16]. The controller RNN, in both slow and fast explorations, is trained using the calculated rewards with the ADAM optimizer [39] with a learning rate of 0.0006. Parameter β in Formula (2) is set to 0.5 to equally weight test accuracy and pipeline efficiency. For the child networks, we apply the Momentum optimizer with a learning rate of 0.1, a weight decay of 10−4, and a momentum of 0.9. Each child network is trained for 50 epochs. For ImageNet, we build a distributed GPU training environment on top of Uber Horovod [40]. The training settings are similar to those for CIFAR-10, with the exceptions that we set the initial learning rate to 0.0125 and decay it 10× at selected epochs, and that for the Momentum optimizer the weight decay is 5 × 10−5 and the momentum is 0.9.

# V. RESULTS

This section reports comparison results in four sets of experiments: (1) we compare the proposed framework under different configurations; (2) we compare the proposed framework with the existing NAS frameworks; (3) we compare the identified architectures with the existing ones; (4) we show the design space exploration in terms of model size and hardware efficiency to demonstrate the importance of hardware/software co-exploration.

# A. Comparison Results with Different Configurations

Before reporting the results, we first introduce the settings for the proposed framework, namely "Co-Exploration". First, the search spaces and training settings can be found in Section IV. Second, the controller iteratively searches child networks for 10,000 episodes through the 2-level exploration. Third, in each episode, the slow exploration phase obtains the accuracy of 16 child networks (trained from scratch if a network has never been trained, or taken from a history table otherwise); these child networks are identified by the fast exploration phase, where 100 trials are taken for each child network to optimize the hardware efficiency.
Since the proposed framework has multiple optimization goals on both software (e.g., accuracy) and hardware (e.g., pipeline efficiency), we record a set of superior architecture and hardware design pairs during the exploration, which forms the Pareto frontier. On the frontier, we denote the solution with the maximum accuracy as "OptSW" and the solution with the maximum pipeline efficiency as "OptHW".

Figure 6. Percentages of valid architectures for different timing specifications (20FPS, 35FPS, 100FPS) over the number of layers: (a) fixed stride of 1; (b) predictable strides.

Impact of Timing Specifications: Figure 6 reports the impact of timing specifications on the Co-Exploration framework. We randomly sample 10,000 architectures with the number of layers ranging from 4 to 14, and obtain the percentage of valid architectures that can meet the timing specification on the CIFAR-10 dataset. In Figure 6, if the constraint is tight (e.g., FPS=100), only a few architectures can satisfy the specification, indicating that the number of architectures with high accuracy is reduced compared with the case without timing constraints. In this case, we can scale up the parameter β in Formula (2) to pursue higher accuracy. On the other hand, if the constraint is loose (e.g., FPS=20), there are a large number of valid architectures. Correspondingly, we can scale down β to find more hardware-efficient designs with high accuracy.

Table II. Co-Exploration with predictable stride performs better than that with fixed stride under the 35FPS timing specification.

| Models | Depth | Accuracy | Pipeline Eff. |
|---|---|---|---|
| Co-Exploration fixed stride (OptSW) | 13 | 81.50% | 91.92% |
| Co-Exploration fixed stride (OptHW) | 10 | 78.57% | 98.56% |
| Co-Exploration pred. stride (OptSW) | 14 | 85.19% | 92.15% |
| Co-Exploration pred. stride (OptHW) | 6 | 80.18% | 99.69% |

Comparison between Fixed Stride and Predictable Stride: Table II reports the comparison between the exploration with fixed stride and that with predictable stride on CIFAR-10.¹ In the table, the column "Depth" indicates the number of layers in the resulting architecture. As shown in this table, for the exploration with fixed stride, OptSW achieves 2.93% higher accuracy but 6.64% lower pipeline efficiency than OptHW. These figures are 5.01% and 7.54% for the exploration with predictable strides. In addition, it is clear that, compared with fixed stride, stride prediction helps the controller find better results in both accuracy and pipeline efficiency. As such, in the following experiments we use predictable stride as the default setting for Co-Exploration.

¹Models accessed at: https://github.com/PITT-JZ-COOP/Co-Explore-NAS

Figure 7. Pareto frontiers between accuracy and pipeline efficiency for Hardware-Aware NAS and Co-Exploration, both designed under the timing specification of 35FPS: (a) designs with 2 FPGAs; (b) designs with 3 FPGAs.

# B. Comparison Results with the Existing NAS Frameworks

Next, we compare the proposed Co-Exploration framework with the existing NAS frameworks. To be fair, we use the same setting as Co-Exploration: exploring 10,000 episodes and obtaining the accuracy of 16 child networks in each episode. Because the existing Hardware-Aware NAS frameworks [6], [8], [7] target fixed hardware (e.g., GPU) instead of programmable FPGAs, and they use various settings, for a fair evaluation we use the NAS discussed in [16] as the backbone to implement a Hardware-Aware NAS for FPGA with the same search spaces and training settings as described in Section IV. Unlike the Co-Exploration framework, the Hardware-Aware NAS assumes fixed accelerator designs (i.e., fixed optimization parameters) on the FPGAs.
As shown in Figure 1(a), in the search loop, the controller first predicts a neural architecture; second, the framework tests the hardware efficiency of the predicted architecture on the FPGAs; third, it trains the architecture to get its accuracy; finally, it utilizes the hardware efficiency and accuracy to update the controller. This framework is denoted as Hardware-Aware NAS in the results. In addition, for the final architectures obtained by the Hardware-Aware NAS, we further optimize their hardware implementation to achieve a better design in terms of hardware efficiency. Such a heuristic approach is denoted as "Sequential Optimization" in the results.

Impact of Different Exploration Frameworks on the Pareto Frontier: Figure 7 reports the design space exploration assuming the hardware design space contains up to (a) two FPGAs or (b) three FPGAs. The x-axis and y-axis represent the accuracy and pipeline efficiency, respectively. For a clear demonstration, we only include the architectures whose pipeline efficiency is no less than 85% for two FPGAs in Figure 7(a), and no less than 75% for three FPGAs in Figure 7(b). In these figures, the circled design points correspond to those in Table II.

Table III. Comparison among Co-Exploration, Hardware-Aware NAS, and Sequential Optimization on the CIFAR-10 and ImageNet datasets.

| Dataset | Models | Depth | Parameters | Accuracy (Top1) | Accuracy (Top5) | Pipeline Eff. | FPS | Energy Eff. (GOPS/W) |
|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | Hardware-Aware NAS | 13 | 0.53M | 84.53% | - | 73.27% | 16.2 | 0.84 |
| CIFAR-10 | Sequential Optimization | 13 | 0.53M | 84.53% | - | 92.20% | 29.7 | 1.36 |
| CIFAR-10 | Co-Exploration (OptHW) | 10 | 0.29M | 80.18% | - | 99.69% | 35.5 | 2.55 |
| CIFAR-10 | Co-Exploration (OptSW) | 14 | 0.61M | 85.19% | - | 92.15% | 35.5 | 1.91 |
| ImageNet | Hardware-Aware NAS | 15 | 0.44M | 68.40% | 89.84% | 81.07% | 6.8 | 0.34 |
| ImageNet | Sequential Optimization | 15 | 0.44M | 68.40% | 89.84% | 86.75% | 10.4 | 0.46 |
| ImageNet | Co-Exploration (OptHW) | 17 | 0.54M | 68.00% | 89.60% | 96.15% | 12.1 | 1.01 |
| ImageNet | Co-Exploration (OptSW) | 15 | 0.48M | 70.24% | 90.53% | 93.89% | 10.5 | 0.74 |

The red lines represent the Pareto frontiers explored by Co-Exploration. The green lines, on the other hand, represent the frontiers obtained by Hardware-Aware NAS (by examining the top architectures identified). These figures clearly show that by exploring the hardware design space, our Co-Exploration can significantly push forward the Pareto frontiers in the accuracy and efficiency tradeoffs. It effectively identifies better designs not available through the architecture search space only, i.e., those between the two frontiers.

Comparing the two exploration results in Figures 7(a) and 7(b), we can also see that the solution with the highest pipeline efficiency is located in Figure 7(a), while the one with the highest accuracy is located in Figure 7(b). In general, we observe that the average accuracy with three FPGAs is higher than that with two FPGAs, yet the pipeline efficiency is lower. This is because more FPGAs can accommodate architectures with more layers for higher accuracy. On the other hand, more layers easily result in unbalanced pipeline stages, which in turn reduces the pipeline efficiency.
Comparison between Co-Exploration and the Existing Frameworks: Table III reports the comparison results on accuracy, pipeline efficiency, throughput, and energy efficiency on CIFAR-10 and ImageNet. All the identified architectures have fewer than 1M parameters, mainly due to the hardware capacity. This inevitably leads to accuracy loss; however, as we can see, the architecture explored by OptSW can still achieve 85.19% test accuracy on CIFAR-10, and 70.24% top-1 accuracy on ImageNet. These results demonstrate the effectiveness of the Co-Exploration approach in resource-limited scenarios. In addition, OptSW outperforms Hardware-Aware NAS by achieving 54.37% and 35.24% higher throughput, and 56.02% and 54.05% higher energy efficiency on CIFAR-10 and ImageNet, respectively. Compared with Sequential Optimization, OptSW achieves 16.34% and 28.79% improvements on CIFAR-10 in throughput and energy efficiency, respectively; on ImageNet, it can also slightly improve throughput and achieves a 37.84% improvement in energy efficiency.

Table IV. Co-Exploration uses far fewer GPU hours than Hardware-Aware NAS, benefiting from the early-stage pruning.

| Dataset | Approach | Arch. for Training | GPU Hours | Impr. |
|---|---|---|---|---|
| CIFAR-10 | Hardware-Aware NAS | 108,000 | 16,586 | 1 |
| CIFAR-10 | Co-Exploration | 308 | 102+1.9=103.9 | 159× |
| ImageNet | Hardware-Aware NAS | 7,263 | 36,315 | 1 |
| ImageNet | Co-Exploration | 53 | 256+1.8=266.8 | 136× |

Finally, Table IV reports the comparison results on normalized search time between the Hardware-Aware NAS and Co-Exploration. The results in this table show that Co-Exploration can significantly accelerate the search process, using 159× and 136× fewer GPU hours on CIFAR-10 and ImageNet, respectively. The speedup is achieved by the efficient early-stage pruning in the fast exploration level. As discussed in Section III-A, compared with the conventional Hardware-Aware NAS with a single RNN in the controller, the proposed Co-Exploration framework with multiple RNNs can dramatically shrink the design space from O(Πi Di) to O(Σi Di), where Di is the size of the design space for the ith pipeline stage. Since the number of architectures to be trained is proportional to the size of the design space, from the column "Arch. for Training" in Table IV we can see that Co-Exploration trains far fewer architectures than the Hardware-Aware NAS; therefore, our Co-Exploration achieves a significant speedup over the Hardware-Aware NAS. From the table, we have another observation: the training process takes much longer than the hardware exploration process, with the hardware exploration occupying less than 1% of the GPU hours in the whole search process (1.9 GPU hours for CIFAR-10 and 1.8 GPU hours for ImageNet).

# C. Comparison Results with the Existing Architectures

Table V. Comparison with the existing architectures on ImageNet with the timing specification of 10FPS.

| Models | Depth | Accuracy (Top-1) | Accuracy (Top-5) | FPS | Energy Eff. |
|---|---|---|---|---|---|
| MobileNetV2 [41] | 18 | 71.80% | 91.00% | 4.5 | 0.47 |
| ProxylessNet [8] | 21 | 74.60% | 92.50% | 3.1 | 0.41 |
| Co-Exploration (OptHW) | 17 | 68.14% | 89.60% | 12.1 | 1.01 |
| Co-Exploration (OptSW) | 15 | 70.24% | 90.53% | 10.5 | 0.74 |

In this subsection, we compare the neural architectures identified by the proposed Co-Exploration framework with the existing architectures ProxylessNet [8] and MobileNetV2 [41]. We set a throughput constraint of 10FPS for the Co-Exploration framework as the baseline. To obtain the hardware efficiency (throughput, energy efficiency, etc.) of these architectures,

Figure 8.
Design space of architectures with the depth of 4: (a) model size v.s. hardware efficiency; (b) accuracy v.s. hardware efficiency using co-exploration and hardware-aware NAS approaches. we apply the BLAST approach [21] to partition them onto multiple FPGAs. For the fair of comparison, all models involve 3 FPGAs. Table V reports the results. As we can see, both Mo- bileNetV2 and ProxylessNet cannot meet the timing spec- ification of 10 FPS, while ours can. In comparison with the manually designed MobileNetV2 [41], OptSW with top- 5 accuracy loss of 0.47% can achieve 2.33× and 1.57× improvement on throughput and energy efficiency, respectively. On the other hand, in comparison with ProxylessNet [8], whose throughput is 3× lower than the specifications, OptSW can find architectures that meet the specs with 90.53% top-5 accuracy against 92.50% from ProxylessNet. Results show that the proposed framework can make a better tradeoff between hardware efficiency and architecture accuracy. In addition, it can guarantee that the final architecture identified can meet the timing specification, which is important in real-time AI systems. # D. Importance of Co-Exploration Finally, we show the importance of co-exploration on NAS and hardware design spaces, instead of (1) using a heuristic on restricting the size of models for only NAS exploration, or (2) applying hardware-aware NAS exploration. Figure 8 shows the results of the design space exploration of architectures with 4 layers. In Figure 8(a), the x-axis and y-axis represent the model size and the hardware efficiency (i.e., pipeline efficiency). Each point in this figure is a design, which is optimized using the algorithm in [21]. We have marked the design points whose model size ranges from 120K to 150K. From this figure, we can see that, for the designs whose model size ranges from 120K to 150K, the optimized hardware efficiency ranges from 1.29% to 98.35%. Moreover, for a much narrower range from 149K to 150K, the efficiency still ranges from 7.02% to 98.35%. All the above results reflect that we cannot guarantee the hardware efficiency by restricting the model size only. This is mainly because there are a large number of designs with similar model size, but their structures are quite different, leading to different hardware efficiency. Therefore, it verifies the neural architecture search space and hardware design space are tightly coupled and emphasizes the importance of conducting hardware and software co-exploration. In Figure 8(b), we unveil the fundamental difference be- tween co-exploration and hardware-aware architecture search. In this figure, the black crosses and red circles represent the valid design points in HW-aware NAS and co-exploration search spaces, respectively. We can observe that the HW-aware NAS has a much narrower search space than the proposed co- exploration approach. Basically, HW-aware NAS will prune the architectures with high accuracy but fail to meet hardware specifications on fixed hardware design. However, by opening the hardware design space, it is possible to find a tailor-made hardware design for the pruned architectures to make them meet the hardware specifications. Therefore, compared with the HW-aware NAS, the co-exploration approach enlarges the search space. As a result, it can make better tradeoffs between accuracy and hardware efficiency. # VI. CONCLUSION AND FUTURE WORK We proposed the co-exploration framework to open up the hardware design freedom in neural architecture search. 
This is driven by the trend that the hardware platform can be programmed or even fully customized for the best performance in cloud and edge computing applications. This paper took the FPGA as a vehicle to show that through jointly exploring architecture search space and hardware design space, the design Pareto frontier on accuracy and hardware efficiency tradeoffs can be significantly pushed forward. The framework proposed in this paper will be the base for neural architecture and hardware co-exploration. Based on the proposed co-exploration framework, we list two promising future directions as follows. First, mixed-precision was re- cently proposed [42] for a fixed architecture; in the future, we plan to further co-explore neural architectures, quantizations and hardware designs. Second, innovations on computing architecture achieves great success for executing inference phase of neural networks [43], we plan to apply the proposed framework to co-explore neural architectures with the novel computing architectures (e.g., computing-in-memory). # REFERENCES [1] H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang, “Efficient architecture search by network transformation.” AAAI, 2018. [2] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning transferable architectures for scalable image recognition,” in IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8697–8710. [3] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. Le, and A. Kurakin, “Large-scale evolution of image classifiers,” arXiv preprint arXiv:1703.01041, 2017. [4] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu, “Hierarchical representations for efficient architecture search,” arXiv preprint arXiv:1711.00436, 2017. [5] V. Nekrasov, H. Chen, C. Shen, and I. Reid, “Architecture Search of Dynamic Cells for Semantic Video Segmentation,” arXiv preprint arXiv:1904.02371, 2019. [6] B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Va- jda, Y. Jia, and K. Keutzer, “Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search,” arXiv preprint arXiv:1812.03443, 2018. [7] M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q. V. Le, “Mnasnet: Platform-aware neural architecture search for mobile,” arXiv preprint arXiv:1807.11626, 2018. [8] H. Cai, L. Zhu, and S. Han, “Proxylessnas: Direct neural architecture search on target task and hardware,” arXiv preprint arXiv:1812.00332, 2018. 9 10 IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS [9] Amazon, “Ec2 f1 instances,” https://aws.amazon.com/ ec2/instance- types/f1, 2017, accessed: 2019-01-20. [10] Microsoft, “Real-time ai: Microsoft announces preview of project https://blogs.microsoft.com/ai/build-2018-project- brainwave,” brainwave/, 2018, accessed: 2019-01-20. [26] B. Baker, O. Gupta, N. Naik, and R. Raskar, “Designing neural learning,” arXiv preprint network architectures using reinforcement arXiv:1611.02167, 2016. [27] L. Xie and A. Yuille, “Genetic cnn,” in International Conference on Computer Vision (ICCV). IEEE, 2017, pp. 1388–1397. [11] J. Wang, Q. Lou, X. Zhang, C. Zhu, Y. Lin, and D. Chen, “Design flow of accelerating hybrid extremely low bit-width neural network in embedded fpga,” in 2018 28th International Conference on Field Programmable Logic and Applications (FPL). [12] F. Shafiq, T. Yamada, A. T. Vilchez, and S. 
Dasgupta, “Automated flow for compressing convolution neural networks for efficient edge- computation with fpga,” arXiv preprint arXiv:1712.06272, 2017. [13] S. Venkataramani, A. Ranjan, S. Banerjee, D. Das, S. Avancha, A. Ja- gannathan, A. Durg, D. Nagaraj, B. Kaul, P. Dubey et al., “Scaledeep: A scalable compute architecture for learning and evaluating deep net- works,” in ACM SIGARCH Computer Architecture News, vol. 45, no. 2. ACM, 2017, pp. 13–26. [14] P. Whatmough, S. Lee, N. Mulholland, P. Hansen, S. Kodali, D. Brooks, and G. Wei, “Dnn engine: A 16nm sub-uj deep neural network inference accelerator for the embedded masses,” in 2017 IEEE Hot Chips 29 Symposium, 2017. [15] C. Zhang, D. Wu, J. Sun, G. Sun, G. Luo, and J. Cong, “Energy-efficient cnn implementation on a deeply pipelined fpga cluster,” in International Symposium on Low Power Electronics and Design (ISLPED). ACM, 2016, pp. 326–331. [16] B. Zoph and Q. V. Le, “Neural architecture search with reinforcement learning,” in International Conference on Learning Representations (ICLR), 2017. [17] H. Liu, K. Simonyan, and Y. Yang, “Darts: Differentiable architecture search,” arXiv preprint arXiv:1806.09055, 2018. [18] G. Bender, P.-J. Kindermans, B. Zoph, V. Vasudevan, and Q. Le, “Under- standing and simplifying one-shot architecture search,” in International Conference on Machine Learning, 2018, pp. 549–558. [19] E. Chung, J. Fowers, K. Ovtcharov, M. Papamichael, A. Caulfield, T. Massengill, M. Liu, D. Lo, S. Alkalay, M. Haselman et al., “Serving dnns in real time at datacenter scale with project brainwave,” IEEE Micro, vol. 38, no. 2, pp. 8–20, 2018. [20] J. Fowers, K. Ovtcharov, M. Papamichael, T. Massengill, M. Liu, D. Lo, S. Alkalay, M. Haselman, L. Adams, M. Ghandi et al., “A configurable cloud-scale dnn processor for real-time ai,” in International Symposium on Computer Architecture (ISCA). [28] Y.-H. Kim, B. Reddy, S. Yun, and C. Seo, “Nemo: Neuro-evolution with multiobjective optimization of deep neural network for speed and accuracy,” in ICML 2017 AutoML Workshop, 2017. [29] C. Zhang, P. Li, G. Sun, Y. Guan, B. Xiao, and J. Cong, “Opti- mizing fpga-based accelerator design for deep convolutional neural networks,” in International Symposium on Field-Programmable Gate Arrays (FPGA). ACM, 2015, pp. 161–170. [30] Y. Shen, M. Ferdman, and P. Milder, “Maximizing cnn accelerator efficiency through resource partitioning,” in International Symposium on Computer Architecture (ISCA). [31] X. Zhang, J. Wang, C. Zhu, Y. Lin, J. Xiong, W.-m. Hwu, and D. Chen, “Dnnbuilder: An automated tool for building high-performance dnn hardware accelerators for fpgas,” in International Conference on Computer-Aided Design (ICCAD). ACM, 2018, p. 56. [32] X. Wei, Y. Liang, X. Li, C. H. Yu, P. Zhang, and J. Cong, “Tgpa: tile-grained pipeline architecture for low latency cnn inference,” in International Conference on Computer-Aided Design (ICCAD). IEEE, 2018, pp. 1–8. [33] C. Hao, X. Zhang, Y. Li, S. Huang, J. Xiong, K. Rupnow, W.-m. Hwu, and D. Chen, “FPGA/DNN Co-Design: An Efficient Design Methodology for IoT Intelligence on the Edge,” in Proceedings of the 56th Annual Design Automation Conference 2019. ACM, 2019, p. 206. [34] W. Jiang, E. H.-M. Sha, Q. Zhuge, L. Yang, H. Dong, and X. Chen, “On the design of minimal-cost pipeline systems satisfying hard/soft real- time constraints,” IEEE Transactions on Emerging Topics in Computing, 2018. [35] R. J. 
Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Machine learning, vol. 8, no. 3-4, pp. 229–256, 1992. [36] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in International Conference on Machine Learning (ICML), 2010, pp. 807–814. [37] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015. [21] W. Jiang, E. H.-M. Sha, Q. Zhuge, L. Yang, X. Chen, and J. Hu, “Heterogeneous fpga-based cost-optimal design for timing-constrained cnns,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 11, pp. 2542–2554, 2018. [38] Y. Ma, Y. Cao, S. Vrudhula, and J.-s. Seo, “Performance modeling for cnn inference accelerators on fpga,” IEEE Transactions on Computer- Aided Design of Integrated Circuits and Systems, 2019. [22] W. Zhang, J. Zhang, M. Shen, G. Luo, and N. Xiao, “An efficient mapping approach to large-scale dnns on multi-fpga architectures,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2019. [23] T. Geng, T. Wang, A. Sanaullah, C. Yang, R. Patel, and M. Herbordt, “A framework for acceleration of cnn training on deeply-pipelined fpga clusters with work and weight load balancing,” in International Conference on Field Programmable Logic and Applications (FPL). IEEE, 2018, pp. 394–3944. [24] T. Geng, T. Wang, A. Sanaullah, C. Yang, R. Xu, R. Patel, and M. Herbordt, “Fpdeep: Acceleration and load balancing of cnn training on fpga clusters,” in International Symposium on Field-Programmable Custom Computing Machines (FCCM). [25] J. D. Schaffer, D. Whitley, and L. J. Eshelman, “Combinations of genetic algorithms and neural networks: A survey of the state of the art,” in International Workshop on Combinations of Genetic Algorithms and Neural Networks (COGANN). [39] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014. [40] A. Sergeev and M. Del Balso, “Horovod: fast and easy distributed deep learning in tensorflow,” arXiv preprint arXiv:1802.05799, 2018. [41] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” arXiv preprint arXiv:1801.04381, 2018. [42] K. Wang, Z. Liu, Y. Lin, J. Lin, and S. Han, “HAQ: Hardware-Aware Automated Quantization with Mixed Precision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 8612–8620. [43] W.-H. Chen, K.-X. Li, W.-Y. Lin, K.-H. Hsu, P.-Y. Li, C.-H. Yang, C.-X. Xue, E.-Y. Yang, Y.-K. Chen, Y.-S. Chang et al., “A 65nm 1Mb nonvolatile computing-in-memory ReRAM macro with sub-16ns multiply-and-accumulate for binary DNN AI edge processors,” in 2018 IEEE International Solid-State Circuits Conference-(ISSCC). IEEE,
{ "id": "1904.02371" }
1907.01752
On the Weaknesses of Reinforcement Learning for Neural Machine Translation
Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, but rather from changes in the shape of the distribution curve.
http://arxiv.org/pdf/1907.01752
Leshem Choshen, Lior Fox, Zohar Aizenbud, Omri Abend
cs.CL, cs.AI, cs.LG
Accepted to ICLR 2020 (matching content, different style)
null
cs.CL
20190703
20200115
0 2 0 2 n a J 5 1 ] L C . s c [ 4 v 2 5 7 1 0 . 7 0 9 1 : v i X r a # On the Weaknesses of Reinforcement Learning for Neural Machine Translation Leshem Choshen1 Lior Fox1 Zohar Aizenbud1 Omri Abend1,2 1School of Computer Science and Engineering, 2 Department of Cognitive Sciences The Hebrew University of Jerusalem {leshem.choshen, lior.fox, zohar.aizenbud}@mail.huji.ac.il [email protected] # Abstract Reinforcement learning (RL) is frequently used to increase performance in text gen- including machine translation eration tasks, (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adver- sarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge. In fact, our results sug- gest that RL practices in MT are likely to im- prove performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further sug- gest that observed gains may be due to effects unrelated to the training signal, but rather from changes in the shape of the distribution curve. interest and strong results, little is known about what accounts for these performance gains, and the training dynamics involved. We present the following contributions. First, our theoretical analysis shows that commonly used approximation methods are theoretically ill- founded, and may converge to parameter values that do not minimize the risk, nor are local min- ima thereof (§2.3). Second, using both naturalistic experiments and carefully constructed simulations, we show that performance gains observed in the literature likely stem not from making target tokens the most prob- able, but from unrelated effects, such as increasing the peakiness of the output distribution (i.e., the probability mass of the most probable tokens). We do so by comparing a setting where the reward is informative, vs. one where it is constant. In §4 we discuss this peakiness effect (PKE). # Introduction Reinforcement learning (RL) is an appealing path for advancement in Machine Translation (MT), as it allows training systems to optimize non- differentiable score functions, common in MT evaluation, as well as its ability to tackle the “ex- posure bias” (Ranzato et al., 2015) in standard training, namely that the model is not exposed dur- ing training to incorrectly generated tokens, and is thus unlikely to recover from generating such to- kens at test time. These motivations have led to much interest in RL for text generation in gen- eral and MT in particular. Various policy gradi- ent methods have been used, notably REINFORCE (Williams, 1992) and variants thereof (e.g., Ran- zato et al., 2015; Edunov et al., 2018) and Mini- mum Risk Training (MRT; e.g., Och, 2003; Shen et al., 2016). Another popular use of RL is for training GANs (Yang et al., 2018; Tevet et al., 2018). See §2. Nevertheless, despite increasing Third, we show that promoting the target token to be the mode is likely to take a prohibitively long time. The only case we find, where improvements are likely, is where the target token is among the first 2-3 most probable tokens according to the pre- trained model. These findings suggest that REIN- FORCE (§5) and CMRT (§6) are likely to improve over the pre-trained model only under the best pos- sible conditions, i.e., where the pre-trained model is “nearly” correct. 
We conclude by discussing other RL practices in MT which should be avoided for practical and theoretical reasons, and briefly discuss alternative RL approaches that will allow RL to tackle a larger class of errors in pre-trained models (§7).

# 2 RL in Machine Translation

# 2.1 Setting and Notation

An MT system generates tokens y = (y1, ..., yn) from a vocabulary V one token at a time. The probability of generating yi given preceding tokens y<i is given by Pθ(·|x, y<i), where x is the source sentence and θ are the model parameters. For each generated token y, we denote with r(y; y<i, x, y(ref)) the score, or reward, for generating y given y<i, x, and the reference sentence y(ref). For brevity, we omit parameters where they are fixed within context. For simplicity, we assume r does not depend on the following tokens y>i. We also assume there is exactly one valid target token, as in practice MT systems are trained against a single reference translation per sentence (Schulz et al., 2018). In practice, either a token-level reward is approximated using Monte-Carlo methods (e.g., Yang et al., 2018), or a sentence-level (sparse) reward is given at the end of the episode (sentence). The latter is equivalent to a uniform token-level reward.

r is often either the negative log-likelihood, or based on standard MT metrics, e.g., BLEU (Papineni et al., 2002). When applying RL in MT, we seek to maximize the expected reward (denoted with R); i.e., to find

θ* = argmax_θ R(θ) = argmax_θ E_{y∼P_θ}[r(y)]   (1)

# 2.2 REINFORCE

For a given source sentence and partially generated sentence y<i, REINFORCE (Williams, 1992) samples k tokens (k is a hyperparameter) S = (y_1, ..., y_k) from P_θ and updates θ according to this rule:

Δθ ∝ (1/k) Σ_{i=1}^{k} r(y_i) ∇ log P_θ(y_i)   (2)

The right-hand side of Eq. (2) is an unbiased estimator of the gradient of the objective function, i.e., E[Δθ] ∝ ∇θR(θ). Therefore, REINFORCE performs a form of stochastic gradient ascent on R, and has similar formal guarantees. It follows that if R is constant with respect to θ, then the expected Δθ prescribed by REINFORCE is zero. We note that r may be shifted by a constant term (called a "baseline") without affecting the optimal value for θ.

REINFORCE is used by a variety of works in MT, text generation, and image-to-text tasks (Liu et al., 2016; Wu et al., 2018; Rennie et al., 2017; Shetty et al., 2017; Hendricks et al., 2016), in isolation or as a part of training (Ranzato et al., 2015). Lately, an especially prominent use for REINFORCE is adversarial training with discrete data, where another network predicts the reward (GAN). For some recent work on RL for NMT, see (Zhang et al., 2016; Li et al., 2017; Wu et al., 2017; Yu et al., 2017; Yang et al., 2018).
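For concreteness, the following is a minimal NumPy sketch of a single REINFORCE update (Eq. 2) for one softmax distribution over the vocabulary; the reward function, vocabulary size, and hyperparameters are illustrative stand-ins rather than the paper's experimental setup.

```python
import numpy as np

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def reinforce_step(theta, reward_fn, k=5, lr=1e-4, rng=np.random.default_rng(0)):
    """One REINFORCE update for a single softmax policy over a vocabulary.
    For a sampled token y, the gradient of log P(y) w.r.t. theta is onehot(y) - P."""
    p = softmax(theta)
    samples = rng.choice(len(theta), size=k, p=p)
    grad = np.zeros_like(theta)
    for y in samples:
        grad_log_p = -p.copy()
        grad_log_p[y] += 1.0
        grad += reward_fn(y) * grad_log_p
    return theta + lr * grad / k          # Eq. (2): average over the k samples

# Toy example: vocabulary of 4 tokens, token 2 is the (only) rewarded one.
theta = np.array([2.0, 1.0, 0.5, 0.0])
reward = lambda y: 1.0 if y == 2 else 0.0
theta = reinforce_step(theta, reward)
```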
# 2.3 Minimum Risk Training

The term Minimum Risk Training (MRT) is used ambiguously in MT to refer either to the application of REINFORCE to minimizing the risk (equivalently, to maximizing the expected reward, the negative loss), or more commonly to a somewhat different estimation method, which we term Contrastive MRT (CMRT) and turn now to analyzing. CMRT was proposed by Och (2003), adapted to NMT by Shen et al. (2016), and often used since (Ayana et al., 2016; Neubig, 2016; Shen et al., 2017; Edunov et al., 2018; Makarov and Clematide, 2018; Neubig et al., 2018). The method works as follows: at each iteration, sample k tokens S = {y_1, . . . , y_k} from P_θ, and update θ according to the gradient of

R̃(θ, S) = Σ_{i=1}^{k} Q_{θ,S}(y_i) r(y_i) = E_{y∼Q}[r(y)],   where   Q_{θ,S}(y_i) = P(y_i)^α / Σ_{y∈S} P(y)^α

Commonly (but not universally), deduplication is performed, so R̃ sums over a set of unique values (Sennrich et al., 2017). This changes little in our empirical results and theoretical analysis.

Despite the resemblance in the definitions of R (Eq. (1)) and R̃ (indeed, R̃ is sometimes presented as an approximation of R), they differ in two important aspects. First, Q's support is S, so increasing Q(y_i) for some y_i necessarily comes at the expense of Q(y) for some y ∈ S. In contrast, increasing P(y_i), as in REINFORCE, may come at the expense of P(y) for any y ∈ V. Second, α is a smoothness parameter: the closer α is to 0, the closer Q is to being uniform.

We show in Appendix A that, despite its name, CMRT does not optimize R, nor does it optimize E[R̃]. That is, it may well converge to values that are not local maxima of R, making it theoretically ill-founded.¹ However, given that CMRT is often used in practice, the strong results it has yielded, and the absence of theory explaining it, we discuss it here.

¹Sakaguchi et al. (2017) discuss the relation between CMRT and REINFORCE, claiming that CMRT is a variant of REINFORCE. Appendix A shows that CMRT does not in fact optimize the same objective.

Given a sample S, the gradient of R̃ is given by

∇R̃ = α Σ_{i=1}^{k} ( Q(y_i) · r(y_i) · ∇ log P(y_i) ) − E_Q[r] ∇ log Z(S)   (3)

where Z(S) = Σ_{y_i∈S} P(y_i)^α. See Appendix B.

Comparing Equations (2) and (3), the differences between REINFORCE and CMRT are reflected again. First, ∇R̃ has an additional term, proportional to ∇ log Z(S), which yields the contrastive effect. This contrast may improve the rate of convergence, since it counters the decrease of probability mass for non-sampled tokens.

Second, for a given S, the relative weighting of the gradients ∇ log P(y_i) is proportional to r(y_i)Q(y_i), or equivalently to r(y_i)P(y_i)^α. CMRT with deduplication sums over distinct values in S (Eq. (3)), while REINFORCE sums over all values; the relative weight of a unique value y_i in REINFORCE is therefore proportional to r(y_i) times the number of times y_i appears in S. For α = 1, the expected values of these relative weights are the same, and so for α < 1 (as is commonly used), more weight is given to improbable tokens, which could also have a positive effect on the convergence rate.² However, if α is too close to 0, ∇R̃ vanishes. This tradeoff explains the importance of tuning α reported in the literature. In §6 we present simulations with CMRT, showing very similar trends to those of REINFORCE.

# 3 Motivating Discussion

Implementing a stochastic gradient ascent, REINFORCE is guaranteed to converge to a stationary point of R under broad conditions. However, not much is known about its convergence rate under the prevailing conditions in NMT. We begin with a qualitative, motivating analysis of these questions. As work on language generation has empirically shown, RNNs quickly learn to output very peaky distributions (Press et al., 2017). This tendency is advantageous for generating fluent sentences with high probability, but may also entail slower convergence rates when using RL to fine-tune the model, because RL methods used in text generation sample from the (pretrained) policy distribution, which means they mostly sample what the pretrained model deems to be likely.
2Not performing deduplication results in assigning higher relative weight to high-probability tokens, which may have an adverse effect on convergence rate. For an implementation without deduplication, see THUMT (Zhang et al., 2017). Since the pretrained model (or policy) is peaky, ex- ploration of other potentially more rewarding to- kens will be limited, hampering convergence. Intuitively, REINFORCE increases the probabil- ities of successful (positively rewarding) observa- tions, weighing updates by how rewarding they were. When sampling a handful of tokens in each context (source sentence x and generated prefix y<i), and where the number of epochs is not large, it is unlikely that more than a few unique tokens will be sampled from Pθ(·|x, y<i). (In practice, k is typically between 1 and 20, and the number of epochs between 1 and 100.) It is thus unlikely that anything but the initially most probable can- didates will be observed. Consequently, REIN- FORCE initially raises their probabilities, even if more rewarding tokens can be found down the list. We thus hypothesize the peakiness of the distri- bution, i.e., the probability mass allocated to the most probable tokens, will increase, at least in the first phase. We call this the peakiness-effect (PKE), and show it occurs both in simulations (§4.1) and in full-scale NMT experiments (§4.2). With more iterations, the most-rewarding to- kens will be eventually sampled, and gradually gain probability mass. This discussion suggests that training will be extremely sample-inefficient. We assess the rate of convergence empirically in §5, finding this to be indeed the case. # 4 The Peakiness Effect We turn to demonstrate that the initially most probable tokens will initially gain probability mass, even if they are not the most rewarding, yielding a PKE. Caccia et al. (2018) recently observed in the context of language modeling using GANs that performance gains similar to those GAN yield can be achieved by decreasing the temperature for the prediction softmax (i.e., making it peakier). How- ever, they proposed no account as to what causes this effect. Our findings propose an underlying mechanism leading to this trend. We return to this point in §7. Furthermore, given their findings, it is reasonable to assume that our results are rele- vant for RL use in other generation tasks, whose output space too is discrete, high-dimensional and concentrated. # 4.1 Controlled Simulations We experiment with a 1-layer softmax model, that predicts a single token i ∈ V with probability j eθj . θ = {θj}j∈V are the model’s parameters. This model simulates the top of any MT decoder that ends with a softmax layer, as essentially all NMT decoders do. To make experiments realis- tic, we use similar parameters as those reported in the influential Transformer NMT system (Vaswani et al., 2017). Specifically, the size of V (distinct BPE tokens) is 30715, and the initial values for θ were sampled from 1000 sets of logits taken from decoding the standard newstest2013 development set, using a pretrained Transformer model. The model was pretrained on WMT2015 training data (Bojar et al., 2015). Hyperparameters are reported in Appendix C. We define one of the tokens in V to be the target token and denote it with ybest. We assign deterministic token reward, this makes learning easier than when relying on approxima- tions and our predictions optimistic. We experi- ment with two reward functions: 1. 
1. Simulated Reward: r(y) = 2 for y = ybest, r(y) = 1 if y is one of the 10 initially highest scoring tokens, and r(y) = 0 otherwise. This simulates a condition where the pretrained model is of decent but sub-optimal quality. r here is at the scale of popular rewards used in MT, such as GAN-based rewards or BLEU (which are between 0 and 1).

2. Constant Reward: r is constantly equal to 1, for all tokens. This setting is aimed to confirm that PKE is not a result of the signal carried by the reward.

Experiments with the first setting were run 100 times, each time for 50K steps, updating θ after each step. With the second setting, it is sufficient to take a single step at a time, as the expected update after each step is zero, and so any PKE seen in a single step is only accentuated in the next. It is, therefore, more telling to run more repetitions rather than more steps per initialization. We therefore sample 10000 pretrained distributions, and perform a single REINFORCE step on each.

As RL training in NMT lasts about 30 epochs before stopping, samples about 100K tokens per epoch, and as the network already predicts ybest in about two thirds of the contexts,3 we estimate the number of steps used in practice to be on the order of magnitude of 1M. For visual clarity, we present figures for 50K–100K steps. However, full experiments (with 1M steps) exhibit similar trends: where REINFORCE was not close to converging after 50K steps, the same was true after 1M steps.

3Based on our NMT experiments, which we assume to be representative of the error rate of other NMT systems.

We evaluate the peakiness of a distribution in terms of the probability of the most probable token (the mode), the total probability of the ten most probable tokens, and the entropy of the distribution (lower entropy indicates more peakiness).

Results. The distributions indeed become peakier in terms of all three measures: the mode's probability and the total probability of the 10 most probable tokens increase, and the entropy decreases. Figure 1a presents the histogram of the update size, i.e., the change in the total probability of the 10 most probable tokens in the Constant Reward setting after a single step. Figure 1b depicts similar statistics for the mode. The average entropy in the pretrained model is 2.9; it is reduced to 2.85 after one REINFORCE step.

The Simulated Reward setting shows similar trends. For example, entropy decreases from 3 to about 0.001 in 100K steps. This extreme decrease suggests the policy effectively becomes deterministic. PKE is achieved in a few hundred steps, usually before other effects become prominent (see Figure 2), and is stronger than for Constant Reward.

# 4.2 NMT Experiments

We turn to analyzing a real-world application of REINFORCE to NMT. Important differences between this setting and the previous simulations are: (1) it is rare in NMT for REINFORCE to sample from the same conditional distribution more than a handful of times, given the number of source sentences x and sentence prefixes y<i (contexts); and (2) in NMT, Pθ(·|x, y<i) shares parameters between contexts, which means that updating Pθ for one context may influence Pθ for another.

We follow the same pretraining as in §4.1. We then follow Yang et al. (2018) in defining the reward function based on the expected BLEU score. Expected BLEU is computed by sampling suffixes for the sentence, and averaging the BLEU score of the sampled sentences against the reference.
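The expected-BLEU reward just described can be sketched as a simple Monte-Carlo routine. The snippet below is a schematic illustration, not the paper's code: it scores toy token lists with NLTK's sentence_bleu, the suffix sampler is a random placeholder standing in for the NMT decoder, and the sentence itself is invented; only the number of samples (20) mirrors the "sentence rolls" figure mentioned in Appendix C.

```python
import random
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def expected_bleu(prefix, reference, sample_suffix, n_samples=20):
    """Monte-Carlo estimate of the expected sentence-BLEU reward for a prefix:
    sample full continuations, score each against the reference, and average."""
    smooth = SmoothingFunction().method1
    scores = []
    for _ in range(n_samples):
        hypothesis = prefix + sample_suffix(prefix)
        scores.append(sentence_bleu([reference], hypothesis, smoothing_function=smooth))
    return sum(scores) / n_samples

# A stand-in "policy" that just picks reference words at random; a real system
# would sample suffixes from the NMT decoder instead.
reference = "the cat sat on the mat".split()
def toy_sampler(prefix, rng=random.Random(0)):
    return [rng.choice(reference) for _ in range(len(reference) - len(prefix))]

print(expected_bleu(["the", "cat"], reference, toy_sampler))
```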
We use early stopping with a patience of 10 epochs, where each epoch consists of 5000 sentences sampled from the WMT2015 (Bojar et al., 2015) German-English training data. We use k = 1. We retuned the learning-rate and positive-baseline settings against the development set. Other hyper-parameters were an exact replication of the experiments reported in (Yang et al., 2018).

Figure 1: A histogram of the update size (x-axis) to the total probability of the 10 most probable tokens (left) or the most probable token (right) in the Constant Reward setting. An update is overwhelmingly more probable to increase this probability than to decrease it.

Figure 2: Token probabilities through REINFORCE training, in the controlled simulations in the Simulated Reward setting. The left/center/right figures correspond to simulations where the target token (ybest) was initially the second/third/fourth most probable token. The green line corresponds to the target token, yellow lines to medium-reward tokens and red lines to no-reward tokens.

Results. Results indicate an increase in the peakiness of the conditional distributions. Our results are based on a sample of 1000 contexts from the pretrained model, and another (independent) sample from the reinforced model. Indeed, the modes of the conditional distributions tend to increase. Figure 3 presents the distribution of the modes' probability in the reinforced conditional distributions compared with the pretrained model, showing a shift of probability mass towards higher probabilities for the mode, following RL. Another indication of the increased peakiness is the decrease in the average entropy of Pθ, which was reduced from 3.45 in the pretrained model to an average of 2.82 following RL. This more modest reduction in entropy (compared to §4.1) might also suggest that the procedure did not converge to the optimal value for θ, as then we would have expected the entropy to substantially drop, if not to 0 (overfit), then to the average entropy of valid next tokens (given the source and a prefix of the sentence).

# 5 Performance following REINFORCE

We now turn to assessing under what conditions it is likely that REINFORCE will lead to an improvement in the performance of an NMT system. As in the previous section, we use both controlled simulations and NMT experiments.

# 5.1 Controlled Simulations

We use the same model and experimental setup described in Section 4.1, this time only exploring the Simulated Reward setting, as a Constant Reward is not expected to converge to any meaningful θ. Results are averaged over 100 conditional distributions sampled from the pretrained model.

Caution should be exercised when determining the learning rate (LR). Common LRs used in the NMT literature are on the scale of 10−4.

Figure 3: The cumulative distribution of the probability of the most likely token in the NMT experiments.
The green distribution corresponds to the pretrained model, and the blue corresponds to the reinforced model. The y-axis is the proportion of conditional probabilities with a mode of value ≤ x (the x-axis). Note that a lower cumulative percentage means a more peaked output distribution.

However, in our simulations, no LR smaller than 0.1 yielded any improvement in R. We thus set the LR to be 0.1. We note that in our simulations, a higher learning rate means faster convergence, as our reward is noise-free: it is always highest for the best option. In practice, increasing the learning rate may deteriorate results, as it may cause the system to overfit to the sampled instances. Indeed, when increasing the learning rate in our NMT experiments (see below) by an order of magnitude, early stopping caused the RL procedure to stop without any parameter updates.

Figure 2 shows how Pθ changes over the first 50K steps of REINFORCE (probabilities are averaged over 100 repetitions), for cases where ybest was initially the second, third or fourth most probable token. Although these are the easiest settings for REINFORCE, and despite the high learning rate, it fails to make ybest the mode of the distribution within 100K steps, unless ybest was initially the second most probable. In cases where ybest is initially of a lower rank than four, it is hard to see any increase in its probability, even after 1M steps.

# 5.2 NMT Experiments

We trained an NMT system, using the same procedure as in Section 4.2, and report BLEU scores over the news2014 test set. After training with an expected BLEU reward, we indeed see a minor improvement which is consistent between trials and pretrained models.

Figure 4: Cumulative percentage of contexts where the pretrained model ranks ybest in rank x or below and where it does not rank ybest first (x = 0). In about half the cases it is ranked fourth or below.

While the pretrain BLEU score is 30.31, the reinforced one is 30.73. Analyzing which words were influenced by the RL procedure, we begin by computing the cumulative probability of the target token ybest being ranked lower than a given rank according to the pretrained model. Results (Figure 4) show that in about half of the cases, ybest is not among the top three choices of the pretrained model, and we thus expect it not to gain substantial probability following REINFORCE, according to our simulations.

We next turn to comparing the ranks the reinforced model assigns to the target tokens with their ranks according to the pretrained model. Figure 5 presents the difference between the probability that ybest is ranked at a given rank following RL and the probability it is ranked there initially. Results indicate that indeed more target tokens are ranked first, and fewer second, but little consistent shift of probability mass occurs otherwise across the ten first ranks. It is possible that RL has managed to push ybest in some cases from very low ranks (<1000) to medium-low ranks (between 10 and 1000). However, token probabilities in these ranks are so low that this is unlikely to affect the system outputs in any way. This fits well with the results of our simulations, which predicted that only the initially top-ranked tokens are likely to change.

In an attempt to explain the improved BLEU score following RL with PKE, we repeat the NMT experiment, this time using a constant reward of 1.
Our results present a nearly identical improvement in BLEU, achieving 30.72, and a similar pattern in the change of the target tokens' ranks (see Figure 8 in Appendix D). Therefore, there is room to suspect that even in cases where RL yields an improvement in BLEU, it may partially result from reward-independent factors, such as PKE.4

Figure 5: Difference between the ranks of ybest in the reinforced and the pretrained model. Each column x corresponds to the difference between the probability that ybest is ranked in rank x in the reinforced model and the same probability in the pretrained model.

4We tried several other reward functions as well, all of which got BLEU scores of 30.73–30.84. This improvement is very stable across metrics, trials and pretrained models.

# 6 Experiments with Contrastive MRT

In §2.3 we showed that CMRT does not, in fact, maximize R, and so does not enjoy the same theoretical guarantees as REINFORCE and similar policy gradient methods. However, it is still the RL procedure of choice in much recent work on NMT. We therefore repeat the simulations described in §4 and §5, assessing the performance of MRT in these conditions. We experiment with α = 0.005 and k = 20, common settings in the literature, and average over 100 trials.

Figure 6 shows how the distribution Pθ changes over the course of 50K update steps to θ, where ybest is taken to be the second or third initially most probable token (Simulated Reward setting). Results are similar in their trends to those obtained with REINFORCE: MRT succeeds in pushing ybest to be the highest ranked token if it was initially second, but struggles where it was initially ranked third or below. We only observe a small PKE in MRT. This is probably due to the contrastive effect, which means that tokens that were not sampled do not lose probability mass. All graphs we present here allow sampling the same token more than once in each batch (i.e., S is a sample with replacement). Simulations with deduplication show similar results.

# 7 Discussion

In this paper, we showed that the type of distributions used in NMT entails that promoting the target token to be the mode is likely to take a prohibitively long time under existing RL practices, except under the best conditions (where the pretrained model is "nearly" correct). This leads us to conclude that observed improvements from using RL for NMT are likely due either to fine-tuning the most probable tokens in the pretrained model (an effect which may be more easily achieved using reranking methods, and uses but little of the power of RL methods), or to effects unrelated to the signal carried by the reward, such as PKE. Another contribution of this paper is in showing that CMRT does not optimize the expected reward and is thus theoretically unmotivated.

A number of reasons lead us to believe that in our NMT experiments, improvements are not due to the reward function, but to artefacts such as PKE. First, subtracting a constant baseline from r, so as to make the expected reward zero, disallows learning. This is surprising, as REINFORCE, generally and in our simulations, converges faster when the reward is centered around zero, and so the fact that this procedure disallows learning here hints that other factors are in play. As PKE can be observed even where the reward is constant (if the expected reward is positive; see §4.1), this suggests PKE may play a role here.
Second, we observe more peakiness in the reinforced model, and in such cases we expect improvements in BLEU (Caccia et al., 2018). Third, we achieve similar results with a constant reward in our NMT experiments (§5.2). Fourth, our controlled simulations show that asymptotic convergence is not reached in any but the easiest conditions (§5.1).

Our analysis further suggests that gradient clipping, sometimes used in NMT (Zhang et al., 2016), is expected to hinder convergence further. It should be avoided when using REINFORCE, as it violates REINFORCE's assumptions.

The per-token sampling as done in our experiments is more exploratory than beam search (Wu et al., 2018), reducing PKE. Furthermore, the latter does not sample from the behavior policy, yet does not properly account for being off-policy in the parameter updates.

Adding the reference to the sample S, which some implementations allow (Sennrich et al., 2017), may help reduce the problem of never sampling the target tokens. However, as Edunov et al. (2018) point out, this practice may lower results, as it may destabilize training by leading the model to improve over outputs it cannot generalize over, as they are very different from anything the model assigns a high probability to, at the cost of other outputs.

Figure 6: The probability of different tokens following CMRT, in the controlled simulations in the Simulated Reward setting. The left/right figures correspond to simulations where the target token (ybest) was initially the second/third most probable token. The green line corresponds to the target token, yellow lines to medium-reward tokens and red lines to tokens with r(y) = 0.

# 8 Conclusion

The standard MT scenario poses several uncommon challenges for RL. First, the action space in MT problems is a high-dimensional discrete space (generally of the size of the vocabulary of the target language, or the product thereof for sentences). This contrasts with the more common scenario studied by contemporary RL methods, which focuses mostly on much smaller discrete action spaces (e.g., video games (Mnih et al., 2015, 2016)), or continuous action spaces of relatively low dimensions (e.g., simulation of robotic control tasks (Lillicrap et al., 2015)). Second, reward for MT is naturally very sparse – almost all possible sentences are "wrong" (hence, not rewarding) in a given context. Finally, it is common in MT to use RL for tuning a pretrained model. Using a pretrained model ameliorates the last problem. But then, these pretrained models are in general quite peaky, and because training is done on-policy – that is, actions are being sampled from the same model being optimized – exploration is inherently limited.

Here we argued that, taken together, these challenges result in significant weaknesses for current RL practices for NMT, which may ultimately prevent them from being truly useful. At least some of these challenges have been widely studied in the RL literature, with numerous techniques developed to address them, but they have not yet been adopted in NLP. We turn to discuss some of them.
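To quantify how limited on-policy exploration becomes as the pretrained policy gets peakier, the short simulation below estimates how many i.i.d. draws from a softmax policy are needed, in expectation, before a particular low-probability token is sampled (a geometric distribution, so on average 1/P(y_best) draws). The vocabulary size, logits, and token indices are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

rng = np.random.default_rng(0)
theta = rng.normal(size=30_000)           # a stand-in vocabulary-sized logit vector
for peak in (0.0, 5.0, 10.0):             # increasing peakiness of the pretrained policy
    t = theta.copy()
    t[0] += peak                          # concentrate mass on one token
    p = softmax(t)
    p_best = p[1234]                      # pretend the rewarding token sits here
    print(f"peak={peak:>4}: P(y_best)={p_best:.2e}, expected draws={1/p_best:,.0f}")
```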
Off-policy methods, in which observations are sampled from a different policy than the one being currently optimized, are prominent in RL (Watkins and Dayan, 1992; Sutton and Barto, 1998), and were also studied in the context of policy gradient methods (Degris et al., 2012; Silver et al., 2014). In principle, such methods allow learning from a more “exploratory” policy. Moreover, a key mo- tivation for using α in CMRT is smoothing; off- policy sampling allows smoothing while keeping convergence guarantees. In its basic form, exploration in REINFORCE re- lies on stochasticity in the action-selection (in MT, this is due to sampling). More sophisticated explo- ration methods have been extensively studied, for example using measures for the exploratory use- fulness of states or actions (Fox et al., 2018), or re- lying on parameter-space noise rather than action- space noise (Plappert et al., 2017). For MT, an additional challenge is that even ef- fective exploration (sampling diverse sets of ob- servations), may not be enough, since the state- action space is too large to be effectively covered, with almost all sentences being not rewarding. Re- cently, diversity-based and multi-goal methods for RL were proposed to tackle similar challenges (Andrychowicz et al., 2017; Ghosh et al., 2018; Eysenbach et al., 2019). We believe the adoption of such methods is a promising path forward for the application of RL in NLP. # 9 Acknowledgments This work was supported by the Israel Science Foundation (grant no. 929/17) and by the HUJI Cyber Security Research Center in conjunction with the Israel National Cyber Bureau in the Prime Minister’s Office. # References Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob Mc- Grew, Josh Tobin, OpenAI Pieter Abbeel, and Woj- ciech Zaremba. 2017. Hindsight experience replay. In Advances in Neural Information Processing Sys- tems, pages 5048–5058. Shiqi Shen Ayana, Zhiyuan Liu, and Maosong Sun. 2016. Neural headline generation with minimum risk training. arXiv preprint arXiv:1604.01904. Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In WMT@EMNLP. Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2018. Language gans falling short. arXiv preprint arXiv:1811.02549. Thomas Degris, Martha White, and Richard S Sut- ton. 2012. Off-policy actor-critic. arXiv preprint arXiv:1205.4839. Sergey Edunov, Myle Ott, Michael Auli, David Grang- ier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to se- quence learning. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 355–364. Association for Computational Linguis- tics. Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. 2019. Diversity is all you need: Learning skills without a reward function. In Inter- national Conference on Learning Representations. Lior Fox, Leshem Choshen, and Yonatan Loewenstein. 2018. Dora the explorer: Directed outreaching rein- forcement action-selection. ICLR, abs/1804.04012. Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, and Sergey Levine. 2018. 
Divide-and- In International conquer reinforcement learning. Conference on Learning Representations. Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In ECCV. Diederik P. Kingma and Jimmy Ba. 2015. Adam: CoRR, A method for stochastic optimization. abs/1412.6980. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177–180. Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169, Copenhagen, Denmark. Association for Computa- tional Linguistics. Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. 2015. Continu- ous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971. Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2016. Optimization of image description metrics using policy gradient methods. CoRR, abs/1612.00370, 2. Peter Makarov and Simon Clematide. 2018. Neu- ral transition-based string transduction for limited- resource setting in morphology. In COLING. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asyn- chronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928–1937. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidje- land, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Na- ture, 518(7540):529–533. Graham Neubig. 2016. Lexicons and minimum risk training for neural machine translation: Naist-cmu at wat2016. In WAT@COLING. Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Pad- manabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, and Liming Wang. 2018. Xnmt: The extensible neu- ral machine translation toolkit. In AMTA. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Compu- tational Linguistics-Volume 1, pages 160–167. As- sociation for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- In Proceedings of uation of machine translation. the 40th annual meeting on association for compu- tational linguistics, pages 311–318. Association for Computational Linguistics. Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. 2017. Parameter space noise for exploration. arXiv preprint arXiv:1706.01905. O. Press, A. Bar, B. Bogin, J. Berant, and L. Wolf. 2017. Language generation with recurrent genera- tive adversarial networks without pre-training. 
In Fist Workshop on Learning to Generate Natural Language@ICML. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level train- ing with recurrent neural networks. arXiv preprint arXiv:1511.06732. Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024. and Benjamin Van Durme. 2017. Grammatical error correction with neural reinforcement learning. arXiv preprint arXiv:1707.00299. Philip Schulz, Wilker Aziz, and Trevor Cohn. 2018. A stochastic decoder for neural machine translation. In ACL. Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexan- dra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel Läubli, Antonio Vale- rio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: a toolkit for neural machine trans- lation. In EACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1715–1725. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1683–1692. Association for Compu- tational Linguistics. Shiqi Shen, Yang Liu, and Maosong Sun. 2017. Op- timizing non-decomposable evaluation metrics for Journal of Computer neural machine translation. Science and Technology, 32:796–804. Rakshith Shetty, Marcus Rohrbach, Lisa Anne Hen- dricks, Mario Fritz, and Bernt Schiele. 2017. Speak- ing the same language: Matching machine to human captions by adversarial training. In 2017 IEEE In- ternational Conference on Computer Vision (ICCV), pages 4155–4164. IEEE. David Silver, Guy Lever, Nicolas Heess, Thomas De- gris, Daan Wierstra, and Martin Riedmiller. 2014. Deterministic policy gradient algorithms. In ICML. Richard S Sutton and Andrew G Barto. 1998. Rein- forcement learning: An introduction. MIT press. G. Tevet, G. Habib, V. Shwartz, and J. Berant. 2018. Evaluating text GANs as language models. arXiv preprint arXiv:1810.12686. Tijmen Tieleman and Geoffrey Hinton. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running av- erage of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2):26–31. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998–6008. Christopher JCH Watkins and Peter Dayan. 1992. Q- learning. Machine learning, 8(3-4):279–292. Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229–256. Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and Tie- Yan Liu. 2018. A study of reinforcement learning for neural machine translation. In EMNLP. Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2017. Adver- arXiv preprint sarial neural machine translation. arXiv:1704.06933. Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Improving neural machine translation with condi- tional sequence generative adversarial nets. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1346–1355. Association for Computational Linguistics.

Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI, pages 2852–2858.

Jiacheng Zhang, Yanzhuo Ding, Shiqi Shen, Yong Cheng, Maosong Sun, Huanbo Luan, and Yang Liu. 2017. Thumt: An open source toolkit for neural machine translation. arXiv preprint arXiv:1706.06415.

Yizhe Zhang, Zhe Gan, and Lawrence Carin. 2016. Generating text via adversarial training. In NIPS workshop on Adversarial Training, volume 21.

# A Contrastive MRT does not Maximize the Expected Reward

We hereby detail a simple example where following the Contrastive MRT method (see §2.3) does not converge to the parameter value that maximizes R. Let θ be a real number in [0, 0.5], and let Pθ be a family of distributions over three values a, b, c such that:

Pθ(x) = θ if x = a;  2θ² if x = b;  1 − θ − 2θ² if x = c.

Let r(a) = 1, r(b) = 0, r(c) = 0.5. The expected reward as a function of θ is:

R(θ) = θ + 0.5(1 − θ − 2θ²)

R(θ) is uniquely maximized by θ∗ = 0.25.

Table 1 details the possible samples of size k = 2, their probabilities, the corresponding R̃ and its gradient. Standard numerical methods show that E_S[∇R̃] over possible samples S is positive for θ ∈ [0, γ) and negative for θ ∈ (γ, 0.5], where γ ≈ 0.295. This means that for any initialization of θ ∈ (0, 0.5], Contrastive MRT will converge to γ if the learning rate is sufficiently small. For θ = 0, R̃ = 0.5 and there will be no gradient updates, so the method will converge to θ = 0. Neither of these values maximizes R(θ). We note that by reparameterizing θ through some function g(θ), γ could be arbitrarily far from θ∗; g could also map to (−∞, ∞), the range more commonly used for neural network parameters. We further note that resorting to maximizing E[R̃] instead does not maximize R(θ) either. Indeed, plotting E[R̃] as a function of θ for this example yields a maximum at θ ≈ 0.32.

S | P(S) | R̃ | ∇R̃
{a,b} | 4θ³ | 1/(1+2θ) | −2/(1+2θ)²
{a,c} | 2θ(1−θ−2θ²) | 0.5 + θ/(2(1−2θ²)) | (1+2θ²)/(2(1−2θ²)²)
{b,c} | 4θ²(1−θ−2θ²) | (1−θ−2θ²)/(2(1−θ)) | θ(θ−2)/(1−θ)²
{a,a} | θ² | 1 | 0
{b,b} | 4θ⁴ | 0 | 0
{c,c} | (1−θ−2θ²)² | 0.5 | 0

Table 1: The gradients of R̃ for each possible sample S. The batch size is k = 2. Rows correspond to different sampled outcomes. ∇R̃ is the gradient of R̃ given the corresponding value for S.

# B Deriving the Gradient of R̃

Given S, recall the definition of R̃:

R̃(θ, S) = Σ_{i=1}^k Q_{θ,S}(y_i) r(y_i)

Taking the derivative w.r.t. θ:

∇R̃ = Σ_{i=1}^k r(y_i) · [ αP(y_i)^{α−1}∇P(y_i) · Z(S) − ∇Z(S) · P(y_i)^α ] / Z(S)²
    = Σ_{i=1}^k r(y_i) Q(y_i) (α∇log P(y_i) − ∇log Z(S))
    = α Σ_{i=1}^k (r(y_i) Q(y_i) ∇log P(y_i)) − E_Q[r] ∇log Z(S)

# C NMT Implementation Details

True casing and tokenization were used (Koehn et al., 2007), including escaping html symbols; the "-" character that represents a compound was changed into a separate token of =. Some preprocessing used before us converted the latter to ##AT##-##AT##, but standard tokenizers in use process that into 11 different tokens, which over-represents the significance of that character when BLEU is calculated. BPE (Sennrich et al., 2016) extracted 30715 tokens. For the MT experiments we used 6 layers in the encoder and the decoder. The size of the embeddings was 512. Gradient clipping was used with a size of 5 for pre-training (see the Discussion on why not to use it in training). We did not use attention dropout, but a 0.1 residual dropout rate was used.
In pretraining and training, sentences of more than 50 tokens were discarded. Pretraining and training were considered finished when BLEU did not increase on the development set for 10 consecutive evaluations; evaluation was done every 1000 and 5000 batches, of size 100 and 256, for pretraining and training respectively. The learning rate used for rmsprop (Tieleman and Hinton, 2012) was 0.01 in pretraining, and for adam (Kingma and Ba, 2015) with decay it was 0.005 in training. 4000 learning rate warm-up steps were used. Pretraining took about 7 days with 4 GPUs; afterwards, training took roughly the same time. Monte Carlo used 20 sentence rolls per word.

Figure 7: The probability of different tokens following REINFORCE, in the controlled simulations in the Constant Reward setting. The left/center/right figures correspond to simulations where the target token (ybest) was initially the second/third/fourth most probable token. The green line corresponds to the target token, yellow lines to medium-reward tokens and red lines to tokens with r(y) = 0.

# D Detailed Results for Constant Reward Setting

We present graphs for the constant reward setting in Figures 7 and 8. Trends are similar to the ones obtained for the Simulated Reward setting.

Figure 8: Difference between the ranks of ybest in the model reinforced with a constant reward and the pretrained model. Each column x corresponds to the difference between the probability that ybest is ranked in rank x in the reinforced model and the same probability in the pretrained model.
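Returning to the counter-example of Appendix A, its claims can be checked numerically. The snippet below assumes α = 1 and k = 2 with deduplication (an assumption consistent with the reported γ ≈ 0.295), enumerates the ordered samples, and estimates E_S[∇R̃] by finite differences; the grid resolution and epsilon are arbitrary choices.

```python
import numpy as np

def dist(theta):
    # P_theta over the three outcomes a, b, c from Appendix A.
    return np.array([theta, 2 * theta**2, 1 - theta - 2 * theta**2])

r = np.array([1.0, 0.0, 0.5])

def r_tilde(theta, ids, alpha=1.0):
    # Contrastive-MRT objective for a fixed (deduplicated) sample containing
    # the outcome indices in `ids`, as a function of theta.
    w = dist(theta)[ids] ** alpha
    return float(np.dot(w / w.sum(), r[ids]))

def expected_grad(theta, eps=1e-6):
    # E_S[ d R~(theta, S) / d theta ] over the ordered samples of size k = 2.
    p = dist(theta)
    total = 0.0
    for i in range(3):
        for j in range(3):
            ids = sorted({i, j})
            g = (r_tilde(theta + eps, ids) - r_tilde(theta - eps, ids)) / (2 * eps)
            total += p[i] * p[j] * g
    return total

thetas = np.linspace(0.01, 0.5, 491)
grads = np.array([expected_grad(t) for t in thetas])
gamma = thetas[np.argmin(np.abs(grads))]
reward = thetas + 0.5 * (1 - thetas - 2 * thetas**2)        # R(theta)
print(f"E_S[grad R~] crosses zero near theta = {gamma:.3f}")                 # about 0.295
print(f"R(theta) is maximized at theta = {thetas[np.argmax(reward)]:.3f}")   # 0.25
```

Running it reproduces a sign change of E_S[∇R̃] near θ ≈ 0.295, while the true expected reward R(θ) peaks at θ = 0.25, matching the discrepancy described in Appendix A.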
{ "id": "1810.12686" }
1906.08237
XLNet: Generalized Autoregressive Pretraining for Language Understanding
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.
http://arxiv.org/pdf/1906.08237
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le
cs.CL, cs.LG
Pretrained models and code are available at https://github.com/zihangdai/xlnet
null
cs.CL
20190619
20200102
0 2 0 2 n a J 2 ] L C . s c [ 2 v 7 3 2 8 0 . 6 0 9 1 : v i X r a # XLNet: Generalized Autoregressive Pretraining for Language Understanding Zhilin Yang∗1, Zihang Dai∗12, Yiming Yang1, Jaime Carbonell1, Ruslan Salakhutdinov1, Quoc V. Le2 1Carnegie Mellon University, 2Google AI Brain Team {zhiliny,dzihang,yiming,jgc,rsalakhu}@cs.cmu.edu, [email protected] # Abstract With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining ap- proaches based on autoregressive language modeling. However, relying on corrupt- ing the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.1. # Introduction Unsupervised representation learning has been highly successful in the domain of natural language processing [7, 22, 27, 28, 10]. Typically, these methods first pretrain neural networks on large-scale unlabeled text corpora, and then finetune the models or representations on downstream tasks. Under this shared high-level idea, different unsupervised pretraining objectives have been explored in literature. Among them, autoregressive (AR) language modeling and autoencoding (AE) have been the two most successful pretraining objectives. AR language modeling seeks to estimate the probability distribution of a text corpus with an au- toregressive model [7] |27||28]. Specifically, given a text sequence x = (x1,--- , a), AR language modeling factorizes the likelihood into a forward product p(x) = Ths p(x | X<t) or a backward one p(x) = Ther p(z | X52). A parametric model (e.g. a neural network) is trained to model each conditional distribution. Since an AR language model is only trained to encode a uni-directional con- text (either forward or backward), it is not effective at modeling deep bidirectional contexts. On the contrary, downstream language understanding tasks often require bidirectional context information. This results in a gap between AR language modeling and effective pretraining. In comparison, AE based pretraining does not perform explicit density estimation but instead aims to reconstruct the original data from corrupted input. A notable example is BERT [10], which has been the state-of-the-art pretraining approach. Given the input token sequence, a certain portion of tokens are replaced by a special symbol [MASK], and the model is trained to recover the original tokens from the corrupted version. Since density estimation is not part of the objective, BERT is allowed to utilize ∗Equal contribution. Order determined by swapping the one in [9]. 1Pretrained models and code are available at https://github.com/zihangdai/xlnet 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. bidirectional contexts for reconstruction. 
As an immediate benefit, this closes the aforementioned bidirectional information gap in AR language modeling, leading to improved performance. However, the artificial symbols like [MASK] used by BERT during pretraining are absent from real data at finetuning time, resulting in a pretrain-finetune discrepancy. Moreover, since the predicted tokens are masked in the input, BERT is not able to model the joint probability using the product rule as in AR language modeling. In other words, BERT assumes the predicted tokens are independent of each other given the unmasked tokens, which is oversimplified as high-order, long-range dependency is prevalent in natural language [9]. Faced with the pros and cons of existing language pretraining objectives, in this work, we propose XLNet, a generalized autoregressive method that leverages the best of both AR language modeling and AE while avoiding their limitations. • Firstly, instead of using a fixed forward or backward factorization order as in conventional AR mod- els, XLNet maximizes the expected log likelihood of a sequence w.r.t. all possible permutations of the factorization order. Thanks to the permutation operation, the context for each position can consist of tokens from both left and right. In expectation, each position learns to utilize contextual information from all positions, i.e., capturing bidirectional context. • Secondly, as a generalized AR language model, XLNet does not rely on data corruption. Hence, XLNet does not suffer from the pretrain-finetune discrepancy that BERT is subject to. Meanwhile, the autoregressive objective also provides a natural way to use the product rule for factorizing the joint probability of the predicted tokens, eliminating the independence assumption made in BERT. In addition to a novel pretraining objective, XLNet improves architectural designs for pretraining. • Inspired by the latest advancements in AR language modeling, XLNet integrates the segment recurrence mechanism and relative encoding scheme of Transformer-XL [9] into pretraining, which empirically improves the performance especially for tasks involving a longer text sequence. • Naively applying a Transformer(-XL) architecture to permutation-based language modeling does not work because the factorization order is arbitrary and the target is ambiguous. As a solution, we propose to reparameterize the Transformer(-XL) network to remove the ambiguity. Empirically, under comparable experiment setting, XLNet consistently outperforms BERT [10] on a wide spectrum of problems including GLUE language understanding tasks, reading comprehension tasks like SQuAD and RACE, text classification tasks such as Yelp and IMDB, and the ClueWeb09-B document ranking task. Related Work The idea of permutation-based AR modeling has been explored in [32, 12], but there are several key differences. Firstly, previous models aim to improve density estimation by baking an “orderless” inductive bias into the model while XLNet is motivated by enabling AR language models to learn bidirectional contexts. Technically, to construct a valid target-aware prediction distribution, XLNet incorporates the target position into the hidden state via two-stream attention while previous permutation-based AR models relied on implicit position awareness inherent to their MLP architectures. 
Finally, for both orderless NADE and XLNet, we would like to emphasize that “orderless” does not mean that the input sequence can be randomly permuted but that the model allows for different factorization orders of the distribution. Another related idea is to perform autoregressive denoising in the context of text generation [11], which only considers a fixed order though.

# 2 Proposed Method

# 2.1 Background

In this section, we first review and compare the conventional AR language modeling and BERT for language pretraining. Given a text sequence x = [x1, · · · , xT ], AR language modeling performs pretraining by maximizing the likelihood under the forward autoregressive factorization:

max_θ log pθ(x) = Σ_{t=1}^T log pθ(xt | x<t) = Σ_{t=1}^T log [ exp(hθ(x1:t−1)⊤ e(xt)) / Σ_{x′} exp(hθ(x1:t−1)⊤ e(x′)) ]   (1)
Intuitively, if model parameters are shared across all factorization orders, in expectation, the model will learn to gather information from all positions on both sides. To formalize the idea, let ZT be the set of all possible permutations of the length-T index sequence [1, 2, . . . , T ]. We use zt and z<t to denote the t-th element and the first t−1 elements of a permutation z ∈ ZT . Then, our proposed permutation language modeling objective can be expressed as follows: T max Env 2, > log pe (xz, ra) 3) t=1 # T Essentially, for a text sequence x, we sample a factorization order z at a time and decompose the likelihood pg (x) according to factorization order. Since the same model parameter 6 is shared across all factorization orders during training, in expectation, x; has seen every possible element x; A x; in the sequence, hence being able to capture the bidirectional context. Moreover, as this objective fits into the AR framework, it naturally avoids the independence assumption and the pretrain-finetune discrepancy discussed in Section[2.1] Remark on Permutation The proposed objective only permutes the factorization order, not the sequence order. In other words, we keep the original sequence order, use the positional encodings corresponding to the original sequence, and rely on a proper attention mask in Transformers to achieve permutation of the factorization order. Note that this choice is necessary, since the model will only encounter text sequences with the natural order during finetuning. To provide an overall picture, we show an example of predicting the token x3 given the same input sequence x but under different factorization orders in the Appendix A.7 with Figure 4. 3 # 2.3 Architecture: Two-Stream Self-Attention for Target-Aware Representations Qo cP &) & & & H i i T T Attention i (,@\,2 @\@) (,@)(,@ @)@ O22) WE IE) ee) ZN Attention Masks Masked Two-stream Attention ecoo © © — Content stream: e can see self eco o\(o OO) ,o),@ oo Pe) We) bee) baler) ee Query stream: cannot see self “NS Masked Two-stream Attention LJ ( Sample a factorization order w JE) fal) fal) fea ea fatxzation (c) Figure 1: (a): Content stream attention, which is the same as the standard self-attention. (b): Query stream attention, which does not have access information about the content xzt. (c): Overview of the permutation language modeling training with two-stream attention. While the permutation language modeling objective has desired properties, naive implementation with standard Transformer parameterization may not work. To see the problem, assume we parameterize the next-token distribution pθ(Xzt | xz<t ) using the standard Softmax formulation, i.e., pθ(Xzt = , where hθ(xz<t) denotes the hidden representation of xz<t x | xz<t) = produced by the shared Transformer network after proper masking. Now notice that the representation hθ(xz<t) does not depend on which position it will predict, i.e., the value of zt. Consequently, the same distribution is predicted regardless of the target position, which is not able to learn useful representations (see Appendix A.1 for a concrete example). To avoid this problem, we propose to re-parameterize the next-token distribution to be target position aware: # exp(e(x) " ho(xz-,)) exp(e(@’) "ho (Xae,) exp (e(x)" go(%z2.; 2)) So exp (e(2!) " go(Xz-,,2))’ po(Xz, = 2 | Xz2,) (4) where gθ(xz<t, zt) denotes a new type of representations which additionally take the target position zt as input. 
Two-Stream Self-Attention While the idea of target-aware representations removes the ambiguity in target prediction, how to formulate gθ(xz<t, zt) remains a non-trivial problem. Among other possibilities, we propose to “stand” at the target position zt and rely on the position zt to gather information from the context xz<t through attention. For this parameterization to work, there are two requirements that are contradictory in a standard Transformer architecture: (1) to predict the token xzt, gθ(xz<t, zt) should only use the position zt and not the content xzt, otherwise the objective becomes trivial; (2) to predict the other tokens xzj with j > t, gθ(xz<t , zt) should also encode the content xzt to provide full contextual information. To resolve such a contradiction, we propose to use two sets of hidden representations instead of one: The content representation hθ(xz≤t), or abbreviated as hzt, which serves a similar role to the standard hidden states in Transformer. This representation encodes both the context and xzt itself. • The query representation gθ(xz<t, zt), or abbreviated as gzt, which only has access to the contex- e The query representation go (xz _,, 21), or abbreviated as g.,, which only has access to the contex- tual information x,_, and the position z;, but not the content «,,, as discussed above. tual information xz<t and the position zt, but not the content xzt, as discussed above. Computationally, the first layer query stream is initialized with a trainable vector, i.e. g(0) i = w, while the content stream is set to the corresponding word embedding, i.e. h(0) i = e(xi). For each self-attention layer m = 1, . . . , M , the two streams of representations are schematically2 updated 2To avoid clutter, we omit the implementation details including multi-head attention, residual connection, layer normalization and position-wise feed-forward as used in Transformer(-XL). The details are included in Appendix A.2 for reference. 4 with a shared set of parameters as follows (illustrated in Figures 1 (a) and (b)): g(m) zt h(m) zt ← Attention(Q = g(m−1) ← Attention(Q = h(m−1) zt zt , KV = h(m−1) , KV = h(m−1) z<t z≤t ; θ), ; θ), (query stream: use zt but cannot see xzt) (content stream: use both zt and xzt). where Q, K, V denote the query, key, and value in an attention operation [33]. The update rule of the content representations is exactly the same as the standard self-attention, so during finetuning, we can simply drop the query stream and use the content stream as a normal Transformer(-XL). Finally, we can use the last-layer query representation g(M ) Partial Prediction While the permutation language modeling objective (3) has several benefits, it is a much more challenging optimization problem due to the permutation and causes slow convergence in preliminary experiments. To reduce the optimization difficulty, we choose to only predict the last tokens in a factorization order. Formally, we split z into a non-target subsequence z≤c and a target subsequence z>c, where c is the cutting point. The objective is to maximize the log-likelihood of the target subsequence conditioned on the non-target subsequence, i.e., lz| Xre.)|] =Ex~zr | Do logpo(ws, | X22.)}) (5) t=c41 max E.wZp [log Po(xx.. Note that z>c is chosen as the target because it possesses the longest context in the sequence given the current factorization order z. A hyperparameter K is used such that about 1/K tokens are selected for predictions; i.e., |z| /(|z| − c) ≈ K. 
For unselected tokens, their query representations need not be computed, which saves speed and memory. # Incorporating Ideas from Transformer-XL Since our objective function fits in the AR framework, we incorporate the state-of-the-art AR language model, Transformer-XL [9], into our pretraining framework, and name our method after it. We integrate two important techniques in Transformer-XL, namely the relative positional encoding scheme and the segment recurrence mechanism. We apply relative positional encodings based on the original sequence as discussed earlier, which is straightforward. Now we discuss how to integrate the recurrence mechanism into the proposed permutation setting and enable the model to reuse hidden states from previous segments. Without loss of generality, suppose we have two segments taken from a long sequence s; i.e., ˜x = s1:T and x = sT +1:2T . Let ˜z and z be permutations of [1 · · · T ] and [T + 1 · · · 2T ] respectively. Then, based on the permutation ˜z, we process the first segment, and then cache the obtained content representations ˜h(m) for each layer m. Then, for the next segment x, the attention update with memory can be written as ne) < Attention(Q = A"), KV = [AY nen] :8) where [., .] denotes concatenation along the sequence dimension. Notice that positional encodings only depend on the actual positions in the original sequence. Thus, the above attention update is independent of ˜z once the representations ˜h(m) are obtained. This allows caching and reusing the memory without knowing the factorization order of the previous segment. In expectation, the model learns to utilize the memory over all factorization orders of the last segment. The query stream can be computed in the same way. Finally, Figure 1 (c) presents an overview of the proposed permutation language modeling with two-stream attention (see Appendix A.7 for more detailed illustration). # 2.5 Modeling Multiple Segments Many downstream tasks have multiple input segments, e.g., a question and a context paragraph in question answering. We now discuss how we pretrain XLNet to model multiple segments in the autoregressive framework. During the pretraining phase, following BERT, we randomly sample two segments (either from the same context or not) and treat the concatenation of two segments as one sequence to perform permutation language modeling. We only reuse the memory that belongs to the same context. Specifically, the input to our model is the same as BERT: [CLS, A, SEP, B, SEP], where “SEP” and “CLS” are two special symbols and “A” and “B” are the two segments. Although 5 we follow the two-segment data format, XLNet-Large does not use the objective of next sentence prediction [10] as it does not show consistent improvement in our ablation study (see Section 3.4). Relative Segment Encodings Architecturally, different from BERT that adds an absolute segment embedding to the word embedding at each position, we extend the idea of relative encodings from Transformer-XL to also encode the segments. Given a pair of positions 7 and j in the sequence, if i and j are from the same segment, we use a segment encoding s;; = s+ or otherwise s;; = s_, where s and s_ are learnable model parameters for each attention head. In other words, we only consider whether the two positions are within the same segment, as opposed to considering which specific segments they are from. This is consistent with the core idea of relative encodings; i.e., only modeling the relationships between positions. 
When ¢ attends to j, the segment encoding s;; is used to compute an attention weight a;; = (q; + b)'sij, where q, is the query vector as in a standard attention operation and b is a learnable head-specific bias vector. Finally, the value a;; is added to the normal attention weight. There are two benefits of using relative segment encodings. First, the inductive bias of relative encodings improves generalization [9]. Second, it opens the possibility of finetuning on tasks that have more than two input segments, which is not possible using absolute segment encodings. # 2.6 Discussion Comparing Eq. (2) and (5), we observe that both BERT and XLNet perform partial prediction, i.e., only predicting a subset of tokens in the sequence. This is a necessary choice for BERT because if all tokens are masked, it is impossible to make any meaningful predictions. In addition, for both BERT and XLNet, partial prediction plays a role of reducing optimization difficulty by only predicting tokens with sufficient context. However, the independence assumption discussed in Section 2.1 disables BERT to model dependency between targets. To better understand the difference, let’s consider a concrete example [New, York, is, a, city]. Suppose both BERT and XLNet select the two tokens [New, York] as the prediction targets and maximize log p(New York | is a city). Also suppose that XLNet samples the factorization order [is, a, city, New, York]. In this case, BERT and XLNet respectively reduce to the following objectives: JBERT = log p(New | is a city) + log p(York | is a city), JXLNet = log p(New | is a city) + log p(York | New, is a city). Notice that XLNet is able to capture the dependency between the pair (New, York), which is omitted by BERT. Although in this example, BERT learns some dependency pairs such as (New, city) and (York, city), it is obvious that XLNet always learns more dependency pairs given the same target and contains “denser” effective training signals. For more formal analysis and further discussion, please refer to Appendix A.5. # 3 Experiments # 3.1 Pretraining and Implementation Following BERT [10], we use the BooksCorpus [40] and English Wikipedia as part of our pretraining data, which have 13GB plain text combined. In addition, we include Giga5 (16GB text) [26], ClueWeb 2012-B (extended from [5]), and Common Crawl [6] for pretraining. We use heuristics to aggressively filter out short or low-quality articles for ClueWeb 2012-B and Common Crawl, which results in 19GB and 110GB text respectively. After tokenization with SentencePiece [17], we obtain 2.78B, 1.09B, 4.75B, 4.30B, and 19.97B subword pieces for Wikipedia, BooksCorpus, Giga5, ClueWeb, and Common Crawl respectively, which are 32.89B in total. Our largest model XLNet-Large has the same architecture hyperparameters as BERT-Large, which results in a similar model size. During pretraining, we always use a full sequence length of 512. Firstly, to provide a fair comparison with BERT (section 3.2), we also trained XLNet-Large-wikibooks on BooksCorpus and Wikipedia only, where we reuse all pretraining hyper-parameters as in the original BERT. Then, we scale up the training of XLNet-Large by using all the datasets described above. Specifically, we train on 512 TPU v3 chips for 500K steps with an Adam weight decay optimizer, linear learning rate decay, and a batch size of 8192, which takes about 5.5 days. It was 6 observed that the model still underfits the data at the end of training. 
Finally, we perform ablation study (section 3.4) based on the XLNet-Base-wikibooks. Since the recurrence mechanism is introduced, we use a bidirectional data input pipeline where each of the forward and backward directions takes half of the batch size. For training XLNet-Large, we set the partial prediction constant K as 6 (see Section 2.3). Our finetuning procedure follows BERT [10] except otherwise specified3. We employ an idea of span-based prediction, where we first sample a length L ∈ [1, · · · , 5], and then randomly select a consecutive span of L tokens as prediction targets within a context of (KL) tokens. We use a variety of natural language understanding datasets to evaluate the performance of our method. Detailed descriptions of the settings for all the datasets can be found in Appendix A.3. # 3.2 Fair Comparison with BERT Model SQuAD1.1 SQuAD2.0 RACE MNLI QNLI QQP RTE SST-2 MRPC CoLA STS-B BERT-Large (Best of 3) 86.7/92.8 82.8/85.5 75.1 87.3 93.0 91.4 74.0 94.0 88.7 63.7 88.2/94.0 85.1/87.8 77.4 88.4 93.9 91.8 81.2 94.4 90.0 65.2 90.2 91.1 XLNet-Large- wikibooks Table 1: Fair comparison with BERT. All models are trained using the same data and hyperparameters as in BERT. We use the best of 3 BERT variants for comparison; i.e., the original BERT, BERT with whole word masking, and BERT without next sentence prediction. Here, we first compare the performance of BERT and XLNet in a fair setting to decouple the effects of using more data and the improvement from BERT to XLNet. In Table 1, we compare (1) best performance of three different variants of BERT and (2) XLNet trained with the same data and hyperparameters. As we can see, trained on the same data with an almost identical training recipe, XLNet outperforms BERT by a sizable margin on all the considered datasets. # 3.3 Comparison with RoBERTa: Scaling Up RACE Accuracy Middle High Model NDCG@20 ERR@20 GPT [28] BERT [25] BERT+DCMN∗ [38] RoBERTa [21] 59.0 72.0 74.1 83.2 62.9 76.6 79.5 86.5 57.4 70.1 71.8 81.8 DRMM [13] KNRM [8] Conv [8] BERT† 24.3 26.9 28.7 30.53 13.8 14.9 18.1 18.67 XLNet 85.4 88.6 84.0 XLNet 31.10 20.28 Table 2: Comparison with state-of-the-art results on the test set of RACE, a reading comprehension task, and on ClueWeb09-B, a document ranking task. ∗ indicates using ensembles. † indicates our implementations. “Middle” and “High” in RACE are two subsets representing middle and high school difficulty levels. All BERT, RoBERTa, and XLNet results are obtained with a 24-layer architecture with similar model sizes (aka BERT-Large). After the initial publication of our manuscript, a few other pretrained models were released such as RoBERTa [21] and ALBERT [19]. Since ALBERT involves increasing the model hidden size from 1024 to 2048/4096 and thus substantially increases the amount of computation in terms of FLOPs, we exclude ALBERT from the following results as it is hard to lead to scientific conclusions. To obtain relatively fair comparison with RoBERTa, the experiment in this section is based on full data and reuses the hyper-parameters of RoBERTa, as described in section 3.1. The results are presented in Tables 2 (reading comprehension & document ranking), 3 (question answering), 4 (text classification) and 5 (natural language understanding), where XLNet generally outperforms BERT and RoBERTa. In addition, we make two more interesting observations: 3Hyperparameters for pretraining and finetuning are in Appendix A.4. 
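The span-based prediction scheme described in Section 3.1 (sample a length L in [1, ..., 5] and use a consecutive span of L tokens as prediction targets within a context of K·L tokens) can be made concrete with a short sketch. This is an illustration under our own reading of that description; the function name, the uniform placement of the context window, and the boundary handling are our assumptions, not details from the released XLNet code.

```python
import random

def sample_prediction_span(seq_len, K=6, max_span=5):
    """
    Sample a span of L consecutive target tokens inside a context window of
    roughly K*L tokens, mirroring the span-based prediction described above.
    Indices are positions within a single training sequence of length `seq_len`.
    """
    assert seq_len >= K * max_span, "sketch assumes sequences longer than K*max_span"
    L = random.randint(1, max_span)              # span length L in [1, 5]
    ctx = min(K * L, seq_len)                    # context window of ~K*L tokens
    ctx_start = random.randint(0, seq_len - ctx)
    span_start = random.randint(ctx_start, ctx_start + ctx - L)
    targets = list(range(span_start, span_start + L))
    return ctx_start, ctx_start + ctx, targets

# Example: with the pretraining sequence length of 512 and K = 6
context_start, context_end, target_positions = sample_prediction_span(512)
```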
7 SQuAD2.0 EM F1 SQuAD1.1 EM F1 Dev set results (single model) BERT [10] RoBERTa [21] XLNet 78.98 86.5 87.9 81.77 89.4 90.6 BERT† [10] RoBERTa [21] XLNet 84.1 88.9 89.7 90.9 94.6 95.1 Test set results on leaderboard (single model, as of Dec 14, 2019) BERT [10] RoBERTa [21] XLNet 80.005 86.820 87.926 85.083 87.433 89.898‡ 91.835 93.294 95.080‡ 83.061 BERT [10] 89.795 BERT∗ [10] 90.689 XLNet Table 3: Results on SQuAD, a reading comprehension dataset. † marks our runs with the official code. ∗ indicates ensembles. ‡: We are not able to obtain the test results of our latest model on SQuAD1.1 from the organizers after submitting our result for more than one month, and thus report the results of an older version for the SQuAD1.1 test set. Model IMDB Yelp-2 Yelp-5 DBpedia AG Amazon-2 Amazon-5 CNN [15] DPCNN [15] Mixed VAT [31, 23] ULMFiT [14] BERT [35] - - 4.32 4.6 4.51 2.90 2.64 - 2.16 1.89 32.39 30.58 - 29.98 29.32 0.84 0.88 0.70 0.80 0.64 6.57 6.87 4.95 5.01 - 3.79 3.32 - - 2.63 36.24 34.81 - - 34.17 XLNet 3.20 1.37 27.05 0.60 4.45 2.11 31.67 Table 4: Comparison with state-of-the-art error rates on the test sets of several text classification datasets. All BERT and XLNet results are obtained with a 24-layer architecture with similar model sizes (aka BERT-Large). Model MNLI QNLI QQP RTE SST-2 MRPC CoLA STS-B WNLI Single-task single models on dev BERT [2] RoBERTa [21] XLNet 86.6/- 90.2/90.2 90.8/90.8 92.3 94.7 94.9 91.3 92.2 92.3 70.4 86.6 85.9 93.2 96.4 97.0 88.0 90.9 90.8 60.6 68.0 69.0 90.0 92.4 92.5 Multi-task ensembles on test (from leaderboard as of Oct 28, 2019) MT-DNN∗ [20] RoBERTa∗ [21] XLNet∗ 87.9/87.4 90.8/90.2 90.9/90.9† 96.0 98.9 99.0† 89.9 90.2 90.4† 96.5 96.7 97.1† 92.7 92.3 92.9 68.4 67.8 70.2 91.1 92.2 93.0 - - - 89.0 89.0 92.5 86.3 88.2 88.5 Table 5: Results on GLUE. ∗ indicates using ensembles, and † denotes single-task results in a multi-task row. All dev results are the median of 10 runs. The upper section shows direct comparison on dev data and the lower section shows comparison with state-of-the-art results on the public leaderboard. • For explicit reasoning tasks like SQuAD and RACE that involve longer context, the performance gain of XLNet is usually larger. This superiority at dealing with longer context could come from the Transformer-XL backbone in XLNet. • For classification tasks that already have abundant supervised examples such as MNLI (>390K), Yelp (>560K) and Amazon (>3M), XLNet still lead to substantial gains. # 3.4 Ablation Study We perform an ablation study to understand the importance of each design choice based on four datasets with diverse characteristics. Specifically, there are three main aspects we hope to study: • The effectiveness of the permutation language modeling objective alone, especially compared to the denoising auto-encoding objective used by BERT. The importance of using Transformer-XL as the backbone neural architecture. • The necessity of some implementation details including span-based prediction, the bidirectional input pipeline, and next-sentence prediction. 8 With these purposes in mind, in Table 6, we compare 6 XLNet-Base variants with different implemen- tation details (rows 3 - 8), the original BERT-Base model (row 1), and an additional Transformer-XL baseline trained with the denoising auto-encoding (DAE) objective used in BERT but with the bidi- rectional input pipeline (row 2). 
For fair comparison, all models are based on a 12-layer architecture with the same model hyper-parameters as BERT-Base and are trained on only Wikipedia and the BooksCorpus. All results reported are the median of 5 runs. # Model RACE SQuAD2.0 EM F1 MNLI m/mm SST-2 1 BERT-Base 2 DAE + Transformer-XL 3 XLNet-Base (K = 7) 4 XLNet-Base (K = 6) 5 6 7 8 - memory - span-based pred - bidirectional data + next-sent pred 64.3 65.03 66.05 66.66 65.55 65.95 66.34 66.76 76.30 79.56 81.33 80.98 80.15 80.61 80.65 79.83 73.66 76.80 78.46 78.18 77.27 77.91 77.87 76.94 84.34/84.65 84.88/84.45 85.84/85.43 85.63/85.12 85.32/85.05 85.49/85.02 85.31/84.99 85.32/85.09 92.78 92.60 92.66 93.35 92.78 93.12 92.66 92.89 Table 6: The results of BERT on RACE are taken from [38]. We run BERT on the other datasets using the official implementation and the same hyperparameter search space as XLNet. K is a hyperparameter to control the optimization difficulty (see Section 2.3). Examining rows 1 - 4 of Table 6, we can see both Transformer-XL and the permutation LM clearly contribute the superior performance of XLNet over BERT. Moreover, if we remove the memory caching mechanism (row 5), the performance clearly drops, especially for RACE which involves the longest context among the 4 tasks. In addition, rows 6 - 7 show that both span-based prediction and the bidirectional input pipeline play important roles in XLNet. Finally, we unexpectedly find the the next-sentence prediction objective proposed in the original BERT does not necessarily lead to an improvement in our setting. Hence, we exclude the next-sentence prediction objective from XLNet. Finally, we also perform a qualitative study of the attention patterns, which is included in Appendix A.6 due to page limit. # 4 Conclusions XLNet is a generalized AR pretraining method that uses a permutation language modeling objective to combine the advantages of AR and AE methods. The neural architecture of XLNet is developed to work seamlessly with the AR objective, including integrating Transformer-XL and the careful design of the two-stream attention mechanism. XLNet achieves substantial improvement over previous pretraining objectives on various tasks. # Acknowledgments The authors would like to thank Qizhe Xie and Adams Wei Yu for providing useful feedback on the project, Jamie Callan for providing the ClueWeb dataset, Youlong Cheng, Yanping Huang and Shibo Wang for providing ideas to improve our TPU implementation, Chenyan Xiong and Zhuyun Dai for clarifying the setting of the document ranking task. ZY and RS were supported by the Office of Naval Research grant N000141812861, the National Science Foundation (NSF) grant IIS1763562, the Nvidia fellowship, and the Siebel scholarship. ZD and YY were supported in part by NSF under the grant IIS-1546329 and by the DOE-Office of Science under the grant ASCR #KJ040201. # References [1] Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-level language modeling with deeper self-attention. arXiv preprint arXiv:1808.04444, 2018. [2] Anonymous. Bam! born-again multi-task networks for natural language understanding. anony- mous preprint under review, 2018. [3] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018. 9 [4] Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In Advances in Neural Information Processing Systems, pages 400–406, 2000. 
[5] Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. Clueweb09 data set, 2009. [6] Common Crawl. Common crawl. URl: http://http://commoncrawl. org, 2019. [7] Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in neural information processing systems, pages 3079–3087, 2015. [8] Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. Convolutional neural networks for soft-matching n-grams in ad-hoc search. In Proceedings of the eleventh ACM international conference on web search and data mining, pages 126–134. ACM, 2018. [9] Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019. [10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. [11] William Fedus, Ian Goodfellow, and Andrew M Dai. Maskgan: better text generation via filling in the_. arXiv preprint arXiv:1801.07736, 2018. [12] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, pages 881–889, 2015. [13] Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. A deep relevance matching model for ad-hoc retrieval. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 55–64. ACM, 2016. [14] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classifica- tion. arXiv preprint arXiv:1801.06146, 2018. [15] Rie Johnson and Tong Zhang. Deep pyramid convolutional neural networks for text catego- rization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 562–570, 2017. [16] Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. A surprisingly robust trick for winograd schema challenge. arXiv preprint arXiv:1905.06290, 2019. [17] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018. [18] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017. [19] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019. [20] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019. [21] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. [22] Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pages 6294–6305, 2017. [23] Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial training methods for semi- supervised text classification. arXiv preprint arXiv:1605.07725, 2016. [24] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. 
Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. [25] Xiaoman Pan, Kai Sun, Dian Yu, Heng Ji, and Dong Yu. Improving question answering with external knowledge. arXiv preprint arXiv:1902.00993, 2019. 10 [26] Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English gigaword fifth edition, linguistic data consortium. Technical report, Technical Report. Linguistic Data Consortium, Philadelphia, Tech. Rep., 2011. [27] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Ken- ton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018. [28] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/research-covers/languageunsupervised/language understanding paper. pdf, 2018. [29] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018. [30] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. [31] Devendra Singh Sachan, Manzil Zaheer, and Ruslan Salakhutdinov. Revisiting lstm networks for semi-supervised text classification via mixed objective function. 2018. [32] Benigno Uria, Marc-Alexandre Côté, Karol Gregor, Iain Murray, and Hugo Larochelle. Neural autoregressive distribution estimation. The Journal of Machine Learning Research, 17(1):7184– 7220, 2016. [33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017. [34] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. 2019. In the Proceedings of ICLR. [35] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation. arXiv preprint arXiv:1904.12848, 2019. [36] Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR conference on research and development in information retrieval, pages 55–64. ACM, 2017. [37] Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. Breaking the softmax bottleneck: A high-rank rnn language model. arXiv preprint arXiv:1711.03953, 2017. [38] Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. Dual co- matching network for multi-choice reading comprehension. arXiv preprint arXiv:1901.09381, 2019. [39] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in neural information processing systems, pages 649–657, 2015. [40] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19–27, 2015. 
# A Target-Aware Representation via Two-Stream Self-Attention

# A.1 A Concrete Example of How Standard LM Parameterization Fails

In this section, we provide a concrete example to show how the standard language model parameterization fails under the permutation objective, as discussed in Section 2.3. Specifically, let's consider two different permutations z^{(1)} and z^{(2)} satisfying the following relationship:

z^{(1)}_{<t} = z^{(2)}_{<t} = z_{<t}, but z^{(1)}_t = i ≠ j = z^{(2)}_t.

Then, substituting the two permutations respectively into the naive parameterization, we have

p_θ(X_i = x | x_{z_{<t}}) = p_θ(X_j = x | x_{z_{<t}}) = exp(e(x)^⊤ h(x_{z_{<t}})) / Σ_{x'} exp(e(x')^⊤ h(x_{z_{<t}})).

Effectively, two different target positions i and j share exactly the same model prediction. However, the ground-truth distributions of the two positions should certainly be different.

# A.2 Two-Stream Attention

Here, we provide the implementation details of the two-stream attention with a Transformer-XL backbone.

Initial representation: ∀t = 1, . . . , T: h_t^{(0)} = e(x_t) and g_t^{(0)} = w.

Cached layer-m content representation (memory) from the previous segment: h̃^{(m)}.

For each Transformer-XL layer m = 1, · · · , M, attention with relative positional encoding and position-wise feed-forward are consecutively employed to update the representations: ∀t = 1, . . . , T,

ĥ_{z_t}^{(m)} = LayerNorm( h_{z_t}^{(m−1)} + RelAttn( h_{z_t}^{(m−1)}, [h̃^{(m−1)}, h_{z_{≤t}}^{(m−1)}] ) ),
h_{z_t}^{(m)} = LayerNorm( ĥ_{z_t}^{(m)} + PosFF( ĥ_{z_t}^{(m)} ) ),
ĝ_{z_t}^{(m)} = LayerNorm( g_{z_t}^{(m−1)} + RelAttn( g_{z_t}^{(m−1)}, [h̃^{(m−1)}, h_{z_{<t}}^{(m−1)}] ) ),
g_{z_t}^{(m)} = LayerNorm( ĝ_{z_t}^{(m)} + PosFF( ĝ_{z_t}^{(m)} ) ).

Target-aware prediction distribution:

p_θ(X_{z_t} = x | x_{z_{<t}}) = exp( e(x)^⊤ g_{z_t}^{(M)} ) / Σ_{x'} exp( e(x')^⊤ g_{z_t}^{(M)} ).

# A.3 Datasets

# A.3.1 RACE Dataset

The RACE dataset [18] contains nearly 100K questions taken from the English exams for middle and high school Chinese students between the ages of 12 and 18, with the answers generated by human experts. This is one of the most difficult reading comprehension datasets, involving challenging reasoning questions. Moreover, the average length of the passages in RACE is longer than 300, which is significantly longer than in other popular reading comprehension datasets such as SQuAD [29]. As a result, this dataset serves as a challenging benchmark for long text understanding. We use a sequence length of 512 during finetuning.

# A.3.2 SQuAD

SQuAD is a large-scale reading comprehension dataset with two tasks. SQuAD1.1 [30] contains questions that always have a corresponding answer in the given passages, while SQuAD2.0 [29] introduces unanswerable questions. To finetune XLNet on SQuAD2.0, we jointly apply a logistic regression loss for answerability prediction, similar to classification tasks, and a standard span extraction loss for question answering [10].

# A.3.3 Text Classification Datasets

Following previous work on text classification [39, 23], we evaluate XLNet on the following benchmarks: IMDB, Yelp-2, Yelp-5, DBpedia, AG, Amazon-2, and Amazon-5.

# A.3.4 GLUE Dataset

The GLUE dataset [34] is a collection of 9 natural language understanding tasks. The test set labels are removed from the publicly released version, and all practitioners must submit their predictions to the evaluation server to obtain test set results. In Table 5, we present results for multiple settings, including single-task and multi-task, as well as single models and ensembles. In the multi-task setting, we jointly train an XLNet on the four largest datasets (MNLI, SST-2, QNLI, and QQP) and finetune the network on the other datasets. Only single-task training is employed for the four large datasets.
For QNLI, we employed a pairwise relevance ranking scheme as in [20] for our test set submission. However, for fair comparison with BERT, our result on the QNLI dev set is based on a standard classification paradigm. For WNLI, we use the loss described in [16].

# A.3.5 ClueWeb09-B Dataset

Following the setting in previous work [8], we use the ClueWeb09-B dataset to evaluate the performance on document ranking. The queries were created by the TREC 2009-2012 Web Tracks based on 50M documents, and the task is to rerank the top 100 documents retrieved using a standard retrieval method. Since document ranking, or ad-hoc retrieval, mainly concerns the low-level representations instead of high-level semantics, this dataset serves as a testbed for evaluating the quality of word embeddings. We use a pretrained XLNet to extract word embeddings for the documents and queries without finetuning, and employ a kernel pooling network [36] to rank the documents.

# A.4 Hyperparameters

# A.4.1 Pretraining Hyperparameters

The hyperparameters used for pretraining XLNet are shown in Table 7.

Table 7: Hyperparameters for pretraining.

| Hparam | Value |
|---|---|
| Number of layers | 24 |
| Hidden size | 1024 |
| Number of attention heads | 16 |
| Attention head size | 64 |
| FFN inner hidden size | 4096 |
| Hidden dropout | 0.1 |
| GeLU dropout | 0.0 |
| Attention dropout | 0.1 |
| Partial prediction K | 6 |
| Max sequence length | 512 |
| Batch size | 8192 |
| Learning rate | 4e-4 |
| Number of steps | 500K |
| Warmup steps | 40,000 |
| Learning rate decay | linear |
| Adam epsilon | 1e-6 |
| Weight decay | 0.01 |

# A.4.2 Hyperparameters for Finetuning

The hyperparameters used for finetuning XLNet on various tasks are shown in Table 8. "Layer-wise decay" means exponentially decaying the learning rates of individual layers in a top-down manner. For example, suppose the 24-th layer uses a learning rate l, and the layer-wise decay rate is α; then the learning rate of layer m is l·α^{24−m}.

Table 8: Hyperparameters for finetuning.

| Hparam | RACE | SQuAD | MNLI | Yelp-5 |
|---|---|---|---|---|
| Dropout | 0.1 | 0.1 | 0.1 | 0.1 |
| Attention dropout | 0.1 | 0.1 | 0.1 | 0.1 |
| Max sequence length | 512 | 512 | 128 | 512 |
| Batch size | 32 | 48 | 128 | 128 |
| Learning rate | 2e-5 | 3e-5 | 2e-5 | 1e-5 |
| Number of steps | 12K | 8K | 10K | 10K |
| Learning rate decay | linear | linear | linear | linear |
| Weight decay | 0.01 | 0.01 | 0.01 | 0.01 |
| Adam epsilon | 1e-6 | 1e-6 | 1e-6 | 1e-6 |
| Layer-wise lr decay | 1.0 | 0.75 | 1.0 | 1.0 |

# A.5 Discussion and Analysis

# A.5.1 Comparison with BERT

To prove a general point beyond one example, we now turn to more formal expressions. Inspired by previous work [37], given a sequence x = [x_1, · · · , x_T], we define a set of target-context pairs of interest, I = {(x, U)}, where U is a set of tokens in x that form a context of x. Intuitively, we want the model to learn the dependency of x on U through a pretraining loss term log p(x | U). For example, given the above sentence, the pairs of interest I could be instantiated as:

I = { (x = York, U = {New}), (x = York, U = {city}), (x = York, U = {New, city}), · · · }.

Note that I is merely a virtual notion without unique ground truth, and our analysis will hold regardless of how I is instantiated.

Given a set of target tokens T and a set of non-target tokens N = x \ T, BERT and XLNet both maximize log p(T | N) but with different formulations:

JBERT = Σ_{x∈T} log p(x | N);    JXLNet = Σ_{x∈T} log p(x | N ∪ T_{<x}),

where T_{<x} denotes the tokens in T that have a factorization order prior to x. Both objectives consist of multiple loss terms in the form of log p(x | V_x). Intuitively, if there exists a target-context pair (x, U) ∈ I such that U ⊆ V_x, then the loss term log p(x | V_x) provides a training signal to the dependency between x and U.
For convenience, we say a target-context pair (x, U) ∈ I is covered by a model (objective) if U ⊆ V_x. Given the definition, let's consider two cases:

• If U ⊆ N, the dependency (x, U) is covered by both BERT and XLNet.
• If U ⊆ N ∪ T_{<x} and U ∩ T_{<x} ≠ ∅, the dependency can only be covered by XLNet, but not BERT.

As a result, XLNet is able to cover more dependencies than BERT. In other words, the XLNet objective contains more effective training signals, which empirically leads to better performance in Section 3.

# A.5.2 Comparison with Language Modeling

Borrowing examples and notations from Section A.5.1, a standard AR language model like GPT [28] is only able to cover the dependency (x = York, U = {New}) but not (x = New, U = {York}). XLNet, on the other hand, is able to cover both in expectation over all factorization orders. Such a limitation of AR language modeling can be critical in real-world applications. For example, consider a span extraction question answering task with the context "Thom Yorke is the singer of Radiohead" and the question "Who is the singer of Radiohead". The representations of "Thom Yorke" are not dependent on "Radiohead" with AR language modeling, and thus they will not be chosen as the answer by the standard approach that employs a softmax over all token representations. More formally, consider a context-target pair (x, U):

• If U ⊄ T_{<x}, where T_{<x} denotes the tokens prior to x in the original sequence, AR language modeling is not able to cover the dependency.
• In comparison, XLNet is able to cover all dependencies in expectation.

Approaches like ELMo [27] concatenate forward and backward language models in a shallow manner, which is not sufficient for modeling deep interactions between the two directions.

# A.5.3 Bridging the Gap Between Language Modeling and Pretraining

With a deep root in density estimation4 [4, 32, 24], language modeling has been a rapidly-developing research area [9, 1, 3]. However, there has been a gap between language modeling and pretraining due to the lack of the capability of bidirectional context modeling, as analyzed in Section A.5.2. It has even been challenged by some machine learning practitioners whether language modeling is a meaningful pursuit if it does not directly improve downstream tasks5. XLNet generalizes language modeling and bridges this gap. As a result, it further "justifies" language modeling research. Moreover, it becomes possible to leverage the rapid progress of language modeling research for pretraining. As an example, we integrate Transformer-XL into XLNet to demonstrate the usefulness of the latest language modeling progress.

# A.6 Qualitative Analysis of Attention Patterns

We compare the attention patterns of BERT and XLNet without finetuning. First, we found 4 typical patterns shared by both, as shown in Fig. 2.

Figure 2: Attention patterns shared by XLNet and BERT: (a) content stripes, (b) local/self focus, (c) two segments, (d) content-based symmetry. Rows and columns represent query and key respectively.

More interestingly, in Fig. 3, we present 3 patterns that only appear in XLNet but not BERT: (a) the self-exclusion pattern attends to all other tokens but itself, probably offering a fast way to gather global information; (b) the relative-stride pattern attends to positions every few strides apart, relative to the query position; (c) the one-side masked pattern is very similar to the lower-left part of Fig. 1-(d), with the upper-right triangle masked out.
It seems that the model learns not to attend to the relative right half. Note that all three of these unique patterns involve relative positions rather than absolute ones, and hence are likely enabled by the "relative attention" mechanism in XLNet. We conjecture that these unique patterns contribute to the performance advantage of XLNet. On the other hand, the proposed permutation LM objective mostly contributes to better data efficiency, whose effects may not be obvious from qualitative visualization.

Figure 3: Attention patterns that appear only in XLNet: (a) self-exclusion, (b) relative stride, (c) one-side masked. Rows and columns represent query and key respectively.

Figure 4: Illustration of the permutation language modeling objective for predicting x3 given the same input sequence x but with different factorization orders.

# A.7 Visualizing Memory and Permutation

In this section, we provide a detailed visualization of the proposed permutation language modeling objective, including the mechanism of reusing memory (aka the recurrence mechanism), how we use attention masks to permute the factorization order, and the difference between the two attention streams. As shown in Figures 5 and 6, given the current position z_t, the attention mask is decided by the permutation (or factorization order) z such that only tokens that occur before z_t in the permutation can be attended to; i.e., positions z_i with i < t. Moreover, comparing Figures 5 and 6, we can see how the query stream and the content stream work differently with a specific permutation through attention masks. The main difference is that the query stream cannot do self-attention and does not have access to the token at the position, while the content stream performs normal self-attention.

4The problem of language modeling is essentially density estimation for text data.
5https://openreview.net/forum?id=HJePno0cYm

Figure 5: A detailed illustration of the content stream of the proposed objective, with both the joint view and split views, based on a length-4 sequence under the factorization order [3, 2, 4, 1]. Note that if we ignore the query representation, the computation in this figure is simply the standard self-attention, though with a particular attention mask.

Figure 6: A detailed illustration of the query stream of the proposed objective, with both the joint view and split views, based on a length-4 sequence under the factorization order [3, 2, 4, 1]. The dashed arrows indicate that the query stream cannot access the token (content) at the same position, but only the location information.
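To make the masking rule illustrated in Figures 5 and 6 concrete, here is a small NumPy sketch that builds the content-stream and query-stream attention masks for a given factorization order. It only illustrates the rule described above (the content stream may attend to positions at or before the current position in the factorization order, the query stream only to positions strictly before); the function name, the boolean-mask convention, and the 1-indexed positions are our choices, not details of the released implementation.

```python
import numpy as np

def permutation_masks(z):
    """
    Build content- and query-stream attention masks for a factorization order z,
    e.g. z = [3, 2, 4, 1] as in Figures 5 and 6 (positions are 1-indexed).
    mask[i-1, j-1] == True means position i may attend to position j.
    """
    T = len(z)
    rank = {pos: r for r, pos in enumerate(z)}   # rank of each position in the factorization order
    content_mask = np.zeros((T, T), dtype=bool)
    query_mask = np.zeros((T, T), dtype=bool)
    for i in range(1, T + 1):
        for j in range(1, T + 1):
            if rank[j] < rank[i]:                # j occurs earlier in the factorization order
                content_mask[i - 1, j - 1] = True
                query_mask[i - 1, j - 1] = True
            elif i == j:                         # the content stream can see its own token,
                content_mask[i - 1, j - 1] = True   # the query stream cannot
    return content_mask, query_mask

# Example: the factorization order [3, 2, 4, 1] used in Figures 5 and 6
content, query = permutation_masks([3, 2, 4, 1])
```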
{ "id": "1810.04805" }
1906.07337
Measuring Bias in Contextualized Word Representations
Contextual word embeddings such as BERT have achieved state of the art performance in numerous NLP tasks. Since they are optimized to capture the statistical properties of training data, they tend to pick up on and amplify social stereotypes present in the data as well. In this study, we (1)~propose a template-based method to quantify bias in BERT; (2)~show that this method obtains more consistent results in capturing social biases than the traditional cosine based method; and (3)~conduct a case study, evaluating gender bias in a downstream task of Gender Pronoun Resolution. Although our case study focuses on gender bias, the proposed technique is generalizable to unveiling other biases, including in multiclass settings, such as racial and religious biases.
http://arxiv.org/pdf/1906.07337
Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, Yulia Tsvetkov
cs.CL
1st ACL Workshop on Gender Bias for Natural Language Processing 2019
null
cs.CL
20190618
20190618
# Measuring Bias in Contextualized Word Representations

Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, Yulia Tsvetkov
Carnegie Mellon University
{kkurita,nkvyas,apareek,awb,ytsvetko}@andrew.cmu.edu

# Abstract

Contextual word embeddings such as BERT have achieved state of the art performance in numerous NLP tasks. Since they are optimized to capture the statistical properties of training data, they tend to pick up on and amplify social stereotypes present in the data as well. In this study, we (1) propose a template-based method to quantify bias in BERT; (2) show that this method obtains more consistent results in capturing social biases than the traditional cosine-based method; and (3) conduct a case study, evaluating gender bias in a downstream task of Gender Pronoun Resolution. Although our case study focuses on gender bias, the proposed technique is generalizable to unveiling other biases, including in multiclass settings, such as racial and religious biases.

# 1 Introduction

Type-level word embedding models, including word2vec (Mikolov et al., 2013; Pennington et al., 2014), have been shown to exhibit social biases in human-generated corpora (Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018; Manzini et al., 2019). These embeddings are then used in a plethora of downstream applications, which perpetuate and further amplify stereotypes (Zhao et al., 2017; Leino et al., 2019). To reveal and quantify corpus-level biases in word embeddings, Bolukbasi et al. (2016) used the word analogy task (Mikolov et al., 2013). For example, they showed that gendered male word embeddings like he, man are associated with higher-status jobs like computer programmer and doctor, whereas gendered words like she or woman are associated with homemaker and nurse.

Contextual word embedding models, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), have become increasingly common, replacing traditional type-level embeddings and attaining new state of the art results in the majority of NLP tasks. In these models, every word has a different embedding, depending on the context and the language model state; in these settings, the analogy task used to reveal biases in uncontextualized embeddings is not applicable. Recently, May et al. (2019) showed that traditional cosine-based methods for exposing bias in sentence embeddings fail to produce consistent results for embeddings generated using contextual methods. We find similar inconsistent results with cosine-based methods of exposing bias; this is a motivation for the development of the novel bias test that we propose.

In this work, we propose a new method to quantify bias in BERT embeddings (§2). Since BERT embeddings use a masked language modelling objective, we directly query the model to measure the bias for a particular token. More specifically, we create simple template sentences containing the attribute word for which we want to measure bias (e.g. programmer) and the target for bias (e.g. she for gender). We then mask the attribute and target tokens sequentially, to get a relative measure of bias across target classes (e.g. male and female). Contextualized word embeddings for a given token change based on its context, so such an approach allows us to measure the bias for similar categories divergent by the target attribute (§2).
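To make the masked-LM querying procedure summarized above (and detailed in §2 below) concrete, here is a minimal sketch using the HuggingFace transformers library. It is an illustration only: the helper names, the exact template strings, and the assumption that the target and attribute words are single wordpieces are ours, and the authors' released code may differ.

```python
import math
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def mask_prob(sentence, word):
    """Probability the masked LM assigns to `word` at the first [MASK] in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(word)].item()

def log_probability_bias_score(attribute, target_a="he", target_b="she"):
    """Difference of the increased log probability scores for two targets."""
    target_sent = f"[MASK] is a {attribute}."   # target masked, attribute visible -> p_tgt
    prior_sent = "[MASK] is a [MASK]."          # both target and attribute masked -> p_prior
    def increased_log_prob(target):
        return math.log(mask_prob(target_sent, target) / mask_prob(prior_sent, target))
    return increased_log_prob(target_a) - increased_log_prob(target_b)

# Positive values indicate a stronger association with "he" than with "she".
print(log_probability_bias_score("programmer"))
```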
We compare our approach with the cosine similarity- based approach (§3) and show that our measure of bias is more consistent with human biases and is sensitive to a wide range of biases in the model using various stimuli presented in Caliskan et al. (2017). Next, we investigate the effect of a specific type of bias in a specific downstream task: gender bias in BERT and its effect on the task of Gen- dered Pronoun Resolution (GPR) (Webster et al., 2018). We show that the bias in GPR is highly correlated with our measure of bias (§4). Finally, we highlight the potential negative impacts of us- ing BERT in downstream real world applications (§5). The code and data used in this work are pub- licly available.1 # 2 Quantifying Bias in BERT BERT is trained using a masked language mod- elling objective i.e. to predict masked tokens, de- noted as [MASK], in a sentence given the entire context. We use the predictions for these [MASK] tokens to measure the bias encoded in the actual representations. We directly query the underlying masked lan- guage model in BERT2 to compute the association between certain targets (e.g., gendered words) and attributes (e.g. career-related words). For example, to compute the association between the target male gender and the attribute programmer, we feed in the masked sentence “[MASK] is a programmer” to BERT, and compute the proba- bility assigned to the sentence ‘he is a program- mer” (ptgt). To measure the association, however, we need to measure how much more BERT prefers the male gender association with the attribute pro- grammer, compared to the female gender. We thus re-weight this likelihood ptgt using the prior bias of the model towards predicting the male gender. To do this, we mask out the attribute programmer and query BERT with the sentence “[MASK] is a [MASK]”, then compute the probability BERT as- signs to the sentence ‘he is a [MASK]” (pprior). Intuitively, pprior represents how likely the word he is in BERT, given the sentence structure and no other evidence. Finally, the difference between the normalized predictions for the words he and she can be used to measure the gender bias in BERT for the programmer attribute. 4. Compute the association as log ptgt pprior We refer to this normalized measure of associa- tion as the increased log probability score and the difference between the increased log probability scores for two targets (e.g. he/she) as log proba- bility bias score which we use as measure of bias. Although this approach requires one to construct a template sentence, these templates are merely simple sentences containing attribute words of in- terest, and can be shared across multiple targets and attributes. Further, the flexibility to use such templates can potentially help measure more fine- grained notions of bias in the model. In the next section, we show that our proposed log probability bias score method is more effec- tive at exposing bias than traditional cosine-based measures. # 3 Correlation with Human Biases We investigate the correlation between our mea- sure of bias and human biases. To do this, we apply the log probability bias score to the same set of attributes that were shown to exhibit human bias in experiments that were performed using the Implicit Association Test (Greenwald et al., 1998). Specifically, we use the stimuli used in the Word Embedding Association Test (WEAT) (Caliskan et al., 2017). Word Embedding Association Test (WEAT): The WEAT method compares set of target con- cepts (e.g. 
male and female words) denoted as X and Y (each of equal size N ), with a set of at- tributes to measure bias over social attributes and roles (e.g. career/family words) denoted as A and B. The degree of bias for each target concept t is calculated as follows: Generalizing, we use the following procedure to compute the association between a target and an attribute: s(t, A, B) = [meana∈Asim(t, a) − meanb∈B sim(t, b)], where sim is the cosine similarity between the em- beddings. The test statistics is 1. Prepare a template sentence e.g.“[TARGET] is a [ATTRIBUTE]” 2. Replace [TARGET] with [MASK] and com- pute ptgt=P([MASK]=[TARGET]| sentence) 3. Replace both [TARGET] and [ATTRIBUTE] with [MASK], and compute prior probability pprior=P([MASK]=[TARGET]| sentence) S(X, Y, A, B) = [meanx∈Xs(x, A, B)− meany∈Y s(y, A, B)], where the test is a permutation test over X and Y . The p-value is computed as p = Pr[S(Xi, Yi, A, B) > S(X, Y, A, B)] # 1https://bit.ly/2EkJwh1 2For experiments version The effect size is measured as all uncased we use BERTBASE of # S(X, Y, A, B) stdt∈X∪Y s(t, A, B) # the https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip. # d = Category Pleasant/Unpleasant (Insects/Flowers) Pleasant/Unpleasant (EA/AA) Career/Family (Male/Female) Math/Arts (Male/Female) Science/Arts (Male/Female) Templates T are A, T is A T are A, T is A T likes A, T like A, T is interested in A T likes A, T like A, T is interested in A T likes A, T like A, T is interested in A Table 1: Template sentences used for the WEAT tests (T: target, A: attribute) Category Pleasant/Unpleasant (Insects/Flowers) Pleasant/Unpleasant (EA/AA) Career/Family (Male/Female) Math/Arts (Male/Female) Science/Arts (Male/Female) Targets flowers,insects,flower,insect black, white he,she,boys,girls,men,women he,she,boys,girls,men,women he,she,boys,girls,men,women Templates T are A, the T is A T people are A, the T person is A T likes A, T like A, T is interested in A T likes A, T like A, T is interested in A T likes A, T like A, T is interested in A Table 2: Template sentences used and target words for the grammatically correct sentences (T: target, A: attribute) It is important to note that the statistical test is a permutation test, and hence a large effect size does not guarantee a higher degree of statistical signifi- cance. # 3.1 Baseline: WEAT for BERT To apply the WEAT method on BERT, we first compute the embeddings for target and attribute words present in the stimuli using multiple tem- plates, such as “TARGET is ATTRIBUTE” (Re- fer Table 1 for an exhaustive list of templates used for each category). We mask the TARGET to compute the embedding3 for the ATTRIBUTE and vice versa. Words that are absent in the BERT vo- cabulary are removed from the targets. We ensure that the number of words for both targets are equal, by removing random words from the smaller tar- get set. To confirm whether the reduction in vo- cabulary results in a change of p-value, we also conduct the WEAT on GloVe with the reduced vo- cabulary.4 incorrect, resulting in low predicted probabili- ties, we fixed the TARGET to common pro- nouns/indicators of category such as flower, he, she (Table 2 contains a full list of target words and templates). This avoids large variance in predicted probabilities, leading to more reliable results. The effect size is computed in the same way as the WEAT except the standard deviation is computed over the mean log probability bias scores. 
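The WEAT statistics defined above can be summarized in a few lines of code. The sketch below computes the per-word association s(t, A, B) and the effect size d = S(X, Y, A, B) / std over t in X ∪ Y of s(t, A, B); it assumes precomputed embedding vectors as inputs and omits the permutation test used for the p-value. Function and variable names are ours.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(t, A, B): mean similarity to attribute set A minus mean similarity to attribute set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size d for target sets X, Y and attribute sets A, B (lists of embedding vectors)."""
    s_x = [association(x, A, B) for x in X]
    s_y = [association(y, A, B) for y in Y]
    S = np.mean(s_x) - np.mean(s_y)           # test statistic S(X, Y, A, B)
    return S / np.std(s_x + s_y)              # normalize by std over all targets in X and Y
```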
We experiment over the following categories of stimuli in the WEAT experiments: Category 1 (flower/insect targets and pleasant/unpleasant at- tributes), Category 3 (European American/African American names and pleasant/unpleasant at- tributes), Category 6 (male/female names and ca- reer/family attributes), Category 7 (male/female targets and math/arts attributes) and Category 8 (male/female targets and science/arts attributes). # 3.3 Comparison Results # 3.2 Proposed: Log Probability Bias Score To compare our method measuring bias, and to test for human-like biases in BERT, we also com- pute the log probability bias score for the same set of attributes and targets in the stimuli. We compute the mean log probability bias score for each attribute, and permute the attributes to mea- sure statistical significance with the permutation test. Since many TARGETs in the stimuli cause the template sentence to become grammatically 3We use the outputs from the final layer of BERT as em- beddings The WEAT on GloVe returns similar findings to those of Caliskan et al. (2017) except for the European/African American names and pleas- ant/unpleasant association not exhibiting signifi- cant bias. This is due to only 5 of the African American names being present in the BERT vo- cabulary. The WEAT for BERT fails to find any statistically significant biases at p < 0.01. This implies that WEAT is not an effective measure for bias in BERT embeddings, or that methods for constructing embeddings require additional inves- tigation. In contrast, our method of querying the underlying language model exposes statistically significant association across all categories, show- ing that BERT does indeed encode biases and that our method is more sensitive to them. 4WEAT was originally used to study the GloVe embed- dings Category WEAT on GloVe WEAT on BERT Ours on BERT Log Probability Bias Score Pleasant/Unpleasant (Insects/Flowers) Pleasant/Unpleasant (EA/AA) Career/Family (Male/Female) Math/Arts (Male/Female) Science/Arts (Male/Female) 1.543* 1.012 1.814* 1.061 1.246* 0.6688 1.003 0.5047 0.6755 0.8815 0.8744* 0.8864* 1.126* 0.8495* 0.9572* Table 3: Effect sizes of bias measurements on WEAT Stimuli. (* indicates significant at p < 0.01) Prior Prob. Avg. Predicted Prob. Gender 11.5% 13.9% 10.3% 9.8% Male Female noun referring to no entities with a significantly higher probability (p = 0.007 on a permutation test); see Table 4. As the training set is balanced, we attribute this bias to the underlying BERT rep- resentations. Table 4: Probability of pronoun referring to neither entity in a sentence of GPR We also investigate the relation between the topic of the sentence and model’s ability to asso- ciate the female pronoun with no entity. We first extracted 20 major topics from the dataset using non-negative matrix factorization (Lee and Seung, 2001) (refer to Appendix for the list of topics). We then compute the bias score for each topic as the sum of the log probability bias score for the top 15 most prevalent words of each topic weighted by their weights within the topic. For this, we use a generic template “[TARGET] are interested in [ATTRIBUTE]” where TARGET is either men or women. Next we compute a bias score for each sample in the training data as the sum of indi- vidual bias scores of topics present in the sam- ple, weighted by the topic weights. 
Finally, we measured the Spearman correlation coefficient to be 0.207 (which is statistically significant with p = 4e − 11) between the bias scores for male gender across all samples and the model’s proba- bility to associate a female pronoun with no entity. We conclude that models using BERT find it chal- lenging to perform coreference resolution when the gender pronoun is female and if the topic is biased towards the male gender. # 4 Case Study: Effects of Gender Bias on Gendered Pronoun Resolution Dataset We examined the downstream effects of bias in BERT using the Gendered Pronoun Res- olution (GPR) task (Webster et al., 2018). GPR is a sub-task in co-reference resolution, where a pronoun-containing expression is to be paired with the referring expression. Since pronoun re- solving systems generally favor the male entities (Webster et al., 2018), this task is a valid test- bed for our study. We use the GAP dataset5 by Webster et al. (2018), containing 8,908 human- labeled ambiguous pronoun-name pairs, created from Wikipedia. The task is to classify whether an ambiguous pronoun P in a text refers to entity A, entity B or neither. There are 1,000 male and female pronouns in the training set each, with 103 and 98 of them not referring to any entity in the sentence, respectively. Model We use the model suggested on Kaggle,6 inspired by Tenney et al. (2019). The model uses BERT embeddings for P , A and B, given the con- text of the input sentence. Next, it uses a multi- layer perceptron (MLP) layer to perform a naive classification to decide if the pronoun belongs to A, B or neither. The MLP layer uses a single hid- den layer with 31 dimensions, a dropout of 0.6 and L2 regularization with weight 0.1. # 5 Real World Implications In previous sections, we discussed that BERT has human-like biases, which are propagated to down- stream tasks. In this section, we discuss an- other potential negative impact of using BERT in a downstream model. Given that three quarters of US employers now use social media for recruiting job candidates (Segal, 2014), many applications are filtered using job recommendation systems and other AI-powered services. Zhao et al. (2018) dis- cussed that resume filtering systems are biased Results Although the number of male pronouns associated with no entities in the training data is slightly larger, the model predicted the female pro- # 5https://github.com/google-research-datasets/gap-coreference 6https://www.kaggle.com/mateiionita/taming-the-bert-a-baseline when the model has strong association between gender and certain professions. Similarly, certain gender-stereotyped attributes have been strongly associated with occupational salary and prestige (Glick, 1991). Using our proposed method, we investigate the gender bias in BERT embeddingss for certain occupation and skill attributes. Datasets: We use three datasets for our study of gender bias in employment attributes: Dataset Percentage Salary Pos-Traits Neg-Traits Skills 88.5% 80.0% 78.9% 84.0% Table 5: strongly with the male gender Percentage of attributes associated more # 6 Related Work • Employee Salary Dataset7 for Montgomery County of Maryland- Contains 6882 in- stances of “Job Title” and “Salary” records along with other attributes. We sort this dataset in decreasing order of salary and take the first 1000 instances as a proxy for high- paying and prestigious jobs. • Positive and Negative Traits Dataset8- Con- tains a collection of 234 and 292 adjectives considered “positive” and “negative” traits, respectively. 
NLP applications ranging from core tasks such as coreference resolution (Rudinger et al., 2018) and language identification (Jurgens et al., 2017), to downstream systems such as automated essay scoring (Amorim et al., 2018), exhibit inherent so- cial biases which are attributed to the datasets used to train the embeddings (Barocas and Selbst, 2016; Zhao et al., 2017; Yao and Huang, 2017). There have been several efforts to investigate the amount of intrinsic bias within uncontextual- ized word embeddings in binary (Bolukbasi et al., 2016; Garg et al., 2018; Swinger et al., 2019) and multiclass (Manzini et al., 2019) settings. • O*NET 23.2 technology skills9 Contains 17649 unique skills for 27660 jobs, which are posted online Discussion We used the following two templates to measure gender bias: • “TARGET is ATTRIBUTE”, where TAR- GET are male and female pronouns viz. he and she. The ATTRIBUTE are job titles from the Employee Salary dataset, or the adjec- tives from the Positive and Negative traits dataset. Contextualized embeddings such as BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018) have been replacing the traditional type- It is thus important to under- level embeddings. stand the effects of biases learned by these em- bedding models on downstream tasks. However, it is not straightforward to use the existing bias- exposure methods for contextualized embeddings. For instance, May et al. (2019) used WEAT on sentence embeddings of ELMo and BERT, but there was no clear indication of bias. Rather, they observed counterintuitive behavior like vastly dif- ferent p-values for results concerning gender. Along similar lines, Basta et al. (2019) noted that contextual word-embeddings are less biased than traditional word-embeddings. Yet, biases like gender are propagated heavily in downstream tasks. For instance, Zhao et al. (2019) showed that ELMo exhibits gender bias for certain pro- fessions. As a result, female entities are pre- dicted less accurately than male entities for certain occupation words, in the coreference resolution task. Field and Tsvetkov (2019) revealed biases in ELMo embeddings that limit their applicability across data domains. Motivated by these recent findings, our work proposes a new method to ex- pose and measure bias in contextualized word em- beddings, specifically BERT. As opposed to previ- • “TARGET can do ATTRIBUTE”, where the AT- from the O*NET the TARGETs are the same, but TRIBUTE are skills dataset. Table 5 shows the percentage of attributes that were more strongly associated with the male than the female gender. The results prove that BERT expresses strong preferences for male pronouns, raising concerns with using BERT in downstream tasks like resume filtering. 7https://catalog.data.gov/dataset/employee-salaries-2017 8http://ideonomy.mit.edu/essays/traits.html 9https://www.onetcenter.org/database.html#individual-files ous work, our measure of bias is more consistent with human biases. We also study the effect of this intrinsic bias on downstream tasks, and highlight the negative impacts of gender-bias in real world applications. # 7 Conclusion In this paper, we showed that querying the under- lying language model can effectively measure bias in BERT and expose multiple stereotypes embed- ded in the model. We also showed that our mea- sure of bias is more consistent with human-biases, and outperforms the traditional WEAT method on BERT. Finally we showed that these biases can have negative downstream effects. 
In the future, we would like to explore the effects on other downstream tasks such as text classification, and device an effective method of debiasing contextu- alized word embeddings. # Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. IIS1812327. # References Evelin Amorim, Marcia Canc¸ado, and Adriano Veloso. 2018. Automated essay scoring in the presence of biased ratings. In Proc. of NAACL, pages 229–237. Solon Barocas and Andrew D Selbst. 2016. Big data’s disparate impact. Calif. L. Rev., 104:671. Christine Basta, Marta R Costa-juss`a, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. arXiv preprint arXiv:1904.08783. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proc. of NIPS, pages 4349–4357. and Arvind Joanna J Bryson, Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proc. of NAACL. Anjalie Field and Yulia Tsvetkov. 2019. Entity-centric contextual affective analysis. In Proc. of ACL. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635–E3644. Peter Glick. 1991. Trait-based and sex-based discrimi- nation in occupational prestige, occupational salary, and hiring. Sex Roles, 25(5-6):351–378. Anthony Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz. 1998. Measuring individual differ- ences in implicit cognition: The implicit association test. Journal of personality and social psychology, 74:1464–80. David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating dialectal variability for socially equitable language identification. In Proc. of ACL, pages 51–57. Daniel Lee and Hyunjune Seung. 2001. Algorithms for non-negative matrix factorization. In Proc. of NIPS. Klas Leino, Matt Fredrikson, Emily Black, Shayak Sen, and Anupam Datta. 2019. Feature-wise bias amplification. In Prof. of ICLR. Thomas Manzini, Yao Chong, Yulia Tsvetkov, and Alan W Black. 2019. Black is to criminal as cau- casian is to police: Detecting and removing multi- class bias in word embeddings. In Proc. of NAACL. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proc. of NAACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Proc.of NIPS, pages 3111–3119. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proce. of EMNLP, pages 1532– 1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proc. of NAACL. J Segal. 2014. Social media use in hiring: Assessing the risks. HR Magazine, 59(9). Nathaniel Swinger, Maria De-Arteaga, Neil Heffer- nan IV, Mark Leiserson, and Adam Kalai. 2019. 
What are the biases in my word embedding? In Proc. of the AAAI/ACM Conference on Artificial In- telligence, Ethics, and Society (AIES). Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipan- jan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in con- textualized word representations. In Proc. of ICLR. Kellie Webster, Marta Recasens, Vera Axelrod, and Ja- son Baldridge. 2018. Mind the gap: A balanced cor- pus of gendered ambiguous pronouns. Transactions of the Association for Computational Linguistics. Sirui Yao and Bert Huang. 2017. Beyond parity: Fair- In Ad- ness objectives for collaborative filtering. vances in Neural Information Processing Systems, pages 2921–2930. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In NAACL (short). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proc. of EMNLP. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning gender-neutral word embeddings. # Appendix Top 5 Words match,round,second,team,season times,city,jersey,york,new married,son,died,wife,daughter best,award,actress,films,film friend,like,work,mother,life university,music,attended,high,school president,general,governor,party,state songs,solo,song,band,album medal,gold,final,won,world best,role,character,television,series kruse,moved,amy,esme,time usa,trunchbull,pageant,2011,miss american,august,brother,actress,born sir,died,church,song,john natasha,days,hospital,helene,later played,debut,sang,role,opera january,december,october,july,married academy,member,american,university,family award,best,played,mary,year jersey,death,james,king,paul Table 6: Extracted topics for the GPR dataset
{ "id": "1904.08783" }
1906.07348
Zero-Shot Entity Linking by Reading Entity Descriptions
We present the zero-shot entity linking task, where mentions must be linked to unseen entities without in-domain labeled data. The goal is to enable robust transfer to highly specialized domains, and so no metadata or alias tables are assumed. In this setting, entities are only identified by text descriptions, and models must rely strictly on language understanding to resolve the new entities. First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities. Second, we propose a simple and effective adaptive pre-training strategy, which we term domain-adaptive pre-training (DAP), to address the domain shift problem associated with linking unseen entities in a new domain. We present experiments on a new dataset that we construct for this task and show that DAP improves over strong pre-training baselines, including BERT. The data and code are available at https://github.com/lajanugen/zeshel.
http://arxiv.org/pdf/1906.07348
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, Honglak Lee
cs.CL, cs.LG
ACL 2019
null
cs.CL
20190618
20190618
# Zero-Shot Entity Linking by Reading Entity Descriptions

# Lajanugen Logeswaran†∗ Ming-Wei Chang‡ Kenton Lee‡ Kristina Toutanova‡ Jacob Devlin‡ Honglak Lee‡,†

# †University of Michigan, ‡Google Research {llajan,honglak}@umich.edu, {mingweichang,kentonl,kristout,jacobdevlin,honglak}@google.com

# Abstract

We present the zero-shot entity linking task, where mentions must be linked to unseen entities without in-domain labeled data. The goal is to enable robust transfer to highly specialized domains, and so no metadata or alias tables are assumed. In this setting, entities are only identified by text descriptions, and models must rely strictly on language understanding to resolve the new entities. First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities. Second, we propose a simple and effective adaptive pre-training strategy, which we term domain-adaptive pre-training (DAP), to address the domain shift problem associated with linking unseen entities in a new domain. We present experiments on a new dataset that we construct for this task and show that DAP improves over strong pre-training baselines, including BERT. The data and code are available at https://github.com/lajanugen/zeshel.1

[Figure 1 contents: example training worlds (Military, Star Wars, Elder Scrolls) and test worlds (Coronation Street, Lego), each showing a mention in context, the full entity dictionary, and candidate entity descriptions such as "Burden (Effect)" vs. "Burden (Oblivion)" and "Orient Expedition" vs. "Orient Expedition Wallet"; see the caption below.]

# Introduction

Entity linking systems have achieved high performance in settings where a large set of disambiguated mentions of entities in a target entity dictionary is available for training. Such systems typically use powerful resources such as a high-coverage alias table, structured data, and linking frequency statistics. For example, Milne and Witten (2008) show that by only using the prior probability gathered from hyperlink statistics on Wikipedia training articles, one can achieve 90% accuracy on the task of predicting links in Wikipedia test articles.

While most prior works focus on linking to general entity databases, it is often desirable to link to

∗ Work completed while interning at Google

1zeshel stands for zero-shot entity linking.

Figure 1: Zero-shot entity linking. Multiple training and test domains (worlds) are shown. The task has two key properties: (1) It is zero-shot, as no mentions have been observed for any of the test world entities during training. (2) Only textual (non-structured) information is available.

specialized entity dictionaries such as legal cases, company project descriptions, the set of characters in a novel, or a terminology glossary. Unfortunately, labeled data are not readily available and are often expensive to obtain for these specialized entity dictionaries.
Therefore, we need to develop entity linking systems that can generalize to unseen specialized entities. Without frequency statistics and meta-data, the task becomes substan- tially more challenging. Some prior works have pointed out the importance of building entity link- ing systems that can generalize to unseen entity sets (Sil et al., 2012; Wang et al., 2015), but adopt an additional set of assumptions. In this work, we propose a new zero-shot en- tity linking task, and construct a new dataset for it.2 The target dictionary is simply defined as a set of entities, each with a text description (from a canonical entity page, for example). We do not constrain mentions to named entities, unlike some prior work, which makes the task harder due to large number of candidate entities. In our dataset, multiple entity dictionaries are available for train- ing, with task performance measured on a dis- joint set of test entity dictionaries for which no labeled data is available. Figure 1 illustrates the task setup. We construct the dataset using mul- tiple sub-domains in Wikia and automatically ex- tract labeled mentions using hyper-links. Zero-shot entity linking poses two challenges for entity linking models. First, without the avail- ability of powerful alias tables or frequency pri- ors, models must read entity descriptions and rea- son about the correspondence with the mention in context. We show that a strong reading compre- hension model is crucial. Second, since labeled mentions for test entities are not available, models must adapt to new mention contexts and entity de- scriptions. We focus on both of these challenges. The contributions of this paper are as follows: • We propose a new zero-shot entity linking task that aims to challenge the generalization ability of entity linking systems with minimal assump- tions. We construct a dataset for this task, which will be made publicly available. • We build a strong baseline by using state-of-the- art reading comprehension models. We show that attention between mention in context and entity descriptions, which has not been used in prior entity linking work, is critical for this task. • We propose a simple yet novel adaptation strat- egy called domain-adaptive pre-training (DAP) and show that it can further improve entity link- ing performance. # 2 Zero-shot Entity Linking We first review standard entity linking task defini- tions and discuss assumptions made by prior sys- tems. We then define the zero-shot entity linking task and discuss its relationship to prior work. 2Existing datasets are either unsuitable or would have to be artificially partitioned to construct a dataset for this task. # 2.1 Review: Entity linking Entity linking (EL) is the task of grounding en- tity mentions by linking them to entries in a given database or dictionary of entities. Formally, given a mention m and its context, an entity linking sys- tem links m to the corresponding entity in an en- tity set E = {ei}i=1,...,K, where K is the num- ber of entities. The standard definition of EL (Bunescu and Pasca, 2006; Roth et al., 2014; Sil et al., 2018) assumes that mention boundaries are provided by users or a mention detection system. The entity set E can contain tens of thousands or even millions of entities, making this a challeng- ing task. In practice, many entity linking systems rely on the following resources or assumptions: Single entity set This assumes that there is a sin- gle comprehensive set of entities E shared between training and test examples. 
Alias table An alias table contains entity can- didates for a given mention string and limits the possibilities to a relatively small set. Such tables are often compiled from a labeled training set and domain-specific heuristics. Frequency statistics Many systems use fre- quency statistics obtained from a large labeled cor- pus to estimate entity popularity and the probabil- ity of a mention string linking to an entity. These statistics are very powerful when available. Structured data Some systems assume access to structured data such as relationship tuples (e.g., (Barack Obama, Spouse, Michelle Obama)) or a type hierarchy to aid disambiguation. # 2.2 Task Definition The main motivation for this task is to ex- pand the scope of entity linking systems and make them generalizable to unseen entity sets for which none of the powerful resources listed above are readily available. Therefore, we drop the above assumptions and make one weak as- the existence of an entity dictionary sumption: E = {(ei, di)}i=1,..,K, where di is a text descrip- tion of entity ei. Our goal is to build entity linking systems that can generalize to new domains and entity dictio- naries, which we term worlds. We define a world as W = (MW , UW , EW ), where MW and UW are distributions over mentions and documents from the world, respectively, and EW is an entity dictio- nary associated with W. Mentions m from MW _ 7 Seen Small citcpive Structured Entity Task In-Domain Entity Set Candidate Set Statistics Data dictionary Standard EL v v v v v Cross-Domain EL v v v v Linking to Any DB (Sil et al., 2012) v v v Zero-Shot EL v Table 1: Assumptions and resources for entity linking task definitions. We classify task definitions based on whether (i) the system is tested on mentions from the training domain (In-Domain), (ii) linked mentions from the target entity set are seen during training (Seen Entity Set), (iii) a small high-coverage candidate set can be derived using alias tables or strict token overlap constraints (Small Candidate Set) and the availability of (iv) Frequency statistics, (v) Structured Data, and (vi) textual descriptions (Entity dictionary). are defined as mention spans in documents from UW . We assume the availability of labelled men- tion, entity pairs from one or more source worlds W 1 src for training. At test time we need to be able to label mentions in a new world Wtgt. src, EWtgt are Note that the entity sets EW 1 disjoint. See Figure 1 for an illustration of several training and test worlds. We additionally assume that samples from the document distribution UWtgt and the entity descrip- tions EWtgt are available for training. These sam- ples can be used for unsupervised adaptation to the target world. During training, mention boundaries for mentions in Wtgt are not available. At test time, mention boundaries are provided as input. (Wang et al., 2015) has followed a similar set- ting. The main difference between zero-shot EL and these works is that they assumed either a high- coverage alias table or high-precision token over- lap heuristics to reduce the size of the entity can- didate set (i.e., to less than four in Sil et al. (2012)) and relied on structured data to help disambigua- tion. By compiling and releasing a multi-world dataset focused on learning from textual informa- tion, we hope to help drive progress in linking en- tities for a broader set of applications. 
Work on word sense disambiguation based on dictionary definitions of words is related as well (Chaplot and Salakhutdinov, 2018), but this task exhibits lower ambiguity and existing formu- lations have not focused on domain generalization. # 2.3 Relationship to other EL tasks We summarize the relationship between the newly introduced zero-shot entity linking task and prior EL task definitions in Table 1. Standard EL While there are numerous differ- ences between EL datasets (Bunescu and Pasca, 2006; Ling et al., 2015), most focus on a standard setting where mentions from a comprehensive test entity dictionary (often Wikipedia) are seen dur- ing training, and rich statistics and meta-data can be utilized (Roth et al., 2014). Labeled in-domain documents with mentions are also assumed to be available. Cross-Domain EL Recent work has also gen- eralized to a cross-domain setting, linking en- tity mentions in different types of text, such as blogposts and news articles to the Wikipedia KB, while only using labeled mentions in Wikipedia for training (e.g., Gupta et al. (2017); Le and Titov (2018), inter alia). # 3 Dataset Construction We construct a new dataset to study the zero- shot entity linking problem using documents from Wikia.3 Wikias are community-written encyclo- pedias, each specializing in a particular subject or theme such as a fictional universe from a book or film series. Wikias have many interesting proper- ties suitable for our task. Labeled mentions can be automatically extracted based on hyperlinks. Mentions and entities have rich document context that can be exploited by reading comprehension approaches. Each Wikia has a large number of unique entities relevant to a specific theme, mak- ing it a useful benchmark for evaluating domain generalization of entity linking systems. We use data from 16 Wikias, and use 8 of them for training and 4 each for validation and testing. To construct data for training and evaluation, we first extract a large number of mentions from the Wikias. Many of these mentions can be easily linked by string matching between mention string Linking to Any DB Sil et al. (2012) proposed a task setup very similar to ours, and later work # 3 https://www.wikia.com. World Entities Mentions Train Evaluation Seen Unseen Training American Football Doctor Who Fallout Final Fantasy Military Pro Wrestling StarWars World of Warcraft 31929 40281 16992 14044 104520 10133 87056 27677 3898 8334 3286 6041 13063 1392 11824 1437 410 819 337 629 1356 151 1143 155 333 702 256 527 1408 111 1563 100 Validation Coronation Street Muppets Ice Hockey Elder Scrolls 17809 21344 28684 21712 0 0 0 0 0 0 0 0 1464 2028 2233 4275 Test Forgotten Realms Lego Star Trek YuGiOh 15603 10076 34430 10031 0 0 0 0 0 0 0 0 1200 1199 4227 3374 Table 2: Zero-shot entity linking dataset based on Wikia. and the title of entity documents. These men- tions are downsampled during dataset construc- tion, and occupy a small percentage (5%) of the final dataset. While not completely representa- tive of the natural distribution of mentions, this data construction method follows recent work that focuses on evaluating performance on the chal- lenging aspects of the entity linking problem (e.g., Gupta et al. (2017) selected mentions with mul- tiple possible entity candidates for assessing in- domain unseen entity performance). Each Wikia document corresponds to an entity, represented by the title and contents of the document. These en- tities, paired with their text descriptions, comprise the entity dictionary. 
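To make the setup concrete, the sketch below models a world's entity dictionary and its hyperlink-derived mentions as plain Python dataclasses. The class and field names (Entity, Mention, World, gold_entity_id, and so on) are illustrative choices for this note, not the schema of the released Zeshel data.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Entity:
    entity_id: str
    title: str
    description: str        # text description taken from the entity's document

@dataclass
class Mention:
    left_context: str
    mention: str            # the mention span text
    right_context: str
    gold_entity_id: str     # known for labeled (training) worlds only

@dataclass
class World:
    name: str                                   # e.g. "elder_scrolls"
    dictionary: Dict[str, Entity] = field(default_factory=dict)
    mentions: List[Mention] = field(default_factory=list)

# A tiny world built from one hyperlinked mention.
world = World(name="elder_scrolls")
world.dictionary["burden_effect"] = Entity(
    "burden_effect", "Burden (Effect)",
    "Burden is a spell effect that temporarily increases the weight ...")
world.mentions.append(Mention(
    "The ", "Burden", " spells the opposite of Feather ...", "burden_effect"))
```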
Since the task is already quite challenging, we assume that the target entity exists in the entity dictionary and leave NIL recognition or cluster- ing (NIL mentions/entities refer to entities non- existent in the knowledge-base) to future editions of the task and dataset. We categorize the mentions based on token overlap between mentions and the corresponding entity title as follows. High Overlap: title is iden- tical to mention text, Multiple Categories: title is mention text followed by a disambiguation phrase (e.g., mention string: ‘Batman (Lego)’), Ambiguous substring: mention is a sub- string of title (e.g., mention string: ‘Agent’, title: ‘The Agent’). All other mentions are categorized Coronation Street Mention She told ray that Dickie and Audrey had met up again and tried to give their marriage another go . . . I don’t want to see her face again . . . ” Dickie Fleming Audrey Fleming Zeedan Nazir Richard “Dickie” Fleming lived in coronation street with his wife Audrey from 1968 to 1970. Audrey Fleming (ne´e bright) was a resident of 3 coronation street from 1968 to 1970 . Audrey mar- ried Dickie Fleming . . . Zeedan Nazir is the son of the Late Kal and Jamila Nazir . . . Star Wars Mention The droid acted as Moff Kilran’s representative on board the Black Talon, an Imperial trans- port ship. Gage- class transport Imperial Armored Transport M-class Imperial Attack Transport The Gage-class transport was a transport design used by the re- constituted Sith Empire of the Great Galactic War. The Kuat Drive Yards Imperial Armored Transport was fifty me- ters long and carried ten crewmen and twenty soldiers. The M-class Imperial Attack Transport was a type of starship which saw service in the Imperial Military during the Galactic War. Table 3: Example mention and entity candidates from Coronation Street and Star Wars. Note that the lan- guage usage is very different across different Worlds. as Low Overlap. These mentions respectively con- stitute approximately 5%, 28%, 8% and 59% of the mentions in the dataset. Table 2 shows some statistics of the dataset. Each domain has a large number of entities rang- ing from 10,000 to 100,000. The training set has 49,275 labeled mentions. To examine the in- domain generalization performance, we construct heldout sets seen and unseen of 5,000 mentions each, composed of mentions that link to only en- tities that were seen or unseen during training, respectively. The validation and test sets have 10,000 mentions each (all of which are unseen). Table 3 shows examples of mentions and enti- ties in the dataset. The vocabulary and language used in mentions and entity descriptions differs drastically between the different domains. In ad- dition to acquiring domain specific knowledge, understanding entity descriptions and performing reasoning is required in order to resolve mentions. # 4 Models for Entity Linking We adopt a two-stage pipeline consisting of a fast candidate generation stage, followed by a more ex- pensive but powerful candidate ranking stage. # 4.1 Candidate generation Without alias tables for standard entity linking, a natural substitute is to use an IR approach for candidate generation. We use BM25, a variant of TF-IDF to measure similarity between mention string and candidate documents.4 Top-k entities retrieved by BM25 scoring with Lucene5 are used for training and evaluation. In our experiments k is set to 64. 
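The paper generates candidates with BM25 as implemented in Lucene; the following is a minimal pure-Python sketch of that scoring step under simplified assumptions (whitespace tokenization, illustrative k1 and b values), not a reproduction of Lucene's analyzer or index.

```python
import math
from collections import Counter

def bm25_candidates(mention, entity_docs, k=64, k1=1.2, b=0.75):
    """Rank entity documents against a mention string with Okapi BM25.

    entity_docs: dict mapping entity_id -> description text.
    Returns the ids of the top-k scoring entities.
    """
    docs = {eid: text.lower().split() for eid, text in entity_docs.items()}
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs.values()) / max(n_docs, 1)
    df = Counter()                       # document frequency of each term
    for tokens in docs.values():
        df.update(set(tokens))

    def idf(term):
        return math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))

    query = mention.lower().split()
    scores = {}
    for eid, tokens in docs.items():
        tf = Counter(tokens)
        dl = len(tokens)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            score += idf(term) * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * dl / avgdl))
        scores[eid] = score
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: retrieve candidates for a mention from a toy entity dictionary.
toy_dict = {"e1": "Burden is a spell effect that temporarily increases weight",
            "e2": "Feather is a spell effect that temporarily reduces weight"}
print(bm25_candidates("Burden", toy_dict, k=2))
```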
The coverage of the top-64 candidates is less than 77% on average, indicating the difficulty of the task and leaving substantial room for improvement in the candidate generation phase.

# 4.2 Candidate ranking

Since comparing two texts—a mention in context and a candidate entity description—is a task similar to reading comprehension and natural language inference tasks, we use an architecture based on a deep Transformer (Vaswani et al., 2017) which has achieved state-of-the-art performance on such tasks (Radford et al., 2018; Devlin et al., 2019).

As in BERT (Devlin et al., 2019), the mention in context m and candidate entity description e, each represented by 128 word-piece tokens, are concatenated and input to the model as a sequence pair together with special start and separator tokens: ([CLS] m [SEP] e [SEP]). Mention words are signaled by a special embedding vector that is added to the mention word embeddings. The Transformer encoder produces a vector representation hm,e of the input pair, which is the output of the last hidden layer at the special pooling token [CLS]. Entities in a given candidate set are scored as wT hm,e, where w is a learned parameter vector, and the model is trained using a softmax loss. An architecture with 12 layers, hidden dimension size 768 and 12 attention heads was used in our experiments. We refer to this model as Full-Transformer. By jointly encoding the entity description and the mention in context with a Transformer, they can attend to each other at every layer.

Note that prior neural approaches for entity linking have not explored such architectures with deep cross-attention. To assess the value of this departure from prior work, we implement the following two variants: (i) Pool-Transformer: a siamese-like network which uses two deep Transformers to separately derive single-vector representations of the mention in context, hm, and the candidate entity, he; they take as input the mention in context and entity description respectively, together with special tokens indicating the boundaries of the texts: ([CLS] m [SEP]) and ([CLS] e [SEP]), and output the last hidden layer encoding at the special start token. The scoring function is hmT he. Single vector representations for the two components have been used in many prior works, e.g., Gupta et al. (2017). (ii) Cand-Pool-Transformer: a variant which uses single-vector entity representations but can attend to individual tokens of the mention and its context as in Ganea and Hofmann (2017). This architecture also uses two Transformer encoders, but introduces an additional attention module which allows he to attend to individual token representations of the mention in context.

4We also experimented with using the mention+context text but this variant performs substantially worse.

# 5 http://lucene.apache.org/

In the experiments section, we also compare to re-implementations of Gupta et al. (2017) and Ganea and Hofmann (2017), which are similar to Pool-Transformer and Cand-Pool-Transformer respectively but with different neural architectures for encoding.

# 5 Adapting to the Target World

We focus on using unsupervised pre-training to ensure that downstream models are robust to target domain data. There exist two general strategies for pre-training: (1) task-adaptive pre-training, and (2) open-corpus pre-training. We describe these below, and also propose a new strategy: domain-adaptive pre-training (DAP), which is complementary to the two existing approaches.

Task-adaptive pre-training Glorot et al.
(2011); Chen et al. (2012); Yang and Eisenstein inter alia, pre-trained on the source (2015), and target domain unlabeled data jointly with the goal of discovering features that generalize across domains. After pre-training, the model is fine-tuned on the source-domain labeled data.6 Open-corpus pre-training Instead of explicitly adapting to a target domain, this approach sim- ply applies unsupervised pre-training to large cor- pora before fine-tuning on the source-domain la- beled data. Examples of this approach include ELMo (Peters et al., 2018), OpenAI GPT (Rad- ford et al., 2018), and BERT (Devlin et al., 2019). 6In many works, the learned representations are kept fixed and only higher layers are updated. Intuitively, the target-domain distribution is likely to be partially captured by pre-training if the open corpus is sufficiently large and diverse. Indeed, open-corpus pre-training has been shown to ben- efit out-of-domain performance far more than in- domain performance (He et al., 2018). Domain-adaptive pre-training In addition to pre-training stages from other approaches, we propose to insert a penultimate domain adaptive pre-training (DAP) stage, where the model is pre-trained only on the target-domain data. As usual, DAP is followed by a final fine-tuning stage on the source-domain labeled data. The intuition for DAP is that representational capacity is limited, so models should prioritize the quality of target domain representations above all else. We introduce notation to describe various ways in which pre-training stages can be composed. • Usrc denotes text segments from the union source world document distributions of UW 1 . . . UW n src. src • Utgt denotes text segments from the document distribution of a target world Wtgt. • Usrc+tgt denotes randomly interleaved text segments from both Usrc and Utgt. • UWB denotes text segments from open cor- pora, which in our experiments are Wikipedia and the BookCorpus datasets used in BERT. We can chain together a series of pre-training stages. For example, UWB → Usrc+tgt → Utgt in- dicates that the model is first pre-trained on the open corpus, then pre-trained on the combined source and target domains, then pre-trained on only the target domain, and finally fine-tuned on the source-domain labeled data.7 We show that chaining together different pre-training strategies provides additive gains. # 6 Experiments Pre-training We use the BERT-Base model ar- chitecture in all our experiments. The Masked LM objective (Devlin et al., 2019) is used for unsuper- vised pre-training. For fine-tuning language mod- els (in the case of multi-stage pre-training) and 7We use the notation Ux interchangeably to mean both the unsupervised data x and the strategy to pre-train on x. Model Resources Avg Acc Edit-distance TF-IDF 8 Ganea and Hofmann (2017) Gupta et al. (2017) ∅ ∅ GloVe GloVe 16.49 26.06 26.96 27.03 Full-Transformer Full-Transformer (Pre-trained) Full-Transformer (Pre-trained) Full-Transformer (Pre-trained) ∅ Usrc Utgt Usrc+tgt 19.17 66.55 67.87 67.91 Pool-Transformer (Pre-trained) Cand-Pool-Trans. (Pre-trained) Full-Transformer (Pre-trained) UWB UWB UWB 57.61 52.62 76.06 Table 4: Baseline results for Zero-shot Entity Linking. Averaged normalized Entity-Linking accuracy on all validation domains. Usrc+tgt refers to masked language model pre-training on unlabeled data from training and validation worlds. fine-tuning on the Entity-Linking task, we use a small learning rate of 2e-5, following the recom- mendations from Devlin et al. (2019). 
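As a rough illustration of the Full-Transformer ranking model of Section 4.2 (joint encoding of [CLS] m [SEP] e [SEP], a mention-indicator embedding, and a softmax loss over the candidate set), here is a PyTorch sketch. It substitutes a small, randomly initialized TransformerEncoder for the pre-trained BERT-Base model used in the paper, and every name, size, and the toy inputs below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CrossEncoderRanker(nn.Module):
    """Sketch of a cross-encoder ranker: the mention in context and a candidate
    entity description are concatenated into one sequence, encoded jointly, and
    scored with a learned vector w applied to the [CLS] state."""

    def __init__(self, vocab_size, hidden=256, layers=4, heads=4, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.pos_emb = nn.Embedding(max_len, hidden)
        # 0/1 indicator marking which positions belong to the mention span.
        self.mention_emb = nn.Embedding(2, hidden)
        layer = nn.TransformerEncoderLayer(hidden, heads, 4 * hidden, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.w = nn.Linear(hidden, 1, bias=False)   # scoring vector w

    def score(self, token_ids, mention_mask):
        # token_ids, mention_mask: (num_candidates, seq_len); each row encodes
        # [CLS] mention-in-context [SEP] entity description [SEP].
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        h = self.tok_emb(token_ids) + self.pos_emb(pos) + self.mention_emb(mention_mask)
        h = self.encoder(h)
        return self.w(h[:, 0, :]).squeeze(-1)       # score from the [CLS] position

    def loss(self, token_ids, mention_mask, gold_index):
        # Softmax loss over the candidate set of a single mention.
        scores = self.score(token_ids, mention_mask).unsqueeze(0)
        return nn.functional.cross_entropy(scores, gold_index.view(1))

# Toy usage: 8 candidates of length 32, gold candidate at index 3.
model = CrossEncoderRanker(vocab_size=1000)
ids = torch.randint(0, 1000, (8, 32))
mask = torch.zeros(8, 32, dtype=torch.long)
mask[:, 1:6] = 1  # positions of the mention words
print(model.loss(ids, mask, torch.tensor(3)))
```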
For models trained from scratch we use a learning rate of 1e-4.

Evaluation We define the normalized entity-linking performance as the performance evaluated on the subset of test instances for which the gold entity is among the top-k candidates retrieved during candidate generation. The unnormalized performance is computed on the entire test set. Our IR-based candidate generation has a top-64 recall of 76% and 68% on the validation and test sets, respectively. The unnormalized performance is thus upper-bounded by these numbers. Strengthening the candidate generation stage improves the unnormalized performance, but this is outside the scope of our work. Average performance across a set of worlds is computed by macro-averaging. Performance is defined as the accuracy of the single-best identified entity (top-1 accuracy).

# 6.1 Baselines

We first examine some baselines for zero-shot entity linking in Table 4. We include naive baselines such as Levenshtein edit-distance and TF-IDF, which compare the mention string against candidate entity title and full document description, respectively, to rank candidate entities. We re-implemented recent neural models designed for entity linking (Ganea and Hofmann, 2017; Gupta et al., 2017), but did not expect them to perform well since the original systems were designed for settings where labeled mentions or meta-data for the target entities were available.

(a)
Pretraining | W1 tgt | W2 tgt | W3 tgt | W4 tgt | Avg
Usrc+tgt (Glorot et al., 2011)† | 73.19 | 71.61 | 62.16 | 64.69 | 67.91
Usrc+tgt → Utgt (DAP) | 79.20 | 75.55 | 66.85 | 66.72 | 72.08
UWB (Devlin et al., 2019) | 83.40 | 79.00 | 73.03 | 68.82 | 76.06
UWB → Utgt (DAP) | 81.68 | 81.34 | 73.17 | 71.97 | 77.04
UWB → Usrc+tgt | 82.92 | 79.00 | 72.62 | 69.55 | 76.02
UWB → Usrc+tgt → Utgt (DAP) | 82.82 | 81.59 | 75.34 | 72.52 | 78.07

(b) [Plot of Entity-Linking accuracy (60–80) against target-domain MLM accuracy for the UWB, UWB → Usrc+tgt, and Usrc+tgt pre-training strategies, with and without DAP.]

Figure 2: Left: (a) Impact of using Domain Adaptive Pre-training. We fine-tune all the models on the source labeled data after pretraining. Right: (b) Relationship between MLM (Masked LM) accuracy of pre-trained model and Entity-Linking performance of the fine-tuned model, evaluated on target domains. Adding domain adaptive pre-training improves both MLM accuracy as well as the entity linking performance. Note: src represents the union of all 8 training worlds and we adapt to one tgt world at a time. The target worlds W1–W4 tgt are the four validation worlds (Coronation Street, Muppets, Ice hockey, and Elder scrolls; see Table 2). †We refer to Glorot et al. (2011) for the idea of training a denoising autoencoder on source and target data together rather than the actual implementation. See text for more details.

The poor performance of these models validates the necessity of using strong reading comprehension models for zero-shot entity linking.

When using the Full-Transformer model, pre-training is necessary to achieve reasonable performance. We present results for models pre-trained on different subsets of our task corpus (Usrc, Utgt, Usrc+tgt) as well as pre-training on an external large corpus (UWB). We observe that the choice of data used for pre-training is important.

In Table 4 we also compare the Pool-Transformer, Candidate-Pool-Transformer and Full-Transformer. The significant gap between Full-Transformer and the other variants shows the importance of allowing fine-grained comparisons between the two inputs via the cross attention mechanism embedded in the Transformer.
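The normalized and unnormalized accuracies defined in the evaluation paragraph above can be computed as in the short sketch below; the example dictionary keys (gold, candidates, prediction) are assumptions made for this illustration rather than the paper's evaluation code.

```python
def entity_linking_accuracy(examples):
    """Compute unnormalized and normalized top-1 accuracy for one world.

    Each example is a dict with keys 'gold' (gold entity id), 'candidates'
    (top-k ids from candidate generation) and 'prediction' (id chosen by the
    ranker, or None). Normalized accuracy only counts examples whose gold
    entity survived candidate generation; unnormalized accuracy counts all.
    """
    total = len(examples)
    recalled = [ex for ex in examples if ex["gold"] in ex["candidates"]]
    unnormalized = (sum(ex["prediction"] == ex["gold"] for ex in examples)
                    / total) if total else 0.0
    normalized = (sum(ex["prediction"] == ex["gold"] for ex in recalled)
                  / len(recalled)) if recalled else 0.0
    recall_at_k = len(recalled) / total if total else 0.0
    return {"recall@k": recall_at_k, "normalized": normalized,
            "unnormalized": unnormalized}

def macro_average(per_world_metrics, key):
    """Average one metric across worlds, weighting each world equally."""
    return sum(m[key] for m in per_world_metrics) / len(per_world_metrics)
```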
We hy- pothesize that prior entity linking systems did not need such powerful reading comprehension mod- els due to the availability of strong additional meta information. The remaining experiments in the pa- per use the Full-Transformer model, unless men- tioned otherwise. Evaluation Accuracy Training worlds, seen Training worlds, unseen Validation worlds, unseen 87.74 82.96 76.06 Table 5: Performance of the Full-Transformer (UWB) model evaluated on seen and unseen entities from the training and validation worlds. 5-point drop in performance. Entities from new worlds (which are by definition unseen and are mentioned in out-of-domain text) prove to be the most difficult. Due to the shift in both the lan- guage distribution and entity sets, we observe a 11-point drop in performance. This large general- ization gap demonstrates the importance of adap- tation to new worlds. # Impact of Domain Adaptive Pre-training Our experiments demonstrate that DAP improves on three state-of-the-art pre-training strategies: task-adaptive pre-training, which combines source and target data for pre- training (Glorot et al., 2011).9 # 6.2 Generalization to Unseen Entities and New Worlds To analyze the impact of unseen entities and do- main shift in zero-shot entity linking, we evaluate performance on a more standard in-domain entity linking setting by making predictions on held out mentions from the training worlds. Table 5 com- pares entity linking performance for different en- tity splits. Seen entities from the training worlds are unsurprisingly the easiest to link to. For un- seen entities from the training world, we observe a open-corpus pre-training, which uses Wikipedia and the BookCorpus for pre-training (We use a pre-trained BERT model (Devlin et al., 2019)). • UWB → Usrc+tgt: the previous two strategies chained together. While no prior work has applied this approach to domain adaptation, a similar approach for task adaptation was pro- posed by Howard and Ruder (2018). 9We use Masked LM and Transformer encoder, which are more powerful than the instantiation in (Glorot et al., 2011). Pre-training EL Accuracy N. Acc. U. Acc. UWB (Devlin et al., 2019) 75.06 55.08 UWB → Utgt (DAP) UWB → Usrc+tgt → Utgt (DAP) 76.17 77.05 55.88 56.58 Table 6: Performance on test domains with Full- Transformer. N. Acc represents the normalized accu- racy. U. Acc represents the unnormalized accuracy. The unnormalized accuracy is upper-bounded by 68%, the top-64 recall of the candidate generation stage. The results are in Figure 2(a). DAP improves all pre-training strategies with an additional pre- training stage on only target-domain data. The best setting, UWB → Usrc+tgt → Utgt, chains to- gether all existing strategies. DAP improves the performance over a strong pre-trained model (De- vlin et al., 2019) by 2%. To further analyze the results of DAP, we plot the relationships between the accuracy of Masked LM (MLM accuracy) on target unlabeled data and the final target normalized accuracy (after fine- tuning on the source labeled data) in Figure 2(b). Adding an additional pre-training stage on the tar- get unlabeled data unsurprisingly improves the MLM accuracy. More interestingly, we find that improvements in MLM accuracy are consistently followed by improvements in entity linking accu- racy. It is intuitive that performance on unsuper- vised objectives reflect the quality of learned rep- resentations and correlate well with downstream performance. 
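A schematic of how pre-training stages can be chained (for example UWB → Usrc+tgt → Utgt, followed by fine-tuning on source-domain labeled data) is given below. The mlm_step and finetune_step callables are placeholders for whatever masked-LM and entity-linking training loops are actually used; this is an orchestration sketch under those assumptions, not the authors' training code.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", prob=0.15, rng=random):
    """BERT-style masking: hide ~15% of tokens and keep the originals as labels."""
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < prob:
            inputs.append(mask_token)
            labels.append(tok)
        else:
            inputs.append(tok)
            labels.append(None)  # position is not predicted
    return inputs, labels

def domain_adaptive_pretrain(model, stages, mlm_step, finetune_step, labeled_src):
    """Chain masked-LM pre-training stages, then fine-tune on source labels.

    `stages` is an ordered list of (name, corpus) pairs, e.g.
    [("U_WB", ...), ("U_src+tgt", ...), ("U_tgt", ...)].
    """
    for name, corpus in stages:
        for sentence in corpus:
            inputs, labels = mask_tokens(sentence.split())
            mlm_step(model, inputs, labels)
        print(f"finished masked-LM stage: {name}")
    finetune_step(model, labeled_src)
    return model

# Toy demo with placeholder training callables and a three-stage chain.
stages = [("U_WB", ["open corpus text ."]),
          ("U_src+tgt", ["source world text .", "target world text ."]),
          ("U_tgt", ["target world text ."])]
domain_adaptive_pretrain(model=None, stages=stages,
                         mlm_step=lambda m, x, y: None,
                         finetune_step=lambda m, d: None,
                         labeled_src=[])
```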
We show empirically that this trend holds for a variety of pre-training strategies. # 6.4 Test results and performance analysis Table 6 shows the normalized and unnormal- ized Entity Linking performance on test worlds. that chains together all pre- Our best model training strategies achieves normalized accuracy of 77.05% and unnormalized accuracy of 56.58%. Note that the unnormalized accuracy corresponds to identifying the correct entity from tens of thou- sands of candidate entities. To analyze the mistakes made by the model, we compare EL accuracy across different men- tion categories in Table 7. Candidate generation (Recall@64) is poor in the Low Overlap category. However, the ranking model performs in par with other hard categories for these mentions. Overall EL accuracy can thus be improved significantly by strengthening candidate generation. Mention Category Recall@64 EL Accuracy N. Acc. U. Acc. High Overlap Ambiguous Substring Multiple categories Low Overlap 99.28 88.03 84.88 54.37 87.64 75.89 77.27 71.46 87.00 66.81 65.59 38.85 Table 7: Performance on test domains categorized by mention categories. Recall@64 indicates top-64 per- formance of candidate generation. N. Acc. and U. Acc. are respectively the normalized and unnormalized ac- curacies. # 7 Related Work We discussed prior entity linking task definitions and compared them to our task in section 2. Here, we briefly overview related entity linking models and unsupervised domain adaptation methods. Entity linking models Entity linking given mention boundaries as input can be broken into the tasks of candidate generation and candidate rank- ing. When frequency information or alias tables are unavailable, prior work has used measures of similarity of the mention string to entity names for candidate generation (Sil et al., 2012; Murty et al., 2018). For candidate ranking, recent work employed distributed representations of mentions in context and entity candidates and neural mod- els to score their compatibility. Mentions in con- text have been represented using e.g., CNN (Murty et al., 2018), LSTM (Gupta et al., 2017), or bag-of-word embeddings (Ganea and Hofmann, 2017). Entity descriptions have been represented using similar architectures. To the best of our knowledge, while some models allow for cross- attention between single-vector entity embeddings and mention-in-context token representations, no prior works have used full cross-attention between mention+context and entity descriptions. Prior work on entity linking tasks most simi- lar to ours used a linear model comparing a men- tion in context to an entity description and asso- ciated structured data (Sil et al., 2012). Sil et al. (2012) also proposed a distant supervision ap- proach which could use first-pass predictions for mentions in the target domain as noisy supervi- sion for re-training an in-domain model. We be- lieve this approach is complementary to unsuper- vised representation learning and could bring ad- ditional benefits. In another task similar to ours, Wang et al. (2015) used collective inference and target database relations to obtain good perfor- mance without (domain, target database)-specific labeled training data. Collective inference is an- other promising direction, but could have limited success when no metadata is available. Unsupervised domain adaptation There is a large body of work on methods for unsupervised domain adaptation, where a labeled training set is available for a source domain and unlabeled data is available for the target domain. 
The majority of work in this direction assume that training and test examples consist of (x, y) pairs, where y is in a fixed shared label set Y. This assumption holds for classification and sequence labeling, but not for zero-shot entity linking, since the source and target domains have disjoint labels. Most state-of-the-art methods learn non-linear shared representations of source and target domain instances, through denoising training objectives (Eisenstein, 2018). In Section 5, we overviewed such work and proposed an improved domain adaptive pre-training method. training methods (Ganin et al., 2016), which have also been applied to tasks where the space Y is not shared between source and target domains (Cohen et al., 2018), and multi- source domain adaptation methods (Zhao et al., 2018; Guo et al., 2018) are complementary to our work and can contribute to higher performance. # 8 Conclusion We introduce a new task for zero-shot entity link- ing, and construct a multi-world dataset for it. The dataset can be used as a shared benchmark for en- tity linking research focused on specialized do- mains where labeled mentions are not available, and entities are defined through descriptions alone. A strong baseline is proposed by combining pow- erful neural reading comprehension with domain- adaptive pre-training. Future variations of the task could incorporate NIL recognition and mention detection (instead of mention boundaries being provided). The candi- date generation phase leaves significant room for improvement. We also expect models that jointly resolve mentions in a document would perform better than resolving them in isolation. # Acknowledgements We thank Rahul Gupta and William Cohen for pro- viding detailed helpful feedback on an earlier draft of this paper. We thank the Google AI Language Team for valuable suggestions and feedback. # References Razvan Bunescu and Marius Pasca. 2006. Using en- cyclopedic knowledge for named entity disambigua- tion. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Devendra Singh Chaplot and Ruslan Salakhutdinov. 2018. Knowledge-based word sense disambiguation In Proceedings of the Thirty- using topic models. Second AAAI Conference on Artificial Intelligence. Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. 2012. Marginalized denoising autoen- coders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learn- ing. Daniel Cohen, Bhaskar Mitra, Katja Hofmann, and W. Bruce Croft. 2018. Cross domain regularization for neural ranking models using adversarial learn- ing. In The 41st International ACM SIGIR Confer- ence on Research; Development in Information Re- trieval. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies. Jacob Eisenstein. 2018. Natural Language Processing. MIT Press. Octavian-Eugen Ganea and Thomas Hofmann. 2017. Deep joint entity disambiguation with local neural attention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing. Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Francois Lavi- olette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural net- works. 
The Journal of Machine Learning Research, 17(1):2096–2030. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment In Pro- classification: A deep learning approach. ceedings of the 28th International Conference on Machine Learning. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of ex- In Proceedings of the 2018 Conference on perts. Empirical Methods in Natural Language Process- ing. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. En- tity linking via joint encoding of types, descriptions, In Proceedings of the 2017 Confer- and context. ence on Empirical Methods in Natural Language Processing. Luheng He, Kenton Lee, Omer Levy, and Luke Zettle- moyer. 2018. Jointly predicting predicates and ar- arXiv guments in neural semantic role labeling. preprint arXiv:1805.04787. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics. Phong Le and Ivan Titov. 2018. Improving entity link- ing by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Xiao Ling, Sameer Singh, and Daniel S. Weld. 2015. Design challenges for entity linking. Transactions of the Association for Computational Linguistics. David Milne and Ian H. Witten. 2008. Learning to link In Proceedings of the 17th ACM with wikipedia. Conference on Information and Knowledge Man- agement. Irena Radovanovic, and Andrew McCallum. 2018. Hier- archical losses and new resources for fine-grained In Proceedings of the entity typing and linking. 56th Annual Meeting of the Association for Compu- tational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing with unsupervised learning. Technical re- port, OpenAI. Dan Roth, Heng Ji, Ming-Wei Chang, and Taylor Cas- sidy. 2014. Wikification and beyond: The chal- lenges of entity and concept grounding. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics: Tutorials. Avi Sil, Heng Ji, Dan Roth, and Silviu-Petru Cucerzan. 2018. Multi-lingual entity discovery and linking. In Proceedings of the 56th Annual Meeting of the Asso- ciation for Computational Linguistics, Tutorial Ab- stracts. Avirup Sil, Ernest Cronin, Penghai Nie, Yinfei Yang, Ana-Maria Popescu, and Alexander Yates. 2012. In Pro- Linking named entities to any database. ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems. Han Wang, Jin Guang Zheng, Xiaogang Ma, Peter Fox, and Heng Ji. 2015. Language and domain indepen- dent entity linking with quantified collective valida- tion. In Proceedings of the 2015 Conference on Em- pirical Methods in Natural Language Processing. Yi Yang and Jacob Eisenstein. 2015. 
Unsupervised multi-domain adaptation with feature embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies. Han Zhao, Shanghang Zhang, Guanhang Wu, Jos´e MF Moura, Joao P Costeira, and Geoffrey J Gordon. 2018. Adversarial multiple source domain adapta- tion. In Advances in Neural Information Processing Systems. # A Examining model errors and predictions In tables 8, 9, 10, 11 we show some example men- tions and model predictions. For each instance, the examples show the correct gold entity and the top- 5 predictions from the model. Examples show 32 token contexts centered around mentions and the first 32 tokens of candidate entity documents. Coronation Street Robbie pulled over the ambulance with a van and used a gun to get the Prison Officer with Tony to release him . He integrated himself with the Street residents , finding Prison Officer (Episode 7351) The unnamed Prison Officer was on duty during May 2010 in the Highfield Prison dining room when Tony Gordon provoked a fight with a fellow inmate Top-5 predictions Prison Officer (Episode 7351) The unnamed Prison Officer was on duty during May 2010 in the Highfield Prison dining room when Tony Gordon provoked a fight with a fellow inmate Inmate (Episode 7351) The Inmate was an unnamed fellow prisoner of Tony Gordon in Highfield Prison . Tony provoked a fight in the dining room with the inmate by staring Police Officer (Simon Willmont) The unnamed Police Officer was on duty at Weatherfield Police Station in March 2010 when Peter Barlow was released from custody following his arrest as he Prison Officer (Bill Armstrong) The Prison Officer looked after the incarceration of three Coronation Street residents : In November 2000 he was on duty at Strangeways Jail when Jim McDonald Robbie Sloane Quietly spoken Robbie Sloane was Tony Gordon ’ s henchman and a convicted mur- derer , who he met while sharing a cell at Highfield Prison in 2010 . When Robbie # Gold Entity Table 8: Mention and entity candidates from Coronation Street. Muppets Bean Bunny was introduced during the seventh season of ” Muppet Babies ” , and a pre - teen Bean would later be featured as part of the Muppet Kids series . Bean was active (Muppet Kids) A young version of Bean Bunny made a few appearances in the Muppet Kids books and video games . Young Bean moves to the Muppet Kids Top-5 predictions Baby Bean Bunny Baby Bean Bunny appeared in the late 1989 / 1990 seasons of ” Muppet Babies ” as a baby version of Bean Bunny . He joined the other babies Bean Bunny (Muppet Kids) A young version of Bean Bunny made a few appearances in the Muppet Kids books and video games . Young Bean moves to the Muppet Kids Bean Bunny Bean Bunny first appeared in 1986 as the star of the TV special ” The Tale of the Bunny Picnic ” . The cute bunny was part of a family Piggy (Muppet Kids) A pre - teen version of Miss Piggy , as seen in the ” Muppet Kids ” books and video games . Piggy lives in a fancy Muppet Kids Muppet Kids was a series of books and educational software made in the 1990s , featuring young , pre - teen versions of the principal franchise characters . Characters included # Mention # Gold Entity Bean Bunny Table 9: Mention and entity candidates from Muppets. Ice Hockey Mention 1979 - 80 PCJHL Season This is a list of Peace - Cariboo Junior Hockey League Standings for the 1979 - 80 season . 
This was the PCJHL ’ s final Hockey League The Rocky Mountain Junior Hockey League was a Canadian Junior ” A ” ice hockey league in British Columbia . History . Promoted to a Junior ” Top-5 predictions Peace Junior Hockey League Hockey League Peace Junior Hockey League is a League that started in the 1960 ’ s and ended in 1975 . Then change its name to Peace Cariboo junior Hockey Cariboo Hockey League The Cariboo Hockey League was a Senior and Intermediate hockey league in the Cariboo District of British Columbia , Canada . History . The league began in the 1955 Cariboo Junior League The Cariboo Junior League operated in northern British Columbia in the 1963 - 64 season . Its champion was eligible for the British Columbia Junior Playoffs . The league Rocky Mountain Junior Hockey League The Rocky Mountain Junior Hockey League was a Canadian Junior ” A ” ice hockey league in British Columbia . History . Promoted to a Junior ” North West Junior Hockey League The North West Junior Hockey League is a Junior ” B ” ice hockey league oper- ating in the Peace River region of Alberta and British Columbia , Table 10: Mention and entity candidates from Ice Hockey. Elder Scrolls to get everyone to safety . Rolunda ’ s brother is one of those people . The Frozen Man . Rolunda ’ s brother Eiman has ventured into Orkey ’ s Hollow to find The Frozen Man (Quest) The Frozen Man is a quest available in The Elder Scrolls Online. It involves finding a Nord who has been trapped in ice by a mysterious ” Frozen Man Top-5 predictions The Frozen Man (Quest) The Frozen Man is a quest available in The Elder Scrolls Online. It involves finding a Nord who has been trapped in ice by a mysterious ” Frozen Man The Frozen Man The Frozen Man is an insane Bosmer ghost found in Orkey ’ s Hollow . He says he was in a group of people inside the cave when it Kewan Kewan is a Redguard worshipper of the Daedric Prince Peryite . He is frozen in a trance that relates to the Daedric quest , but can be unfrozen in completion the Stromgruf the Steady Stromgruf the Steady is the Nord who is found in the Grazelands of Vvardenfell , west of Pulk and east of Vassamsi Grotto ( Online ) . He is Maren the Seal Maren the Seal is a Nord hunter and worshipper of the Daedric Prince Peryite . She is frozen in a trance that relates to the Daedric Prince ’ s # Mention # Gold Entity Table 11: Mention and entity candidates from Elder Scrolls.
{ "id": "1805.04787" }
1906.07241
Barack's Wife Hillary: Using Knowledge-Graphs for Fact-Aware Language Modeling
Modeling human language requires the ability to not only generate fluent text but also encode factual knowledge. However, traditional language models are only capable of remembering facts seen at training time, and often have difficulty recalling them. To address this, we introduce the knowledge graph language model (KGLM), a neural language model with mechanisms for selecting and copying facts from a knowledge graph that are relevant to the context. These mechanisms enable the model to render information it has never seen before, as well as generate out-of-vocabulary tokens. We also introduce the Linked WikiText-2 dataset, a corpus of annotated text aligned to the Wikidata knowledge graph whose contents (roughly) match the popular WikiText-2 benchmark. In experiments, we demonstrate that the KGLM achieves significantly better performance than a strong baseline language model. We additionally compare different language model's ability to complete sentences requiring factual knowledge, showing that the KGLM outperforms even very large language models in generating facts.
http://arxiv.org/pdf/1906.07241
Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, Sameer Singh
cs.CL
null
null
cs.CL
20190617
20190620
9 1 0 2 n u J 0 2 ] L C . s c [ 2 v 1 4 2 7 0 . 6 0 9 1 : v i X r a # Barack’s Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling # Robert L. Logan IV∗ Nelson F. Liu†§ Matthew E. Peters§ Matt Gardner§ Sameer Singh∗ ∗ University of California, Irvine, CA, USA † University of Washington, Seattle, WA, USA § Allen Institute for Artificial Intelligence, Seattle, WA, USA {rlogan, sameer}@uci.edu, {mattg, matthewp}@allenai.org, nfl[email protected] # Abstract Modeling human language requires the ability to not only generate fluent text but also en- code factual knowledge. However, traditional language models are only capable of remem- bering facts seen at training time, and often have difficulty recalling them. To address this, we introduce the knowledge graph language model (KGLM), a neural language model with mechanisms for selecting and copying facts from a knowledge graph that are relevant to the context. These mechanisms enable the model to render information it has never seen before, as well as generate out-of-vocabulary tokens. We also introduce the Linked WikiText- 2 dataset,1 a corpus of annotated text aligned to the Wikidata knowledge graph whose contents (roughly) match the popular WikiText-2 bench- mark (Merity et al., 2017). In experiments, we demonstrate that the KGLM achieves signifi- cantly better performance than a strong base- line language model. We additionally com- pare different language models’ ability to com- plete sentences requiring factual knowledge, and show that the KGLM outperforms even very large language models in generating facts. # 1 Introduction For language models to generate plausible sen- tences, they must be both syntactically coherent as well as consistent with the world they describe. Al- though language models are quite skilled at generat- ing grammatical sentences, and previous work has shown that language models also possess some de- gree of common-sense reasoning and basic knowl- edge (Vinyals and Le, 2015; Serban et al., 2016; Trinh and Le, 2019), their ability to generate fac- tually correct text is quite limited. The clearest limitation of existing language models is that they, at best, can only memorize facts observed during # 1https://rloganiv.github.io/linked-wikitext-2 [Super Mario Land] is a [1989] [side-scrolling] [platform video game] developed and published by [Nintendo] as a [launch title] for their [Game Boy] [handheld game console]. Super Mario Land | PUBLISHER [nintendo launch game PUBLICATION MANUFACTURER DaTE 21 April 1989 platform game side-scrolling video game ERTS handheld game console INSTANCE OF Figure 1: Linked WikiText-2 Example. A localized knowledge graph containing facts that are (possibly) conveyed in the sentence above. The graph is built by it- eratively linking each detected entity to Wikidata, then adding any relations to previously mentioned entities. Note that not all entities are connected, potentially due to missing relations in Wikidata. training. For instance, when conditioned on the text at the top of Figure 1, an AWD-LSTM language model (Merity et al., 2018) trained on Wikitext-2 assigns higher probability to the word “PlaySta- tion” than “Game Boy”, even though this sentence appears verbatim in the training data. This is not surprising—existing models represent the distribu- tion over the entire vocabulary directly, whether they are common words, references to real world entities, or factual information like dates and num- bers. 
As a result, language models are unable to generate factually correct sentences, do not gen- eralize to rare/unseen entities, and often omit rare tokens from the vocabulary (instead generating UN- KNOWN tokens). We introduce the knowledge graph language model (KGLM), a neural language model with mechanisms for selecting and copying information from an external knowledge graph. The KGLM maintains a dynamically growing local knowledge graph, a subset of the knowledge graph that con- tains entities that have already been mentioned in the text, and their related entities. When generating entity tokens, the model either decides to render a new entity that is absent from the local graph, thereby growing the local knowledge graph, or to render a fact from the local graph. When render- ing, the model combines the standard vocabulary with tokens available in the knowledge graph, thus supporting numbers, dates, and other rare tokens. Figure 1 illustrates how the KGLM works. Ini- tially, the graph is empty and the model uses the entity Super Mario Land to render the first three tokens, thus adding it and its relations to the local knowledge graph. After generating the next two to- kens (“is”, “a”) using the standard language model, the model selects Super Mario Land as the parent entity, Publication Date as the relation to render, and copies one of the tokens of the date entity as the token (“1989” in this case). To facilitate research on knowledge graph-based language modeling, we collect the distantly su- pervised Linked WikiText-2 dataset. The underly- ing text closely matches WikiText-2 (Merity et al., 2017), a popular benchmark for language model- ing, allowing comparisons against existing mod- els. The tokens in the text are linked to entities in Wikidata (Vrandeˇci´c and Krötzsch, 2014) using a combination of human-provided links and off-the- shelf linking and coreference models. We also use relations between these entities in Wikidata to con- struct plausible reasons for why an entity may have been mentioned: it could either be related to an entity that is already mentioned (including itself) or a brand new, unrelated entity for the document. We train and evaluate the KGLM on Linked WikiText-2. When compared against AWD-LSTM, a recent and performant language model, KGLM obtains not only a lower overall perplexity, but also a substantially lower unknown-penalized perplex- ity (Ueberla, 1994; Ahn et al., 2016), a metric that allows fair comparisons between models that accu- rately model rare tokens and ones that predict them to be unknown. We also compare factual com- pletion capabilities of these models, where they predict the next word after a factual sentence (e.g., “Barack is married to ”) and show that KGLM is significantly more accurate. Lastly, we show that the model is able to generate accurate facts for rare entities, and can be controlled via modifications the knowledge graph. # 2 Knowledge Graph Language Model In this section we introduce a language model that is conditioned on an external, structured knowledge source, which it uses to generate factual text. # 2.1 Problem Setup and Notation A language model defines a probability distribution over each token within a sequence, conditioned on the sequence of tokens observed so far. We denote the random variable representing the next token as xt and the sequence of the tokens before t as x<t, i.e. language models compute p(xt|x<t). 
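For reference, the recurrent parameterization introduced next in Eqn (1) can be sketched as a plain LSTM language model; the sizes and the toy batch below are illustrative assumptions and are unrelated to the released KGLM implementation.

```python
import torch
import torch.nn as nn

class NextTokenLSTM(nn.Module):
    """Plain LSTM language model computing p(x_t | x_<t) = softmax(W_h h_t + b)."""

    def __init__(self, vocab_size, embed=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)      # W_h and b

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))       # h_t for every position
        return torch.log_softmax(self.out(h), dim=-1)  # log p(x_t | x_<t)

# Negative log-likelihood of the next token at each position of a toy batch.
model = NextTokenLSTM(vocab_size=100)
ids = torch.randint(0, 100, (2, 10))                  # batch of token id sequences
logp = model(ids[:, :-1])                             # condition on x_<t
nll = nn.functional.nll_loss(logp.reshape(-1, 100), ids[:, 1:].reshape(-1))
print(nll)
```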
RNN language models (Mikolov et al., 2010) parameterize this distribution using a recurrent structure:

p(xt | x<t) = softmax(Wh ht + b),  ht = RNN(ht−1, xt−1).  (1)

We use LSTMs (Hochreiter and Schmidhuber, 1997) as the recurrent module in this paper.

A knowledge graph (KG) is a directed, labeled graph consisting of entities E as nodes, with edges defined over a set of relations R, i.e. KG = {(p, r, e) | p ∈ E, r ∈ R, e ∈ E}, where p is a parent entity with relation r to another entity e. Practical KGs have other aspects that make this formulation somewhat inexact: some relations are to literal values, such as numbers and dates, facts may be expressed as properties on relations, and entities have aliases as the set of strings that can refer to the entity. We also define a local knowledge graph for a subset of entities E<t as KG<t = {(p, r, e) | p ∈ E<t, r ∈ R, e ∈ E}, i.e. contains entities E<t and all facts they participate in.

# 2.2 Generative KG Language Model

The primary goal of the knowledge graph language model (KGLM) is to enable a neural language model to generate entities and facts from a knowledge graph. To encourage the model to generate facts that have appeared in the context already, KGLM will maintain a local knowledge graph containing all facts involving entities that have appeared in the context. As the model decides to refer to entities that have not been referred to yet, it will grow the local knowledge graph with additional entities and facts to reflect the new entity.

Formally, we will compute p(xt, Et | x<t, E<t) where x<t is the sequence of observed tokens, E<t is the set of entities mentioned in x<t, and KG<t is the local knowledge graph determined by E<t, as described above. The generative process is:

Figure 2: KGLM Illustration. When trying to generate the token following "published by", the model first decides the type of the mention (tt) to be a related entity (darker indicates higher probability), followed by identifying the parent (pt), relation (rt), and entity to render (et) from the local knowledge graph as (Super Mario Land, Publisher, Nintendo). The final distribution over the words includes the standard vocabulary along with aliases of Nintendo, and the model selects "Nintendo" as the token xt. Facts related to Nintendo will be added to the local graph.

• Decide the type of xt, which we denote by tt: whether it is a reference to an entity in KG<t (related), a reference to an entity not in KG<t (new), or not an entity mention (∅).
• If tt = new then choose the upcoming entity et from the set of all entities E.
• If tt = related then:
  – Choose a parent entity pt from E<t.
  – Choose a factual relation rt to render, rt ∈ {(p, r, e) ∈ KG<t | p = pt}.
  – Choose et as one of the tail entities, et ∈ {e | (pt, rt, e) ∈ KG<t}.
• If tt = ∅ then et = ∅.
• Generate xt conditioned on et, potentially copying one of et's aliases.
• If et ∉ E<t, then E<(t+1) ← E<t ∪ {et}, else E<(t+1) ← E<t.

For the model to refer to an entity it has already mentioned, we introduce a Reflexive relation that self-relates, i.e. p = e for (p, Reflexive, e).

An illustration of this process and the variables is provided in Figure 2, for generating a token in the middle of the same sentence as in Figure 1. Amongst the three mention types (tt), the model chooses a reference to existing entity, which requires picking a fact to render. As the parent entity of this fact (pt), the model picks Super Mario Land, and then follows the Publisher relation (rt) to select Nintendo as the entity to render (et). When rendering Nintendo as a token xt, the model has an expanded vocabulary available to it, containing the standard vocabulary along with all word types in any of the aliases of et.

Marginalizing out the KG There is a mismatch between our initial task requirement, p(xt | x<t), and the model we describe so far, which computes p(xt, Et | x<t, E<t). We will essentially marginalize out the local knowledge graph to compute the probability of the tokens, i.e. p(x) = Σ_E p(x, E). We will clarify this, along with describing the training and the inference/decoding algorithms for this model and other details of the setup, in Section 4.

# 2.3 Parameterizing the Distributions

The parametric distributions used in the generative process above are defined as follows. We begin by computing the hidden state ht using the formula in Eqn (1). We then split the vector into three components: ht = [ht,x; ht,p; ht,r], which are respectively used to predict words, parents, and relations. The type of the token, tt, is computed using a single-layer softmax over ht,x to predict one of {new, related, ∅}.

Picking an Entity We also introduce pretrained embeddings for all entities and relations in the knowledge graph, denoted by ve for entity e and vr for relation r. To select et from all entities in case tt = new, we use: p(et) = softmax(ve · (ht,p + ht,r)) over all e ∈ E. The reason we add ht,p and ht,r is to mimic the structure of TransE, which we use to obtain entity and relation embeddings. Details on TransE will be provided in Section 4. For mention of a related entity, tt = related, we pick a parent entity pt using p(pt) = softmax(vp · ht,p) over all p ∈ Et, then pick the relation rt using p(rt) = softmax(vr · ht,r) over all r ∈ {r | (pt, r, e) ∈ KGt}. The combination of pt and rt determine the entity et (which must satisfy (pt, rt, et) ∈ KGt; if there are multiple options one is chosen at random).

Rendering the Entity If et = ∅, i.e. there is no entity to render, we use the same distribution over the vocabulary as in Eqn (1) - a softmax using ht,x. If there is an entity to render, we construct the distribution over the original vocabulary and a vocabulary containing all the tokens that appear in aliases of et. This distribution is conditioned on et in addition to xt. To compute the scores over the original vocabulary, ht,x is replaced by h't,x = Wproj [ht,x; vet], where Wproj is a learned weight matrix that projects the concatenated vector into the same vector space as ht,x. To obtain probabilities for words in the alias vocabulary, we use a copy mechanism (Gu et al., 2016). The token sequences comprising each alias {aj} are embedded then encoded using an LSTM to form vectors aj.
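Before turning to the copy scores, a compact sketch of the distributions just described is given below. It splits the hidden state into the three pieces and scores candidate entities, parents, and relations against pretrained embeddings. The shapes, the dimension-matching projection `to_embed`, and the candidate-masking interface are our own assumptions rather than details of the authors' implementation.

```python
# A minimal sketch of the KGLM decision head: h_t = [h_x; h_p; h_r] is used to predict
# the mention type, a new entity, a parent entity, and a relation. Entity/relation
# embeddings would be pretrained (e.g. with TransE) and kept fixed in practice.
import torch
import torch.nn as nn

class KGLMHead(nn.Module):
    def __init__(self, hidden_size, embed_size, num_entities, num_relations):
        super().__init__()
        assert hidden_size % 3 == 0
        self.split = hidden_size // 3
        self.type_proj = nn.Linear(self.split, 3)              # {new, related, not-an-entity}
        self.ent_emb = nn.Embedding(num_entities, embed_size)  # pretrained entity embeddings v_e
        self.rel_emb = nn.Embedding(num_relations, embed_size) # pretrained relation embeddings v_r
        self.to_embed = nn.Linear(self.split, embed_size)      # assumption: map h pieces to embed space

    def forward(self, h_t, seen_entity_ids, candidate_relation_ids):
        h_x, h_p, h_r = torch.split(h_t, self.split, dim=-1)
        type_logits = self.type_proj(h_x)
        # p(e_t) for a brand-new entity: score all entities against (h_p + h_r), TransE-style.
        new_entity_logits = self.ent_emb.weight @ self.to_embed(h_p + h_r)
        # p(p_t): score only entities already mentioned in the context.
        parent_logits = self.ent_emb(seen_entity_ids) @ self.to_embed(h_p)
        # p(r_t): score only relations leaving the chosen parent in the local KG.
        relation_logits = self.rel_emb(candidate_relation_ids) @ self.to_embed(h_r)
        return type_logits, new_entity_logits, parent_logits, relation_logits

head = KGLMHead(hidden_size=300, embed_size=256, num_entities=1000, num_relations=50)
h_t = torch.randn(300)
out = head(h_t, seen_entity_ids=torch.tensor([3, 17]), candidate_relation_ids=torch.tensor([5, 9, 12]))
print([o.shape for o in out])
```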
Copy scores are computed as: p(x = aj) x exp G ((hy..)” Weory) a)| # 3 Linked WikiText-2 Modeling aside, one of the primary barriers to in- corporating factual knowledge into language mod- els is that training data is hard to obtain. Standard language modeling corpora consist only of text, and thus are unable to describe which entities or facts each token is referring to. In contrast, while relation extraction datasets link text to a knowledge graph, the text is made up of disjoint sentences that do not provide sufficient context to train a pow- erful language model. Our goals are much more aligned to the data-to-text task (Ahn et al., 2016; Lebret et al., 2016; Wiseman et al., 2017; Yang et al., 2017; Gardent et al., 2017; Ferreira et al., 2018), where a small table-sized KB is provided to generate a short piece of text; we are interested in language models that dynamically decide the facts to incorporate from the knowledge graph, guided by the discourse. For these reasons we introduce the Linked WikiText-2 dataset, consisting of (approximately) the same articles appearing in the WikiText-2 lan- guage modeling corpus, but linked to the Wiki- data (Vrandeˇci´c and Krötzsch, 2014) knowledge graph. Because the text closely matches, mod- els trained on Linked WikiText-2 can be compared to models trained on WikiText-2. Furthermore, because many of the facts in Wikidata are de- rived from Wikipedia articles, the knowledge graph has a good coverage of facts expressed in the text. The dataset is available for download at: https://rloganiv.github.io/linked-wikitext-2. Our system annotates one document at a time, and con- sists of entity linking, relation annotations, and post-processing. The following paragraphs de- scribe each step in detail. Initial entity annotations We begin by identify- ing an initial set of entity mentions within the text. The primary source of these mentions is the human- provided links between Wikipedia articles. When- ever a span of text is linked to another Wikipedia article, we associate its corresponding Wikidata entity with the span. While article links provide a large number of gold entity annotations, they are in- sufficient for capturing all of the mentions in the ar- ticle since entities are only linked the first time they occur. Accordingly, we use the neural-el (Gupta et al., 2017) entity linker to identify additional links to Wikidata, and identify coreferences using Stan- ford CoreNLP2 to cover pronouns, nominals, and other tokens missed by the linker. Local knowledge graph The next step iteratively creates a generative story for the entities using rela- tions in the knowledge graph as well as identifies new entities. To do this, we process the text token by token. Each time an entity is encountered, we add all of the related entities in Wikidata as candi- 2https://stanfordnlp.github.io/CoreNLP/ Tokens xt Super Mario Land is a 1989 side - scrolling platform video game developed Mention type tt Entity Mentioned et Relation rt Parent Entity pt new SML ∅ ∅ ∅ ∅ ∅ ∅ 04-21-1989 SIDE_SCROLL ∅ ∅ ∅ ∅ related new pub date SML ∅ ∅ related PVG genre SML ∅ ∅ ∅ ∅ xt and published by Nintendo as a launch title for their Game Boy handheld game console . tt et rt pt ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ related NIN pub SML ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ new LT ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ ∅ related GAME_BOY R:manu / platform NIN / SML related HGC instance of GAME_BOY ∅ ∅ ∅ ∅ Table 1: Example Annotation of the sentence from Figure 1, including corresponding variables from Figure 2. 
Note that Game Boy has multiple parent and relation annotations, as the platform for Super Mario Land and as manufactured by Nintendo. Wikidata identifiers are made human-readable (e.g., SML is Q647249) for clarity. dates for matching. If one of these related entities is seen later in the document, we identify the entity as a parent for the later entity. Since multiple re- lations may appear as explanations for each token, we allow a token to have multiple facts. Expanding the annotations Since there may be entities that were missed in the initial set, as well as non-entity tokens of interest such as dates and quantities we further expand the entity annotations using string matching. For entities, we match the set of aliases provided in Wikidata. For dates, we create an exhaustive list of all of the possible ways of expressing the date (e.g. "December 7, 1941", "7-12-1941", "1941", ...). We perform a similar approach for quantities, using the pint library in Python to handle the different ways of expressing units (e.g. "g", "gram", ...). Since there are many ways to express a numerical quantity, we only ren- der the quantity at the level of precision supplied by Wikidata, and do not perform unit conversions. Example Annotation An example annotation is provided in Table 1 corresponding to the instance in Figure 1, along with the variables that correspond to the generative process of the knowledge graph language model (KGLM). The entity mentioned for most tokens here are human-provided links, apart from “1989” that is linked to 04-21-1989 by the string matching process. The annotations indicate which of the entities are new and related based on whether they are reachable by entities linked so far, clearly making a mistake for side-scrolling game and platform video game due to missing links in Wikidata. Finally, multiple plausible reasons for Game Boy are included: it’s the platform for Super Mario Land and it is manufactured by Nintendo, even though only the former is more relevant here. Train Dev Test Documents Tokens Vocab. Size Mention Tokens Mention Spans Unique Entities Unique Relations 600 2,019,195 33,558 207,803 122,983 41,058 1,291 60 207,982 - 21,226 12,214 5,415 484 60 236,062 - 24,441 15,007 5,625 504 # Table 2: Linked WikiText-2 Corpus Statistics. Even with these omissions and mistakes, it is clear that the annotations are rich and detailed, with a high coverage, and thus should prove beneficial for training knowledge graph language models. Dataset Statistics Statistics for Linked WikiText-2 are provided in Table 2. In this corpus, more than 10% of the tokens are considered entity tokens, i.e. they are generated as factual references to informa- tion in the knowledge graph. Each entity is only mentioned a few times (less than 5 on average, with a long tail), and with more than thousand different relations. Thus it is clear that regular language models would not be able to generate factual text, and there is a need for language models to be able to refer to external sources of information. Differences from WikiText-2 Although our dataset is designed to closely replicate WikiText-2, there are some differences that prevent direct com- parison. Firstly, there are minor variations in text across articles due to edits between download dates. Secondly, according to correspondence with Merity et al. (2017), WikiText-2 was collected by querying the Wikipedia Text API. Because this API discards useful annotation information (e.g. 
article links), Linked WikiText-2 was instead created directly from the article HTML.

# 4 Training and Inference for KGLM

In this section, we describe the training and inference algorithm for KGLM.

Pretrained KG Embeddings During evaluation, we may need to make predictions on entities and relations that have not been seen during training. Accordingly, we use fixed entity and relation embeddings pre-trained using TransE (Bordes et al., 2013) on Wikidata. Given (p, r, e), we learn embeddings vp, vr and ve to minimize the distance:

δ(vp, vr, ve) = ||vp + vr − ve||.

We use a max-margin loss to learn the embeddings:

L = max(0, γ + δ(vp, vr, ve) − δ(v'p, vr, v'e)),

where γ is the margin, and either p' or e' is a randomly chosen entity embedding.

Training with Linked WikiText-2 Although the generative process in KGLM involves many steps, training the model on Linked WikiText-2 is straightforward. Our loss objective is the negative log-likelihood of the training data:

ℓ(Θ) = Σt log p(xt, Et | x<t, E<t; Θ),

where Θ is the set of model parameters. Note that if an annotation has multiple viable parents, such as Game Boy in Table 1, then we marginalize over all of the parents. Since all random variables are observed, training can be performed using off-the-shelf gradient-based optimizers.

Inference While observing annotations makes the model easy to train, we do not assume that the model has access to annotations during evaluation. Furthermore, as discussed in Section 2.2, the goal in language modelling is to measure the marginal probability p(x) = Σ_E p(x, E), not the joint probability. However, this sum is intractable to compute due to the large combinatorial space of possible annotations. We address this problem by approximating the marginal distribution using importance sampling. Given samples from a proposal distribution q(E|x), the marginal distribution is:

p(x) = Σ_E p(x, E) = Σ_E q(E|x) · p(x, E) / q(E|x) ≈ (1/N) Σ_{E∼q} p(x, E) / q(E|x).

This approach is used to evaluate models in Ji et al. (2017) and Dyer et al. (2016). Following Ji et al. (2017), we compute q(E|x) using a discriminative version of our model that predicts annotations for the current token instead of for the next token.

# 5 Experiments

To evaluate the proposed language model, we first introduce the baselines, followed by an evaluation using perplexity on a held-out corpus, accuracy on fact completion, and an illustration of how the model uses the knowledge graph.

# 5.1 Evaluation Setup

Baseline Models We compare KGLM to the following baseline models:

• AWD-LSTM (Merity et al., 2018): strong LSTM-based model used as the foundation of most state-of-the-art models on WikiText-2.
• ENTITYNLM (Ji et al., 2017): an LSTM-based language model with the ability to track entity mentions. Embeddings for entities are created dynamically, and are not informed by any external sources of information.
• EntityCopyNet: a variant of the KGLM where tt = new for all mentions, i.e. entities are selected from E and entity aliases are copied, but relations in the knowledge graph are unused.

Hyperparameters We pre-train 256-dimensional entity and relation embeddings for all entities within two hops of the set of entities that occur in Linked WikiText-2 using TransE with margin γ = 1. Weights are tied between all date embeddings and between all quantity embeddings to save memory. Following Merity et al. (2018) we use 400-dimensional word embeddings and a 3-layer LSTM with hidden dimension 1150 to encode tokens.
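For concreteness, the following is a minimal sketch of the TransE pre-training objective described above (margin-based, with corrupted triples). The entity/relation counts and the negative-sampling scheme shown here are illustrative assumptions, not the paper's exact setup.

```python
# TransE-style margin loss: delta(v_p, v_r, v_e) = ||v_p + v_r - v_e||, with gamma = 1
# and negatives built by corrupting the parent (or, symmetrically, the tail) entity.
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, num_entities, num_relations, dim=256):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def distance(self, p, r, e):
        return (self.ent(p) + self.rel(r) - self.ent(e)).norm(dim=-1)

    def margin_loss(self, p, r, e, p_neg, e_neg, gamma=1.0):
        pos = self.distance(p, r, e)
        neg = self.distance(p_neg, r, e_neg)
        return torch.clamp(gamma + pos - neg, min=0.0).mean()

model = TransE(num_entities=1000, num_relations=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
p, r, e = torch.randint(0, 1000, (32,)), torch.randint(0, 50, (32,)), torch.randint(0, 1000, (32,))
p_neg = torch.randint(0, 1000, (32,))   # corrupt the parent ...
e_neg = e.clone()                       # ... while keeping the tail fixed (or vice versa)
loss = model.margin_loss(p, r, e, p_neg, e_neg)
loss.backward(); opt.step()
print(float(loss))
```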
We also employ the same regularization strategy (DropConnect (Wan et al., 2013) + Dropout (Srivastava et al., 2014)) and weight tying approach. However, we perform optimization using Adam (Kingma and Ba, 2015) with learning rate 1e-3 instead of NT-ASGD, having found that it is more stable.

# 5.2 Results

Perplexity We evaluate our model using the standard perplexity metric: exp(−(1/T) Σt log p(xt)). However, perplexity suffers from the issue that it overestimates the probability of out-of-vocabulary tokens when they are mapped to a single UNK token. This is problematic for comparing the performance of the KGLM to traditional language models on Linked WikiText-2 since there are a large number of rare entities whose alias tokens are out-of-vocabulary. That is, even if the KGLM identifies the correct entity and copies the correct alias token with high probability, other models can attain better perplexity by assigning a higher probability to UNK. Accordingly, we also measure unknown penalized perplexity (UPP) (a.k.a. adjusted perplexity) introduced by Ueberla (1994), and used recently by Ahn et al. (2016) and Spithourakis and Riedel (2018). This metric penalizes the probability of UNK tokens by evenly dividing their probability mass over U, the set of tokens that get mapped to UNK. We can compute UPP by replacing p(UNK) in the perplexity above by (1/|U|) p(UNK), where |U| is estimated from the data.

Table 3: Perplexity Results on Linked WikiText-2. Results for models marked with * are obtained using importance sampling.
Model | PPL | UPP
ENTITYNLM* (Ji et al., 2017) | 85.4 | 189.2
EntityCopyNet* | 76.1 | 144.0
AWD-LSTM (Merity et al., 2018) | 74.8 | 165.8
KGLM* | 44.1 | 88.5

We present the model perplexities in Table 3. To marginalize over annotations, perplexities for the ENTITYNLM, EntityCopyNet, and KGLM are estimated using the importance sampling approach described in Section 4. We observe that the KGLM attains substantially lower perplexity than the other entity-based language models (44.1 vs. 76.1/85.4), providing strong evidence that leveraging knowledge graphs is crucial for accurate language modeling. Furthermore, KGLM significantly outperforms all models in unknown penalized perplexity, demonstrating its ability to generate rare tokens.

Fact Completion Since factual text generation is our primary objective, we evaluate the ability of language models to complete sentences with factual information. We additionally compare with the small GPT-2 (Radford et al., 2019), a language model trained on a much larger corpus of text. We select 6 popular relations from Freebase, and write a simple completion template for each, such as "X was born in ____" for the birthplace relation. We generate sentences for these templates for a number of (X, Y) pairs for which the relation holds, and manually examine the first token generated by each language model to determine whether it is correct.

Table 4: Top-k accuracy (@1/@5, %) for predicting the next token for an incomplete factual sentence. See examples in Table 5.
Relation | AWD-LSTM | GPT-2 | KGLM Oracle | KGLM NEL
nation-capital | 0 / 0 | 6 / 7 | 0 / 0 | 0 / 4
birthloc | 0 / 9 | 14 / 14 | 94 / 95 | 85 / 92
birthdate | 0 / 25 | 8 / 9 | 65 / 68 | 61 / 67
spouse | 0 / 0 | 2 / 3 | 2 / 2 | 1 / 19
city-state | 0 / 13 | 62 / 62 | 9 / 59 | 4 / 59
book-author | 0 / 2 | 0 / 0 | 61 / 62 | 25 / 28
Average | 0.0 / 8.2 | 15.3 / 15.8 | 38.5 / 47.7 | 29.3 / 44.8

Table 4 presents performance of each language model on the relations.
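A hypothetical harness for this fact-completion probe might look as follows. Only the templates mirror those described above; `topk_next_tokens` is a placeholder for whichever model (AWD-LSTM, GPT-2, or KGLM) is being queried, and the scoring simply checks the first gold token against the model's top-k candidates.

```python
# Fill a relation template with a subject, query a model for its top-k next-token
# candidates, and count hits at k = 1 and k = 5 (only the first token is judged).
def topk_next_tokens(prompt, k=5):
    # Placeholder model interface; a real run would query the language model here.
    return ["the", "a", "New", "August", "1961"][:k]

TEMPLATES = {
    "birthloc": "{X} was born in",
    "birthdate": "{X} was born on",
    "book-author": "{X} is a book that was written by",
}

def probe(relation, pairs, k=5):
    hits_at_1 = hits_at_k = 0
    for subject, gold in pairs:
        prompt = TEMPLATES[relation].format(X=subject)
        candidates = topk_next_tokens(prompt, k=k)
        gold_first = gold.split()[0]
        hits_at_1 += int(candidates[:1] == [gold_first])
        hits_at_k += int(gold_first in candidates)
    n = len(pairs)
    return hits_at_1 / n, hits_at_k / n

acc1, acc5 = probe("birthloc", [("Bob Dylan", "Duluth"), ("Paris Hilton", "New York City")])
print(f"top-1: {acc1:.0%}, top-5: {acc5:.0%}")
```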
The oracle KGLM is given the correct entity annotation for X, while the NEL KGLM uses the discriminative model used for im- portance sampling combined with the NEL entity linker to produce an entity annotation for X. Amongst models trained on the same data, both KGLM variants significantly outperform AWD- LSTM; they produce accurate facts, while AWD- LSTM produced generic, common words. KGLMs are also competitive with models trained on orders of magnitude more data, producing factual com- pletions that require specific knowledge, such as birthplaces, dates, and authors. However, they do not capture facts or relations that frequently appear in large corpora, like the cities within states.3 It is encouraging to see that the KGLM with automatic linking performs comparably to oracle linking. We provide examples in Table 5 to highlight qualitative differences between KGLM, trained on 600 documents, and the recent state-of-the-art lan- guage model, GPT-2, trained on the WebText cor- pus with over 8 million documents (Radford et al., 2019). For examples that both models get factu- ally correct or incorrect, the generated tokens by KGLM are often much more specific, as opposed to selection of more popular/generic tokens (GPT-2 often predicts “New York” as the birthplace, even for popular entities). KGLM, in particular, gets factual statements correct when the head or tail en- tities are rare, while GPT-2 can only complete facts for more-popular entities while using more-generic tokens (such as “January” instead of “20”). 3This is not a failure of the KG, but of the model’s ability to pick the correct relation from the KG given the prompt. Input Sentence Gold GPT-2 KGLM Both correct Paris Hilton was born in Arnold Schwarzenegger was born on New York City 1947-07-30 New July 1981 30 KGLM correct Bob Dylan was born in Barack Obama was born on Ulysses is a book that was written by Duluth 1961-08-04 James Joyce New January a Duluth August James GPTv2 correct St. Louis is a city in the state of Richard Nixon was born on Kanye West is married to Missouri 1913-01-09 Missouri Oldham January 20 the Kim Kardashian Kim Both incorrect The capital of India is Madonna is married to New Delhi Carlos Leon the a a Alex Table 5: Completion Examples. Examples of fact completion by KGLM and GPT-2, which has been trained on a much larger corpus. GPT-2 tends to produce very common and general tokens, such as one of a few popular cities to follow “born in”. KGLM sometimes makes mistakes in linking to the appropriate fact in the KG, however, the generated facts are more specific and contain rare tokens. We omit AWD-LSTM from this figure as it rarely ee produced tokens apart from the generic “the” or “a”, or “(UNK)”. Effect of changing the KG For most language models, it is difficult to control their generation since factual knowledge is entangled with gener- ation capabilities of the model. For KGLM, an additional benefit of its use of an external source of knowledge is that KGLM is directly control- lable via modifications to the KG. To illustrate this capability with a simple example, we create com- pletion of “Barack Obama was born on ” with the original fact (Barack Obama, birthDate, 1961- 08-04), resulting in the top three decoded tokens as “August”, “4”, “1961”. After changing the birth date to 2013-03-21, the top three decoded tokens become “March”, “21”, “2013”. Thus, changing the fact in the knowledge graph directly leads to a corresponding change in the model’s prediction. 
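The snippet below is a purely schematic rendering of this knowledge-graph edit: the KGLM itself is reduced to a fact lookup plus alias rendering, which is enough to illustrate why changing a single triple changes the decoded tokens. It is not the actual model, and the date-to-alias expansion is our own simplification.

```python
# Controllability via KG edits, in miniature: swapping the birthDate triple changes the
# tokens that a fact-copying completion of "<subject> was born on ..." would surface.
import datetime

def date_alias_tokens(iso_date):
    d = datetime.date.fromisoformat(iso_date)
    return [d.strftime("%B"), str(d.day), str(d.year)]   # e.g. ["August", "4", "1961"]

knowledge_graph = {("Barack Obama", "birthDate"): "1961-08-04"}

def complete_birthdate(subject, kg):
    # Stand-in for the KGLM decode: select the (subject, birthDate, value) fact and
    # copy tokens from the value's aliases.
    return date_alias_tokens(kg[(subject, "birthDate")])

print(complete_birthdate("Barack Obama", knowledge_graph))    # ['August', '4', '1961']
knowledge_graph[("Barack Obama", "birthDate")] = "2013-03-21" # edit the fact ...
print(complete_birthdate("Barack Obama", knowledge_graph))    # ['March', '21', '2013']
```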
# 6 Related Work Knowledge-based language models Our work draws inspiration from two existing knowledge- based language models: (i) ENTITYNLM (Ji et al., 2017) which im- proves a language model’s ability to track entities by jointly modeling named entity recognition and coreference. Our model similarly tracks entities through a document, improving its ability to gener- ate factual information by modeling entity linking and relation extraction. (ii) The neural knowledge language model (NKLM) (Ahn et al., 2016) which established the idea of leveraging knowledge graphs in neural lan- guage models. The main differentiating factor be- tween the KGLM and NKLM is that the KGLM operates on an entire knowledge graph and can be evaluated on text without additional conditioning information, whereas the NKLM operates on a rel- atively smaller set of predefined edges emanating from a single entity, and requires that entity be pro- vided as conditioning information ahead of time. This requirement precludes direct comparison be- tween NKLM and the baselines in Section 5. Data-to-text generation Our work is also related to the task of neural data-to-text generation. For a survey of early non-neural text generation meth- ods we refer the reader to Reiter and Dale (1997). Recent neural methods have been applied to gener- ating text from tables of sports statistics (Wiseman et al., 2017), lists and tables (Yang et al., 2017), and Wikipedia info-boxes (Lebret et al., 2016). The pri- mary difference between these works and ours is our motivation. These works focus on generating coherent text within a narrow domain (e.g. sports, recipes, introductory sentences), and optimize met- rics such as BLEU and METEOR score. Our focus instead is to use a large source of structured knowl- edge to improve language model’s ability to handle rare tokens and facts on a broad domain of topics, and our emphasis is on improving perplexity. General language modeling Also related are the recent papers proposing modifications to the AWD- LSTM that improve performance on Wikitext- 2 (Gong et al., 2018; Yang et al., 2018; Krause et al., 2018). We chose to benchmark against AWD- LSTM since these contributions are orthogonal, and many of the techniques are compatible with the KGLM. KGLM improves upon AWD-LSTM, and we expect using KGLM in conjunction with these methods will yield further improvement. # 7 Conclusions and Future Work By relying on memorization, existing language models are unable to generate factually correct text about real-world entities. In particular, they are unable to capture the long tail of rare entities and word types like numbers and dates. In this work, we proposed the knowledge graph language model (KGLM), a neural language model that can access an external source of facts, encoded as a knowledge graph, in order to generate text. Our implementa- tion is available at: https://github.com/rloganiv/ kglm-model. We also introduced Linked WikiText- 2 containing text that has been aligned to facts in the knowledge graph, allowing efficient training of the model. Linked WikiText-2 is freely avail- able for download at: https://rloganiv.github.io/ linked-wikitext-2. In our evaluation, we showed that by utilizing this graph, the proposed KGLM is able to generate higher-quality, factually correct text that includes mentions of rare entities and spe- cific tokens like numbers and dates. This work lays the groundwork for future re- search into knowledge-aware language modeling. 
The limitations of the KGLM model, such as the need for marginalization during inference and re- liance on annotated tokens, raise new research prob- lems for advancing neural NLP models. Our dis- tantly supervised approach to dataset creation can be used with other knowledge graphs and other kinds of text as well, providing opportunities for accurate language modeling in new domains. # Acknowledgements First and foremost, we would like to thank Stephen Merity for sharing the materials used to collect the WikiText-2 dataset, and Nitish Gupta for modify- ing his entity linker to assist our work. We would also like to thank Dheeru Dua and Anthony Chen for their thoughtful feedback. This work was sup- ported in part by Allen Institute of Artificial In- telligence (AI2), and in part by NSF award #IIS- 1817183. The views expressed are those of the authors and do not reflect the official policy or po- sition of the funding agencies. # References Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, and Yoshua Bengio. 2016. A neural knowledge language model. ArXiv:1608.00318. Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Proc. of NeurIPS. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proc. of NAACL. Thiago Castro Ferreira, Diego Moussallem, Emiel Krahmer, and Sander Wubben. 2018. Enriching the WebNLG corpus. In Proc. of INLG. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Proc. of INLG. Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Frage: frequency-agnostic word representation. In Proc. of NeurIPS. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Incorporating copying mechanism in Li. 2016. sequence-to-sequence learning. In Proc. of ACL. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. En- tity linking via joint encoding of types, descriptions, and context. In Proc. of EMNLP. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735–1780. Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Proc. of EMNLP. Diederik P. Kingma and Jimmy Ba. 2015. Adam: In Proc. of A method for stochastic optimization. ICLR. Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. 2018. Dynamic evaluation of neural sequence models. In Proc. of ICML. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with In Proc. of application to the biography domain. EMNLP. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In Proc. of ICLR. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture mod- els. In Proc. of ICLR. Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. of INTERSPEECH. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Techni- cal report, OpenAI. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Lan- guage Engineering, 3(1):57–87. Iulian V. 
Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierar- chical neural network models. In Proc. of AAAI. Georgios P. Spithourakis and Sebastian Riedel. 2018. Numeracy for language models: Evaluating and im- proving their ability to predict numbers. In Proc. of ACL. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Trieu H. Trinh and Quoc V. Le. 2019. Do language models have common sense? In Proc. of ICLR. Joerg Ueberla. 1994. Analysing a simple language model·some general conclusions for language models for speech recognition. Computer Speech & Language, 8(2):153 – 176. Oriol Vinyals and Quoc V. Le. 2015. A neural con- versational model. Proc. of ICML Deep Learning Workshop. Denny Vrandeˇci´c and Markus Krötzsch. 2014. Wiki- data: A free collaborative knowledgebase. Commu- nications of the ACM, 57(10):78–85. Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. 2013. Regularization of neural net- works using dropconnect. In Proc. of ICML. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in data-to-document gener- ation. In Proc. of EMNLP. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2018. Breaking the softmax bot- tleneck: A high-rank RNN language model. In Proc. of ICLR. Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Proc. of EMNLP.
{ "id": "1608.00318" }
1906.06669
One Epoch Is All You Need
In unsupervised learning, collecting more data is not always a costly process unlike the training. For example, it is not hard to enlarge the 40GB WebText used for training GPT-2 by modifying its sampling methodology considering how many webpages there are in the Internet. On the other hand, given that training on this dataset already costs tens of thousands of dollars, training on a larger dataset naively is not cost-wise feasible. In this paper, we suggest to train on a larger dataset for only one epoch unlike the current practice, in which the unsupervised models are trained for from tens to hundreds of epochs. Furthermore, we suggest to adjust the model size and the number of iterations to be performed appropriately. We show that the performance of Transformer language model becomes dramatically improved in this way, especially if the original number of epochs is greater. For example, by replacing the training for 10 epochs with the one epoch training, this translates to 1.9-3.3x speedup in wall-clock time in our settings and more if the original number of epochs is greater. Under one epoch training, no overfitting occurs, and regularization method does nothing but slows down the training. Also, the curve of test loss over iterations follows power-law extensively. We compare the wall-clock time of the training of models with different parameter budget under one epoch training, and we show that size/iteration adjustment based on our proposed heuristics leads to 1-2.7x speedup in our cases. With the two methods combined, we achieve 3.3-5.1x speedup. Finally, we speculate various implications of one epoch training and size/iteration adjustment. In particular, based on our analysis we believe that we can reduce the cost to train the state-of-the-art models as BERT and GPT-2 dramatically, maybe even by the factor of 10.
http://arxiv.org/pdf/1906.06669
Aran Komatsuzaki
cs.LG, stat.ML
null
null
cs.LG
20190616
20190616
9 1 0 2 n u J 6 1 ] G L . s c [ 1 v 9 6 6 6 0 . 6 0 9 1 : v i X r a # ONE EPOCH IS ALL YOU NEED Aran Komatsuzaki School of Mathematics Georgia Institute of Technology Atlanta, GA 30332, USA [email protected] # ABSTRACT In unsupervised learning, collecting more data is not always a costly process un- like the training. For example, it is not hard to enlarge the 40GB WebText used for training GPT-2 by modifying its sampling methodology considering how many webpages there are in the Internet. On the other hand, given that training on this dataset already costs tens of thousands of dollars, training on a larger dataset naively is not cost-wise feasible. In this paper, we suggest to train on a larger dataset for only one epoch unlike the current practice, in which the unsupervised models are trained for from tens to hundreds of epochs. Furthermore, we suggest to adjust the model size and the number of iterations to be performed appropri- ately. We show that the performance of Transformer language model becomes dramatically improved in this way, especially if the original number of epochs is greater. For example, by replacing the training for 10 epochs with the one epoch training, this translates to 1.9-3.3x speedup in wall-clock time in our settings and more if the original number of epochs is greater. Under one epoch training, no overfitting occurs, and regularization method does nothing but slows down the training. Also, the curve of test loss over iterations follows power-law exten- sively. We compare the wall-clock time of the training of models with different parameter budget under one epoch training, and we show that size/iteration ad- justment based on our proposed heuristics leads to 1-2.7x speedup in our cases. With the two methods combined, we achieve 3.3-5.1x speedup. Finally, we spec- ulate various implications of one epoch training and size/iteration adjustment. In particular, based on our analysis we believe that we can reduce the cost to train the state-of-the-art models as BERT and GPT-2 dramatically, maybe even by the factor of 10. # INTRODUCTION Recently, unsupervised models have been growing rapidly and becoming more promising than previ- ously expected. Notable examples include GPT-2, BERT and Sparse Transformer. The performance of these models can be scaled up stably if they are given more parameters and more training data. However, there is still a visible gap between their performance, especially that of generative models, and that of human. From the current trend, it seems that the gap can be filled up if we can keep scal- ing up and developing architectures for better long-range coherence such as Transformer-XL (Dai et al., 2019) and Sparse Transformer (Child et al., 2019). Since the training of the best generative models already costs tens of thousands of dollars, naively scaling up is not a practical option. In fact, there is one obvious feature of the currently dominant practice in machine learning that can be modified for significant performance gain. This is to train for multiple epochs. While multi- epoch training is reasonable in data-scarce settings such as supervised classification, this turns out to be inefficient for data-abundant unsupervised learning according to our results. This may be a trivial observation, but many recent papers have used multi-epoch training as shown in Table 1. While many papers do not report the number of epochs, it is reasonable to assume that the number is between 10 and 200. 
Note that GPT, BERT and GPT-2 were trained on an original dataset created by their authors. Hence, they could have increased the dataset size and reduced the number of epochs for better performance. On the other hand, many other papers have trained their model on a standard 1 Table 1: The number of epochs used for the training. Model Epochs GPT (Radford et al., 2018) SPN (Menick & Kalchbrenner, 2018) BERT (Devlin et al., 2018) Mesh Transformer (Shazeer et al., 2018) Transformer-XL (Dai et al., 2019) GPT-2 (Radford et al., 2019) Sparse Transformer (Child et al., 2019) 100 Not reported 40 10 Not reported Not reported (20 or 100) 70 - 120 dataset for many epochs to make the comparison with the previous state-of-the-art fair. Also, there are many papers that do not report the number of epochs used for the training. Since the total computational resources spent on the training is proportional to not only the number of parameters but also the number of epochs, the current practice should be reconsidered. For example, we need to create larger standard datasets, and the models have to be trained for only one epoch for a fair comparison. # 2 RELATED WORKS While we are not aware of any work that investigates training for one epoch with enlarged dataset, there are many works that suggest the training on larger dataset (with a large number of epochs) to improve the performance. Notably, the result of GPT (Radford et al., 2018), BERT (Devlin et al., 2018) and GPT-2 (Radford et al., 2019) implies that the training on a large dataset leads to a significant improvement in performance. Some works have analyzed the relationship between the performance and the dataset size (Sun et al., 2017; Hestness et al., 2017). Our work is most closely related to Hestness et al. (2017), which showed that by training a language model on the subsets of 1 Billion Word Language Model Benchmark (LM1B) (Chelba et al., 2013) of the varying size the performance and the dataset size follow a robust power law given a sufficient parameter budget. This comparison is more precise than the aforementioned work. The difference from our work is that as the dataset size increases, they also let the model to consume more computational resources by increasing the parameter budget and fixing the number of epochs. Also, they do not investigate the trade-off between the parameter budget and the number of iterations unlike our work. This means that the performance improvement promised by their result can be achieved only if computational resources are increased. In our work, we achieve to improve the performance without increasing the computational resources. One of the experiments of Popel & Bojar (2018) shows that larger neural machine translation (NMT) dataset results in higher BLEU with the same parameter budget and the same number of iterations, i.e., with smaller number of epochs. Since it is generally not easy to enlarge a NMT dataset, the training with a small number of epochs was not investigated in the paper. # 3 METHODS 3.1 ONE EPOCH TRAINING AND SIZE/ITERATION ADJUSTMENT 3.1.1 TO SEE PERFORMANCE IMPROVEMENT OVER CONVENTIONAL SETTING First, we describe one epoch training conversion procedure, which converts a conventional multi- epoch training into a more efficient one epoch training. 1. The dataset size is increased (e.g. by sampling from Internet a la WebText), so that, while training for the same number of iterations as before, the same sample is never reused. 2. Any regularization method is eliminated. 
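As a rough illustration of the bookkeeping behind step 1 (our own sketch, not code from the paper), the helper below computes how large the corpus must be for the original iteration count to run in a single epoch with no sample ever reused; the example numbers anticipate the setting used later in Section 4.

```python
# Step 1 of the conversion: keep the iteration count, enlarge the dataset so that one
# pass covers all iterations; step 2: turn off regularization.
def one_epoch_conversion(epochs, dataset_tokens, tokens_per_batch, iters_per_epoch):
    total_iters = epochs * iters_per_epoch            # unchanged by the conversion
    required_tokens = total_iters * tokens_per_batch  # one pass, no repeats
    return {
        "iterations": total_iters,
        "required_corpus_tokens": required_tokens,
        "enlargement_factor": required_tokens / dataset_tokens,
        "dropout_p": 0.0,                             # no regularization under one epoch training
    }

# Example mirroring Section 4: 10 epochs over a 45M-token subset, 6,500 iterations/epoch.
cfg = one_epoch_conversion(epochs=10, dataset_tokens=45_000_000,
                           tokens_per_batch=45_000_000 // 6_500, iters_per_epoch=6_500)
print(cfg)  # roughly 450M tokens needed, i.e. a 10x larger corpus
```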
2 This process substantially improves the performance with the same computation cost unless the model size is much smaller compared with the dataset size, in which case the improvement is less. Then, in order to further improve the performance without increasing the computation cost, we adjust the model size and the number of iterations to be performed while keeping their product con- stant. We find that, if the ratio of the number of tokens in the dataset over the number of parameters of the model is closer to 5, this is likely to give the optimal performance under one epoch training given the cost constraint. We denote the (initial) number of parameters by P (or P0) and the (initial) number of tokens to be processed by T (or T0). Note that T = cI, where c is the number of tokens per minibatch and I is the number of iterations. 3. We set P and T according to some heuristics. For example, we can perform this by setting the ratio T /P as close to 5 as possible while keeping their product constant, or equivalently by solving the following: arg min P,T | log(5) − log(T /P )| subject to P T = P0T0. In practice, the range of P has only a small number of elements, since more choices do not usually result in a significant speedup. Therefore, finding P and T is quite simple. 3.1.2 HOW TO USE THEM IN PRACTICE The above operations are useful to see performance improvement with one epoch training and model size adjustment over the original multi-epoch setting. In practice, one is more interested in how to use these techniques in practice, rather than starting from a given multi-epoch training prototype. From our argument so far, this is quite simple to do. 1. Choose the number of iterations I. Note that the total computation cost scales up quadratically with respect to I, since the optimal model size scales up linearly with respect to I. From this and the available computation budget, one can choose the value of I. The optimal model size is then chosen as in the above method. Let us recall that we have T = cI. We use the same notations as before. 2. We set P according to some heuristics. For example, we can perform this by setting the ratio T /P as close to 5 as possible while keeping their product constant, or equivalently by solving the following: arg min P | log(5) − log(T /P )|. 3. The model with size P is trained for I iterations for one epoch without any regularization method. JUSTIFICATIONS FOR ONE EPOCH TRAINING The justifications are quite intuitive. First, note that one epoch training substantially improves the diversity of the samples processed by the model over the course of training. Training for E epochs is roughly equivalent to training on a shuffled dataset consisting of E copies of the original dataset for one epoch. This means that the diversity of the original dataset is E times less than that of the one epoch training. Greater dataset size also implies greater diversity, and both of these are known to improve the performance (Hestness et al., 2017; Radford et al., 2019). For example, WebText led to a better generation quality than LM1B not only due to the greater dataset size and larger context size but also due to the diversity of the dataset. Also, under one epoch training, sampling from the training data distribution is practically identical to sampling from the underlying data manifold (and therefore from that of the test dataset). 
This is unlike the multi-epoch training, since sampling each sample again cannot occur when one samples from the data manifold, whose cardinality is practically infinite. This sampling discrepancy is a primary cause of overfitting, which is a well-known fact, and the lack of overfitting observed in one epoch training supports this. Overfitting is usually exacerbated as the number of iterations (and hence the number of epochs) increases. This leads to better speedup with one epoch training when 3 the number of epochs of the original multi-epoch training is larger. Finally, note that averaging the train loss per minibatch measured on the past n minibatches is approximately equal to the test loss if n is small enough. Hence, validation with test/validation dataset is not crucial. # 4 EXPERIMENTS Unless specified otherwise, the hyperparameters are as described here. We train base Transformer decoder (Vaswani et al., 2017) with some modifications (as described below) for language model. The dataset used is 1 Billion Word Language Model Benchmark (LM1B). We do not use any reg- ularization method unless specified otherwise. Whenever the number of parameters is given below, it contains the number of parameters of softmax and the embedding. As in Devlin et al. (2018); Radford et al. (2019), we have df f = 4dmodel, dq = dk = dv = dmodel and h = dmodel , where h denotes the number of attention heads. Whenever we use the symbol d below, it stands for dmodel. More details on the dataset and hyperparameters are provided in Appendix. We are going to use the subsets of LM1B with varying size as in Hestness et al. (2017). Note that the test loss achieved in our experiments are inferior to the models of Hestness et al. (2017) and the state-of-the-art models for at least one of the following reasons: (1) the number of trained iterations and/or the model size is significantly smaller in our case, and (2) the vocabulary size was set 10,000 in Hestness et al. (2017) to save the computation (unlike about 800,000 in the most papers, including ours), which means their loss values are significantly smaller than what they would be if the vocabulary size was the same as ours. This experiment verifies that the performance of a language model is improved by the one epoch training, and regularization harms the training under the one epoch training, i.e., if no sample is reused in the training. We denote ”training with one epoch training” by S, ”training for multiple epochs” by M and ”using dropout” by D. For example, if a model is trained with single epoch training and p = 0.1, we denote it by SD. We train the Transformer for the four cases: S, M , SD and M D. We set dmodel = 512 and train for 65,000 iterations. The number of parameters of the model is 45M. We also set p = 0.1 for the cases in which dropout is used, unless specified otherwise. The dataset size is 45M tokens (processed in 6,500 iterations) for the multi-epoch case and 450M tokens for the single epoch case. Hence, 10 epochs are performed in total for the former case. This choice of dataset size is due to the popular custom of training a language model whose size is close to the number of tokens of the dataset. The result is shown in the top left figure of Fig. 1. The other cases in the figure are provided to demonstrate how the magnitude of speedup differs depending on whether the model is more overparametrized or underparametrized. For example, the Right case has a smaller model size (i.e. more underparametrized). 
Likewise, the Bottom case is more overparametrized. The speedup (1.9x) of Right is smaller than the speedup (3.3x) of Left and Bottom. Thus, the speedup is smaller if the model is more underparametrized, which is expected. For the Right case and the Bottom case, only the best-performing dropout probability is shown (e.g. p = 0 for the Right case and p = 0.1 for the Bottom case). In any case, the model performs worse if p > 0.1.

# 4.1 ONE EPOCH TRAINING

Here, the speedup is calculated as follows. First, we compute the number of iterations for M or MD to achieve the best loss. Also, we compute the number of iterations for S to achieve the best loss achieved by M and MD. The speedup is defined to be the ratio of the former quantity over the latter. For the case of the left figure of Fig. 1, the former is 65,000, whereas the latter is 20,000. Thus, the speedup is 65000/20000 ≈ 3.3 times. In the table, "E = 10" refers to the speedup when the first 10 epochs are trained, whereas "E = 5" refers to the speedup when the first 5 epochs are trained. The speedup for the former case is just as explained above, and the speedup for the latter case can be calculated likewise by ignoring the data on the plot of Fig. 1 after the first 5 epochs. The result of Table 2 suggests that the speedup (E = 10) is greater than the speedup (E = 5). This implies that the speedup is greater if the number of epochs of the original multi-epoch training is greater.

[Figure 1 plots test loss (log scale) against iterations (log scale) for the S, M, SD and MD configurations.]
Figure 1: Learning curve of LM for 65,000 iterations on subsets of LM1B with different configurations.

Table 2: Configuration of each figure of Fig. 1.
Case | d | Parameters | Epochs | Iters./Epoch | Speedup (E = 10) | Speedup (E = 5)
Left | 512 | 45M | 10 | 6500 | 3.3 | 1.8
Right | 256 | 18M | 10 | 6500 | 1.9 | 1.5
Bottom | 1024 | 128M | 10 | 6500 | 3.3 | 2.6

When dropout is used, the curve is shifted upward. The magnitude of shift increases as p increases or as the regularization becomes stronger, which slows down the training. Furthermore, the left figure of Fig. 1 suggests that, if dropout is used, the speed of the training does not change much whether one epoch training or multi-epoch training is used. This suggests that each sample cannot be memorized well under dropout, which is well-known. If the availability of data is limited (e.g. in supervised learning) and if unsupervised pretraining does not help, one can attempt to mitigate the gap between S and MD with the adaptive dropout as described in Appendix. Note that this adaptive dropout is much more inefficient than one epoch training if the data is plentiful, which is our assumption.

4.2 POWER LAW

Under the one epoch training, analyzing the training becomes simpler, since regularization does not need to be taken into consideration. The left figure of Fig. 2 shows the log-log plot of the curve of test loss over the iterations for different widths. Each curve has a structure as depicted in the right figure of Fig. 4. Observe that the curve first enters a super-polynomial region, where the loss decreases faster than any polynomial, since the parameters are not yet saturated with many training samples. Then, the curve enters a linear region, which is, in fact, a power-law region, since the plot is log-log.
This means on this region the test loss follows ℓ = a·x^(−k), where x is the number of iterations and a and k are some constants. As the parameters are oversaturated with the training samples, the loss stagnates, and the curve enters a sub-polynomial region convergent to a constant. As the parameter budget increases, the super-polynomial region and the power-law region expand, which contributes to the superior performance of larger models. Unlike the multi-epoch setting, the loss decreases more steeply, monotonically and for longer iterations. While the gap of loss among each model is small at the beginning, it increases as the iteration increases.

[Figure 2 plots test loss (log scale) against iterations (log scale) for d = 256, 512 and 1024.]
Figure 2: (Left) Log-log plot of learning curve over iterations. (Right) Log-log plot of the learning curve scaled according to the per-iteration FLOPS with respect to the d = 512 curve, which is fixed at its original position as with the scaling of the x-axis.

[Figure 3 plots minimum validation loss and model size against training data set size (log scales) for 2-layer LSTMs, 4-layer LSTMs and depth-5 RHNs, with power-law trend fits.]
Figure 3: Learning curve of LM on subsets of LM1B with varying size (cited from Hestness et al. (2017)).

The left figure of Fig. 4 shows the line fit for the power law region of each model. Notably, the power law exponent (about −0.067) is approximately equal to the power law exponent found in Hestness et al. (2017), which is shown in the left figure of Fig. 3 for convenience. The major difference between their experiment and ours is that they trained a model with a different parameter budget, sufficiently large for each dataset size, for many epochs until the best loss is achieved. Also, the right figure of Fig. 3 shows the parameter budget for each dataset size.

4.3 SIZE/ITERATION ADJUSTMENT

As far as large-scale Transformer language model is concerned, modifying the architecture with a method such as neural architecture search leads to a smaller gain compared with scaling up the model with more data or parameters. Hence, it suffices to consider changing depth and width only. For simplicity, we consider changing only width. Let us say we train a model with the number of parameters P for I iterations.
Then, the total FLOPS of the training is proportional to P I, assuming that the GPU is used efficiently, which is easy at large-scale. We are interested in finding the range of optimal I for a given number of parameters (or more conveniently, width). For this, we remap the curves in the left figure of Fig. 2 to adjust for the difference in per-iteration FLOPS and derive the range of optimal number of iterations for a given model size as described in Appendix, which is 6 @ = 812x-0067F=0989 © = 7.91K~0,0676R*=0992 oN Super-polynomial —_ power-law Region Sub-polynomial 5000 10090 59000 Region Region Test Loss (Log-scale) orators (Log scale Iterations (Log-scale) oN Super-polynomial —_ power-law Region Sub-polynomial Region Region Test Loss (Log-scale) Iterations (Log-scale) @ = 812x-0067F=0989 © = 7.91K~0,0676R*=0992 5000 10090 59000 orators (Log scale Figure 4: (Left) Log-log plot of partial learning curve of LM over iterations with a line fit. (Right) Sketch of learning curve over iterations. d 256 512 1024 Parameters Optimal Iters. 18M 45M 128M [0, 30000] [12000, 84000] [28000, ∞) (Optimal Tokens)/Params. [0, 11.5] [1.8, 12.9] [1.5, ∞) Table 3: Optimal number of iterations and ratio. shown in the right figure of Fig 2 and Table 3, respectively. The range of optimal number of iterations is then converted to the number of tokens processed and divided by the number of parameters of the model, which is shown in the rightmost column of Table 3. By taking the intersection of the ranges, we obtain [1.8, 11.5]. Since the geometric mean of the boundary values is 1.8 × 11.5 ≈ 5, in our heuristic for adjustment we try to make the ratio of the number of processed tokens over the number of parameters, or T /P , as close to 5 as possible. This result suggests that five words per parameter can be the most efficiently compressed, at least in our setting. Since we have I = 65000, the optimal width among {256, 512, 1024} is d = 512, which follows from either our heuristics or the right figure of Fig. 2. Each configuration undergoes a speedup due to changing the width to d = 512. Table 4 summarizes the speedup achieved with size/iteration adjustment and the total speedup combined with that of one epoch training. The result agrees with our intuition that size/iteration adjustment would lead to a better speedup if the original size/iteration proportion is more skewed. In terms of the combined speedup, this also agrees with this same intuition. 5 # IMPLICATIONS AND REMARKS 5.1 # STATE-OF-THE-ART MODELS ARE LIKELY TO UNDERGO BETTER SPEEDUP Table 5 shows the number of epochs and the ratio of the tokens processed over the number of parameters of the state-of-the-art models. The original multi-epoch training is first converted to the one epoch counterpart, which results in an increase of the dataset size by the factor of the number of epochs. This contributes to the ratio being substantially larger than 5. Combined with the number of epochs being far greater than 10, it is reasonable to expect that we can accelerate the training of the old d 512 Left Right 256 Bottom 1024 new d 512 512 512 Speedup Combined Speedup 1 2.7 1.3 3.3 × 1 = 3.3 1.9 × 2.7 ≈ 5.1 3.3 × 1.3 ≈ 4.3 Table 4: Speedup with size/iteration adjustment and total speedup. 7 Table 5: The number of epochs used for the training. Model Epochs BERT Mesh Transformer GPT-2 40 10 20 (or 100) 340 1.6-57 106 (or 530) state-of-the-art models with the factor substantially greater than our 3.3-5.1x speedup, maybe even by the factor of 10. 
5.2 RANGE OF APPLICABILITY

One epoch training and size/iteration adjustment are also applicable to many unsupervised learning algorithms on data of any modality, as well as to semi-supervised learning (Hénaff et al., 2019). Regularization methods are crucial for, and more dominant in, many computer vision tasks; hence, such tasks may benefit from one epoch training even more. It is very likely that our results hold for larger-scale models due to the nature of our methods. We should perform more comprehensive studies to measure how much speedup is achieved in each setting and to refine our heuristic for size/iteration adjustment.

5.3 EFFICIENTNET SCALING WITH THE NUMBER OF ITERATIONS

Tan & Le (2019) showed that, for image classification, scaling up a model by jointly searching for the scaling factor of each scaling component (e.g. depth) with grid search leads to a dramatic improvement over scaling up only one or two components without extensive search. In our case, there are three scaling factors: depth, width and the number of iterations. It is reasonable to expect that such a joint search will give more favorable scaling than our heuristic.

5.4 CAVEATS ON FINE-TUNING

One notable usage of unsupervised models such as GPT-2 and BERT is fine-tuning the pre-trained model on a small specialized dataset. Since we suggest using no regularization during training, one might suspect that the lack of regularization would cause overfitting during the fine-tuning process. We argue this is not necessarily the case. Note that GPT-2 does not use any regularization method, and Devlin et al. (2018) suggest fine-tuning BERT for only a few epochs. It is also important to note that one epoch training improves the performance of the pre-trained model and therefore requires a smaller number of iterations on the fine-tuning dataset to reach the same performance.

5.5 SAMPLE EFFICIENCY OF LEFT-TO-RIGHT LANGUAGE MODEL AND BERT

BERT is known to be more sample-efficient than other language models such as the left-to-right language model, as shown, e.g., by Devlin et al. (2018). We believe that one of the reasons is that, for BERT, the mask (and hence the input and the target) of a sample is different for each epoch. Under one epoch training, this advantage vanishes, so we believe the efficiency gap between BERT and the left-to-right language model diminishes. If the efficiency gap is small, the left-to-right model is far preferable to BERT. For example, the text generation capability of BERT is poorer than that of its left-to-right counterpart, and BERT requires the softmax layer to be trained from scratch upon fine-tuning, possibly a different one for each task. On the other hand, left-to-right language models can, in principle, perform any task BERT can perform (though not necessarily better) without fine-tuning, even in a zero-shot setting. For example, GPT-2 can perform text classification not only by predicting the label (e.g. "Math") but also by generating a text (e.g. "It's about math."). This adds far more flexibility to GPT-2, and GPT-2 can be trained on various tasks seamlessly. By exploiting this fact, one can combine various notable (training) datasets with a general-purpose dataset such as WebText for training. After the training, the model may perform well on their test datasets without fine-tuning. The performance may be improved by adding a few copies of the added datasets instead of just one.
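As an illustration of the zero-shot flexibility described above, the sketch below classifies a text with a left-to-right language model by comparing the likelihood of candidate label continuations. It assumes the HuggingFace transformers GPT-2 checkpoint purely for convenience; the labels and verbalizations are illustrative only.

```python
# Sketch: zero-shot text classification with a left-to-right LM by comparing the
# loss of candidate label continuations. Assumes the HuggingFace `transformers`
# GPT-2 checkpoint; labels and verbalizations are illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def continuation_loss(prompt, continuation):
    """Average negative log-likelihood of `continuation` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100     # ignore the prompt tokens in the loss
    with torch.no_grad():
        return model(full_ids, labels=labels).loss.item()

def classify(text, verbalizations):
    losses = {label: continuation_loss(text, v) for label, v in verbalizations.items()}
    return min(losses, key=losses.get), losses

label, losses = classify(
    "The integral of x^2 from 0 to 1 equals 1/3.",
    {"Math": " It's about math.", "Sports": " It's about sports."},
)
print(label, losses)
```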
Also, note that, while fine-tuning performs well in general, it has scarcely been investigated whether suddenly training on a narrower distribution right after pretraining leads to something similar to catastrophic forgetting. After the pretraining is finished, it may be more effective to simply mix the samples of the task of interest with samples from the pretraining dataset and continue the training, without performing the conventional fine-tuning. Since Radford et al. (2018) mention that fine-tuning of GPT converges within a few epochs, it is reasonable to assume the same for GPT-2. Given the strong influence of the pretraining method on the final performance on downstream tasks, under one epoch training GPT-2 may not need a fine-tuning process at all if the task is known during pretraining, as described below.

5.6 SHIFT OF ATTENTION FROM REGULARIZATION TO MODEL CAPACITY

Deep learning research has almost always been performed under multi-epoch training with regularization methods, so the resulting generalization has mattered more than improving the actual capacity of the model. It has been believed that large-scale training is the only viable way to measure the actual model capacity, as regularization is not needed in that setting. This has limited research into improving the actual model capacity, which only affluent research groups can afford, and has proliferated work on improving regularization methods. However, one epoch training may shift the attention to the improvement of model capacity. Methods that are prone to overfitting and have been regarded as not viable may be reexamined. Some promising ideas that tend to overfit include mixture-of-experts (Shazeer et al., 2017) and optimization methods exploiting second-order information such as K-FAC (Osawa et al., 2018).

5.7 CREATION OF NEW DATASETS AND COMPARISON OF MODELS

As mentioned above, we should create new standard datasets on which a model is trained for one epoch only. We consider the case of language modeling, but a similar argument holds for other tasks and data of other modalities. A good candidate is a subset of a dataset similar to WebText. Since, due to the lack of regularization, one can compare model improvements more or less regardless of the model size, a subset larger than 400M tokens may not always be necessary. Note that the total computation cost scales quadratically with respect to the dataset size. Since the model has to be trained for only one epoch, substantially larger datasets can also be used for a small amount of computation if necessary. There are two ways to compare models. 1) The dataset creator first makes a plot similar to the right figure of Fig. 2 using a subset with, say, 400M tokens and identifies the optimal model sizes at the iterations corresponding to, say, 100M, 200M and 400M tokens. For example, in our experiment the optimal model size at 100M tokens would have a width of 512. Then, model designers would compare models of this optimal size according to their performance after training for one epoch on the dataset of the chosen size. 2) This method is more precise than the first. The model designer makes a plot similar to the right figure of Fig. 2 using a 400M-token subset and the proposed model. The plot is then combined with the plot of the state-of-the-art model by adjusting for the difference in per-iteration FLOPS.
5.8 DATA AUGMENTATION WITH THE INTERNET

One can also exploit the Internet by augmenting the dataset for the task of interest: searching for data relevant to the task and adding it to the dataset. Poor performance is often attributed to the architecture or the optimization. However, the actual cause is often that the dataset does not contain a sufficient amount of information useful for the task of interest. If the model's poor performance is alleviated by this data augmentation, it is likely due to nothing other than insufficient information. This is inevitable, since, for example, if the training dataset consists of randomly sampled webpages, the distribution of data on the Internet does not necessarily align with what a person would have processed in their life. This mismatch results in the sample complexity of the language model being poorer than that of a human, which is permissible to a certain extent but can be improved by a mismatch-aware dataset sampling strategy.

5.9 ON SAMPLING DATA FROM THE INTERNET

WebText consists of webpages sampled by selecting the links posted to Reddit with more than 3 karma. In order to increase the number of webpages by several orders of magnitude, the sampling strategy has to be more lenient toward corrupt webpages. Trinh & Le (2018) pointed out that CommonCrawl contains a large portion of corrupt samples, which makes it unsuitable for training. The proportion of corrupt samples in CommonCrawl is substantially higher than 50%, so most of the resources would be spent training on useless corrupt samples. One can study how the performance depends on the proportion of corrupt samples in the dataset. From this, one can estimate an upper bound on the proportion of corrupt samples such that the performance degradation is not significant. For example, if the proportion can be kept at merely 10%, the waste of resources is rather negligible. Having some corrupt samples in the dataset may even be beneficial, helping the model distinguish corrupt webpages from non-corrupt ones. Within a dataset containing webpages of both types, the distribution of the corrupt samples must be easily separable from that of the non-corrupt samples from the perspective of the trained model. When the dataset contains some corrupt samples, one can prevent the model from generating output that resembles the corrupt samples as follows. Note, for example, that it is very unlikely that corrupt text is followed by non-corrupt text in the same webpage, or vice versa. Hence, it is interesting to check whether a model conditioned on non-corrupt text almost always generates non-corrupt text. If a model happens to start generating corrupt text, either conditionally or unconditionally, this may be detected through the perplexity of the generated output: corrupt text tends to have a perplexity that is either too high or too low. This fact may also be exploited at the training stage; for example, samples with unusual perplexity can be discarded from the training dataset. The sampling method of WebText can easily be expanded or generalized. In addition to using Reddit karma and its analogues on other websites, we can utilize the metadata of each webpage and a statistical analysis of its content. For example, non-corrupt webpages must have a non-negligible amount of traffic from human users rather than bots, and they must have a clearly distinct access pattern.
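The perplexity-based filtering suggested above can be sketched as follows. Per-document perplexities are assumed to have been computed beforehand with a reference language model (e.g. the trained LM itself); the percentile band is an illustrative choice.

```python
# Sketch: discard documents whose LM perplexity is unusually high or low.
# `perplexities` maps document ids to perplexities computed beforehand with a
# reference language model; the band boundaries are illustrative.

def filter_by_perplexity(perplexities, low_pct=0.05, high_pct=0.95):
    """Keep documents whose perplexity falls inside the [low_pct, high_pct] band."""
    values = sorted(perplexities.values())
    lo = values[int(low_pct * (len(values) - 1))]
    hi = values[int(high_pct * (len(values) - 1))]
    kept = {doc: p for doc, p in perplexities.items() if lo <= p <= hi}
    return kept, (lo, hi)

if __name__ == "__main__":
    # Toy example: corrupt pages tend to sit at the extremes of the distribution.
    perplexities = {"page_a": 45.0, "page_b": 62.0, "page_c": 3.1,   # too low: repetitive junk
                    "page_d": 58.0, "page_e": 900.0}                 # too high: garbled text
    kept, band = filter_by_perplexity(perplexities, 0.25, 0.75)
    print("band:", band, "kept:", sorted(kept))
```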
Also, the distribution of characters or of the most frequent words in corrupt webpages may differ from that of non-corrupt webpages (e.g. with respect to Zipf's law).

# 6 CONCLUSION

The following summarizes our work:

• The conventional multi-epoch training can be improved by enlarging the dataset, adjusting the model size and the number of iterations appropriately, and training for one epoch only. A heuristic for the adjustment was devised.
• Overfitting does not occur in one epoch training, and regularization does nothing but slow down the training.
• The loss curve over the iterations for a given model size follows a power law over a rather extensive range.
• Based on our analysis, we believe we can reduce the cost of training state-of-the-art models such as BERT and GPT-2 dramatically, maybe even by a factor of 10.
• One epoch training and size/iteration adjustment are promising not only for language models but also for many other unsupervised or semi-supervised learning tasks on data of any modality.
• We can possibly scale up a model efficiently using an analog of EfficientNet scaling, with the number of iterations as a scaling factor in addition to the conventional factors such as depth and width.
• The sample efficiency gap between BERT and the left-to-right language model is likely to diminish with one epoch training. GPT-2 may replace BERT due to its flexibility.
• Since overfitting does not occur with one epoch training, more attention will be paid to improving model capacity, which becomes easier to observe. Methods that are prone to overfitting and have been regarded as not viable may be reexamined.
• We should create new standard datasets to evaluate and compare newly proposed models under one epoch training and size/iteration adjustment. We have provided two possible evaluation methods with such datasets.
• We discuss data augmentation with the Internet and how to efficiently sample data from the Internet to expand the training dataset.

Future work will hopefully verify our results and claims at larger scale and on other kinds of tasks. It can also continue to explore further implications of one epoch training and model size adjustment, as these have scarcely been investigated.

# ACKNOWLEDGMENTS

We are grateful to Lukasz Kaiser and Isaac Poulton for their valuable feedback on our work.

# REFERENCES

A. Baevski and M. Auli. Adaptive Input Representations for Neural Language Modeling. ArXiv e-prints, September 2018.

C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling. ArXiv e-prints, December 2013.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating Long Sequences with Sparse Transformers. arXiv e-prints, art. arXiv:1904.10509, Apr 2019.

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv e-prints, art. arXiv:1901.02860, Jan 2019.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. ArXiv e-prints, October 2018.

Olivier J. Hénaff, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron van den Oord. Data-Efficient Image Recognition with Contrastive Predictive Coding. arXiv e-prints, art. arXiv:1905.09272, May 2019.

J. Hestness, S. Narang, N. Ardalani, G. Diamos, H. Jun, H. Kianinejad, M. M. A. Patwary, Y. Yang, and Y. Zhou. Deep Learning Scaling is Predictable, Empirically.
ArXiv e-prints, December 2017.

Jacob Menick and Nal Kalchbrenner. Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling. arXiv e-prints, art. arXiv:1812.01608, December 2018.

Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Rio Yokota, and Satoshi Matsuoka. Large-Scale Distributed Second-Order Optimization Using Kronecker-Factored Approximate Curvature for Deep Convolutional Neural Networks. arXiv e-prints, art. arXiv:1811.12019, Nov 2018.

Martin Popel and Ondřej Bojar. Training Tips for the Transformer Model. arXiv e-prints, art. arXiv:1804.00247, Mar 2018.

A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving Language Understanding by Generative Pre-Training. ArXiv e-prints, June 2018.

A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language Models are Unsupervised Multitask Learners. February 2019.

Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. arXiv e-prints, art. arXiv:1701.06538, Jan 2017.

Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hechtman. Mesh-TensorFlow: Deep Learning for Supercomputers. arXiv e-prints, art. arXiv:1811.02084, Nov 2018.

Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. arXiv e-prints, art. arXiv:1707.02968, Jul 2017.

Mingxing Tan and Quoc V. Le. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv e-prints, art. arXiv:1905.11946, May 2019.

Trieu H. Trinh and Quoc V. Le. A Simple Method for Commonsense Reasoning. arXiv e-prints, art. arXiv:1806.02847, Jun 2018.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention Is All You Need. ArXiv e-prints, June 2017.

# 7 APPENDIX

7.1 FURTHER DETAILS ON DATASET AND HYPERPARAMETERS

LM1B (Chelba et al., 2013) consists of 28 million training sentences from news articles, with about 10 thousand sentences held out for the test set. Each sentence is truncated to its first 50 words. Each iteration consists of a minibatch of 256 sentences, and the average length of a sentence is 27. We set the number of layers to 6. We use word-level tokens and (untied) adaptive input and softmax (Baevski & Auli, 2018). The cutoffs for the adaptive softmax/input are [4000, 20000, 100000], which saves memory and computation at the cost of slightly degraded performance. We do not use checkpoint averaging or label smoothing. We use PyTorch with a single V100 GPU and mixed precision training. Unlike the Transformer of Vaswani et al. (2017), in our model LayerNorm is placed before the self-attention module and the ReLU, as in Child et al. (2019).

7.2 ADAPTIVE DROPOUT

If the availability of data is limited (e.g. in supervised learning) and unsupervised pretraining does not help, one can attempt to mitigate this problem with adaptive dropout, as described below. The dropout probability is a monotonically increasing function of the number of epochs trained; in particular, it is set to zero for the first epoch. It is likely that the gap between one epoch training and multi-epoch training (see the gap in the left figure of Fig. 1) is smaller with this dropout.
However, based on the trend, the gap is still likely to increase as the number of epochs increases. Thus, this method is much more inefficient than one epoch training if the data is plentiful. Since the assumed setting is very rare, we do not investigate this direction further.

7.3 FURTHER DETAILS ON FIG. 2

In this section, we discuss how to convert the left figure of Fig. 2 into the right figure and how to obtain the range of the optimal number of iterations for each model size. The curve for d = 512 is fixed at the same position, but the other curves are moved to take into account the difference in per-iteration FLOPS. The per-iteration FLOPS of d = 1024 and d = 256 are 3 times larger and 2.5 times smaller than that of d = 512, respectively. Hence, the curve of d = 1024 is moved to the right by a factor of 3 (each of its iterations costs as much as three iterations of d = 512), and the curve of d = 256 is correspondingly moved to the left by a factor of 2.5. The d = 256 curve and the d = 512 curve intersect at 12,000 iterations, and the d = 512 curve and the d = 1024 curve intersect at 84,000 iterations. This means that if the model with d = 512 is used, the number of iterations should be greater than 12,000 and less than 84,000 to minimize the computation cost. On the other hand, if the model with d = 256 is used, the number of iterations should be less than 12,000 × 2.5 = 30,000. Likewise, if the model with d = 1024 is used, the number of iterations should be greater than 84,000/3 = 28,000.
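As a sketch of the conversion just described, the following code rescales one model's iteration axis by its per-iteration FLOPS relative to the reference model and locates the crossover point of two loss curves by interpolation. The synthetic power-law curves are placeholders for the logged test losses of Fig. 2.

```python
# Sketch: remap a learning curve by its per-iteration FLOPS relative to a
# reference model and find where the two curves cross. The synthetic power-law
# curves below are placeholders for logged test losses.
import numpy as np

def crossover(iters_ref, loss_ref, iters_other, loss_other, flops_ratio):
    """Return the (reference-axis) iteration where the other model's curve,
    shifted by `flops_ratio` (its per-iteration FLOPS / reference FLOPS),
    crosses the reference curve, or None if they do not cross."""
    shifted_iters = np.asarray(iters_other) * flops_ratio   # compute in reference-iteration units
    grid = np.geomspace(max(iters_ref[0], shifted_iters[0]),
                        min(iters_ref[-1], shifted_iters[-1]), 512)
    ref = np.interp(grid, iters_ref, loss_ref)
    other = np.interp(grid, shifted_iters, loss_other)
    sign_change = np.where(np.diff(np.sign(ref - other)) != 0)[0]
    return grid[sign_change[0]] if len(sign_change) else None

if __name__ == "__main__":
    iters = np.geomspace(1000, 200000, 200)
    loss_512 = 8.1 * iters ** -0.067            # placeholder curves
    loss_1024 = 7.7 * iters ** -0.070
    # d=1024 costs about 3x more FLOPS per iteration than d=512 (Appendix 7.3).
    print(crossover(iters, loss_512, iters, loss_1024, flops_ratio=3.0))
```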
{ "id": "1811.02084" }
1906.06423
Fixing the train-test resolution discrepancy
Data-augmentation is key to the training of neural networks for image classification. This paper first shows that existing augmentations induce a significant discrepancy between the typical size of the objects seen by the classifier at train and test time. We experimentally validate that, for a target test resolution, using a lower train resolution offers better classification at test time. We then propose a simple yet effective and efficient strategy to optimize the classifier performance when the train and test resolutions differ. It involves only a computationally cheap fine-tuning of the network at the test resolution. This enables training strong classifiers using small training images. For instance, we obtain 77.1% top-1 accuracy on ImageNet with a ResNet-50 trained on 128x128 images, and 79.8% with one trained on 224x224 image. In addition, if we use extra training data we get 82.5% with the ResNet-50 train with 224x224 images. Conversely, when training a ResNeXt-101 32x48d pre-trained in weakly-supervised fashion on 940 million public images at resolution 224x224 and further optimizing for test resolution 320x320, we obtain a test top-1 accuracy of 86.4% (top-5: 98.0%) (single-crop). To the best of our knowledge this is the highest ImageNet single-crop, top-1 and top-5 accuracy to date.
http://arxiv.org/pdf/1906.06423
Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou
cs.CV, cs.LG
null
null
cs.CV
20190614
20220120
2 2 0 2 n a J 0 2 ] V C . s c [ 4 v 3 2 4 6 0 . 6 0 9 1 : v i X r a # Fixing the train-test resolution discrepancy Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Herv´e J´egou # Facebook AI Research # Abstract Data-augmentation is key to the training of neural networks for image classification. This paper first shows that existing augmentations induce a significant discrepancy between the size of the objects seen by the classifier at train and test time: in fact, a lower train resolution improves the classification at test time! We then propose a simple strategy to optimize the classifier performance, that employs different train and test resolutions. It relies on a computationally cheap fine-tuning of the network at the test resolution. This enables training strong classifiers using small training images, and therefore significantly reduce the training time. For instance, we obtain 77.1% top-1 accu- racy on ImageNet with a ResNet-50 trained on 128×128 im- ages, and 79.8% with one trained at 224×224. A ResNeXt-101 32x48d pre-trained with weak supervision on 940 million 224×224 images and further optimized with our technique for test resolution 320×320 achieves 86.4% top- 1 accuracy (top-5: 98.0%). To the best of our knowledge this is the highest ImageNet single-crop accuracy to date1. # Introduction with a detrimental effect on the test-time performance of mod- els. We then show that this problem can be solved by jointly optimizing the choice of resolutions and scales at training and test time, while keeping the same RoC sampling. Our strategy only requires to fine-tune two layers in order to compensate for the shift in statistics caused by the changing the crop size. This allows us to retain the advantages of existing pre-processing protocols for training and testing, including augmenting the training data, while compensating for the distribution shift. Our approach is based on a rigorous analysis of the effect of pre-processing on the statistics of natural images, which shows that increasing the size of the crops used at test time compensates for randomly sampling the RoCs at training time. This analysis also shows that we need to use lower resolu- tion crops at training than at test time. This significantly im- pacts the processing time: halving the crop resolution leads to a threefold reduction in the network evaluation speed and reduces significantly the memory consumption for a typical CNN, which is especially important for training on GPUs. For instance, for a target test resolution of 224×224, training at resolution 160×160 provides better results than the standard practice of training at resolution 224×224, while being more efficient. In addition we can adapt a ResNet-50 train at resolu- tion 224×224 for the test resolution 320×320 and thus obtain top-1 accuracy of 79.8% (single-crop) on ImageNet. Convolutional Neural Networks [21] (CNNs) are used ex- tensively in computer vision tasks such as image classifica- tion [20], object detection [30], inpainting [42], style trans- fer [11] and even image compression [31]. In order to obtain the best possible performance from these models, the training and testing data distributions should match. However, often data pre-processing procedures are different for training and testing. For instance, in image recognition the current best training practice is to extract a rectangle with random coordi- nates from the image, which artificially increases the amount of training data. 
This region, which we call the Region of Clas- sification (RoC), is then resized to obtain a crop of a fixed size (in pixels) that is fed to the CNN. At test time, the RoC is instead set to a square covering the central part of the image, which results in the extraction of a so called “center crop”. This reflects the bias of photographers who tend center impor- tant visual content. Thus, while the crops extracted at train- ing and test time have the same size, they arise from different RoCs, which skews the distribution of data seen by the CNN. Over the years, training and testing pre-processing proce- dures have evolved to improve the performance of CNNs, but so far they have been optimized separately [8]. In this paper, we first show that this separate optimization has led to a sig- nificant distribution shift between training and testing regimes 1Update: Since the publication of this paper at Neurips, we have improved this state of the art by applying our method to EfficientNet. See our note [39] for results and details. Alternatively, we leverage the improved efficiency to train high-accuracy models that operate at much higher resolution at test time while still training quickly. For instance, we achieve an top-1 accuracy of 86.4% (single-crop) on ImageNet with a ResNeXt-101 32x48d pre-trained in weakly-supervised fashion on 940 million public images. Finally, our method makes it possible to save GPU memory, which could in turn be exploited by optimization: employing larger batch sizes usually leads to a better final performance [15]. # 2 Related work Image classification is a core problem in computer vision. It is used as a benchmark task by the community to measure progress. Models pre-trained for image classification, usually on the ImageNet database [9], transfer to a variety of other ap- plications [27]. Furthermore, advances in image classification translate to improved results on many other tasks [12, 18]. Recent research in image classification has demonstrated improved performance by considering larger networks and higher resolution images [17, 25]. For instance, the state of the art in the ImageNet ILSVRC 2012 benchmark is currently held by the ResNeXt-101 32x48d [25] architecture with 829M parameters using 224×224 images for training. The state of the art for a model learned from scratch is currently held by the EfficientNet-b7 [37] with 66M parameters using 600×600 1 standard pre-processing input images < - ee ‘3 & <> 224 ssid 2 dal 224 our scaling strategy adjust with test 128 adjust with train 384 Figure 1: Selection of the image regions fed to the network at training time and testing time, with typical data-augmentation. The red region of classification is resampled as a crop that is fed to the neural net. For objects that have as similar size in the input image, like the white horse, the standard augmentations typically make them larger at training time than at test time (second column). To counter this effect, we either reduce the train-time resolution, or increase the test-time resolution (third and fourth column). The horse then has the same size at train and test time, requiring less scale invariance for the neural net. Our approach only needs a computationally cheap fine-tuning. images for training. In this paper, we focus on the ResNet-50 architecture [13] due to its good accuracy/cost tradeoff (25.6M parameters) and its popularity. 
We also conduct some ex- periments using the PNASNet-5-Large [24] architecture that exhibits good performance on ImageNet with a reasonable training time and number of parameters (86.1M) and with the ResNeXt-101 32x48d [25] weakly supervised because it was the network publicly available with the best performance on ImageNet. train PDF ——— test PDF ——— frequency 20% 40% 60% 80% % of the image area Data augmentation is routinely employed at training time to improve model generalization and reduce overfitting. Typi- cal transformations [3, 5, 35] include: random-size crop, hori- zontal flip and color jitter. In our paper, we adopt the standard set of augmentations commonly used in image classification. As a reference, we consider the default models in the PyTorch library. The accuracy is also improved by combining multi- ple data augmentations at test time, although this means that several forward passes are required to classify one image. For example, [13, 20, 35] used ten crops (one central, and one for each corner of the image and their mirrored versions). Another performance-boosting strategy is to classify an image by feed- ing it at multiple resolutions [13, 33, 35], again averaging the predictions. More recently, multi-scale strategies such as the feature pyramid network [23] have been proposed to directly integrate multiple resolutions in the network, both at train and test time, with significant gains in category-level detection. Feature pooling. A recent approach [5] employs p-pooling instead of average pooling to adapt the network to test reso- lutions significantly higher than the training resolution. The authors show that this improves the network’s performance, in accordance with the conclusions drawn by Boureau et al. [6]. Similar pooling techniques have been employed in im- age retrieval for a few years [29, 38], where high-resolution images are required to achieve a competitive performance. These pooling strategies are combined [38] or replace [29] the RMAC pooling method [38], which aggregates a set of regions extracted at lower resolutions. Figure 2: Empirical distribution of the areas of the RoCs as a fraction of the image areas extracted by data augmentation. The data augmentation schemes are the standard ones used at training and testing time for CNN classifiers. The spiky distri- bution at test time is due to the fact that RoCs are center crops and the only remaining variability is due to the different im- age aspect ratios. Notice that the distribution is very different at training and testing time. # 3 Region selection and scale statistics Applying a Convolutional Neural Network (CNN) classifier to an image generally requires to pre-process the image. One of the key steps involves selecting a rectangular region in the input image, which we call Region of Classification (RoC). The RoC is then extracted and resized to a square crop of a size compatible with the CNN, e.g., AlexNet requires a 224 × 224 crop as input. While this process is simple, in practice it has two subtle but significant effects on how the image data is presented to the CNN. First, the resizing operation changes the apparent size of the objects in the image (section 3.1). This is important because CNNs do not have a predictable response to a scale change (as opposed to translations). Second, the choice of dif- ferent crop sizes (for architectures such as ResNet that admit non-fixed inputs) has an effect on the statistics of the network activations, especially after global pooling layers (section 3.2). 
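The training- and test-time pre-processing referred to above map directly onto standard torchvision transforms. The sketch below builds both pipelines; the 224 crop and the 256-pixel test-time resize are the usual ImageNet defaults, given here for illustration.

```python
# Sketch of the standard train/test pre-processing pipelines discussed in the
# text, using torchvision. Sizes (224 crop, 256 test-time resize) are the usual
# ImageNet defaults and are given for illustration.
from torchvision import transforms

K_train = 224
K_image_test, K_test = 256, 224

# Training: a RoC with random scale/aspect ratio is resized to K_train x K_train.
# The default scale range (0.08, 1.0) of the RoC area corresponds to sigma in [0.28, 1].
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(K_train),   # random RoC, then resize
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Testing: isotropic resize so the shorter side is K_image_test, then center crop.
test_tf = transforms.Compose([
    transforms.Resize(K_image_test),
    transforms.CenterCrop(K_test),
    transforms.ToTensor(),
])
```

Applying `train_tf` and `test_tf` to the same image makes the apparent-size mismatch analysed below easy to visualize.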
This section analyses in detail these two effects. In the discussion, we use the following conventions: the "input image" is the original training or testing image; the RoC is a rectangle in the input image; and the "crop" is the pixels of the RoC, rescaled with bilinear interpolation to a fixed resolution, then fed to the CNN.

# 3.1 Scale and apparent object size

If a CNN is to acquire a scale-invariant behavior for object recognition, it must learn it from data. However, resizing the input images in pre-processing changes the distribution of object sizes. Since different pre-processing protocols are used at training and testing time2, the size distribution differs in the two cases. This is quantified next.

2At training time, the extraction and resizing of the RoC is used as an opportunity to augment the data by randomly altering the scale of the objects; in this manner the CNN is stimulated to be invariant to a wider range of object scales.

# 3.1.1 Relation between apparent and actual object sizes

We consider the following imaging model: the camera projects the 3D world onto a 2D image, so the apparent size of the objects is inversely proportional to their distance from the camera. For simplicity, we model a 3D object as an upright square of height and width R × R at a distance Z from the camera, and fronto-parallel to it. Hence, its image is an r × r rectangle, where the apparent size r is given by r = f R/Z, where f is the focal length of the camera. Thus we can express the apparent size as the product r = f · r1 of the focal length f, which depends on the camera, and of the variable r1 = R/Z, whose distribution p(r1) is camera-independent. While the focal length is variable, the field of view angle θFOV of most cameras is usually in the [40◦, 60◦] range. Hence, for an image of size H × W one can write f = k√(HW), where k−1 = 2 tan(θFOV/2) ≈ 1 is approximately constant. With this definition for f, the apparent size r is expressed in pixels.

# 3.1.2 Effect of image pre-processing on the apparent object size

Now, we consider the effect of rescaling images on the apparent size of objects. If an object has an extent of r × r pixels in the input image, and if s is the scaling factor between input image and the crop, then by the time the object is analysed by the CNN, it will have the new size of rs × rs pixels. The scaling factor s is determined by the pre-processing protocol, discussed next.

Train-time scale augmentation. As a prototypical augmentation protocol, we consider RandomResizedCrop in PyTorch, which is very similar to augmentations used by other toolkits such as Caffe and the original AlexNet. RandomResizedCrop takes as input an H × W image, selects a RoC at random, and resizes the latter to output a Ktrain × Ktrain crop. The RoC extent is obtained by first sampling a scale parameter σ such that σ² ∼ U([σ−², σ+²]) and an aspect ratio α such that ln α ∼ U([ln α−, ln α+]). Then, the size of the RoC in the input image is set to HRoC × WRoC = σ√(αHW) × σ√(HW/α). The RoC is resized anisotropically with factors (Ktrain/HRoC, Ktrain/WRoC) to generate the output image. Assuming for simplicity that the input image is square (i.e. H = W) and that α = 1, the scaling factor from input image to output crop is given by

s = Ktrain/√(HRoC WRoC) = (1/σ) · Ktrain/√(HW).   (1)

By scaling the image in this manner, the apparent size of the object becomes

rtrain = s · r = s f · r1 = (k Ktrain/σ) · r1.
(2) Since kKtrain is constant, differently from r, rtrain does not de- pend on the size H × W of the input image. Hence, pre- processing standardizes the apparent size, which otherwise would depend on the input image resolution. This is impor- tant as networks do not have built-in scale invariance. Test-time scale augmentation. As noted above, test-time augmentation usually differs from train-time augmentation. The former usually amounts to: isotropically resizing the im- age so that the shorter dimension is K image and then extract- ing a Ktest × Ktest crop (CenterCrop) from that. Under the assumption that the input image is square (H = W ), the scaling factor from input image to crop rewrites as s = K image test / rtest = s · r = kK image test · r1. (3) This has a a similar size standardization effect as the train-time augmentation. Lack of calibration. Comparing eqs. (2) and (3), we con- clude that the same input image containing an object of size r1 results in two different apparent sizes if training or testing pre-processing is used. These two sizes are related by: rtest rtrain = σ · K image test Ktrain . (4) such as AlexNet In practice, K image test /Ktrain ≈ 1.15; however, the scaling factor σ is sam- pled (with the square law seen above) in a range [σ−, σ+] = [0.28, 1]. Hence, at testing time the same object may appear as small as a third of what it appears at training time. For standard values of the pre-processing parameters, the expected value of this ratio w.r.t. σ is r image 2 oF — 93 B| =] =F. #080, F=5:3—>5, 6) Ttrain train 3 of — oF where F captures all the sampling parameters. # 3.2 Scale and activation statistics In addition to affecting the apparent size of objects, pre- processing also affects the activation statistics of the CNN, especially if its architecture allows changing the size of the input crop. We first look at the receptive field size of a CNN activation in the previous layer. This is the number of input spatial locations that affect that response. For the convolu- tional part of the CNN, comprising linear convolution, sub- sampling, ReLU, and similar layers, changing the input crop size is almost neutral because the receptive field is unaffected by the input size. However, for classification the network must be terminated by a pooling operator (usually average pooling) in order to produce a fixed-size vector. Changing the size of the input crop strongly affects the activation statistics of this layer. 1.0 0.8 0.6 resolution 64 resolution 128 0.4 —— resolution 224 0.2 — resolution 320 —— resolution 448 0.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 Figure 3: Cumulative density function of the vectors com- ponents on output of the spatial average pooling operator, for a standard ResNet-50 trained at resolution 224, and tested at different resolutions. The distribution is measured on the val- idation images of Imagenet. Activation statistics. We measure the distribution of activa- tion values after the average pooling in a ResNet-50 in fig. 3. As it is applied on a ReLU output, all values are non-negative. At the default crop resolution of Ktest = Ktrain = 224 pixels, the activation map is 7×7 with a depth of 2048. At Ktest = 64, the activation map is only 2×2: pooling only 0 values be- comes more likely and activations are more sparse (the rate of 0’s increases form 0.5% to 29.8%). The values are also more spread out: the fraction of values above 2 increases from 1.2% to 11.9%. 
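The expected-ratio formula (eq. (5)) is garbled in this copy; as a sanity check, the short computation below derives the value from the definitions given above (σ² uniform on [σ−², σ+²] and eq. (4)) for the standard parameters. The resize sizes 256/224 are the usual defaults and are an assumption here; the result comes out near 0.80, consistent with the compensation factor α ≈ 1/0.80 = 1.25 used in Section 4.1.

```python
# Numeric check of the expected ratio of test-time to train-time apparent object
# size. The E[sigma] formula follows from sigma^2 being uniform on the given
# range; the 256/224 resize sizes are assumed standard defaults.
sigma_minus, sigma_plus = 0.28, 1.0          # RandomResizedCrop scale range (sqrt of area)
K_image_test, K_train = 256, 224

# F = E[sigma] when sigma^2 is uniform on [sigma_-^2, sigma_+^2].
F = (2.0 / 3.0) * (sigma_plus**3 - sigma_minus**3) / (sigma_plus**2 - sigma_minus**2)
ratio = F * K_image_test / K_train           # expected r_test / r_train

print(f"F = {F:.3f}")                                 # ~0.71
print(f"expected r_test/r_train = {ratio:.2f}")       # ~0.81, i.e. about 0.80
print(f"compensating factor alpha = {1/ratio:.2f}")   # ~1.24, i.e. about 1.25
```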
Increasing the resolution reverts the effect: with Ktest = 448, the activation map is 14×14, the output is less sparse and less spread out. This simple statistical observations shows that if the distri- bution of activations changes at test time, the values are not in the range that the final classifier layers (linear & softmax) were trained for. # 3.3 Larger test crops result in better accuracy Despite the fact that increasing the crop size affects the activa- tion statistics, it is generally beneficial for accuracy, since as discussed before it reduces the train-test object size mismatch. For instance, the accuracy of ResNet-50 on the ImageNet val- idation set as Ktest is changed (see section 5) are: Ktest accuracy 64 29.4 128 65.4 224 77.0 256 78.0 288 78.4 320 78.3 384 77.7 448 76.6 Thus for Ktest = 288 the accuracy is 78.4%, which is greater than 77.0% obtained for the native crop size Ktest = Ktrain = 224 used in training. In fig. 5, we see this result is general: better accuracy is obtained with higher resolution crops at test time than at train time. In the next section, we ex- plain and leverage this discrepancy by adjusting the network’s weights. # 4 Method Based on the analysis of section 3, we propose two improve- ments to the standard setting. First, we show that the differ- ence in apparent object sizes at training and testing time can be removed by increasing the crop size at test time, which ex- plains the empirical observation of section 3.3. Second, we slightly adjust the network before the global average pooling 4 layer in order to compensate for the change in activation statis- tics due to the increased size of the input crop. # 4.1 Calibrating the object sizes by adjusting the crop size Equation (5) estimates the change in the apparent object sizes during training and testing. If the size of the intermediate im- age K image is increased by a factor α (where α ≈ 1/0.80 = 1.25 in the example) then at test time, the apparent size of the objects is increased by the same factor. This equalizes the ef- fect of the training pre-processing that tends to zoom on the objects. However, increasing K image test with Ktest fixed means looking at a smaller part of the object. This is not ideal: the object to identify is often well framed by the photographer, so the crop may show only a detail of the object or miss it altogether. Hence, in addition to increasing K image , we also test increase the crop size Ktest to keep the ratio K image test /Ktest con- stant. However, this means that Ktest > Ktrain, which skews the activation statistics (section 3.2). The next section shows how to compensate for this skew. # 4.2 Adjusting statistics before spatial pooling At this point, we have selected the “correct” test resolution for the crop but we have skewed activation statistics. Hereafter we explore two approaches to compensate for this skew. Parametric adaptation. We fit the output of the average pooling layer (section 3.2) with a parametric Fr´echet distri- bution at the original Ktrain and final Ktest resolutions. Then, we define an equalization mapping from the new distribution back to the old one via a scalar transformation, and apply it as an activation function after the pooling layer (see Ap- pendix A). This compensation provides a measurable but lim- ited improvement on accuracy, probably because the model is too simple and does not differentiate the distributions of dif- ferent components going through the pooling operator. Adaptation via fine-tuning. 
Increasing the crop resolution at test time is effectively a domain shift. A natural way to compensate for this shift is to fine-tune the model. In our case, we fine-tune on the same training set, after switching from Ktrain to Ktest. Here we choose to restrict the fine-tuning to the very last layers of the network. A take-away from the distribution analysis is that the spar- sity should be adapted. This requires at least to include the batch normalization that precedes the global pooling into the fine-tuning. In this way the batch statistics are adapted to the increased resolution. We also use the test-time augmentation scheme during fine-tuning to avoid incurring further domain shifts. Figure 4 shows the pooling operator’s activation statistics before and after fine-tuning. After fine-tuning the activation statistics closely resemble the train-time statistics. This hints that adaptation is successful. However, as discussed above, this does not imply an improvement in accuracy. # 5 Experiments Benchmark data. We experiment on the ImageNet-2012 benchmark [32], reporting validation performance as top-1 ac- curacy. It has been argued that this measure is sensitive to er- Ktest = 64 Ktest = 128 Ktest = 224 Ktest = 448 Figure 4: CDF of the activations on output of the average pooling layer, for a ResNet-50, when tested at different resolutions Ktest. Compare the state before and after fine-tuning the batch-norm. rors in the ImageNet labels [34]. However, the top-5 metrics, which is more robust, tends to saturate with modern architec- tures, while the top-1 accuracy is more sensitive to improve- ments in the model. To assess the significance of our results, we compute the standard deviation of the top-1 accuracy: we classify the vali- dation images, split the set into 10 folds and measure the accu- racy on 9 of them, leaving one out in turn. The standard devi- ation of accuracy over these folds is ∼ 0.03% for all settings. Thus we report 1 significant digit in the accuracy percentages. In the supplemental material, we also report results on the Fine-Grained Visual Categorization challenges iNaturalist and Herbarium. Fine-tuning data-augmentation. We experimented three data-augmentation for fine-tuning: The first one (test DA) is resizing the image and then take the center crop, The sec- ond one (test DA2) is resizing the image, random horizontal shift of the center crop, horizontal flip and color jittering. The last one (train DA) is the train-time data-augmentation as de- scribed in the previous paragraph. A comparison of the performance of these data augmenta- tion is made in the section C. The test DA data-augmentation described in this paragraph being the simplest. Therefore test DA is used for all the results reported with ResNet-50 and PNASNet-5-Large in this paper except in Table 2 where we use test DA2 to have slightly better performances in order to compare ours results with the state of the art. Architectures. We use standard state-of-the-art neural net- work architectures with no modifications, We consider in particular ResNet-50 [13]. For larger experiments, we use PNASNet-5-Large [24], learned using “neural architecture search” as a succession of interconnected cells. It is accurate (82.9% Top-1) with relatively few parameters (86.1 M). We use also ResNeXt-101 32x48d [25], pre-trained in weakly- supervised fashion on 940 million public images with 1.5K hashtags matching with 1000 ImageNet1K synsets. It is accu- rate (85.4% Top-1) with lot of parameters (829 M). Training protocol. 
We train ResNet-50 with SGD with a learning rate of 0.1 × B/256, where B is the batch size, as in [15]. The learning rate is divided by 10 every 30 epochs. With a Repeated Augmentation of 3, an epoch pro- cesses 5005 × 512/B batches, or ∼90% of the training im- ages, see [5]. In the initial training, we use B = 512, 120 epochs and the default PyTorch data augmentation: horizon- tal flip, random resized crop (as in section 3) and color jitter- ing. To finetune, the initial learning rate is 0.008 same de- cay, B = 512, 60 epochs. The data-augmentation used for fine-tuning is described in the next paragraph. For ResNeXt- 101 32x48d we use the pretrained version from PyTorch hub repository [2]. We use almost the same fine-tuning as for the ResNet-50. We also use a ten times smaller learning rate and a batch size two times smaller. For PNASNet-5-Large we use the pretrained version from Cadene’s GitHub reposi- tory [1]. The difference with the ResNet-50 fine-tuning is that we modify the last three cells, in one epoch and with a learn- ing rate of 0.0008. We run our experiments on machines with 8 Tesla V100 GPUs and 80 CPU cores to train and fine-tune our ResNet-50. For ResNeXt-101 32x48d all reported results are obtained with test DA2. We make a comparison of the results obtained between testDA, testDA2 and train DA in section C. The baseline experiment is to increase the resolution with- out adaptation. Repeated augmentations already improve the default PyTorch ResNet-50 from 76.2% top-1 accuracy to 77.0%. Figure 5(left) shows that increasing the resolution at test time increases the accuracy of all our networks. E.g., the accuracy of a ResNet-50 trained at resolution 224 increases from 77.0 to 78.4 top-1 accuracy, an improvement of 1.4 per- centage points. This concurs with prior findings in the litera- ture [14]. # 5.1 Results Improvement of our approach on a ResNet-50. Fig- ure 5(right) shows the results obtained after fine-tuning the last batch norm in addition to the classifier. With fine-tuning we get the best results (79%) with the classic ResNet-50 trained at Ktrain = 224. Compared to when there is no fine-tuning, the Ktest at which the maximal accuracy is obtained increases from Ktest = 288 to 384. If we prefer to reduce the train- ing resolution, Ktrain = 128 and testing at Ktrain = 224 yields 77.1% accuracy, which is above the baseline trained at full test resolution without fine-tuning. Multiple resolutions. To improve the accuracy, we classify the image at several resolutions and average the classification scores. Thus, the training time remains the same but there is a modest increase in inference time compared to process- ing only the highest-resolution crop. With Ktrain = 128 and Ktest = [256, 192], the accuracy is 78.0%. With Ktrain = 224 5 80 80 75 75 a a 8 70 8 70 a a & & % 65 Train resolution 64 % 65 Train resolution 64 ay —— Train resolution 128 N 7 — Train resolution 128 fom . . Q . A e 60 —— Train resolution 224 e 60 —— Train resolution 224 Train resolution 384 — Train resolution 384 55 e = Accuracy with train resolution 55 Accuracy with train resolution Best accuracy Best accuracy 50 50 64 96128 192224 312 384 0448 64 128 224 384 480 Test resolution (pixels) Test resolution (pixels) Figure 5: Top-1 accuracy of the ResNet-50 according to the test time resolution. Left: without adaptation, right: after resolution adaptation. The numerical results are reported in Appendix C. A comparison of results without random resized crop is reported in Appendix D. 
and Ktest = [384, 352], we improve the single-crop result of 79.0% to 79.5%. Application to larger networks. The same adaptation method can be applied to any convolutional network. In Ta- ble 1 we report the result on the PNASNet-5-Large and the IG-940M-1.5k ResNeXt-101 32x48d [25]. For the PNASNet- 5-Large, we found it beneficial to fine-tune more than just the batch-normalization and the classifier. Therefore, we also ex- periment with fine-tuning the three last cells. By increasing the resolution to Ktest = 480, the accuracy increases by 1 percentage point. By combining this with an ensemble of 10 crops at test time, we obtain 83.9% accuracy. With the ResNeXt-101 32x48d increasing the resolution to Ktest = 320, the accuracy increases by 1.0 percentage point. We thus reached 86.4% top-1 accuracy. # 5.2 Beyond the current state of the art Table 2 compares our results with competitive methods from the literature. Our ResNet-50 is slightly worse than ResNet50- D and MultiGrain, but these do not have exactly the same architecture. On the other hand our ResNet-50 CutMix, which has a classic ResNet-50 architecture, outperforms oth- ers ResNet-50 including the slightly modified versions. Our fine-tuned PNASNet-5 outperforms the MultiGrain version. To the best of our knowledge our ResNeXt-101 32x48d sur- passes all other models available in the literature. With 86.4% Top-1 accuracy and 98.0% Top-5 accuracy it is the first model to exceed 86.0% in Top-1 accuracy and 98.0% in Top-5 accuracy on the ImageNet-2012 benchmark [32]. It exceeds the previous state of the art [25] by 1.0% absolute in Top-1 accuracy and 0.4% Top-5 accuracy. Speed-accuracy trade-off. We consider the trade-off be- tween training time and accuracy (normalized as if it was run on 1 GPU). The full table with timings are in supplementary Section C. In the initial training stage, the forward pass is 3 to 6 times faster than the backward pass. However, during fine-tuning the ratio is inverted because the backward pass is applied only to the last layers. In the low-resolution training regime (Ktrain = 128), the additional fine-tuning required by our method increases the training time from 111.8 h to 124.1 h (+11%). This is to obtain an accuracy of 77.1%, which outperforms the network trained at the native resolution of 224 in 133.9 h. We produce a fine- tuned network with Ktest = 384 that obtains a higher accuracy than the network trained natively at that resolution, and the training is 2.3× faster: 151.5 h instead of 348.5 h. Ablation study. We study the contribution of the differ- ent choices to the performance, limited to Ktrain = 128 and Ktrain = 224. By simply fine-tuning the classifier (the fully connected layers of ResNet-50) with test-time augmentation, we reach 78.9% in Top-1 accuracy with the classic ResNet-50 initially trained at resolution 224. The batch-norm fine-tuning and improvement in data augmentation advances it to 79.0%. The higher the difference in resolution between training and testing, the more important is batch-norm fine-tuning to adapt to the data augmentation. The full results are in the supple- mentary Section C. # 5.3 Transfer learning tasks We have used our method in transfer learning tasks to validate its effectiveness on other dataset than ImageNet. We evalu- ated it on the following datasets: iNaturalist 2017 [16], Stan- ford Cars [19], CUB-200-2011 [41], Oxford 102 Flowers [26], Oxford-IIIT Pets [28], NABirds [40] and Birdsnap [4]. 
We used our method with two types of networks for transfer learn- ing tasks: SENet-154 [3] and InceptionResNet-V2 [36]. For all these experiments, we proceed as follows. (1) we initialize our network with the weights learned on ImageNet (using models from [1]). (2) we train it entirely for several epochs at a certain resolution. (3) we fine-tune with a higher resolution the last batch norm and the fully connected layer. Table 3 summarizes the models we used and the perfor- mance we achieve. We can see that in all cases our method improves the performance of our baseline. Moreover, we no- tice that the higher the image resolution, the more efficient the method is. This is all the more relevant today, as the quality of the images increases from year to year. # 6 Conclusion We have studied extensively the effect of using different train and test scale augmentations on the statistics of nat- ural images and of the network’s pooling activations. We have shown that, by adjusting the crop resolution and via 6 Table 1: Application to larger networks: Resulting top-1 accuracy Model Train Fine-tuning Test resolution used resolution Classifier Batch-norm Three last Cells 331 384 395 416 448 480 PNASNet-5-Large 331 - - - 82.7 83.0 83.2 83.0 83.0 828 PNASNet-5-Large 331 v v - 82.7 83.4 83.5 834 83.5 83.4 PNASNet-5-Large 331 v v v 82.7 83.3 834 83.5 83.6 83.7 Classifier Batch-norm Three last conv layer 224 288 320 ResNeXt-101 32x48d 224 v v - 85.4 86.1 86.4 Table 2: State of the art on ImageNet with ResNet-50 architectures and with all types of architecture (Single Crop evaluation) Models Extra Training Data Train Test #Parameters Top-1(%) Top-5 (%) ResNet-50 Pytorch - 224 224 25.6M 76.1 92.9 ResNet-50 mix up - 224 224 25.6M 717 94.4 ResNet-50 CutMi - 224 224 25.6M 78.4 94.1 ResNet-50-D - 224 224 25.6M 79.3 94.6 MultiGrain R50-AA-500 - 224 500 25.6M 79.4 94.8 ResNet-50 Billion-scale v 224 224 25.6M 81.2 96.0 Our ResNet-50 - 224 384 25.6M 79.1 94.6 Our ResNet-50 CutMix - 224 320 25.6M 79.8 94.9 Our ResNet-50 Billion-scale@ 160 v 160 = 224 25.6M 81.9 96.1 Our ResNet-50 Billion-scale@224 v 224 320 25.6M 82.5 96.6 PNASNet-5 (N = 4, F = 216) [24] - 331 33 86.1M 82.9 96.2 MultiGrain PNASNet @ 500px 331 500 86.1M 83.6 96.7 AmoebaNet-B (6,512) [17] - 480 480 577M 84.3 97.0 EfficientNet-B7 - 600 600 66M 84.4 97.1 Our PNASNet-5 - 331 480 86.1M 83.7 96.8 ResNeXt-101 32x8d [25] v 224 224 88M 82.2 96.4 ResNeXt-101 32x16d v 224 224 193M 84.2 97.2 ResNeXt-101 32x32d [25] v 224 224 466M 85.1 97.5 ResNeXt-101 32x48d v 224 224 829M 85.4 97.6 Our ResNeXt-101 32x48d v 224 320 829M 86.4 98.0 Table 3: Transfer learning task with our method and comparison with the state of the art. We only compare ImageNet-based transfer learning results with a single center crop for the evaluation (if available, otherwise we report the best published result) without any change in architecture compared to the one used on ImageNet. We report Top-1 Accuracy(%). 
Dataset Models Baseline With Our Method State-Of-The-Art Models iNaturalist 2017 [16] Stanford Cars [19] CUB-200-2011 [41] Oxford 102 Flowers [26] Oxford-IIIT Pets [28] NABirds [40] Birdsnap [4] SENet-154 SENet-154 SENet-154 InceptionResNet-V2 SENet-154 SENet-154 SENet-154 74.1 94.0 88.4 95.0 94.6 88.3 83.4 75.4 94.4 88.7 95.7 94.8 89.2 84.3 IncResNet-V2-SE [16] EfficientNet-B7 [37] MPN-COV [22] EfficientNet-B7 [37] AmoebaNet-B (6,512) [17] PC-DenseNet-161 [10] EfficientNet-B7 [37] 67.3 94.7 88.7 98.8 95.9 82.8 84.3 7 a simple and light-weight parameter adaptation, it is possi- ble to increase the accuracy of standard classifiers signifi- cantly, everything being equal otherwise. We have also shown that researchers waste resources when both training and test- ing strong networks at resolution 224 × 224; We introduce a method that can “fix” these networks post-facto and thus improve their performance. An open-source implementa- tion of our method is available at https://github.com/ facebookresearch/FixRes. # References [1] Pre-trained pytorch models. https://github. com/Cadene/pretrained-models.pytorch. Accessed: 2019-05-23. [2] Pytorch hub models. https://pytorch.org/ hub/facebookresearch_WSL-Images_ resnext/. Accessed: 2019-06-26. Squeeze-and- excitation networks. arXiv preprint arXiv:1709.01507, 2017. [4] T. Berg, J. Liu, S. W. Lee, M. L. Alexander, D. W. Ja- cobs, and P. N. Belhumeur. Birdsnap: Large-scale fine- grained visual categorization of birds. In Conference on Computer Vision and Pattern Recognition, 2014. [5] Maxim Berman, Herv´e J´egou, Andrea Vedaldi, Iasonas Kokkinos, and Matthijs Douze. Multigrain: a unified im- age embedding for classes and instances. arXiv preprint arXiv:1902.05509, 2019. [6] Y-Lan Boureau, Jean Ponce, and Yann LeCun. A theo- retical analysis of feature pooling in visual recognition. In International Conference on Machine Learning, 2010. [7] Tan Kiat Chuan, Liu Yulong, Ambrose Barbara, Tulig Melissa, and Belongie Serge. The herbarium challenge 2019 dataset. arXiv preprint arXiv:1906.05372, 2019. [8] Ekin Dogus Cubuk, Barret Zoph, Dandelion Man´e, Vi- jay Vasudevan, and Quoc V. Le. Autoaugment: Learn- arXiv preprint ing augmentation policies from data. arXiv:1805.09501, 2018. [9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical im- In Conference on Computer Vision and age database. Pattern Recognition, pages 248–255, 2009. [10] Abhimanyu Dubey, Otkrist Gupta, Pei Guo, Ramesh Raskar, Ryan Farrell, and Nikhil Naik. Training with confusion for fine-grained visual classification. arXiv preprint arXiv:1705.08016, 2017. [11] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural net- works. In Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016. [12] Albert Gordo, Jon Almazan, Jerome Revaud, and Diane Larlus. End-to-end learning of deep visual representa- tions for image retrieval. International journal of Com- puter Vision, 124(2):237–254, 2017. 8 [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Conference on Computer Vision and Pattern Recogni- tion, June 2016. [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. [15] Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. Bag of tricks for image clas- arXiv sification with convolutional neural networks. 
preprint arXiv:1812.01187, 2018. [16] Grant Van Horn, Oisin Mac Aodha, Yang Song, Alexan- der Shepard, Hartwig Adam, Pietro Perona, and Serge J. Belongie. The inaturalist challenge 2017 dataset. arXiv preprint arXiv:1707.06642, 2017. [17] Yanping Huang, Yonglong Cheng, Dehao Chen, Hy- oukJoong Lee, Jiquan Ngiam, Quoc V. Le, and Zhifeng training of giant neural Chen. Gpipe: Efficient arXiv preprint networks using pipeline parallelism. arXiv:1811.06965, 2018. [18] Simon Kornblith, Jonathon Shlens, and Quoc V. Le. Do better imagenet models transfer better? arXiv preprint arXiv:1805.08974, 2018. [19] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei- Fei. 3d object representations for fine-grained catego- rization. In 4th International IEEE Workshop on 3D Rep- resentation and Recognition (3dRR-13), 2013. [20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hin- Imagenet classification with deep convolutional In Advances in Neural Information ton. neural networks. Processing Systems, 2012. [21] Yann LeCun, Bernhard Boser, John S Denker, Don- nie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989. [22] Peihua Li, Jiangtao Xie, Qilong Wang, and Wangmeng Zuo. Is second-order information helpful for large-scale arXiv preprint arXiv:1703.08050, visual recognition? 2017. [23] Tsung-Yi Lin, Piotr Doll´ar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyra- In Proceedings of mid networks for object detection. the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117–2125, 2017. [24] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neu- ral architecture search. In International Conference on Computer Vision, September 2018. [25] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the In European limits of weakly supervised pretraining. Conference on Computer Vision, 2018. [26] M-E. Nilsback and A. Zisserman. Automated flower In Pro- classification over a large number of classes. ceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, 2008. [27] Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. Learning and transferring mid-level image repre- sentations using convolutional neural networks. In Con- ference on Computer Vision and Pattern Recognition, 2014. [28] O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. V. Jawa- har. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition, 2012. [29] Filip Radenovi´c, Giorgos Tolias, and Ondrej Chum. Fine-tuning cnn image retrieval with no human annota- IEEE Transactions on Pattern Analysis and Ma- tion. chine Intelligence, 2018. [30] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, 2015. [31] Oren Rippel and Lubomir Bourdev. Real-time adaptive image compression. In International Conference on Ma- chine Learning, 2017. [32] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexan- der C. Berg, and Li Fei-Fei. Imagenet large scale visual International journal of Com- recognition challenge. puter Vision, 2015. [33] K. Simonyan and A. 
Zisserman. Very deep convolutional networks for large-scale image recognition. In Interna- tional Conference on Learning Representations, 2015. [34] Pierre Stock and Moustapha Cisse. Convnets and ima- genet beyond accuracy: Understanding mistakes and un- covering biases. In European Conference on Computer Vision, 2018. [35] C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and In A. Rabinovich. Going deeper with convolutions. Conference on Computer Vision and Pattern Recogni- tion, 2015. [36] Christian Szegedy, Sergey Ioffe, and Vincent Van- Inception-v4, inception-resnet and the impact arXiv preprint houcke. of residual connections on learning. arXiv:1602.07261, 2016. [37] Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019. [38] Giorgos Tolias, Ronan Sicre, and Herv´e J´egou. Particu- lar object retrieval with integral max-pooling of cnn ac- tivations. arXiv preprint arXiv:1511.05879, 2015. [39] Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Herv´e J´egou. Fixing the train-test resolution discrep- ancy: Fixefficientnet, 2020. 9 [40] G. Van Horn, S. Branson, R. Farrell, S. Haber, J. Barry, P. Ipeirotis, P. Perona, and S. Belongie. Building a bird recognition app and large scale dataset with citizen sci- entists: The fine print in fine-grained dataset collection. 2015. [41] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Be- longie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, 2011. Image de- noising and inpainting with deep neural networks. In Ad- vances in Neural Information Processing Systems, pages 341–349, 2012. [43] Ismet Zeki Yalniz, Herv´e J´egou, Kan Chen, Manohar Paluri, and Dhruv Kumar Mahajan. Billion-scale semi- arXiv supervised learning for image classification. preprint arXiv:1905.00546, 2019. [44] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong clas- arXiv preprint sifiers with localizable features. arXiv:1905.04899, 2019. [45] Hongyi Zhang, Moustapha Ciss´e, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk mini- mization. arXiv preprint arXiv:1710.09412, 2017. # Supplementary material for “Fixing the train-test resolution discrepancy” In this supplementary material we report details and results that did not fit in the main paper. This includes the estimation of the parametric distribution of activations in Section A, a small study on border/round-off effects of the image size for a convolutional neural net in Section B and more exhaustive result tables in Section C. Section E further demonstrates the interest of our approach through our participation to two competitive challenges in fine-grained recognition. # A Fitting the activations # A.1 Parametric Fr´echet model after average-pooling In this section we derive a parametric model that fits the distribution of activations on output of the spatial pooling layer. The output the the last convolutional layer can be well approximated with a Gaussian distribution. Then the batch-norm centers the Gaussian and reduces its variance to unit, and the ReLU replaces the negative part with 0. Thus the ReLU outputs an equal mixture of a cropped unit Gaussian and a Dirac of value 0. The average pooling sums n = 2 x 2 ton = 14 x 14 of those distributions together. 
Assuming independence of the inputs, it can be seen as a sum of n’ cropped Gaussians, where n’ follows a discrete binomial distribution. Unfortunately, we found this composition of distributions is not tractable in close form. Instead, we observed experimentally that the output distribution is close to an extreme value distribution. This is due to the fact that only the positive part of the Gaussians contributes to the output values. In an extreme value distribution that is the sum of several (arbitrary independent) distributions, the same happens: only the highest parts of those distributions contribute. Thus, we model the statistics of activations as a Fr´echet (a.k.a. inverse Weibull) distribution. This is a 2-parameter distribu- tion whose CDF has the form: P (x, µ, σ) = e−(1+ ξ σ (x−µ))−1/ξ # With ξ a positive constant, µ ∈ R, σ ∈ R∗ +. We observed that the parameter ξ can be kept constant at 0.3 to fit the distributions. Figure 6 shows how the Fr´echet model fits the empirical CDF of the distribution. The parameters were estimated using least-squares minimization, excluding the zeros, that can be considered outliers. The fit is so exact that the difference between the curves is barely visible. To correct the discrepancy in distributions at training and test times, we compute the parameters µref , σref of the distribution observed on training images time for Ktest = Ktrain. Then we increase Ktest to the target resolution and measure the parameters µ0, σ0 again. Thus, the transformation is just an affine scaling, still ignoring zeros. When running the transformed neural net on the Imagenet evaluation, we obtain accuracies: K image test accuracy 64 29.4 128 65.4 224 77 256 78 288 78.4 448 76.5 Hence, the accuracy does not improve with respect to the baseline. This can be explained by several factors: the scalar distribution model, however good it fits to the observations, is insufficient to account for the individual distributions of the activation values; just fitting the distribution may not be enough to account for the changes in behavior of the convolutional trunk. Resolution: 64 Resolution: 128 Resolution: 224 Resolution: 448 Figure 6: Fitting of the CDF of activations with a Fr´echet distribution. 10 Table 4: Matching distribution before the last Relu application to ResNet-50: Resulting top-1 accuracy % on ImageNet validation set Model Train Adapted Fine-tuning Test resolution used resolution Distribution Classifier Batch-norm 64 224 288 352 384 ResNet-50 224 294 77.0 78.4 78.1 77.7 ResNet-50 224 v - 29.8 77.0 77.7 773 76.8 ResNet-50 224 - 40.6 77.1 78.6 78.9 78.9 ResNet-50 224 - 41.7 77.1 785 78.9 79.0 ResNet-50 224 v - 41.8 77.1 785 78.8 78.9 KAA <q 78.5 Top 1 accuracy ~o NON Pal Pa 9 fo} uw fo} ~ 2 uw 76.0 224 232 240 248 256 264 272 280 288 Test resolution (pixels) Figure 7: Evolution of the top-1 accuracy of the ResNet-50 trained with resolution 224 according to the testing resolution (no finetuning). This can be considered a zoom of figure 5 with 1-pixel increments. # A.2 Gaussian model before the last ReLU activation Following the same idea as what we did previously we looked at the distribution of activations by channel before the last ReLU according to the resolution. We have seen that the distributions are different from one resolution to another. With higher resolutions, the mean tends to be closer to 0 and the variance tends to become smaller. 
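The Fréchet fit of Section A.1 is simple to reproduce. As a minimal illustration (our own NumPy/SciPy sketch, not the authors' code), the following fits the two-parameter CDF above to the empirical CDF of pooled activations, with ξ fixed at 0.3 and zeros excluded as outliers; the "activations" at the end are synthetic rectified-Gaussian samples used only to make the snippet runnable.

```python
# Sketch: least-squares fit of the Frechet CDF
#   P(x; mu, sigma) = exp(-(1 + (xi/sigma) * (x - mu)) ** (-1/xi)),  xi = 0.3,
# to an empirical CDF, ignoring exact zeros as described in Section A.1.
import numpy as np
from scipy.optimize import least_squares

XI = 0.3  # shape parameter kept constant, as stated in the text

def frechet_cdf(x, mu, sigma):
    z = np.clip(1.0 + (XI / sigma) * (x - mu), 1e-12, None)  # keep the base positive
    return np.exp(-z ** (-1.0 / XI))

def fit_frechet(activations):
    """activations: 1-D array of post-pooling activations for one channel."""
    x = np.sort(activations[activations > 0])        # zeros treated as outliers
    emp_cdf = np.arange(1, len(x) + 1) / len(x)      # empirical CDF

    def residuals(params):
        mu, log_sigma = params
        return frechet_cdf(x, mu, np.exp(log_sigma)) - emp_cdf

    res = least_squares(residuals, x0=[x.mean(), np.log(x.std() + 1e-6)])
    return res.x[0], float(np.exp(res.x[1]))          # (mu, sigma)

# Synthetic example: rectified Gaussians averaged over a small pooling window.
rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=(1000, 4)), 0).mean(axis=1)
print(fit_frechet(acts))
```

The resolution correction of Section A.1 then amounts to an affine rescaling of the activations that maps the fitted (µ0, σ0) at test resolution onto the reference (µref, σref) measured at train resolution.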
By acting on the distributions before the ReLU, it is also possible to affect the sparsity of values after spatial-pooling, which was not possible with the previous analysis based on Frechet’s law. We aim at matching the distribution before the last ReLU with the distribution of training data at lower resolution. We compare the effect of this transformation before/after fine tuning with the learnt batch-norm approach. The results are summarized in Table 4. We can see that adapting the resolution by changing the distributions is effective especially in the case of small resolutions. Nevertheless, the adaptation obtained by fine-tuning the the batch norm improves performs better in general. # B Border and round-off effects Due to the complex discrete nature of convolutional layers, the accuracy is not a monotonous function of the input resolution. There is a strong dependency on the kernel sizes and strides used in the first convolutional layers. Some resolutions will not match with these parameters so we will have a part of the images margin that will not be taken into account by the convolutional layers. In Figure 7, we show the variation in accuracy when the resolution of the crop is increased by steps of 1 pixel. Of course, it is possible to do padding but it will never be equivalent to having a resolution image adapted to the kernel and stride size. Although the global trend is increasing, there is a lot of jitter that comes from those border effects. There is a large drop just after resolution 256. We observe the drops at each multiple of 32, they correspond to a changes in the top-level activation map’s resolution. Therefore we decided to use only sizes that are multiples of 32 in the experiments. # C Result tables Due to the lack of space, we report only the most important results in the main paper. In this section, we report the full result tables for several experiments. Table 5 report the numerical results corresponding to Figure 5 in the main text. Table 6 reports the full ablation study results (see Section 5.1). Table 7 reports the runtime measurements that Section 5.1 refers to. Table 8 reports a comparaison between test DA and test DA2 that Section 5 refers to. 11 test \ train 64 128 160 224 384 test \ train 64 128 224 384 64 128 224 288 384 448 480 63.2 68.2 55.3 42.4 23.8 13.0 9.7 48.3 73.3 75.7 73.8 69.6 65.8 63.9 40.1 71.2 77.3 76.6 73.8 71.5 70.2 29.4 65.4 77.0 78.4 77.7 76.6 75.9 12.6 48.0 70.5 75.2 78.2 78.8 78.7 64 128 224 288 384 448 480 63.5 71.3 66.9 62.4 55.0 49.7 46.6 53.7 73.4 77.1 76.6 74.8 73.0 72.2 41.7 67.7 77.1 78.6 79.0 78.4 78.1 27.5 55.7 71.9 75.7 78.2 78.8 79.0 Table 5: Top-1 validation accuracy for different combinations of training and testing resolution. Left: with the standard training procedure, (no finetuning, no adaptation of the ResNet-50). Right: with our data-driven adaptation strategy and test-time augmentations. Train Fine-tuning Test resolution (top-1 accuracy) resolution | Classifier Batch-norm Data aug. 64 128 224 288 384 448 = = 48.3 73.3 75.7 73.8 69.6 65.8 v - tran DA | 52.8 73.30 77.1 76.35 73.2 71.7 128 v - testDA | 53.39 73.4 77.1 764 744 72.3 v v tran DA | 53.0 73.3 77.1 76.5 744 71.9 v v testDA | 53.7 73.4 77.1 766 74.8 73.0 - - 294 65.4 77.0 784 77.7 76.6 v tran DA | 39.9 67.5 77.0 78.6 78.9 78.0 224 v - testDA | 40.6 67.3. 77.1 78.6 78.9 77.9 v v tran DA | 40.4 67.5 77.0 78.6 78.9 78.0 v v testDA | 41.7. 67.7) 77.1 78.6 79.0 78.4 Table 6: Ablation study: Accuracy when enabling or disabling some components of the training method. 
Train DA: training- time data augmentation during fine-tuning, test DA: test-time one. Resolution Train time per batch (ms) Resolution fine-tuning (ms) Performance train test backward forward backward forward Total time (h) accuracy 128 160 224 384 128 160 224 384 29.0 ±4.0 30.2 ±3.2 35.0 ±2.0 112.4 ±6.2 12.8 ±2.8 14.5 ±3.4 15.2 ±3.2 18.2 ±3.9 111.8 119.7 133.9 348.5 73.3 75.1 77.0 78.2 160 224 224 288 30.2 ±3.2 35.0 ±2.0 14.5 ±3.4 15.2 ±3.2 119.7 133.9 77.3 78.4 128 160 224 224 224 384 29.0 ±4.0 30.2 ±3.2 35.0 ±2.0 12.8 ±2.8 14.5 ±3.4 15.2 ±3.2 4.4 ±0.9 4.4 ±0.9 8.2 ±1.3 14.4 ±2.5 14.4 ±2.5 18.0 ±2.7 124.1 131.9 151.5 77.1 77.6 79.0 Table 7: Execution time for the training. Training and fine-tuning times are reported for a batch of size 32 for training and 64 for fine-tuning, on one GPU. Fine-tuning uses less memory than training therefore we can use larger batch size. The total time is the total time spent on both, with 120 epochs for training and 60 epochs of fine-tuning on ImageNet. Our approach corresponds to fine-tuning of the batch-norm and the classification layer. Models Train Test Top-1 test DA (%) ResNext-101 32x48d ResNext-101 32x48d 224 224 288 320 86.0 86.3 86.1 86.4 ResNet-50 224 320 79.0 79.1 ResNet-50 CutMix 224 384 79.7 79.8 Table 8: Comparisons of performance between data-augmentation test DA and test DA2 in the case of fine-tuning batch-norm and classifier. 12 test \ train 64 128 224 384 64 96 128 160 224 256 384 440 60.0 61.6 54.2 42.4 21.7 15.3 4.3 2.3 48.7 65.0 70.8 72.4 69.8 66.4 44.8 33.6 28.1 50.9 63.5 69.7 74.6 75.2 71.7 67.1 11.5 29.8 46.0 57.0 68.8 72.1 76.7 77.0 Table 9: Top-1 validation accuracy for different combinations of training and testing resolution. ResNet-50 train with resize and random crop with a fixed size instead of random resized crop. 80 eT oS a o wu o Train resolution 64 —— Train resolution 128 —— Train resolution 224 — Train resolution 384 Accuracy with train resolution 4 Best accuracy Top-1 accuracy + fo} w oS N fs} H o 64 96 128 160 224 256 384. 440 Test resolution (pixels) Figure 8: Top-1 accuracy of the ResNet-50 according to the test time resolution. ResNet-50 train with resize and random crop with a fixed size instead of random resized crop. # D Impact of Random Resized Crop In this section we measure the impact of the RandomResizedCrop illustrated in the section 5. To do this we did the same experiment as in section 5 but we replaced the RandomResizedCrop with a Resize followed by a random crop with a fixed size. The figure 8 and table 9 shows our results. We can see that the effect observed in the section 5 is mainly due to the Random Resized Crop as we suggested with our analysis of the section 3. # E Fine-Grained Visual Categorization contests: iNaturalist & Herbarium In this section we summarize the results we obtained with our method during the CVPR 2019 iNaturalist [16] and Herbarium [7] competitions3. We used the approach lined out in Subsection 5.3, except that we adapted the preprocessing to each dataset and added a few “tricks” useful in competitions (ensembling, multiple crops). # E.1 Challenges The iNaturalist Challenge 2019 dataset contains images of 1010 animal and plant species, with a training set of 268,243 images and a test set of 35,351 images. The main difficulty is that the species are very similar within the six main families (Birds, Reptiles, Plants, Insects, Fungi and Amphibians) contained in the dataset. 
There is also a very high variability within the classes as the appearance of males, females and juveniles is often very different. What also complicates the classification is the size of the area of interest which is very variable from one image to another, sometimes the images are close-ups on the subject, sometimes we can hardly distinguish it. As a preprocessing, all images have been resized to have a maximum dimension of 800 pixels. The Herbarium contest requires to identify melastome species from 683 herbarium specimenina. The training set contain 34,225 images and the test set contain 9,565 images. The main difficulty is that the specimina are very similar and not always intact. In this challenge the particularity is that there is no variability in the background: each specimen is photographed on a white sheet of paper. All images have been also resized to have a maximum dimension of 800 pixels. # 3https://www.kaggle.com/c/herbarium-2019-fgvc6 # https://www.kaggle.com/c/inaturalist-2019-fgvc6 13 [Naturalist Train Fine-tuning Test Model used resolution Layer 4 Classifier Batch-norm | resolution SE-ResNext-101-32x4d 448 - v v 704 SENet-154 448 v v v 672 Inception-ResNet-V2 491 - v v 681 ResNet-152-MPN-COV (22] 448 - - - 448 final score: 86.577 % Rank: 5/214 Herbarium Train Fine-tuning Test Model used resolution Layer 4 Classifier Batch-norm | resolution SENet-154 448 - v v 707 ResNet-50 384 - v v 640 final score: 88.845 % Rank: 4/22 Table 10: Our best ensemble results for the Herbarium and INaturalist competitions. # E.2 Ensemble of classifiers In both cases we used 4 different CNNs to do the classification and we averaged their results, which are themselves from 10 crops of the image. We chose 4 quite different in their architectures in order to obtain orthogonal classification results. We tried to include the ResNet-50, but it was significantly worse than the other models, even when using an ensemble of models, probably due to its limited capacity. We used two fine-tuning stages: (1) to adapt to the new dataset in 120 epochs and (2) to adapt to a higher resolution in a few epochs. We chose the initial training resolution with grid-search, within the computational constraints. We did not skew the sampling to balance the classes. The rationale for this is that the performance measure is top-1 accuracy, so the penalty to misclassify infrequent classes is low. # E.3 Results Table 10 summarizes the parameters of our submission and the results. We report our top-performing approach, 3 and 1 points behind the winners of the competition. Note that we just used our method off-the-shelf and therefore used much fewer evaluations on the public part of the test set (5 for iNaturalist and 8 for Herbarium). The number of CNNs that at are combined in our ensemble is also smaller that two best performing ones. In addition, for iNaturalist we did not train on data from the 2018 version of the contest. In summary, our participation was a run with minimal if no tweaking, where we obtain excellent results (5th out of more than 200 on iNaturalist), thanks to the test-time resolution adaptation exposed in this article. 14
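To make the two fine-tuning stages mentioned in Section E.2 concrete, here is a minimal PyTorch sketch (our illustration, not the released FixRes code). The loaders, resolutions, and hyper-parameters are placeholders: stage 1 adapts the whole network to the new dataset at the base resolution, and stage 2 fine-tunes only the batch-norm layers and the classifier at the higher test resolution.

```python
# Illustrative two-stage fine-tuning sketch (placeholder data and hyper-parameters).
import torch
import torch.nn as nn
from torchvision.models import resnet50

num_classes = 1010                                     # e.g. iNaturalist 2019

def make_dummy_loader(resolution, n=8):
    # Placeholder loader -- substitute a real dataset with appropriate transforms.
    x = torch.randn(n, 3, resolution, resolution)
    y = torch.randint(0, num_classes, (n,))
    return torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x, y), batch_size=4)

model = resnet50(pretrained=True)                       # ImageNet initialization
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classifier head

def finetune(model, loader, modules_to_train, epochs, lr):
    for p in model.parameters():
        p.requires_grad = False
    params = []
    for m in modules_to_train:
        for p in m.parameters():
            p.requires_grad = True
            params.append(p)
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()

# Stage 1: train the whole network on the new dataset at the base resolution.
finetune(model, make_dummy_loader(224), modules_to_train=[model], epochs=2, lr=0.01)

# Stage 2: adapt to a higher test resolution by updating only the batch-norm layers
# and the final classifier (batch-norm statistics refresh by running in train mode).
bn_layers = [m for m in model.modules() if isinstance(m, nn.BatchNorm2d)]
finetune(model, make_dummy_loader(320), modules_to_train=bn_layers + [model.fc],
         epochs=1, lr=0.001)
```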
{ "id": "1602.07261" }
1906.06247
Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
Mode connectivity is a surprising phenomenon in the loss landscape of deep nets. Optima -- at least those discovered by gradient-based optimization -- turn out to be connected by simple paths on which the loss function is almost constant. Often, these paths can be chosen to be piece-wise linear, with as few as two segments. We give mathematical explanations for this phenomenon, assuming generic properties (such as dropout stability and noise stability) of well-trained deep nets, which have previously been identified as part of understanding the generalization properties of deep nets. Our explanation holds for realistic multilayer nets, and experiments are presented to verify the theory.
http://arxiv.org/pdf/1906.06247
Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge
cs.LG, stat.ML
null
null
cs.LG
20190614
20200106
0 2 0 2 n a J 6 ] G L . s c [ 2 v 7 4 2 6 0 . 6 0 9 1 : v i X r a # Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets Rohith Kuditipudi Duke University [email protected] Xiang Wang Duke University [email protected] Holden Lee Princeton University [email protected] Yi Zhang Princeton University [email protected] Zhiyuan Li Princeton University [email protected] Wei Hu Princeton University [email protected] Sanjeev Arora Princeton University and Institute for Advanced Study [email protected] # Rong Ge Duke University [email protected] # Abstract Mode connectivity (Garipov et al., 2018; Draxler et al., 2018) is a surprising phenomenon in the loss landscape of deep nets. Optima—at least those discovered by gradient-based optimization—turn out to be connected by simple paths on which the loss function is almost constant. Often, these paths can be chosen to be piece-wise linear, with as few as two segments. We give mathematical explanations for this phenomenon, assuming generic properties (such as dropout stability and noise stability) of well-trained deep nets, which have previously been identified as part of understanding the generalization properties of deep nets. Our explanation holds for realistic multilayer nets, and experiments are presented to verify the theory. 1 # Introduction Efforts to understand how and why deep learning works have led to a focus on the optimization landscape of the training loss. Since optimization to near-zero training loss occurs for many choices of random initialization, it is clear that the landscape contains many global optima (or near-optima). However, the loss can become quite high when interpolating between found optima, suggesting that these optima occur at the bottom of “valleys” surrounded on all sides by high walls. Therefore the phenomenon of mode connectivity (Garipov et al., 2018; Draxler et al., 2018) came as a surprise: optima (at least the ones discovered by gradient-based optimization) are connected by simple paths in the parameter space, on which the loss function is almost constant. In other words, the optima are not walled off in separate valleys as hitherto believed. More surprisingly, the paths connecting discovered optima can be piece-wise linear with as few as two segments. Mode connectivity begs for theoretical explanation. One paper (Freeman and Bruna, 2016) attempted such an explanation for 2-layer nets, even before the discovery of the phenomenon in multilayer nets. However, they require the width of the net to be exponential in some relevant parameters. Others (Venturi et al., 2018; Liang et al., 2018; Nguyen et al., 2018; Nguyen, 2019) require special structure in their networks where the number of neurons needs to be greater than the number of training data points. Thus it remains an open 1 problem to explain mode connectivity even in the 2-layer case with realistic parameter settings, let alone for standard multilayer architectures. At first sight, finding a mathematical explanation of the mode connectivity phenomenon for multilayer nets—e.g., for a 50-layer ResNet on ImageNet—appears very challenging. However, the glimmer of hope is that since the phenomenon exists for a variety of architectures and datasets, it must arise from some generic property of trained nets. The fact that the connecting paths between optima can have as few as two linear segments further bolsters this hope. 
Strictly speaking, empirical findings such as in (Garipov et al., 2018; Draxler et al., 2018) do not show connectivity between all optima, but only for typical optima discovered by gradient-based optimization. It seems an open question whether connectivity holds for all optima in overparametrized nets. Section 5 answers this question, via a simple example of an overparametrized two-layer net, not all of whose optima are connected via low-cost paths. Thus to explain mode connectivity one must seek generic properties that hold for optima obtained via gradient-based optimization on realistic data. A body of work that could be a potential source of such generic properties is the ongoing effort to understand the generalization puzzle of over-parametrized nets—specifically, to understand the “true model capacity”. For example, Morcos et al. (2018) note that networks that generalize are insensitive to linear restrictions in the parameter space. Arora et al. (2018) define a noise stability property of deep nets, whereby adding Gaussian noise to the output of a layer is found to have minimal effect on the vector computed at subsequent layers. Such properties seem to arise in a variety of architectures purely from gradient-based optimization, without any explicit noise-injection during training—though of course using small-batch gradient estimates is an implicit source of noise-injection. (Sometimes training also explicitly injects noise, e.g. dropout or batch-normalization, but that is not needed for noise stability to emerge.) Since resilience to perturbations arises in a variety of architectures, such resilience counts as a “generic” property for which it is natural to prove mode connectivity as a consequence. We carry this out in the current paper. Note that our goal here is not to explain every known detail of mode connectivity, but rather to give a plausible first-cut explanation. First, in Section 3 we explain mode connectivity by assuming the network is trained via dropout. In fact, the desired property is weaker: so long as there exists even a single dropout pattern that keeps the training loss close to optimal on the two solutions, our proof constructs a piece-wise linear path between them. The number of linear segments grows linearly with the depth of the net. Then, in Section 4 we make a stronger assumption of noise stability along the lines of Arora et al. (2018) and show that it implies mode connectivity using paths with 10 linear segments. While this assumption is strong, it appears to be close to what is satisfied in practice. (Of course, one could explicitly train deep nets to satisfy the needed noise stability assumption, and the theory applies directly to them.) # 1.1 Related work The landscape of the loss function for training neural networks has received a lot of attention. Dauphin et al. (2014); Choromanska et al. (2015) conjectured that local minima of multi-layer neural networks have similar loss function values, and proved the result in idealized settings. For linear networks, it is known (Kawaguchi, 2016) that all local minima are also globally optimal. Several theoretical works have explored whether a neural network has spurious valleys (non-global minima that are surrounded by other points with higher loss). Freeman and Bruna (2016) showed that for a two-layer net, if it is sufficiently overparametrized then all the local minimizers are (approximately) connected. 
However, in order to guarantee a small loss along the path they need the number of neurons to be exponential in the number of input dimensions. Venturi et al. (2018) proved that if the number of neurons is larger than either the number of training samples or the intrinsic dimension (infinite for standard architectures), then the neural network cannot have spurious valleys. Liang et al. (2018) proved similar results for the binary classification setting. Nguyen et al. (2018); Nguyen (2019) relaxed the requirement on overparametrization, but still require the output layer to have more direct connections than the number of training samples. Some other papers have studied the existence of spurious local minima. Yun et al. (2018) showed that in most cases neural networks have spurious local minima. Note that a local minimum need only have loss no 2 larger than the points in its neighborhood, so a local minimum is not necessarily a spurious valley. Safran and Shamir (2018) found spurious local minima for simple two-layer neural networks under a Gaussian input distribution. These spurious local minima are indeed spurious valleys as they have positive definite Hessian. # 2 Preliminaries Notations For a vector v, we use ||u|| to denote its 2 norm. For a matrix A, we use ||A|| to denote its operator norm, and ||A||~ to denote its Frobenius norm. We use [n] to denote the set {1,2,...,n}. We use I, to denote the identity matrix in R"*". We use O(-),®(-) to hide constants and use O(-), Q(-) to hide oly-logarithmic factors. Neural network In most of the paper, we consider fully connected neural networks with ReLU activations. Note however that our results can also be extended to convolutional neural networks (in particular, see Remark 1 and the experiments in Section 6). Suppose the network has d layers. Let the vector before activation at layer i be xi, i ∈ [d], where xd is just the output. For convenience, we also denote the input x as x0. Let Ai be the weight matrix at i-th layer, so that we have xi = Aiφ(xi−1) for 2 ≤ i ≤ d and x1 = A1x0. For any layer i, 1 ≤ i ≤ d, let the width of the layer be hi. We use [Ai]j to denote the j-th column of Ai. Let the maximum width of the hidden layers be hmax := max{h1, h2, . . . , hd−1} and the minimum width of the hidden layers be hmin := min{h1, h2, . . . , hd−1}. We use Θ to denote the set of parameters of neural network, and in our specific model, Θ = Rh1×h0 × Rh2×h1 × · · · × Rhd×hd−1 which consists of all the weight matrices {Ai}’s. Throughout the paper, we use fθ, θ ∈ Θ to denote the function that is computed by the neural network. For a data set (x, y) ∼ D, the loss is defined as LD(fθ) := E(x,y)∼D[l(y, fθ(x))] where l is a loss function. The loss function l(y, ˆy) is convex in the second parameter. We omit the distribution D when it is clear from the context. Mode connectivity and spurious valleys Fixing a neural network architecture, a data set D and a loss function, we say two sets of parameters/solutions 04 and 6 are e-connected if there is a path a(t): R> © hat is continuous with respect to ¢ and satisfies: 1. 7(0) = 04; 2. 7(1) = 08 and 3. for any ¢ € [0,1], L(faiy) < max{L(foa), L(foz)} + ¢. If € = 0, we omit € and just say they are connected. If all local minimizers are connected, then we say that the loss function has the mode connectivity property. However, as we later show in Section 5, this property is very strong and is not true even for overparametrized two-layer nets. 
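As a concrete illustration of this definition, the following PyTorch sketch (ours, not part of the paper) evaluates the loss of fπ(t) along a piecewise-linear path whose vertices are given as model state dicts; the endpoints are ε-connected for any ε at least as large as the maximum loss along the path minus the maximum endpoint loss.

```python
# Sketch: estimate the loss along a piecewise-linear path pi(t) whose vertices are
# state_dicts, e.g. [theta_A, theta_mid, theta_B]. Non-floating buffers (such as
# batch-norm counters) are copied from the first vertex rather than interpolated.
import torch

def blend(sa, sb, t):
    return {k: ((1 - t) * v + t * sb[k]) if torch.is_floating_point(v) else v
            for k, v in sa.items()}

@torch.no_grad()
def loss_along_path(model, vertices, loader, loss_fn, steps_per_segment=10):
    model.eval()
    losses = []
    for sa, sb in zip(vertices[:-1], vertices[1:]):
        for i in range(steps_per_segment + 1):
            t = i / steps_per_segment
            model.load_state_dict(blend(sa, sb, t))
            total, n = 0.0, 0
            for x, y in loader:
                total += loss_fn(model(x), y).item() * y.shape[0]
                n += y.shape[0]
            losses.append(total / n)
    # The path certifies epsilon-connectivity iff
    # max(losses) <= max(losses[0], losses[-1]) + epsilon.
    return losses

# Example: a tiny two-layer ReLU net, interpolating between two random initializations.
net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
theta_a = {k: v.clone() for k, v in net.state_dict().items()}
theta_b = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 3)).state_dict()
data = [(torch.randn(16, 10), torch.randint(0, 3, (16,)))]
print(loss_along_path(net, [theta_a, theta_b], data, torch.nn.CrossEntropyLoss()))
```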
Therefore we restrict our attention to classes of low-cost solutions that can be found by the gradient-based algorithms (in particular in Section 3 we focus on solutions that are dropout stable, and in Section 4 we focus on solutions that are noise stable). We say the loss function has «mode connectivity property with respect to a class of low-cost solutions C, if any two minimizers in C are €-connected. Mode connectivity is closely related to the notion of spurious valleys and connected sublevel sets (Venturi et al., 2018). If a loss function has all its sublevel sets ({θ : L(fθ) ≤ λ}) connected, then it has the mode connectivity property. When the network only has the mode connectivity property with respect to a class of solutions C, as long as the class C contains a global minimizer, we know there are no spurious valleys in C. However, we emphasize that neither mode connectivity or lack of spurious valleys implies any local search algorithm can efficiently find the global minimizer. These notions only suggest that it is unlikely for local search algorithms to get completely stuck. # 3 Connectivity of dropout-stable optima In this section we show that dropout stable solutions are connected. More concretely, we define a solution @ to be e-dropout stable if we can remove a subset of half its neurons in each layer such that the loss remains 3 steady. Definition 1. (Dropout Stability) A solution @ is e-dropout stable if for alli such that 1 <i < d, there exists a subset of at most |h;/2| hidden units in each of the layers j from i through d—1 such that after rescaling the outputs of these hidden units (or equivalently, the corresponding rows and/or columns of the relevant weight matrices) by some factor r} and setting the outputs of the remaining units to zero, we obtain a parameter 6; such that L(fo,) < L(fo) +€- Intuitively, if a solution is e-dropout stable then it is essentially only using half of the network’s capacity. We show that such solutions are connected: Theorem 1. Let 04 and 6" be two €-dropout stable solutions. Then there exists a path in parameter space x: [0,1] + © between 04 and 0” such that L( fry) < max{L(foa), L(fox)} +€ for 0 <t <1. In other words, letting C be the set of solutions that are e-dropout stable, a ReLU network has the e-mode connectivity property with respect to C. Our path construction in Theorem 1 consists of two key steps. First we show that we can rescale at least half the hidden units in both θA and θB to zero via continuous paths of low loss, thus obtaining two parameters θA Lemma 1. Let 0 be an e-dropout stable solution and let 0; be specified as in Definition 1 for 1 <i <d. Then there exists a path in parameter space 7 : [0,1] + © between 6 and 6, passing through each 6; such that L( fray) < L(fo) + € forO<t <1. Though naively one might expect to be able to directly connect the weights of @ and 6; via interpolation, such a path may incur high loss as the loss function is not convex over ©. In our proof of Lemma 1, we rely on a much more careful construction. The construction uses two types of steps: (a) interpolate between two weights in the top layer (the loss is convex in the top layer weights); (b) if a set of neurons already have their output weights set to zero, then we can change their input weights arbitrarily. See Figure 1 for an example path for a 3-layer network. Here we have separated the weight matrices into equally sized blocks: A3g= [ 3 R3 ]. Ag a | e and A, = a | The path consists of 6 steps alternating between type (a) and type (b). 
Note that for all the type (a) steps, we only update the top layer weights; for all the type (b) steps, we only change rows of a weight matrix (inputs to neurons) if the corresponding columns in the previous matrix (outputs of neurons) are already 0. In Section A we show how such a path can be generalized to any number of layers. We then show that we can permute the hidden units of θA 1 such that its non-zero units do not intersect 1 , thus allowing us two interpolate between these two parameters. This is formalized in the with those of θB following lemma and the proof is deferred to supplementary material. Lemma 2. Let 6 and 6’ be two solutions such that at least [h;/2] of the units in the i” hidden layer have been set to zero in both. Then there exists a path in parameter space 7 : [0,1] + © between 6 and 6’ with 8 line segments such that L( fr(e)) < max{L( fo), L(fo)}- Theorem 1 follows immediately from Lemma 1 and Lemma 2, as one can first connect θA to its dropout 1 of θB using Lemma 2, and finally connect version θA 1 to θB using Lemma 1 again. θB 1 using Lemma 1, then connect θA 1 to dropout version θB Finally, our results can be generalized to convolutional networks if we do channel-wise dropout (Tompson et al., 2015; Keshari et al., 2018). Remark 1. For convolutional networks, a channel-wise dropout will randomly set entire channels to 0 and rescale the remaining channels using an appropriate factor. Theorem 1 can be extended to work with channel-wise dropout on convolutional networks. 1Note our results will also work if r is allowed to vary for each layer. 4 Az Ag Ay a) (taf me] [te] i @ [rolol [ete Ee} @ ® (nslo] Fete] Le] © a) [ort] HS] EE] @ ©) [ora] [et] Ee] o @ (rojo) Fe) HE} @ a tnajo) eft} FA] Figure 1: Example path, 6 line segments from a 3-layer network to its dropout version. Red denotes weights that have changed between steps while green denotes the zeroed weights that allow us to make these changes without affecting our output. # 4 Connectivity via noise stability In this section, we relate mode connectivity to another notion of robustness for neural networks—noise stability. It has been observed (Morcos et al., 2018) that neural networks often perform as well even if a small amount of noise is injected into the hidden layers. This was formalized in (Arora et al., 2018), where the authors showed that noise stable networks tend to generalize well. In this section we use a very similar notion of noise stability, and show that all noise stable solutions can be connected as long as the network is sufficiently overparametrized. We begin in Section 4.1 by restating the definitions of noise stability in (Arora et al., 2018) and also highlighting the key differences in our definitions. In Section 6 we verify these assumptions in practice. In Section 4.2, we first prove that noise stability implies dropout stability (meaning Theorem 1 applies) and then show that it is in fact possible to connect noise stable neural networks via even simpler paths than mere dropout stable networks. # 4.1 Noise stability First we introduce some additional notations and assumptions. In this section, we consider a finite and fixed training set S. For a network parameter 0, the empirical loss function is L(0) = ist Vieyres UY f(z). Here the loss function I(y, 9) is assumed to be -Lipschitz in g: for any 9, 7% € R’ and any y € R", we have |I(y, 9) — Uy, 9')| < 8||g — 9’||. Note that the standard cross entropy loss over the softmax function is \/2-Lipschitz. 
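Before turning to the noise-stability quantities, it may help to see Definition 1 operationally. The sketch below (ours, not the authors' code) builds a pruned copy of a fully connected ReLU net given as a list of weight matrices: a random half of the units in every hidden layer is kept and rescaled by r (here r = 2, matching the 1/(1 − p) rescaling with p = 1/2), and the rest are zeroed out. Definition 1 only asks for the existence of some such subset, so a random choice that keeps the loss within ε certifies dropout stability.

```python
# Sketch: the pruning step of Definition 1 for a fully connected ReLU net
# represented as a list of weight matrices [A_1, ..., A_d].
import torch

def forward(weights, x):
    for w in weights[:-1]:
        x = torch.relu(x @ w.t())
    return x @ weights[-1].t()

def drop_half(weights, r=2.0):
    pruned = [w.clone() for w in weights]
    for i in range(len(weights) - 1):                  # hidden layers 1 .. d-1
        h = weights[i].shape[0]
        mask = torch.zeros(h)
        mask[torch.randperm(h)[: h // 2]] = 1.0
        pruned[i] = r * mask[:, None] * pruned[i]      # rescale kept units, zero the rest
        pruned[i + 1] = pruned[i + 1] * mask[None, :]  # zero the matching input columns
    return pruned

# Tiny example: compare the loss of theta and its pruned version on random data.
torch.manual_seed(0)
theta = [torch.randn(64, 10) / 10 ** 0.5, torch.randn(64, 64) / 8, torch.randn(3, 64) / 8]
x, y = torch.randn(128, 10), torch.randint(0, 3, (128,))
loss = torch.nn.CrossEntropyLoss()
print(loss(forward(theta, x), y).item(), loss(forward(drop_half(theta), x), y).item())
```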
For any two layers i ≤ j, let M i,j be the operator for the composition of these layers, such that xi be the Jacobian of M i,j at input xi. Since the activation functions are ReLU’s, we xj = M i,j(xi). Let J i,j know M i,j(xi) = J i,j xi xi. Arora et al. (2018) used several quantities to define noise stability. We state the definitions of these quantities below. 5 Definition 2 (Noise Stability Quantities). Given a sample set S, the layer cushion of layer i is defined as µi := minx∈S # eArdcall We? Well” For any two layers i ≤ j, the interlayer cushion µi,j is defined as µi,j = minx∈S Furthermore, for any layer i the minimal interlayer cushion is defined as2 µi→ = mini≤j≤d µi,j. The activation contraction c is defined as c = maxx∈S, 1≤i≤d−1 Intuitively, these quantities measures the stability of the network’s output to noise for both a single layer and across multiple layers. Note that the definition of the interlayer cushion is slightly different from the original definition in (Arora et al., 2018). Specifically, in the denominator of our definition of interlayer cushion, we replace the Frobenius norm of J i,j xi by its spectral norm. In the original definition, the interlayer cushion is at most 1/ hi. With this new definition, the interlayer cushion need not depend on the layer width hi. The final quantity of interest is interlayer smoothness, which measures how close the network’s be- havior is to its linear approximation under noise. Our focus here is on the noise generated by the dropout procedure (Algorithm 1). Let θ = {A1, A2, ..., Ad} be weights of the original network, and let θi = {A1, ˆA2, . . . , ˆAi, Ai+1, . . . , Ad} be the result of applying Algorithm 1 to weight matrices from layer 2 to layer i.3 For any input x, let ˆxi i−1(t) be the vector before activation at layer i using parameters θt + θi(1 − t) and θt + θi−1(1 − t) respectively. Definition 3 (Interlayer Smoothness). Given the scenario above, define interlayer smoothness ρ to be the largest number such that with probability at least 1/2 over the randomness in Algorithm 1 for any two layers i, j satisfying for every 2 ≤ i ≤ j ≤ d, x ∈ S, and 0 ≤ t ≤ 1 i (@4(0) — JS (@4(0) 8 IM @_1() — J? @ a) . If the network is smooth (has Lipschitz gradient), then interlayer smoothness holds as long as ||#/(¢) — x'||, ||@¢_,(t) — 2° || is small. Essentially the assumption here is that the network behaves smoothly in the random directions generated by randomly dropping out columns of the matrices. Similar to (Arora et al., 2018), we have defined multiple quantities measuring the noise stability of a network. These quantities are in practice small constants as we verify experimentally in Section 6. Finally, we combine all these quantities to define a single overall measure of the noise stability of a network. Definition 4 (Noise Stability). For a network @ with layer cushion j;, minimal interlayer cushion pi, activation contraction c and interlayer smoothness p, if the minimum width layer hmin is at least Q(1) wide, p > 3d and ||6(#i(t)) ||. = O(1/Vhi)||O(44 (0) || for 1 <i < d—-1,0<t <1, we say the network 6 is e-noise stable for Bed?/? maxes (|| fo(x)|l) ij2_. Renin M2 <i<a (Hifi) € The smaller €, the more robust the network. Note that the quantity € is small as long as the hidden layer width hmin is large compared to the noise stable parameters. Intuitively, we can think of € as a single parameter that captures the noise stability of the network. 
# 4.2 Noise stability implies dropout stability We now show that noise stable local minimizers must also be dropout stable, from which it follows that noise stable local minimizers are connected. We first define the dropout procedure we will be using in Algorithm 1. 2Note that J i,i 3Note that A1 is excluded because dropping out columns in ˆA2 already drops out the neurons in layer 1; dropping out xi = Ihi and µi,i = 1. 6 Algorithm 1 Dropout (Ai, p) Input: Layer matrix Ai ∈ Rhi×hi−1 , dropout probability 0 < p < 1. Output: Returns ˆAi ∈ Rhi×hi−1 . 1: For each j ∈ [hi−1], let δj be an i.i.d. Bernoulli random variable which takes the value 0 with probability p and takes the value 1 1−p with probability (1 − p). 2: For each j ∈ [hi−1], let [ ˆAi]j be δj[Ai]j, where [ ˆAi]j and [Ai]j are the j-th column of ˆAi and Ai respectively. The main theorem that we prove in this section is: Theorem 2. Let 64 and 6? be two fully connected networks that are both €-noise stable, there exists a path with 10 line segments in parameter space 7 : [0,1] > © between 04 and 6" such that! L(fx(t)) < max{L(fga), L(foe)} + Ole) for0O<t<1. To prove the theorem, we will first show that the networks 64 and 67 are O(€)-dropout stable. This is captured in the following main lemma: Lemma 3. Let @ be an €-noise stable network, and let 0, be the network with weight matrices from layer 2 to layer d dropped out by Algorithm 1 with dropout probability Q(1/hmin) <p < 3. For any2<i<d, assume ||[Aj];|| = O(Yp)||Aille for 1 <j < hi-1. For any 0 <t <1, define the network on the segment from 6 to 0; as :=0+t(0, — 8). Then, with probability at least 1/4 over the weights generated by Algorithm 1, L(fo,) < L(fo) + O(\/Be), for anyO<t<1. The main difference between Lemma 3 and Lemma 1 is that we can now directly interpolate between the original network and its dropout version, which reduces the number of segments required. This is mainly because in the noise stable setting, we can prove that after dropping out the neurons, not only does the output remains stable but moreover every intermediate layer also remains stable. From Lemma 3, the proof of Theorem 2 is very similar to the proof of Theorem 1. The detailed proof is given in Section B. The additional power of Lemma 3 also allows us to consider a smaller dropout probability. The theorem below allows us to trade the dropout fraction with the energy barrier « that we can prove—if the network is highly overparametrized, one can choose a small dropout probability p which allow the energy barrier € to be smaller. Theorem 3. Suppose there exists a network 0* with layer width h; for each layer i that achieves loss L(fo-), and minimum hidden layer width h*, = (1). Let 04 and 6* be two €-noise stable networks. For any dropout probability 1.5 maxi<j<a—1(hi/hi) < p < 3/4, if for any2<i<d,1 <j < hi-a, |I[Ail;|] = OC/P)|Aille then there exists a path with 13 line segments in parameter space m : [0,1] + © between 64 and 6" such that L(fa(t)) S max{L( fos) + O( Be), L( fon) + O( Be), L(for)} forO<t< 1. Intuitively, we prove this theorem by connecting θA and θB via the neural network θ∗ with narrow hidden layers. The detailed proof is given in Section B. # 5 Disconnected modes in two-layer nets The mode connectivity property is not true for every neural network. Freeman and Bruna (2016) gave a counter-example showing that if the network is not overparametrized, then there can be different global minima of the neural network that are not connected. Venturi et al. 
(2018) showed that spurious valleys can exist for 2-layer ReLU nets with an arbitrary number of hidden units, but again they do not extend their result columns in A1 would drop out input coordinates, which is not necessary. 4Here O(-) hides log factors on relevant factors including |S|, d, ||x||,1/e and h;||Aj|| for layers i € [d]. 7 to the overparametrized setting. In this section, we show that even if a neural network is overparametrized—in the sense that there exists a network of smaller width that can achieve optimal loss—there can still be two global minimizers that are not connected. In particular, suppose we are training a two-layer ReLU student network with h hidden units to fit a dataset generated by a ground truth two-layer ReLU teacher network with ht hidden units such that the samples in the dataset are drawn from some input distribution and the labels computed via forward passes through the teacher network. The following theorem demonstrates that regardless of the degree to which the student network is overparametrized, we can always construct such a dataset for which global minima are not connected. Theorem 4. For any width h and and convex loss function|: Rx RR such that I(y, §) is minimized when y =9, there exists a dataset generated by ground-truth teacher network with two hidden units (i.e. hy = 2) and one output unit such that global minimizers are not connected for a student network with h hidden units. Our proof is based on an explicit construction. The detailed construction is given in Section C. # 6 Experiments We now demonstrate that our assumptions and theoretical findings accurately characterize mode connectivity in practical settings. In particular, we empirically validate our claims using standard convolutional architectures— for which we treat individual filters as the hidden units and apply channel-wise dropout (see Remark 1)—trained on datasets such as CIFAR-10 and MNIST. Training with dropout is not necessary for a network to be either dropout-stable or noise-stable. Recall that our definition of dropout-stability merely requires the existence of a particular sub-network with half the width of the original that achieves low loss. Moreover, as Theorem 3 suggests, if there exists a narrow network that achieves low loss, then we need only be able to drop out a number of filters equal to the width of the narrow network to connect local minima. ——————— s © 08 3 g 06 — loss 2 04 —— accuracy 8 02 g 00 05 06 07 08 09 1 - dropout probability (1- p) 1.0 s © 08 3 g 06 —— loss 2 04 —— accuracy 3 go 0 ——— errr 00 02 04 O06 O08 1.0 path parameter (t) 1.0 s © 08 3 g 06 — loss 2 04 —— accuracy 8 02 g 00 2 4 6 8 10 Hidden layer width (# of filters) ——————— 1.0 1.0 s s s © 08 © 08 © 08 3 3 3 g 06 — loss g 06 —— loss g 06 — loss 2 04 —— accuracy 2 04 —— accuracy 2 04 —— accuracy 8 02 3 8 02 g go g 00 0 ——— errr 00 05 06 07 08 09 00 02 04 O06 O08 1.0 2 4 6 8 10 1 - dropout probability (1- p) path parameter (t) Hidden layer width (# of filters) Figure 2: Results for convolutional networks trained on MNIST. First, we demonstrate in the left plot in Figure 2 on MNIST that 3-layer convolutional nets (not counting the output layer) with 32 3 x 3 filters in each layer tend to be fairly dropout stable—both in the original sense of Definition 1 and especially if we relax the definition to allow for wider subnetworks—despite the fact that no dropout was applied in training. 
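The channel-wise dropout variants referred to here can be generated as in the following sketch (our illustration, in the spirit of Remark 1 and Algorithm 1): entire output filters are zeroed, the survivors are rescaled by 1/(1 − p), and the matching input channels of the next layer are zeroed as well. The 3-layer, 32-filter architecture mirrors the description above but is otherwise only an example.

```python
# Sketch: sample a channel-wise dropout variant of a small conv net by zeroing
# whole output filters and rescaling the surviving ones by 1/(1 - p).
import torch
import torch.nn as nn

def channel_dropout_(conv, next_conv, p):
    """In place: keep floor((1-p)*C) random output channels of `conv`, rescaled by 1/(1-p)."""
    h = conv.out_channels
    keep = torch.randperm(h)[: int(h * (1 - p))]
    mask = torch.zeros(h)
    mask[keep] = 1.0 / (1.0 - p)
    with torch.no_grad():
        conv.weight.mul_(mask[:, None, None, None])
        if conv.bias is not None:
            conv.bias.mul_(mask)
        if next_conv is not None:                      # zero the matching input channels
            next_conv.weight.mul_((mask > 0).float()[None, :, None, None])

convs = nn.ModuleList([nn.Conv2d(1, 32, 3, padding=1),
                       nn.Conv2d(32, 32, 3, padding=1),
                       nn.Conv2d(32, 32, 3, padding=1)])
for i, conv in enumerate(convs):
    nxt = convs[i + 1] if i + 1 < len(convs) else None
    channel_dropout_(conv, nxt, p=0.2)                 # leaves floor(32 * 0.8) = 25 non-zero filters
```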
For each trial, we randomly sampled 20 dropout networks with exactly |32(1 — p)| non-zero filters in each layer and report the performance of the best one. In the center plot, we verify for p = 0.2 we can construct a linear path 7(t) : R + © from our convolutional net to a dropout version of itself. Similar results were observed when varying p. Finally, in the right plot we demonstrate the existence of 3-layer convolutional nets just a few filters wide that are able to achieve low loss on MNIST. Taken together, these results indicate that our path construction in Theorem 3 performs well in practical settings. In particular, we can connect two convolutional nets trained on MNIST by way of first interpolating between the original nets and their dropped out versions with p = 0.2, and then connecting the dropped out versions by way of a narrow subnetwork with at most |32p| non-zero filters. 8 : AB 0.8 > fa 0.100 0.125 0.150 0.175 Tl o22 13 14 3 layer cushion pu; contraction ¢ g 0.6 — loss 8 z — accuracy y 0.4 43 0.0 0.2 0.4 0.6 0.8 1.0 0) 02 6 interlayer cushion 4; — = interlayer smoothness ps Path parameter (t) 0.8 > fa 3 g 0.6 — loss 8 z — accuracy y 0.4 43 0.0 0.2 0.4 0.6 0.8 1.0 Path parameter (t) Figure 3: Left) Distribution of layer cushion, activation contraction, interlayer cushion and interlayer smoothness of the 6-th layer of a VGG-11 network on the training set. The other layers’ parameters are exhibited in Section D.3. Right) The loss and training accuracy along the path between two noise stable VGG-11 networks described in Theorem 3. We also demonstrate that the VGG-11 (Simonyan and Zisserman, 2014) architecture trained with channel- wise dropout (Tompson et al., 2015; Keshari et al., 2018) with p = 0.25 at the first three layers5 and p = 0.5 at the others on CIFAR-10 converges to a noise stable minima—as measured by layer cushion, interlayer cushion, activation contraction and interlayer smoothness. The network under investigation achieves 95% training and 91% test accuracy with channel-wise dropout activated, in comparison to 99% training and 92% test accuracy with dropout turned off. Figure 3 plots the distribution of the noise stability parameters over different data points in the training set, from which we can see they behave nicely. Interestingly, we also discovered that networks trained without channel-wise dropout exhibit similarly nice behavior on all but the first few layers. Finally, in Figure 3, we demonstrate that the training loss and accuracy obtained via the path construction in Theorem 3 between two noise stable VGG-11 networks θA and θB remain fairly low and high respectively—particularly in comparison to directly interpolating between the two networks, which incurs loss as high as 2.34 and accuracy as low as 10%, as shown in Section D.2. Further details on all experiments are provided in Section D.1. # Acknowledgments Rong Ge acknowledges funding from NSF CCF-1704656, NSF CCF-1845171 (CAREER), the Sloan Fellowship and Google Faculty Research Award. Sanjeev Arora acknowledges funding from the NSF, ONR, Simons Foundation, Schmidt Foundation, Amazon Research, DARPA and SRC. # References Arora, S., Ge, R., Neyshabur, B., and Zhang, Y. (2018). Stronger generalization bounds for deep nets via a compression approach. arXiv preprint arXiv:1802.05296. Choromanska, A., Henaff, M., Mathieu, M., Arous, G. B., and LeCun, Y. (2015). The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics, pages 192–204. Dauphin, Y. 
N., Pascanu, R., Gulcehre, C., Cho, K., Ganguli, S., and Bengio, Y. (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in neural information processing systems, pages 2933–2941. 5we find the first three layers are less resistant to channel-wise dropout. 9 Draxler, F., Veschgini, K., Salmhofer, M., and Hamprecht, F. A. (2018). Essentially no barriers in neural network energy landscape. arXiv preprint arXiv:1803.00885. Freeman, C. D. and Bruna, J. (2016). Topology and geometry of half-rectified network optimization. arXiv preprint arXiv:1611.01540. Garipov, T., Izmailov, P., Podoprikhin, D., Vetrov, D. P., and Wilson, A. G. (2018). Loss surfaces, mode connectivity, and fast ensembling of dnns. In Advances in Neural Information Processing Systems, pages 8789–8798. Kawaguchi, K. (2016). Deep learning without poor local minima. In Advances in neural information processing systems, pages 586–594. Keshari, R., Singh, R., and Vatsa, M. (2018). Guided dropout. arXiv preprint arXiv:1812.03965. Liang, S., Sun, R., Li, Y., and Srikant, R. (2018). Understanding the loss surface of neural networks for binary classification. In International Conference on Machine Learning, pages 2840–2849. Morcos, A. S., Barrett, D. G., Rabinowitz, N. C., and Botvinick, M. (2018). On the importance of single directions for generalization. arXiv preprint arXiv:1803.06959. Nguyen, Q. (2019). On connected sublevel sets in deep learning. arXiv preprint arXiv:1901.07417. Nguyen, Q., Mukkamala, M. C., and Hein, M. (2018). On the loss landscape of a class of deep neural networks with no bad local valleys. arXiv preprint arXiv:1809.10749. Safran, I. and Shamir, O. (2018). Spurious local minima are common in two-layer relu neural networks. In International Conference on Machine Learning, pages 4430–4438. Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Tompson, J., Goroshin, R., Jain, A., LeCun, Y., and Bregler, C. (2015). Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 648–656. Tropp, J. A. (2012). User-friendly tail bounds for sums of random matrices. Foundations of computational mathematics, 12(4):389–434. Venturi, L., Bandeira, A. S., and Bruna, J. (2018). Spurious valleys in two-layer neural network optimization landscapes. arXiv preprint arXiv:1802.06384. Yun, C., Sra, S., and Jadbabaie, A. (2018). A critical view of global optimality in deep learning. arXiv preprint arXiv:1802.03487. 10 # A Proofs for connectivity of dropout-stable optima Proof of Lemma 1. Without loss of generality, suppose for each 0; that the subset of |h;/2] non-zero hidden units in each layer are all indexed between 1 and |h;/2]. For 1 <i < d, we can partition A; into quadrants such that A; = fi c |: (Here, L; € Rl /21*l/2]. Tf h; is odd, when we write L; in the i| Ri other quadrants we implicitly pad it with zeros in a consistent manner.) Similarly, we can partition A; such that A, = 2 and A, such that Ag = [ La | Ra ]. We will sometimes use the notation A; to refer to 1 the value of A; at a given point on our path, while Ae will always refer to the value of A; at 0. We now proceed to prove via induction the existence of a path from @ to 6; for all 1 whose loss is bounded by L(f@) +, from which the main result immediately follows. 
Base case: from @ to 6q_; As a base case of the induction, we need to construct a path from 6 to 04-1, such that the loss is bounded by L(f») + €. First, note that setting a particular subset of columns (e.g. the right half of columns) in A; to zero is equivalent to setting the corresponding rows (e.g. the bottom half of rows) of A;_1 to zero. So from the fact that L(fo,_,) < L(fo) + it follows that we can equivalently replace Ag with [ rl 0 ] without increasing our loss by more than e. In fact, because our loss function is convex over Aq we can actually interpolate Aq between AS and In fact, because our loss function is convex over Aq we can actually interpolate Aq between AS and [ rl | 0 ] while keeping our loss below L(fg) + € at every point along this subpath. Then, because Ry = 0 we can modify both Dg_1 and Rq—1 any way we'd like without affecting the output : : Lo cs . . of our network. In particular, we can interpolate Ag_; between Ao, and AS while keeping our loss constant long this subpath, thus arriving at 0a_1. our loss constant long this subpath, thus arriving at θd−1. From θk to θk−1 79 Suppose we have found a path from @ to 6% such that (1) A9* = [ rL4| 0 ], (2) Age = mh : for Suppose we have found a path from @ to 6% such that (1) A9* L? | ce k<i<d, (3) A = |: and (4) A%* = A? for i < k, L? | ce 0} 0 L? | ce 0} 0 L(fo) +. Note that @4_1 satisfies all these assumptions, including in particular (2) as there are of course no A; between Ag_; and Ag. Now let us extend this path to 0,_1. k<i<d, (3) A = |: and (4) A%* = A? for i < k, such that the loss along the path is at most First, because the rightmost columns of A; are zero for k < i < d, we can modify the bottom rows of A; ; . . . Lh | cf for k <i < d without affecting the output of our network. In particular, we can set Az to Hr 7 ; ke 0 # 0 ze # 0 rLθ i as well as A; to ze _ for k <i<d. From the fact that the loss is convex over Ay and that Pr a L(fo,1) < L(fe) + €, it then follows that we can set Aq to [ 0 rL', | via interpolation while keeping our loss below L(fg) + ¢. In particular, note that because the off-diagonal blocks of A; are zero for k <i < d, interpolating between the leftmost columns of Ag being non-zero and the rightmost columns of Aq being non-zero simply amounts to interpolating between the outputs of the two subnetworks comprised respectively of the first |h;/2| and last [h;/2| rows of A; for k <i < d. Once we have the leftmost columns of Aq set to zero and A; in block-diagonal form for k < i < d, we can proceed to modify the top rows of A, however we’d like without affecting the output of our network. 0 Specifically, let us set Ay to rh 3 . We can then reset Aq to [ rl | 0 ] via interpolation—this time k without affecting our loss since the weights of our two subnetworks are equivalent—and afterwards set D;, to zero and R; to zero for k < i < d—again without affecting our loss since the rightmost columns of Ag are now zero, meaning that the bottom rows of A; have no affect on our network’s output. without affecting our loss since the weights of our two subnetworks are equivalent—and afterwards set D;, to zero and R; to zero for k < i < d—again without affecting our loss since the rightmost columns of Ag are now zero, meaning that the bottom rows of A; have no affect on our network’s output. Le | 0 Tr for k <i<dand Aq = [ rLf | 0 ]. And so we are Following these steps, we will have A; = Le | 0 Tr 0 for k <i<dand Aq = [ rLf | 0 ]. 
And so we are now free to set the bottom rows of A; to zero without affecting our loss, thus arriving at 0,1. Following these steps, we will have A; = 11 Lemma 4. Let 6 be a parameter such that at least [h;/2] of the units in each hidden layer have been set to zero. Then we can achieve an arbitrary permutation of the non-zero hidden units of 6 via a path consisting of just 5 line segments such that our loss is constant along this path. Proof. Let 7 : [hi] + [hi] be some permutation over the units in layer 7. Without loss of generality, suppose all non-zero units in layer i are indexed between 0 and |h;/2|, and define z’ : [|h;/2]] + [hi] \ [[hi/2]] as any one-to-one mapping such that 7’(i) = (i) if r(i) € [hi] \ [Lhi/2]]. Note that when we refer to a unit j as “set to zero”, we mean that both row j of A; and column j of Aji have been set to zero. To permute the units of layer 7, we can first simultaneously copy the non-zero rows of A; into a subset o: the rows that have been set to zero. Specifically, for 7 € [|h;/2]] we can copy row j of A; into row z’(j) via interpolation and without affecting our loss, due to the fact that column 7’(j) in Aj+1 is set to zero. We can then set column j of A;,1 to zero while copying its value to column 7’(j), again via interpolation and without affecting our loss since rows j and 7‘(j) of A; are now equivalent. Following these first two steps, the first |h;/2| columns of A;,1 will have been set to zero. Thus, for all J € [Lhi/2]] such that m(7) € [h;/2] we can copy row x’(j) of A; into row 7(j) without affecting our loss. We can then set column 7’(j) of Aj41 to zero while copying its value into column 7(j) via interpolation an without affecting our loss since rows 7'(j) and m(j) of A; are now equivalent. Setting row 7’(j) to zero—again for all j € [[hi/2)] such that 7(j) € [h;/2]—completes the permutation for layer i. Note that because we leave the output of layer i unchanged throughout the course of permuting the units of layer i, it follows that we can perform all swaps across all layers simultaneously. And so from the fact that permuting each layer can be done in 5 steps—each of which consists of a single line segment in parameter space—the main result immediately follows. Proof of Lemma 2. Without loss of generality, suppose for @ that the subset of |h;/2] non-zero hidden units in each layer i are all indexed between 0 and [h;/2]. Note that when we refer to a unit as “set to zero", we mean that both the corresponding row of A; and column of A;;1 have been set to zero. Adopting our notation in Lemma 1, we can construct a path from 6 to 6’ as follows. First, from the fact that the second half of units in each hidden layer i have been set to zero in @ we have 6 6 7 hat A? zs | Ae a 3 for 1 <i<d,and AÂ¥= [ Lo | 0 ]. Similarly, half the rows of A® are for 1 <i<d,and AÂ¥= [ Lo | 0 ]. Similarly, half are zero for 1 <i < d, and half the columns of Ag zero, half the rows and columns of Ag are zero for 1 <i < d, and half the columns of Ag are zero. Note that he indices of the non-zero units in 6’ may intersect with those of the non-zero units in 6. For 1 <i < d, let B; denote the submatrix of A; corresponding to the non-zero rows and columns of AM, Because A? are block-diagonal for 1 < i < d and the rightmost columns of A§ are zero, starting from 6 we can modify the bottom rows of A; for 1 <i < d any way we’d like without affecting our loss—as done in our . Li| 0], Lé ath construction for Lemma 1. 
In particular, let us set A; to Ete] for 1 <i<dand A; to a]: i 1 . Li| 0], Lé ath construction for Lemma 1. In particular, let us set A; to Ete] for 1 <i<dand A; to a]: i 1 Then, from the fact that our loss function is convex over Ag it follows that we can set Aq to [ 0 | BY ] via interpolation while keeping our loss below max{L(fg), L(fo-)}. Finally, from the fact that the leftmost columns of Aq are now zero and A; are still block-diagonal for 1 < i < d, it follows that we can set L; to zero Then, from the fact that our loss function is convex over Ag it follows that we can set Aq to [ 0 | BY ] via interpolation while keeping our loss below max{L(fg), L(fo-)}. Finally, from the fact that the leftmost columns of Aq are now zero and A; are still block-diagonal for 1 < i < d, it follows that we can set L; to zero ; . . . . 0} 0 . or 1 <i <d without affecting our loss—thus making A; equal to tae | for 1 <i<dand A; equal to ; . . . . . or 1 <i <d without affecting our loss—thus making A; equal to tae | for 1 <i<dand A; equal to cal To complete our path from @ to 6’ we now simply need to permute the units of each hidden layer so as to return the elements of BY to their original positions in A; for each 7. From Lemma 4 it follows that we can accomplish this permutation via 5 line segments in parameter space without affecting our loss. Combined with the previous steps above, we have constructed path from 6 to 0’ consisting of a total of 8 line segments whose loss is bounded by max{L( fo), L(fo’)}- Proof of Theorem 1. First, from Lemma 1 we know we can construct paths from both 64 to 64 and 6? to 02 while keeping our loss below L(fga) + € and L(fgz) + respectively. From Lemma 2 we know that we 12 ] can construct a path from 6 to 67 such that the loss along the path is bounded by max{L(fga), L(for)}- The main result then follows from the fact that L(fga) < L(fga) + € and L(fgz) < L(foz) + € due to 64 an 0 both being e-dropout stable. # B Proofs for connectivity via noise stability In this section, we give detailed proofs showing that noise stability implies connectivity. In the following lemma, we first show that the network output is stable if we randomly dropout columns in a single layer using Algorithm 1. Lemma 5. For any layer 2<i<d, lee G={(UM,a)}™, be a set of matrix/vector pairs of size m where U € Rx and x € R'-: satisfying |||.. =O (44). Given Aj, let A; € R'*"*-1 be the output of hima Algorithm 1 with dropout probability 0 < p< 3. Assume ||{Ai];|| = O(/P)||Aille for 1 <j < hi-1. Given any0 <6 <1, lete’ =O (ee), with probability at least 1— 6, we have for any (U,x) € G that JU (A; — Aa)2|| < €||Aille||U |||]. Further assuming min = (22). we know with probability at least 1—0, no less than 3p fraction of columns in A; are zero vectors. Intuitively, this lmma upper-bounds the change in the network output after dropping out a single layer. In the lemma, we should think of x as the input to the current layer, A; as the layer matrix and U as the Jacobian of the network output with respect to the layer output. If the activation pattern does not change after the dropping out, U Aja is exactly the output of the dropped out network and ||U(A; — Ai)a|| is the change in the network output. Proof of Lemma 5. Fixing 2 <i < d and one pair (U,x) € G, we show with probability at least 1 — 2, |U(A; — Aj)al] < || Ail e||U|||lal]. Let Ux be the k-th column of U. Then by definition of A; in the algorithm, we know U(A; = Ai)x = $7 Up [Aalajx5 (6; — 1) = Ss (= ts) «;(6; — 1), 7 \E where δj is an i.i.d. 
Bernoulli random variable which takes the value 0 with probability p and takes the value 1 1−p with probability (1 − p). Let [Ai]j be the j-th column of Ai. Because p ≤ 3 4 , 1 1−p = O(1) (any p bounded away from 1 will work). Hence the norm for each individual term can be bounded as follows. | (= tila 2,(d;—1)| 20 (24) WTAddsI <0 (#2) jonas , (Aleta) Vimin ° where (*) uses the assumption that ||z||o. = o( cll ) and (}) holds because we assume ||[Aj];|| = O(/p)|| Alle for 1 <j < hit. 13 For the total variance, we have =z |(S ean.) ZACH < Dla Lill inf (0-0 «e+ (25-1) <a-n) ® iow Ale o (HE) p(i+ 2) < ||UAi||;-O (2) “p O (gael) Amin ; where inequality («) uses the assumption that |||. =O ($24). Then, by the vector Bernstein inequality a1 (Lemma 8), we know given 0 < 6 < 1, there exists «’ = O (v rsp | ; With probability at least 1 — 2, we have (A; = Adar < €llAdlellUllle # (A; = Adar < €llAdlellUllle Taking the union bound over all (U, x) pairs in G, we know that with probability at least 1 — 6, for any (U,2) €G, |U(A, = Aida) < eAillel|Ullll Suppose Amin = Q (sn); then by the Chernoff bound, we know with probability at least 1 — 6, the dropped out fraction is at least 3p. Taking another union bound concludes our proof. Now we are ready to prove Lemma 3. The idea is similar to (Arora et al., 2018), but we give the proo here for completeness. Lemma 3. Let @ be an €-noise stable network, and let 0, be the network with weight matrices from layer 2 to layer d dropped out by Algorithm 1 with dropout probability Q(1/hmin) <p < 3. For any2<i<d, assume ||[Aj];|| = O(Yp)||Aille for 1 <j < hi-1. For any 0 <t <1, define the network on the segment from 6 to 0; as :=0+t(0, — 8). Then, with probability at least 1/4 over the weights generated by Algorithm 1, L(fo,) < L( fo) + O(\/Be), for anyO <t <1. Proof of Lemma 3. We first bound the difference between the dropped out network θ1 and the original network θ. Bounding ||fo(x) — fo,(a)||: We first show that with probability at least 1/2 — 4, || fo(x) — fo,(«)|| = jx? — #4] < € Wels )||, where e’ will be specified later. For any layer i > 1 and letting Fe be he vector before activation at layer j if the weights Ay,..., A; are replaced by A,..., Ai. According to Lemma 5, for any layer 2 < i < d, given 0 <6 <1, let ¢ =O ( ee | , with {/ femin gnin TH probability at least 1 − δ/d over ˆAi, we have 3 é Hl |U (Ai — Aa)ar|| < = [Alle llU [lle] (1) for any (U, x) ∈ {(J i,j i−1))|x ∈ S, i ≤ j ≤ d}. By taking a union bound over i, we know inequality (1) holds with probability at least 1 − δ for every i. Recall that the interlayer smoothness holds with probability 14 at least 1/2. Taking another union, we know with probability at least 1/2 − δ, interlayer smoothness holds and inequality (1) holds for every 2 ≤ i ≤ d. Next, conditioning on the success of these two events, we will inductively prove for any 1 ≤ i ≤ d, for any i ≤ j ≤ d, ||} — a) || < (i/d)e'||x?]. For the base case i = 1, since we are not dropping out any weight matrix, the inequality is trivial. For any <i-—1<d-1, suppose ||#]_, — 2 || < e'||x4|| for any i— 1 <j < d; we prove the induction hypothesis For the base case i = 1, since we are not dropping out any weight matrix, the inequality is trivial. For any 1 <i-—1<d-1, suppose ||#]_, — 2 || < e'||x4|| for any i— 1 <j < d; we prove the induction hypothesis holds for layer i. For any i ≤ j ≤ d we have \!@} — 29 || = ||(@ — 44.4) + @L,— 2) < le} — 21 |i + (eh, — 2". By the induction hypothesis, we know the second term can be bounded by (i — 1)e’||x4||/d. 
Therefore, in order to complete the induction step, it suffices to show that the first term is bounded by ¢’||a7||/d. For simplicity, we also denote a t as #1. Let A; = A; — Aj. We can decompose the error into two terms: \@} — #1 = |" (Aig (#1) — M(A:6 (8) = | (A,@(@"4)) — M9 (A. (@- > + Ji3 (08) ~ J (04) < |Jp2 (Aio@™))|| + | (Aio@™) — M4 (Avo ) - wan we) 2) \@} The first term of (2) can be bounded as follows: I Ao(@™)|| <(Cpipis/Oed)|,2 Adio] Lemma 5 < (€wipi /6ed)|| J? ille||2*-"|| @ (ReLU) is 1-Lipschitz < (epipi- /3cd)||- ied x || Induction hypothesis, ; / i-1 e -2|| < (i vel | < \|x'-}|| S (Cui /3O)I J, < (eu; /3d)|| J, = (e:/3d) I? < (¢/3d)||x"|| t1)|| J, t1)|| lle" Activation Contraction Layer Cushion xi = Aiφ(xi−1) Interlayer Cushion The second term of (2) can be bounded as: IM" (Aig(@~*)) — M9 (Aid (@™)) — Jy (Ao) = (ars — 389) Ae(@)) — a9 — J2)(4,0(8")] < art — F(A eI + NE = Te) (Aide) (4) Both terms of (4) can be bounded using the interlayer smoothness condition. For the second term of (4), notice that 4;¢(#'~!) = #t_,. Thus by the induction hypothesis, we know |Aie(@~*) — #'l| = [#1 — 2'|| $ @— De a'||/d < ella’. (5) # Now, by interlayer smoothness, (e's — J) (e's — J) (Aid (@) || = [|e — TY (a" + (Aio(@"™) — 2") | < WA @'™) = #'Ille ~ pllz"l| ©) elle'[flle’|| _ elle" 3d|z*]| 3d (6) 15 (3) where in (*) we use (5) and the assumption ρ ≥ 3d. For the first term of (4), we know ˆAiφ(ˆxi−1) = i−1 + ∆iφ(ˆxi−1). Therefore by the induction hypothesis and (3) for i = j, ˆxi Aga) — 2" || < [#4 — 2'|| + Ao @"™) || < @— Della" |/d + € |la"||/3d < ela", so again we have Ward — F3)(A,o(@-))|| < (€/30) |e". (7) Together, (7) and (6) show that (4) is < 2c \|a4||. Together with (3) we obtain from 2 that ||é7—4#7_,|| < £ Ss + ||), and hence that: |? —xil)<- fe" ‘I , completing the induction step. Conditioning on the success of interlayer smoothness and inequality (1), we’ve shown, ||} — 29|| < @/d)e'||a" |) for any i <j <d. Recall that with probability at least 1/2—, interlayer smoothness holds and inequality (1) holds for every 2 <i < d. Thus, let e’ =O ( ee , we know with probability at least 1/2 — 6, Af Fonin mi THT || fo(x) — fo, (x) | = lla" — €4|| < e'l| fol@)II. Bounding || fo(x) — fo,(x)|| for any fixed t: The proof for a fixed network on the path is almost the same as the proof for the end point. Instead of considering #/, now we consider £7(t), which is the vector before activation at layer j if the weights Ag,..., A; are replaced by Ao + t(Ag — Ao),..., Ai + t(A; — Ay). We can still use Lemma 5 to bound the noise produced by replacing the weight matrix at a single layer because (Ai + (A; — Ai) — Ada = JU (Ar — Adar < JA — Ader Thus, we can still use the above induction proof to show that for any fixed 0 < t < 1, let e’ =O (==) Tonin min pp, 2<i<d with probability at least 1/2 — 6, # i→ with probability at least 1/2 − δ, Il f(x) — fo,(x)|| < €'l| fo(a) |. Bounding || f(x) — fo,(x)|| for every t: Finally, we show that || fo(2)— fo, («)|| is bounded for every point on pe? d? maxes (|| fo(«) ||?) EEE the path via an ¢/-net argument. Similar to previous steps, letting «’ = O / Punin mi lenin tain (UT HE) we know that with probability at least 1/2 − δ, I|fo(a) — fo, (x) || < €/2. Next, we show that on the path, the network output is smooth in terms of the parameters. According to Algorithm 1, we know for any 2 <i < d, we have ||Aj|| < 4||Ail|, so || Ai — Ai] < 5 || Ail]. For any 2 <i <d, let Aj, = Aj + t(A; — Aj). Note || Ajz|] < (1 — t)||Aal] + ¢]| Ail] < 4|] Ail]. 
For any t,t’ and any 2 <i < d, let OF be 6 with the weight matrix at every layer 2 < j < i replaced by (A/ + t/(AJ — A4)). For convenience, we also denote 6; as 6},,. Given 7 < 1/2, for any 7 < t < 1—7 and for any —7T < & <7, we can bound I foun (2) — fa,(2))| as follows: Ifoere (2) — foe) S YO Mos.) — for, @ totte 2<i<d 16 : The output of layer i — 1 is the same for the two networks, of norm < ||x|| Ti ||Aj,t+n||- Hence the output of layer i differs by at most: #|l2|||| A; — All T1jz1 \|Ajr+«|| and the output differs by xla'|)\| A; — i-1 d d Ail| ja |Aje+rlh TTjais1 |Ajell < 54n||2| Tj || Aj||- Hence Ife. (2) — fo (a) < 2 54Iale TT Ail 2<i<d 1<j<d <stdxllel| T] Aull 1<j<d , € Thus, given T < 5535 — 78 = F54dmax Tel Ti<j<a ArT , we know for any 7 <¢ < 1—7 and for any —t <a <7, # 2·5dd max x∈S Il fora («) — fo(a)|| < €'/2. (8) There exists a set Q = {®} with size O(1/r) such that for any network on the path, the distance to the closest network in Q is no more than r. If we can prove for any % € Q, ||fo(x) — fo,(x)|| < €/2, we immediately know for any network 6, on the path || fo(x) — fo,,(x)|| < € by inequality (8). perd? maxes (|| fo(«)|I2) log (=e Tmin min (p17 min win (HEHE) By a union bound over Q, letting «’ = O ( at least 1/2 — 4, ) ) ; we know with probability || fol) — fo, (2)|| < €'/2, for any θt ∈ Q. Setting δ = 1/4, we know there exists ; mdha max ||¢l| TT, <5<a IlA5ll ped? maxes (| fo(2)||?) log ee) Amin main (yu? yu? min gin iH) =O & =O such that with probability at least 1/4, Il fe(n) — fa.(@)|| < e for any x ∈ S and any 0 ≤ t ≤ 1. Since the loss function is β-Lipschitz, we further know that for any 0 ≤ t ≤ 1: √ L( fo.) < L( fo) + Be’ = L( fe) + Ope). Now, we are ready to prove the main theorem. Theorem 2. Let 64 and 6? be two fully connected networks that are both €-noise stable, there exists a path with 10 line segments in parameter space 7 : [0,1] > © between 04 and 6 such that® L(fmy) < max{L(foa), L(fos)} + O(6) for0<t<1. Proof of Theorem 2. Setting dropout probability p = 3/4, by Lemma 5 and Lemma 3, if Amin = Q (1), we know there exist 67 and 6? such that 1. in both networks, each weight matrix from layer 2 to layer d has at least half of columns as zero vectors; 2. L(foa) < L(foa) + O(c) and L(foz) < L( fon) + O(6), for any 0 <t <1, where 0A = 04 + t(94 — 04) and 6P = 6" + t(07 — 0). 6Here O(-) hides log factors on relevant factors including ||, d, ||2||,1/e and h,||A;|| for layers i € {d]. 17 Since the dropout fraction in both 67 and 6? is at least half, we can connect 67! and 6? as we did in Lemma 2, while ensuring the loss doesn’t exceed max{L( fg), L(foe)} + Ole). Connecting 64 to 64 an connecting 0, to 6? each take one line segment. By the construction in Lemma 2, connecting two dropped-ou networks 6/! and 6? takes 8 line segments. Thus, overall the path between 64 and 6% contains 10 line segments. Next, we show that if there exists a “narrow” neural network achiving small loss, we can get a lower energy barrier using a smaller dropout probability. Theorem 3. Suppose there exists a network 0* with layer width h; for each layer i that achieves loss L(fo-), and minimum hidden layer width hy, = (1). Let 04 and 6* be two €-noise stable networks. For any dropout probability 1.5 maxy<i<a—1(hi/hi) < p < 3/4, if for any2<i<d,1<j < hi-n, |\[Adsl] = OC/P)|Aille then there exists a path with 13 line segments in parameter space m : [0,1] + © between 64 and 6" such that L( Facey) < max{L(foa) + O/B.) L( fos) + O(/Be), L(fo-)} for 0S 01. Proof of Theorem 3. 
Since Myrin-maxi<i<a—1(h? /hi) > Rey = O(1), we have hmin = 9 (ae) By Lemma 5 and Lemma 3, there exist 6/1 and 6? such that 1. in both networks, each weight matrix from layer 2 to layer d has at least h∗ i columns set to zero; √ √ 2. L(fya) < L( fos) +O( pe) and L(fyp) < L( fon) +O(/Be), for any 0 < t < 1, where 64 = 04+4(0-04) and 6P = 6" + t(07 — 0). 2. L(fya) < L( fos) +O( pe) and 6P = 6" + t(07 — 0). From the fact that at least h* From the fact that at least h* units in layer i of both 64 and 6? have been set to zero for 1 < i < d— meaning that the corresponding rows of A; and columns of A;+1 are zero—it follows from Lemma 2 tha we can connect 6 to an arbitrary permutation of @* using 8 segments while keeping the loss on the path no more than max{L(foa), L(fo-)}- By choosing this permutation so that the non-zero units of 6* do no: intersect with those of 07, we can then connect 6* to 0? using just 3 segments as done in the first step o! our path construction in Lemma 2 seeing as there is no need to permute 6* a second time. Combining these paths together with the paths that interpolate between the original parameters 4 and 6 and their dropou versions 6! and 67, we obtain a path in parameter space 7 : [0,1] + © between 64 and 6% with 13 line segments such that L(f,(1)) < max{L(foa) + O(\/pe), L( foe) + O(/Pe), L(fo-)} forO <t <1. # C Proofs for disconnected modes in two-layer nets Proof of Theorem 4. Define our loss over parameter space such that L(fg) = 2S U(yi, fo(Xi)), where x; € R’*? is our i'* data sample, y; € R the associated label, and f(x;) = w’¢(Ax;) for @ = (w, A) € R+2)xh y R’, We can represent the data samples as rows in a matrix X € R"*("'+2)__with f; denoting the i*” “feature” (i.e. column) of X—and the labels as elements of y € R’™?, as illustrated in Figure 4. Choose k,1,m,n such that kk <1 <m<nwherek >h,l—-k>h,m—Il>2andn—m>h. When i < 1, let i, j=l i-l, j=2 tig = 41, i= j (mod h) -1, iF#j (modh),i<l 0, i#j (modh),k<iK<l. When l < i ≤ m, let -1, j<2,i=j (mod 2) tijy=40, g<2,i#7 (mod 2) 0, j>2Abl<i<m. 18 . When i > m, let 0, j<2 tijy=4—-l, j>2,i=7 (mod h) 0, g>2,t4 7 (mod h). Finally, let yi = 1 when i ≤ l and 0 otherwise. x1 x2 ... ... xk ... xl ... ... xm ... ... xn f1 1 2 ... ... ... ... l ... f2 0 1 ... ... ... ... l − 1 −I2 . . . 0 f3 1 −1 ... −1 ... ... ... f4 −1 1 . . . . . . · · · . . . . . . · · · −1 ... ... . . . Ih . . . 0 ... ... −Ih fh+2 −1 ... −1 1 ... . . . ... y = y1 ... ... ... ... ... yl ... ... ... ... ... yn 1 ... ... ... ... ... ... 0 ... ... ... ... ... ... ... ... . . . . . . . . . X = Figure 4: Our dataset. j=3 φ(fj) = y it follows that there exist networks with both two active hidden units and h active hidden units that achieve minimal loss, with the former corresponding to the ground truth teacher network which generated our dataset. Note in particular that the output layer weight corresponding to ¢(f2) in the network with two active hidden units is negative, whereas in the network with h active hidden units the output layer weights are all positive. Thus, any path between the two networks must pass at least one output layer weight is zero while the other h — 1 are positive. However, as shown in Lemma 6, there does not exist such a point in parameter space that achieves minimal loss. It follows that there exists a barrier in the loss landscape separating the original networks, bo’ adjusting k, 1, and m we can somewhat arbitrarily raise or lower this barrier. through a point in parameter space where h of which are global minima. 
Moreover, by Lemma 6. There does not exist a set of h —1 positive weights w; and vectors h; € span X such that h-1 f ier wid(hi) = y- Proof. We can think of each h; as the output a particular hid and w; as the output layer weight associated to this hidden unit. We then have h; = )> a;,jf;, where the coefficients a;,; are elements of A. den unit over all n samples in our datase First, if there did exist w; and h; such that an —, wi?(hi) = y, then it must be the case for all i that h; = do a,;£; where a; > 0 for all 7. Otherwise, there woul indexes | + 1 and n that would be impossible to eliminate in }> be non-zero elements in some h; between no w;$(h;) given that w; > 0 for all i. 19 Second, any linear combination of f; and f2 with positive coefficients would result in a vector whose first | elements are positive and increasing. In contrast, the first | elements of Y are constant. And so from the fact that there does not exist a;,; > 0 such that the first 1 elements of }> a;,;f; are decreasing—in particular ecause the first k elements and next | — k elements of yar! aijxj are periodic with length h—it follows hat aj,1,@i,2 = 0 for all h,. Thus, we need only consider linear combinations of f3 through f,42 with positive coefficients as candidates for h;. To this end, note that if a particular f; has zero coefficient in all of hy through hp_1, then ye wio(hi) will have zeros in every index congruent to 7 mod h and therefore cannot equal y. Hence by the pigeonhole rinciple, in order to have ean wio(h;) = y there must be some i such that h; = yar! ai jf; with at least wo coefficients being non-zero. However, in any linear combination yar ay jf; where aj,j,a;,;. > 0 for at least two distinct j,j’, the elements in indexes k + 1 to 1 will be greater than the elements in indexes 1 to k hat are congruent to 7 mod h and j’ mod h. In contrast, the first | elements of y are constant. Hence, similar to the case of f, and fz, there cannot exist h; = ar f; and positive coefficients w; such that j=3 %,j yh (h,) =Y. i-1 Wi? # D Experimental details and further results # D.1 Experimental details and hyperparameters For all experiments on MNIST, we used a convolutional architecture consisting of 3 convolutional layers followed by a fully-connected output layer. Each convolutional layer consisted of 32 3 × 3 filters and used sufficient padding so as to keep the layer’s output the same shape as its input. All networks were trained on an NVIDIA Tesla K20c GPU for 5000 iterations with a batch size of 64 using stochastic gradient descent with an initial learning rate of 0.1 and a decay rate of 1E−6. No significant hyperparameter tuning was applied. Images were normalized. For the left an corresponding to and accuracy over Specific to Figure and rescale these consisted of samp. d right plots in Figure 2, we report results averaged over 5 random trials and error bars he standard deviation over these trials. For the center plot we simply computed the loss a linear path between a particular convolutional net and a single dropout version of itself. 2, in applying dropout with probability p we randomly sample a subset of [32(1— p)| units units by 1/(1 — p) while setting the remaining units to zero. In the left plot, each trial ing 20 such dropout networks and reporting the performance of the network achieving the lowest loss. Losses and accuracies in all plots were computed on a random batch of 4096 training images. 
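For concreteness, below is a sketch of the MNIST setup described above, assuming PyTorch. Details not stated in the text—ReLU activations, the absence of pooling, and the exact flattening before the output layer—are assumptions, and `dropout_version` only illustrates the procedure of keeping ⌈32(1−p)⌉ randomly chosen channels rescaled by 1/(1−p) while zeroing the rest:

```python
import math
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Three 3x3 conv layers with 32 filters and 'same' padding, then a linear output layer."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
        ])
        self.fc = nn.Linear(32 * 28 * 28, num_classes)

    def forward(self, x):
        for conv in self.convs:
            x = torch.relu(conv(x))          # activation choice is an assumption
        return self.fc(x.flatten(1))

def dropout_version(model, p, layer_idx):
    """Copy of `model` with ceil(32*(1-p)) randomly kept channels of one conv layer,
    rescaled by 1/(1-p); the remaining channels are zeroed out."""
    new_model = SmallConvNet()
    new_model.load_state_dict(model.state_dict())
    conv = new_model.convs[layer_idx]
    n_out = conv.weight.shape[0]
    keep = torch.randperm(n_out)[: math.ceil(n_out * (1 - p))]
    scale = torch.zeros(n_out)
    scale[keep] = 1.0 / (1 - p)
    with torch.no_grad():                    # zeroing a channel's weights and bias zeroes its activation
        conv.weight.mul_(scale.view(-1, 1, 1, 1))
        conv.bias.mul_(scale)
    return new_model

model = SmallConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # the 1e-6 learning-rate decay is omitted
dropped = dropout_version(model, p=0.5, layer_idx=1)
```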
On CIFAR-10, we trained VGG-11 networks on an NVIDIA Titan X GPU for 300 epochs with SGD with a batch size of 128, weight decay 5e-4, momentum 0.9, and an initial learning rate of 0.05 which is decayed by a factor of 2 every 30 epochs. We used channel-wise dropout at all convolutional layers. The dropout rates are p = 0.25 at the first three layers and p = 0.5 at the others. Ordinary dropout with p = 0.5 is used at every fully-connected layer except for the last one (the softmax layer).

# D.2 Straight interpolation between two models

As demonstrated in Figure 5, a straight-line interpolation between two noise stable models may incur large losses and poor accuracies. The models are the same as those used in Figure 3.

Figure 5: Loss and accuracy from directly interpolating between two noise stable models, plotted against the path parameter t.

# D.3 Verification of noise stability conditions

# D.3.1 Layer cushion

Per-layer histograms of the layer cushion µi (layers 1–8).

# D.3.2 Interlayer cushion

Per-layer histograms of the interlayer cushion µi→ (layers 1–8).

# D.3.3 Activation contraction

Per-layer histograms of the activation contraction c (layers 1–8).

# D.3.4 Interlayer smoothness

Per-layer histograms of the interlayer smoothness ρδ (layers 2–8).

# E Tools

We use matrix concentration bounds to bound the noise produced by dropping out a single layer (Lemma 5).

Lemma 7 (Matrix Bernstein; Theorem 1.6 in (Tropp, 2012)). Consider a finite sequence {Zk} of independent, random matrices with dimension d1 × d2. Assume that each random matrix satisfies E[Zk] = 0 and ‖Zk‖ ≤ R almost surely. Define

σ² = max{ ‖Σk E[Zk Zk*]‖, ‖Σk E[Zk* Zk]‖ }.

Then, for all t ≥ 0,

Pr{ ‖Σk Zk‖ ≥ t } ≤ (d1 + d2) · exp( −(t²/2) / (σ² + Rt/3) ).

As a corollary, we have:

Lemma 8 (Bernstein Inequality: Vector Case). Consider a finite sequence {vk} of independent, random vectors with dimension d. Assume that each random vector satisfies ‖vk − E[vk]‖ ≤ R almost surely. Define

σ² = Σk E[ ‖vk − E[vk]‖² ].

Then, for all t ≥ 0,

Pr{ ‖Σk (vk − E[vk])‖ ≥ t } ≤ (d + 1) · exp( −(t²/2) / (σ² + Rt/3) ).
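The straight-line interpolation evaluated in Section D.2 (and in the center plot of Figure 2) can be computed directly from two checkpoints of the same architecture. A minimal PyTorch sketch; the model, checkpoints, and evaluation batch (x, y) are placeholders to be supplied by the caller:

```python
import torch
import torch.nn.functional as F

def interpolate_state(state_a, state_b, t):
    """Parameters of theta_t = (1 - t) * theta_A + t * theta_B."""
    out = {}
    for name, tensor in state_a.items():
        if tensor.is_floating_point():
            out[name] = (1 - t) * tensor + t * state_b[name]
        else:                       # integer buffers (e.g. batch-norm counters): keep one copy
            out[name] = tensor.clone()
    return out

@torch.no_grad()
def evaluate_path(model, state_a, state_b, x, y, num_points=21):
    """Cross-entropy loss and accuracy of the straight-line interpolation between two checkpoints."""
    results = []
    for i in range(num_points):
        t = i / (num_points - 1)
        model.load_state_dict(interpolate_state(state_a, state_b, t))
        model.eval()
        logits = model(x)
        loss = F.cross_entropy(logits, y).item()
        acc = (logits.argmax(dim=1) == y).float().mean().item()
        results.append((t, loss, acc))
    return results
```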
{ "id": "1901.07417" }
1906.06032
Adversarial Training Can Hurt Generalization
While adversarial training can improve robust accuracy (against an adversary), it sometimes hurts standard accuracy (when there is no adversary). Previous work has studied this tradeoff between standard and robust accuracy, but only in the setting where no predictor performs well on both objectives in the infinite data limit. In this paper, we show that even when the optimal predictor with infinite data performs well on both objectives, a tradeoff can still manifest itself with finite data. Furthermore, since our construction is based on a convex learning problem, we rule out optimization concerns, thus laying bare a fundamental tension between robustness and generalization. Finally, we show that robust self-training mostly eliminates this tradeoff by leveraging unlabeled data.
http://arxiv.org/pdf/1906.06032
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang
cs.LG, stat.ML
null
null
cs.LG
20190614
20190826
# Adversarial Training Can Hurt Generalization

# Aditi Raghunathan∗ 1 Sang Michael Xie∗ 1 Fanny Yang 1 John C. Duchi 1 Percy Liang 1

# Abstract

While adversarial training can improve robust accuracy (against an adversary), it sometimes hurts standard accuracy (when there is no adversary). Previous work has studied this tradeoff between standard and robust accuracy, but only in the setting where no predictor performs well on both objectives in the infinite data limit. In this paper, we show that even when the optimal predictor with infinite data performs well on both objectives, a tradeoff can still manifest itself with finite data. Furthermore, since our construction is based on a convex learning problem, we rule out optimization concerns, thus laying bare a fundamental tension between robustness and generalization. Finally, we show that robust self-training mostly eliminates this tradeoff by leveraging unlabeled data.

                       Robust test   Robust train   Standard test   Standard train
Standard training      3.5%          -              95.2%           100%
Adversarial training   45.8%         100%           87.3%           100%

Table 1. Train and test accuracies of standard and adversarially trained models on CIFAR-10. Both have 100% training accuracy but very different test accuracies. In particular, adversarial training causes worse standard generalization.

# 1. Introduction

Neural networks trained using standard training have very low accuracies on perturbed inputs commonly referred to as adversarial examples [12]. Even though adversarial training [4, 6] can be effective at improving the accuracy on such examples (robust accuracy), these modified training methods decrease accuracy on natural unperturbed inputs (standard accuracy) [6, 19]. Table 1 shows the discrepancy between standard and adversarial training on CIFAR-10. While adversarial training improves robust accuracy from 3.5% to 45.8%, standard accuracy drops from 95.2% to 87.3%.

One explanation for a tradeoff is that the standard and robust objectives are fundamentally at conflict. Along these lines, Tsipras et al. [14] and Zhang et al. [19] construct learning problems where the perturbations can change the output of the Bayes estimator. Thus no predictor can achieve both optimal standard accuracy and robust accuracy even in the infinite data limit. However, we typically consider perturbations (such as imperceptible ℓ∞ perturbations) which do not change the output of the Bayes estimator, so that a predictor with both optimal standard and high robust accuracy exists.

Another explanation could be that the hypothesis class is not rich enough to contain predictors that have optimal standard and high robust accuracy, even if they exist [9]. However, Table 1 shows that adversarial training achieves 100% standard and robust accuracy on the training set, suggesting that the hypothesis class is expressive enough in practice.

*Equal contribution, in alphabetical order. 1Stanford University. Correspondence to: Aditi Raghunathan <[email protected]>, Sang Michael Xie <[email protected]>. ICML 2019 Workshop on Identifying and Understanding Deep Learning Phenomena, Long Beach, California, 2019. Copyright 2019 by the author(s).

Having ruled out a conflict in the objectives and expressivity issues, Table 1 suggests that the tradeoff stems from the worse generalization of adversarial training either due to (i) the statistical properties of the robust objective or (ii) the dynamics of optimizing the robust objective on neural networks.
In an attempt to disentangle optimization and statistics, we ask: does the tradeoff indeed disappear if we rule out optimization issues? After all, from a statistical perspective, the robust objective adds information (constraints on the outputs of perturbations) which should intuitively aid generalization, similar to Lasso regression which enforces sparsity [13].

Contributions. We answer the above question negatively by constructing a learning problem with a convex loss where adversarial training hurts generalization even when the optimal predictor has both optimal standard and robust accuracy. Convexity rules out optimization issues, revealing a fundamental statistical explanation for why adversarial training requires more samples to obtain high standard accuracy. Furthermore, we show that we can eliminate the tradeoff in our constructed problem using the recently-proposed robust self-training [15, 1, 8, 18] on additional unlabeled data.

In an attempt to understand how predictive this example is of practice, we subsample CIFAR-10 and visualize trends in the performance of standard and adversarially trained models with varying training sample sizes. We observe that the gap between the accuracies of standard and adversarial training decreases with larger sample size, mirroring the trends observed in our constructed problem. Recent results from [1] show that, similarly to our constructed setting, robust self-training also helps to mitigate the tradeoff in CIFAR-10.

Standard vs. robust generalization. Recent work [11, 16, 5, 7] has focused on the sample complexity of learning a predictor that has high robust accuracy (robust generalization), a different objective. In contrast, we study the finite sample behavior of adversarially trained predictors on the standard learning objective (standard generalization), and show that adversarial training as a particular training procedure could require more samples to attain high standard accuracy.

# 2. Convex learning problem: the staircase

We construct a learning problem with the following properties. First, fitting the majority of the distribution is statistically easy—it can be done with a simple predictor. Second, perturbations of these majority points are low in probability and require complex predictors to be fit. These two ingredients cause standard estimators to perform better than their adversarially trained robust counterparts with a few samples. Standard training only fits the training points, which can be done with a simple estimator that generalizes well; adversarial training encourages fitting perturbations of the training points, making the estimator complex and generalize poorly.

# 2.1. General setup

We consider mapping x ∈ X ⊆ R to y ∈ R, where (x, y) is a sample from the joint distribution P and conditional densities exist. We denote by Px the marginal distribution on X. We generate the data as y = f*(x) + σν where ν ∼ N(0, 1) and f*: X → R. For an example (x, y), we measure robustness of a predictor with respect to an invariance set B(x) that contains the set of inputs on which the predictor is expected to match the target y.

The central premise of this work is that the optimal predictor is robust. In our construction, we let f* be robust by enforcing the invariance property (see Appendix A)

f(x) = f(x̃), ∀x̃ ∈ B(x). (1)

Given training data consisting of n i.i.d. samples (xi, yi) ∼ P, our goal is to learn a predictor f ∈ F. We assume that the hypothesis class F contains f* and consider the squared loss. Standard training simply minimizes the empirical risk over the training points. Robust training seeks to enforce invariance to perturbations of training points by penalizing the worst-case loss over the invariance set B(xi) with respect to target yi. We consider regularized estimation and obtain the following standard and robust (adversarially trained) estimators for sample size n:

f̂std_n ∈ argmin_{f∈F} Σ_{i=1}^n (f(xi) − yi)² + λ‖f‖²,  (2)

f̂rob_n ∈ argmin_{f∈F} Σ_{i=1}^n max_{x̃i∈B(xi)} (f(x̃i) − yi)² + λ‖f‖².  (3)

We construct a P and f* such that both estimators above converge to f*, but such that the error of the robust estimator f̂rob_n is larger than that of f̂std_n for small sample size n.

# 2.2. Construction

In our construction, we consider linear predictors as "simple" predictors that generalize well and staircase predictors as "complex" predictors that generalize poorly (Figure 1(a)).

Input distribution. In order to satisfy the property that a simple predictor fits most of the distribution, we define f* to be linear on the set Xline ⊂ X, where

Xline = {0, 1, 2, ..., s−1}, Px(Xline) = 1−δ, (4)

for parameters δ ∈ (0, 1] and a positive integer s. Any predictor that fits points in Xline will have low (but not optimal) standard error when δ is small.

Perturbations. We now define the perturbations such that fitting perturbations of the majority of the distribution requires complex predictors. We can obtain a staircase by flattening out the region around the points in Xline locally (Figure 1(a)). This motivates our construction where we treat points in Xline as anchor points and the set X^c_line as local perturbations of these points: x ± ε for x ∈ Xline. This is a simpler version of the commonly studied ℓ∞ perturbations in computer vision. For a point that is not an anchor point, we define B(x) as the invariance set of the closest anchor point ⌊x⌉. Formally, for some ε ∈ (0, 1/2),

B(x) = {⌊x⌉, ⌊x⌉ + ε, ⌊x⌉ − ε}. (5)

Output distribution. For any point in the support X,

f*(x) = m⌊x⌉, ∀x ∈ X, (6)

for some parameter m. Setting the slope as m = 1 makes f* resemble a staircase. Such an f* satisfies the invariance property (1) that ensures that the optimal predictor for standard error is also robust. Note that f*(x) = mx (a simple linear function) when restricted to x in Xline. Note also that the invariance sets B(x) are disjoint. This is in contrast to the example in [19], where any invariant function is also globally constant. Our construction allows a non-trivial robust and accurate estimator. We generate the output by adding Gaussian noise to the optimal predictor f*, i.e., y = f*(x) + σν where ν ∼ N(0, 1).

(a) Slope m = 1 (b) Small sample size (c) Large sample size (d) Slope m = 0

Figure 1. (a): An illustration of our convex problem with slope m = 1, with size of the circles proportional to probability under the data distribution. The dashed blue line shows a simple linear predictor that has low test error but is not robust to perturbations to nearby low-probability points, while the solid orange line shows the complex optimal predictor f* that is both robust and accurate. (b): With small sample size (n = 40), any robust predictor that fits the sets B(x) is forced to be a staircase that generalizes poorly. (c): With large sample size (n = 25000), the training set contains all the points from Xline and the robust predictor is close to f* by enforcing the right invariances.
The standard predictor also has low error, but higher than the robust predictor. (d): An illustration of our convex problem when the slope m = 0. The optimal predictor f* that is robust is a simple linear function. This setting sees no tradeoff for any sample size.

# 2.3. Simulations

We empirically validate the intuition that the staircase problem is sensitive to robust training by simulating training with various sample sizes and comparing the test MSE of the standard and robust estimators (2) and (3). We report final test errors here; trends in generalization gap (difference between train and test error) are nearly identical. See Appendix D for more details.

Figure 2 shows the difference in test errors of the two estimators. For each sample size n, we compare the standard and robust estimators by performing a grid search over regularization parameters λ that individually minimize the test MSE of each estimator. With few samples, most training samples are from Xline and standard training learns a simple linear predictor that fits all of Xline. On the other hand, robust estimators fit the low probability perturbations X^c_line, leading to staircases that generalize poorly. Figure 1(b) visualizes the two estimators for small samples. However, as we increase the size of the training set, the training set contains all points from Xline, and robust estimators also generalize well despite being more complex. Furthermore, in this regime, robust estimators indeed see the expected "regularization" benefit where the robust objective helps fit points in the low probability regions X^c_line, even when they are not yet sampled in the training points. In general, we see that robust training has higher test error with a small sample size, but the difference in the test error of standard and robust estimators decreases as sample size increases, and robust training eventually obtains lower test error.

Another common approach to encoding invariances is data augmentation, where perturbations are sampled from B(x) and added to the dataset. Data augmentation is less demanding than adversarial training, which minimizes loss on the worst-case point within the invariance set. We find that for our staircase example, an estimator trained even with the less demanding data augmentation sees a similar tradeoff with small training sets, due to increased complexity of the augmented estimator.

# 2.4. Robust self-training mostly eliminates the tradeoff

Section 2.3 shows that the gap between the standard errors of robust and standard estimators decreases as training sample size increases. Moreover, if we obtained training points spanning Xline, then the robust estimator (staircase) would also generalize well and have lower error than the standard estimator. Thus, a natural strategy to eliminate the tradeoff is to sample more training points. In fact, we do not need additional labels for the points on Xline—a standard trained estimator fits points on Xline with just a few labels, and can be used to generate labels on additional unlabeled points. Recent works have proposed robust self-training (RST) to leverage unlabeled data for robustness [10, 1, 15, 8, 18]. RST is a robust variant of the popular self-training algorithm for semi-supervised learning [10], which uses a standard estimator trained on a few labels to generate pseudo-labels for unlabeled data as described above. See Appendix C for details on RST. For the staircase problem (m = 1), RST mostly eliminates the tradeoff and achieves similar test error to standard training (while also being robust, see Appendix C.2) as shown in Figure 2.

(a) Staircase, m = 1 (b) WRN40-2 on CIFAR-10 (c) Staircase (m = 1): RST vs. Robust

Figure 2. Difference between test errors (robust - standard) as a function of the # of training samples n. For each n, we choose the best regularization parameter λ for each of robust and standard training and plot the difference. Positive numbers show that the robust estimator has higher MSE than the standard estimator. (a) For the staircase problem with slope m = 1, we see that for small n, test loss of the robust estimator is larger. As n increases, the gap closes, and eventually the robust estimator has smaller MSE. (b) On subsampling CIFAR-10 (ε = 1/255 to 4/255), we see that the gap between test errors (%) of standard and adversarially trained models decreases as the number of samples increases, just like the staircase construction in (a). Extrapolating, the gap should close as we have more samples. (c) Robust self-training (RST), using 1000 additional unlabeled points, achieves comparable test MSE to standard training (with the same amount of labeled data) and mostly eliminates the tradeoff seen in robust training. The shaded regions represent 1 STD.

(a) Staircase, m = 0 (b) Small CNN on MNIST

Figure 3. Difference between test errors (robust - standard) as a function of the # of training samples n. For each n, we choose the best regularization parameter λ for each of robust and standard training and take the difference. Negative numbers mean that robust training has a lower test MSE than standard training. (a) In the staircase problem with slope m = 0, the robust estimator consistently outperforms the standard estimator, showing a regularization benefit. (b) On MNIST, the adversarially trained model has lower test error (%) than the standard model. The difference in test errors is largest for small sample sizes and closes with more training samples. Shaded regions represent 1 STD.

# 3. Experiments on CIFAR-10

We subsample CIFAR-10 by various amounts to study the effect of sample size on the standard test errors of standard and robust models. To train a robust model, we use the adversarial training procedure from [6] against ℓ∞ perturbations of varying sizes (see Figure 2). The gap in the errors of the standard and adversarially trained models decreases as sample size increases, mirroring the trends in the staircase problem. Extrapolating the trends, more training data should eliminate the tradeoff in CIFAR-10. Similarly to the staircase example, [1] showed that robust self-training with additional unlabeled data improves robust accuracy and standard accuracy in CIFAR-10. See Appendix C for more details.

# 4. Adversarial training can also help

One of the key ingredients that causes the tradeoff in the staircase problem is the complexity of robust predictors. If we change our construction such that robust predictors are also simple, we see that adversarial training instead offers a regularization benefit. When m = 0, the optimal predictor (which is robust) is linear (Figure 1(d)). We find that adversarial training has lower standard error by enforcing invariance on B(x), making the robust estimator less sensitive to target noise (Figure 4(a)). Similarly, on MNIST, the adversarially trained model has lower test error than the standard trained model. As we increase the sample size, both standard and adversarially trained models converge to the same small test error. We remark that our observation on MNIST is contrary to that reported in [14], due to a different initialization that led to better optimization (see Appendix Section D.2).

# 5. Conclusion

In this work, we shed some light on the counter-intuitive phenomenon where enforcing invariance respected by the optimal function could actually degrade performance. Being invariant could require complex predictors and consequently more samples to generalize well. Our experiments support that the tradeoff between robustness and accuracy observed in practice is indeed due to insufficient samples and that additional unlabeled data is sufficient to mitigate this tradeoff.
Wide residual networks. In British Machine Vision Conference, 2016. [4] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2015. [5] J. Khim and P. Loh. Adversarial risk bounds for binary arXiv classification via function transformation. preprint arXiv:1810.09519, 2018. [6] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018. [18] R. Zhai, T. Cai, D. He, C. Dan, K. He, J. Hopcroft, and L. Wang. Adversarially robust generalization just requires more unlabeled data. arXiv preprint arXiv:1906.00555, 2019. [19] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan. Theoretically principled trade-off In International between robustness and accuracy. Conference on Machine Learning (ICML), 2019. [7] O. Montasser, S. Hanneke, and N. Srebro. VC classes are adversarially robustly learnable, but only improperly. arXiv preprint arXiv:1902.04217, 2019. [8] A. Najafi, S. Maeda, M. Koyama, and T. Miyato. Ro- bustness to adversarial perturbations in learning from in- complete data. arXiv preprint arXiv:1905.13021, 2019. [9] P. Nakkiran. Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532, 2019. [10] C. Rosenberg, M. Hebert, and H. Schneiderman. Semi-supervised self-training of object detection models. In Proceedings of the Seventh IEEE Workshops on Application of Computer Vision, 2005. Adversarial Training Can Hurt Generalization # A. Consistency # of robust and standard estimators For the classification case, consistency requires label invariance, which is that We show that the invariance condition (restated, (7)) is a suffi- cient condition for the minimizers of the standard and robust objectives under P in the infinite data limit to be the same. f*(x)=f*(@) VEE B(2), @) argmax y p(y | x) = argmax y p(y | ˜x) ∀˜x ∈ B(x), (8) such that the adversary cannot change the label that achieves the maximum but can perturb the distribution. for all x ∈ X . Recall that y = f*(x) + ov; where vj _ N(0, 1), with f* (x) =Ely| a]. Therefore, if f* is in the hypothesis class F, then f* minimizes the standard objective for the square loss. If both fs (2) and f*> (3) converge to the same Bayes optimal f* as n — oo, we say that the two estimators fs and fie are consistent. In this section, we show that the invariance condition (7) implies consistency of fre and fe". The optimal standard classifier here is the Bayes opti- mal classifier {7 = argmax, p(y | x). Assuming that ff =argmax,p(y |x) is in F, then consistency follows by essentially the same argument as in the regression case. Theorem 2. (Classification) Consider the minimizer of the standard population 0-1 loss, {2 = argmin, El¢(f(«),y)] where (f(x), y) = f{argmax; f(z); = yh. Assuming (8) holds, we have that for any f, Bhmaxse p(n) €(F(x).y)] > Elmaxse nia) €(f2(0)-y)} such that f* is also optimal for the robust population 0-1 loss. Intuitively, from (7), since f* is invariant for all x in B(x), the maximum over B(x) in the robust objective is achieved by the unperturbed input x (and also achieved by any other element of B(«)). Hence the standard and robust loss of f* are equal. For any other predictor, the robust loss upper bounds the standard loss, which in turn is an upper bound on the standard loss of f* (since f* is Bayes optimal). 
Therefore f* also obtains optimal robust loss and fs and fib are consistent and converge to f* with infinite data. Proof. Replacing f* with f* and é(f(x), y) with the zero-one loss 1{argmax, f(a); = y} in the proof of Theorem | gives the result. In our staircase problem, from (1), we assume that the target y is generated as follows: y= f*(«x) +o; where vy; ~ IN (0, ), we see that the points within an invariance sets B(x) have the same target distribution (target distribution invariance). Formally, let ¢ be the square loss function, and the population loss be E(«.,y)~p[£(f(x),y)]- In this section, all expectations are taken over the joint distribution P. Theorem 1. (Regression) Consider the en etedtle stan- een squared loss, as = argmin, E{é(f(x),y)] where (f(x), y) = (f(x) — y)?. Assuming (7) holds, we have that for any f, Elmaxze aa) Lf(z), y)} = Elmaxge p(x) ¢(f*(x),y)], such that f* is also optimal for the robust population squared loss. f*()=f*(@) Vee Ba) => p(y|x)=ply|z) Vee B(x), (9) (10) for all x ∈ X . The target invariance condition above implies consistency in both the regression and classification case. # B. Convex staircase example # B.1. Data distribution Proof. Note that the optimal standard model is the Bayes estimator, such that f ∗(x) = E[y | x]. Then by condition (7), f ∗(˜x) = E[y | ˜x] = E[y | x] = f ∗(x) for all ˜x ∈ B(x). Thus the robust objective for f ∗ is =) __»/)2 EL max (f*(x),y)|=E] max (Ely|#]—y) =E[(Ely|2]—y)”] =ElC(f*(2), v) Distribution of “. We focus on a 1-dimensional regres- sion case. Let s be the total number of “stairs” in the staircase problem. Let sg < s be the number of stairs that have a large weight in the data distribution. Define 6 € [0,1] to be the probability of sampling a perturbation point, i.e. 7 € Xfi. which we will choose to be close to zero. The size of the perturbations is € € (0,3) ), which is bounded by 4 5 so that [we] =z, for any x € Kine. The standard deviation of the noise in the targets is 0 > 0. Finally, m € [0,1] is a parameter controlling the slope of the points in Nine. where the first equality follows because f* is the Bayes estimator and the second equality is from (7). N ure that for any ee Elmaxge p(x) l(f (x),y)] > Ele(f(z),y)] = E{¢(f*(2),y)], the theorem statement follows. Let w ∈ ∆s be a distribution over Xline where ∆s is the proba- bility simplex of dimension s. We define the data distribution with the following generative process for one sample x. First, sample a point i from Xline according to the categorical Adversarial Training Can Hurt Generalization distribution described by w, such that i ∼ Categorical(w). Second, sample x by perturbing i with probability δ such that i w.p. 1-6 c= Vi-e wp.d/2 ite wap. 6/2. Note that this is just a formalization of the distribution described in Section 2. The sampled x is in Xline with probability 1 − δ and X c line with probability δ, where we choose δ to be small. In addition, in order to exaggerate the difference between robust and standard estimators for small sample sizes, we set w such that the first s0 stairs have the majority of probability mass. To achieve this, we set the unnormalized probabilities of w as ˆwj = 1/s0 0.01 j < s0 j ≥ s0 where Q; ; = f &'(t);'(t);dt measures smoothness in terms of the second derivative. With respect to the regularized objectives (2) and (3), the norm regularizer is || f ||? =07. 08. We implement the optimization of the standard and robust objectives using the basis described in [3]. 
Distribution of Y. We define the target distribution as (Y | X = x) ∼ N(m⌊x⌉, σ²), where ⌊x⌉ rounds x to the nearest integer. The invariance sets are B(x) = {⌊x⌉ − ε, ⌊x⌉, ⌊x⌉ + ε}. We define the distribution such that for any x, all points in B(x) have the same mean target value m⌊x⌉. See Figure 1 for an illustration. Note that B(x) is defined such that (9) holds, since for any x1, x2 ∈ B(x), ⌊x1⌉ = ⌊x2⌉ and thus p(y | x1) = p(y | x2). The conditional distributions are well defined since p(˜x) > 0 for any ˜x ∈ B(x).

# B.2. Model

Our hypothesis class is the family of cubic B-splines as defined in [3]. Cubic B-splines are piecewise cubic functions, where the endpoints of each cubic function are called the knots. In our example, we fix the knots to be T = [−ε, 0, ε, . . . , (s − 1) − ε, s − 1, (s − 1) + ε], which places a knot on every point on the support of X. This ensures that the family is expressive enough to include f∗, which is any function in F which satisfies f∗(x) = m⌊x⌉ for all x in X. Cubic B-splines can be viewed as a kernel method with kernel feature map Φ : X → R^(3s+2), where s is the number of stairs in the example. For some regularization parameter λ > 0, we optimize the penalized smoothing spline loss function over parameters θ,

ℓ(f_θ(x), y) = (y − f_θ(x))² + λ ∫ (f_θ''(t))² dt (11)
= (y − Φ(x)ᵀθ)² + λ θᵀΩθ, (12)

where Ω_{i,j} = ∫ Φ''(t)_i Φ''(t)_j dt measures smoothness in terms of the second derivative. With respect to the regularized objectives (2) and (3), the norm regularizer is ‖f‖² = θᵀΩθ. We implement the optimization of the standard and robust objectives using the basis described in [3]. The regularization penalty matrix Ω computes second-order finite differences of the parameters θ. Suppose we have n samples of training inputs X = {x1, . . . , xn} and targets y = {y1, . . . , yn} drawn from P. The standard spline objective solves the linear system θ̂_std = (Φ(X)ᵀΦ(X) + λΩ)⁻¹Φ(X)ᵀy, where the i-th row of Φ(X) ∈ R^(n×(3s+2)) is Φ(xi). The standard estimator is then f̂^std_n(x) = Φ(x)ᵀθ̂_std. We solve the robust objective directly as a pointwise maximum of squared losses over the invariance sets (which is still convex) using CVXPY [2].

# B.3. Role of different parameters

To construct an example where robustness hurts generalization, the main parameters needed are that the slope m is large and that the probability δ of drawing samples from perturbation points X^c_line is small. When the slope m is large, the complexity of the true function increases such that good generalization requires more samples. A small δ ensures that a low-norm linear solution has low test error. This example is insensitive to whether there is label noise, meaning that σ = 0 is sufficient to observe that robustness hurts generalization.

If m ≈ 0, then the complexity of the true function is low and we observe that robustness helps generalization. In contrast to the previous case, this example relies on the fact that there is label noise (σ > 0) so that the noise-cancelling effect of robust training improves generalization. In the absence of noise, robustness neither hurts nor helps generalization since both the robust and standard estimators converge to the true function (f∗(x) = 0) with only one sample.
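For concreteness, the standard spline fit described in B.2 can be sketched in a few lines. The feature map below is a generic stand-in rather than the exact cubic B-spline basis of [3], and the penalty follows the second-order finite-difference description of Ω above; the robust fit, which replaces the squared loss with the pointwise maximum over B(x) and is solved with CVXPY, is omitted here.

```python
import numpy as np

def feature_map(x, s):
    # Stand-in polynomial-spline-style features; the paper uses the
    # cubic B-spline basis of [3] with 3s+2 dimensions.
    knots = np.arange(s)
    hinge = np.maximum(x[:, None] - knots[None, :], 0.0) ** 3
    return np.concatenate([np.stack([np.ones_like(x), x], axis=1), hinge], axis=1)

def second_diff_penalty(d):
    # Omega built from second-order finite differences of the parameters theta.
    D = np.zeros((d - 2, d))
    for i in range(d - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D.T @ D

def fit_standard(X, y, s, lam):
    # theta_std = (Phi^T Phi + lam * Omega)^{-1} Phi^T y
    Phi = feature_map(X, s)
    Omega = second_diff_penalty(Phi.shape[1])
    theta = np.linalg.solve(Phi.T @ Phi + lam * Omega, Phi.T @ y)
    return lambda x_new: feature_map(x_new, s) @ theta

# Usage: fit on noisy staircase data and evaluate on the stair points.
rng = np.random.default_rng(0)
s, m, sigma = 10, 1.0, 0.2
X = rng.integers(0, 5, size=40).astype(float)   # mass on the first s0 stairs
y = m * np.round(X) + sigma * rng.standard_normal(40)
f_std = fit_standard(X, y, s, lam=1e-3)
print(f_std(np.arange(s, dtype=float)))
```

The closed-form solve mirrors θ̂_std = (Φ(X)ᵀΦ(X) + λΩ)⁻¹Φ(X)ᵀy above; only the basis differs.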
# B.4. Plots of other values

We show plots for a variety of quantities against the number of samples n, in the m = 1 (robustness hurts) and m = 0 (robustness helps) cases, with all the same parameters as before. For each n, we pick the best regularization parameter λ with respect to standard test MSE individually for robust and standard training.

In both cases, the test MSE and generalization gap (difference between training MSE and test MSE) are almost identical due to robust and standard training having similar training errors.

In the m = 1 case where robustness hurts (Figure 6), robust training finds higher norm estimators for all sample sizes. With enough samples, standard training begins to increase the norm of its solution as it starts to converge to the true function (which is complex), and the robust train MSE starts to drop accordingly.

Figure 4. Left: With small samples, the standard solution may overfit to noise, while adversarial training has a noise cancelling effect. Right: With large samples, both the robust and standard predictors have low test error, but the standard predictor is still more susceptible to noise. (Panels: (a) Small sample (n = 40), (b) Large sample (n = 25000).)

In the m = 0 case where robustness helps (Figure 7), the optimal predictor is the line f(x) = 0, which has 0 norm. The robust estimator has consistently low norm. With small sample size, the standard estimator has low norm but has high test MSE. This happens when the standard estimator is close to linear (has low norm), but the estimator has the wrong slope, causing high test MSE. However, in the infinite data limit, both standard and robust estimators converge to the optimal solution.

# C. Robust self-training algorithm

Figure 5. Robust self-training (RST) improves test robust MSE (not just standard test MSE) over both standard and robust training. For each n, the regularization parameter λ is chosen with respect to the best test MSE over a grid search for each of robust, RST, and standard training. (a) shows that robust self-training improves robust error over robust training. (b) confirms that robust self-training also improves robust test error over standard training. (Panels: (a) Robust training vs. RST, (b) Standard training vs. RST; y-axis: test robust MSE; x-axis: number of labeled samples.)

We describe the robust self-training procedure, which performs robust training on a dataset augmented with unlabeled data. The targets for the unlabeled data are generated from a standard estimator trained on the labeled training data. Since the standard estimator has good standard generalization, the generated targets for the unlabeled data have low error in expectation. Robust training on the augmented dataset seeks to improve both the standard and robust test error of robust training (over just the labeled training data). Intuitively, robust self-training achieves these gains by mimicking the standard estimator on more of the data distribution (by using unlabeled data) while also optimizing the robust objective.

In robust self-training, we are given n samples of training inputs X = {x1, . . . , xn} and targets y = {y1, . . . , yn} drawn from P. Suppose that we have additional m unlabeled samples Xu drawn from Px. Robust self-training uses the following steps for a given regularization λ:

1. Compute the standard estimator f̂^std_n (2) on the labeled data (X, y) with regularization parameter λ.

2. Generate targets yu = f̂^std_n(Xu) by evaluating the standard estimator obtained above on the unlabeled data Xu.
3. Construct an augmented dataset Xaug = X ∪ Xu, yaug = y ∪ yu.

4. Return a robust estimator f̂^rob_n (3) trained with the augmented dataset (Xaug, yaug) as training data. (A short code sketch of this procedure is given at the end of the appendix.)

# C.1. Results on CIFAR-10

We present relevant results from the recent work of [1] on robust self-training applied to CIFAR-10 augmented with unlabeled data in Table 2. The procedure employed in [1] is identical to the procedure described above, using a modified version of adversarial training (TRADES) [19] as the robust estimator.

# C.2. Robust self-training doesn't sacrifice robustness

In Section 2.4, we show that if we have access to additional unlabeled samples from the data distribution, robust self-training (RST) can mitigate the tradeoff in standard error between robust and standard estimators. It is important that we do not sacrifice robustness in order to have better standard error. Figure 5 shows that in the case where robustness hurts generalization in our convex construction (m = 1), RST improves over robust training not only in standard test error (Section 2.4), but also in robust test error. Therefore, by leveraging some unlabeled data, we can recover the standard generalization performance of standard training using RST while simultaneously improving robustness.

                       Robust test   Standard test
Standard training          3.5%          95.2%
Adversarial training      45.8%          87.3%
RST [1]                   62.5%          89.7%

Table 2. Robust and standard accuracies for different training methods. Robust self-training (RST) leverages unlabeled data in addition to the CIFAR-10 training set to see an increase in both standard and robust accuracies over traditional adversarial training. To mitigate the tradeoff between robustness and accuracy, all we need is (possibly large amounts of) unlabeled data.

Initialization and trade-off for MNIST. We note here that the tradeoff for adversarial training reported in [14] is because the adversarially trained model hasn't converged (even after a large number of epochs). Using the Xavier initialization, we get faster convergence with adversarial training and see no drop in clean accuracy at the same level of robust accuracy. Interestingly, standard training is not affected by initialization, while adversarial training is dramatically affected.

# D. Experimental details

# D.1. CIFAR-10

We train Wide ResNet 40-2 models [17] using standard and adversarial training while varying the number of samples in CIFAR-10. We sub-sample CIFAR-10 by factors of {1, 2, 5, 8, 10, 20, 40}. For sub-sample factors 1 to 20, we report results averaged from 2 trials each for standard and adversarial training. For sub-sample factors greater than 20, we average over 5 trials. We train adversarial models under the ℓ∞ attack model with ℓ∞-norm constraints of sizes ε ∈ {1/255, 2/255, 3/255, 4/255} using PGD adversarial training [6]. The models are trained for 200 epochs using minibatched gradient descent with momentum, such that 100% standard training accuracy is achieved for both standard and adversarial models in all cases and > 98% adversarial training accuracy is achieved by adversarially trained models in most cases. We did not include results for subsampling factors greater than 50, since the test accuracies are very low (20-50%). However, we note that for very small sample sizes (subsampling factor 500), the robust estimator can have slightly better test accuracy than the standard estimator.
While this behavior is not captured by our example, we focus on capturing the observation that standard and robust test errors converge with more samples.

# D.2. MNIST

The MNIST dataset consists of 60000 labeled examples of digits. We sub-sample the dataset by factors of {1, 2, 5, 8, 10, 20, 40, 50, 80, 200, 500} and report results for a small 3-layer CNN averaged over 2 trials for each sub-sample factor. All models are trained for 200 epochs and achieve 100% standard training accuracy in all cases. The adversarial models achieve > 99% adversarial training accuracy in all cases. We train the adversarial models under the ℓ∞ attack model with PGD adversarial training and ε = 0.3. For computing the max in each training step, we use 40 steps of PGD, with step size 0.01 (the parameters used in [6]). We use the Adam optimizer. The final robust test accuracy when training with the full training set was 91%.

Figure 6. Plots as the number of samples varies for the case where robustness hurts (m = 1). (Panels: (a) Test MSE, (b) Generalization gap, (c) Squared norm, (d) Robust train MSE, (e) Robust test MSE; series: Robust, Standard; x-axis: number of labeled samples.) For each n, we pick the best regularization parameter λ with respect to standard test MSE individually for robust and standard training. (a), (b) The standard estimator has lower test MSE, but the gap shrinks with more samples. Note that the trend in test MSE is almost identical to the generalization gap. (c) The robust estimator has higher norm throughout training due to learning a more complex estimator. The norm of the standard estimator increases as sample size increases as it starts to converge to the true function, which is complex. (d), (e) The robust train and test MSE is smaller for the robust estimator throughout. With larger sample size, the standard estimator improves in robust (train and test) MSE as it converges to the true function, which is robust. Shaded regions are 1 STD.

Figure 7. Plots as the number of samples varies for the case where robustness helps (m = 0). (Panels: (a) Test MSE, (b) Generalization gap (train MSE - test MSE), (c) Squared norm, (d) Training robust MSE, (e) Test robust MSE; series: Robust, Standard; x-axis: number of labeled samples.) For each n, we pick the best regularization parameter λ with respect to standard test MSE individually for robust and standard training. (a), (b) The robust estimator has lower test MSE, and the gap shrinks with more samples. Note that the trend in test MSE is almost identical to the generalization gap. (c) The robust estimator has consistent norm throughout due to the noise-cancelling behavior of optimizing the robust objective. While the standard estimator has low norm for small samples, it has high test MSE due to finding a low norm (close to linear) solution with the wrong slope. (d), (e) The robust train and test MSE is smaller for the robust estimator throughout. Shaded regions are 1 STD.
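The robust self-training procedure of Appendix C, referenced after step 4 above, can be sketched as follows. The feature map and the two fitting routines are simple stand-ins chosen so that the sketch runs end to end: the standard fit is a ridge-style penalized least squares, and the robust fit is approximated here by training on all perturbations in B(x) rather than by solving the exact max-loss objective with CVXPY as in the paper.

```python
import numpy as np

def features(x):
    # Simple stand-in feature map (not the cubic B-spline basis of [3]).
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

def ridge_fit(X, y, lam):
    Phi = features(X)
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return lambda x_new: features(x_new) @ theta

def fit_standard(X, y, lam):
    return ridge_fit(X, y, lam)

def fit_robust(X, y, lam, eps=0.25):
    # Crude surrogate for robust training: fit on every perturbation in B(x).
    X_aug = np.concatenate([X - eps, X, X + eps])
    y_aug = np.concatenate([y, y, y])
    return ridge_fit(X_aug, y_aug, lam)

def robust_self_training(X, y, X_unlabeled, lam):
    # 1. Standard estimator on the labeled data.
    f_std = fit_standard(X, y, lam)
    # 2. Pseudo-targets for the unlabeled data.
    y_u = f_std(X_unlabeled)
    # 3. Augmented dataset.
    X_aug = np.concatenate([X, X_unlabeled])
    y_aug = np.concatenate([y, y_u])
    # 4. Robust estimator on the augmented dataset.
    return fit_robust(X_aug, y_aug, lam)

# Usage on synthetic staircase-like data.
rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=30).astype(float)
y = np.round(X) + 0.2 * rng.standard_normal(30)
X_u = rng.integers(0, 10, size=300).astype(float)
f_rst = robust_self_training(X, y, X_u, lam=1e-2)
print(f_rst(np.arange(10.0)))
```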
{ "id": "1905.13736" }
1906.05909
Stand-Alone Self-Attention in Vision Models
Convolutions are a fundamental building block of modern computer vision systems. Recent approaches have argued for going beyond convolutions in order to capture long-range dependencies. These efforts focus on augmenting convolutional models with content-based interactions, such as self-attention and non-local means, to achieve gains on a number of vision tasks. The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions. In developing and testing a pure self-attention vision model, we verify that self-attention can indeed be an effective stand-alone layer. A simple procedure of replacing all instances of spatial convolutions with a form of self-attention applied to ResNet model produces a fully self-attentional model that outperforms the baseline on ImageNet classification with 12% fewer FLOPS and 29% fewer parameters. On COCO object detection, a pure self-attention model matches the mAP of a baseline RetinaNet while having 39% fewer FLOPS and 34% fewer parameters. Detailed ablation studies demonstrate that self-attention is especially impactful when used in later layers. These results establish that stand-alone self-attention is an important addition to the vision practitioner's toolbox.
http://arxiv.org/pdf/1906.05909
Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, Jonathon Shlens
cs.CV
null
null
cs.CV
20190613
20190613
9 1 0 2 n u J 3 1 ] V C . s c [ 1 v 9 0 9 5 0 . 6 0 9 1 : v i X r a # Stand-Alone Self-Attention in Vision Models # Prajit Ramachandran∗ # Niki Parmar∗ # Ashish Vaswani∗ # Irwan Bello # Anselm Levskaya† # Jonathon Shlens Google Research, Brain Team {prajit, nikip, avaswani}@google.com # Abstract Convolutions are a fundamental building block of modern computer vision systems. Recent approaches have argued for going beyond convolutions in order to capture long-range dependencies. These efforts focus on augmenting convolutional models with content-based interactions, such as self-attention and non-local means, to achieve gains on a number of vision tasks. The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions. In developing and testing a pure self-attention vision model, we verify that self-attention can indeed be an effective stand-alone layer. A simple procedure of replacing all instances of spatial convolutions with a form of self-attention applied to ResNet model produces a fully self-attentional model that outperforms the baseline on ImageNet classification with 12% fewer FLOPS and 29% fewer parameters. On COCO object detection, a pure self-attention model matches the mAP of a baseline RetinaNet while having 39% fewer FLOPS and 34% fewer parameters. Detailed ablation studies demonstrate that self-attention is especially impactful when used in later layers. These results establish that stand-alone self-attention is an important addition to the vision practitioner’s toolbox. # Introduction Digital image processing arose from the recognition that handcrafted linear filters applied convolu- tionally to pixelated imagery may subserve a large variety of applications [1]. The success of digital image processing as well as biological considerations [2, 3] inspired early practitioners of neural networks to exploit convolutional representations in order to provide parameter-efficient architectures for learning representations on images [4, 5]. The advent of large datasets [6] and compute resources [7] made convolution neural networks (CNNs) the backbone for many computer vision applications [8–10]. The field of deep learning has in turn largely shifted toward the design of architectures of CNNs for improving the performance on image recognition [11–16], object detection [17–19] and image segmentation [20–22]. The translation equivariance property of convolutions has provided a strong motivation for adopting them as a building block for operating on images [23, 24]. However, capturing long range interactions for convolutions is challenging because of their poor scaling properties with respect to large receptive fields. ∗Denotes equal contribution. Ordering determined by random shuffle. †Work done as a member of the Google AI Residency Program. Preprint. Under review. The problem of long range interactions has been tackled in sequence modeling through the use of attention. Attention has enjoyed rich success in tasks such as language modeling [25, 26], speech recognition [27, 28] and neural captioning [29]. Recently, attention modules have been employed in discriminative computer vision models to boost the performance of traditional CNNs. Most notably, a channel-based attention mechanism termed Squeeze-Excite may be applied to selectively modulate the scale of CNN channels [30, 31]. 
Likewise, spatially-aware attention mechanisms have been used to augment CNN architectures to provide contextual information for improving object detection [32] and image classification [33–35]. These works have used global attention layers as an add-on to existing convolutional models. This global form attends to all spatial locations of an input, limiting its usage to small inputs which typically require significant downsampling of the original image.

In this work, we ask the question if content-based interactions can serve as the primary primitive of vision models instead of acting as an augmentation to convolution. To this end, we develop a simple local self-attention layer that can be used for both small and large inputs. We leverage this stand-alone attention layer to build a fully attentional vision model that outperforms the convolutional baseline for both image classification and object detection while being parameter and compute efficient. Furthermore, we conduct a number of ablations to better understand stand-alone attention. We hope that this result will spur new research directions focused on exploring content-based interactions as a mechanism for improving vision models.

# 2 Background

# 2.1 Convolutions

Convolutional neural networks (CNNs) are typically employed with small neighborhoods (i.e. kernel sizes) to encourage the network to learn local correlation structures within a particular layer. Given an input x ∈ R^(h×w×din) with height h, width w, and input channels din, a local neighborhood Nk around a pixel xij is extracted with spatial extent k, resulting in a region with shape k × k × din (see Figure 1). Given a learned weight matrix W ∈ R^(k×k×dout×din), the output yij ∈ R^dout for position ij is defined by spatially summing the product of depthwise matrix multiplications of the input values:

yij = Σ_{a,b ∈ Nk(i,j)} W_{i−a, j−b} x_{ab} (1)

where Nk(i, j) = {a, b : |a − i| < k/2, |b − j| < k/2} (see Figure 2). Importantly, CNNs employ weight sharing, where W is reused for generating the output for all pixel positions ij. Weight sharing enforces translation equivariance in the learned representation and consequently decouples the parameter count of the convolution from the input size.

Figure 1: An example of a local window around i = 3, j = 3 (one-indexed) with spatial extent k = 3.

Figure 2: An example of a 3 × 3 convolution. The output is the inner product between the local window and the learned weights.

A wide array of machine learning applications have leveraged convolutions to achieve competitive results including text-to-speech [36] and generative sequence models [37, 38]. Several efforts have reformulated convolutions to improve the predictive performance or the computational efficiency of a model. Notably, depthwise-separable convolutions provide a low-rank factorization of spatial and channel interactions [39–41]. Such factorizations have allowed for the deployment of modern CNNs on mobile and edge computing devices [42, 43]. Likewise, relaxing translation equivariance has been explored in locally connected networks for various vision applications [44].

# 2.2 Self-Attention

Attention was introduced by [45] for the encoder-decoder in a neural sequence transduction model to allow for content-based summarization of information from a variable length source sentence. The ability of attention to learn to focus on important regions within a context has made it a critical component in neural transduction models for several modalities [26, 29, 27].
Using attention as a primary mechanism for representation learning has seen widespread adoption in deep learning after [25], which entirely replaced recurrence with self-attention. Self-attention is defined as attention applied to a single context instead of across multiple contexts (in other words, the query, keys, and values, as defined later in this section, are all extracted from the same context). The ability of self-attention to directly model long-distance interactions and its parallelizability, which leverages the strengths of modern hardware, has led to state-of-the-art models for various tasks [46–51].

An emerging theme of augmenting convolution models with self-attention has yielded gains in several vision tasks. [32] show that self-attention is an instantiation of non-local means [52] and use it to achieve gains in video classification and object detection. [53] also show improvements on image classification and achieve state-of-the-art results on video action recognition tasks with a variant of non-local means. Concurrently, [33] also see significant gains in object detection and image classification through augmenting convolutional features with global self-attention features. This paper goes beyond [33] by removing convolutions and employing local self-attention across the entirety of the network. Another concurrent work [35] explores a similar line of thinking by proposing a new content-based layer to be used across the model. This approach is complementary to our focus on directly leveraging existing forms of self-attention for use across the vision model.

We now describe a stand-alone self-attention layer that can be used to replace spatial convolutions and build a fully attentional model. The attention layer is developed with a focus on simplicity by reusing innovations explored in prior works, and we leave it up to future work to develop novel attentional forms.

Similar to a convolution, given a pixel xij ∈ R^din, we first extract a local region of pixels in positions ab ∈ Nk(i, j) with spatial extent k centered around xij, which we call the memory block. This form of local attention differs from prior work exploring attention in vision which have performed global (i.e., all-to-all) attention between all pixels [32, 33]. Global attention can only be used after significant spatial downsampling has been applied to the input because it is computationally expensive, which prevents its usage across all layers in a fully attentional model. Single-headed attention for computing the pixel output yij ∈ R^dout is then computed as follows (see Figure 3):

yij = Σ_{a,b ∈ Nk(i,j)} softmax_ab(qijᵀ kab) vab (2)

where the queries qij = WQ xij, keys kab = WK xab, and values vab = WV xab are linear transformations of the pixel in position ij and the neighborhood pixels. softmax_ab denotes a softmax applied to all logits computed in the neighborhood of ij. WQ, WK, WV ∈ R^(dout×din) are all learned transforms. While local self-attention aggregates spatial information over neighborhoods similar to convolutions (Equation 1), the aggregation is done with a convex combination of value vectors with mixing weights (softmax_ab(·)) parametrized by content interactions. This computation is repeated for every pixel ij. In practice, multiple attention heads are used to learn multiple distinct representations of the input.
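For concreteness, a minimal NumPy sketch of the single-headed local self-attention in Equation (2) is given below; the zero padding at image borders and the random initialization are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def local_self_attention(x, WQ, WK, WV, k):
    """Single-headed local self-attention (Equation 2).

    x: (H, W, d_in) feature map; WQ, WK, WV: (d_out, d_in); k: spatial extent.
    Zero-padding at the borders is an illustrative choice, not from the paper.
    """
    H, W, d_in = x.shape
    d_out = WQ.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    y = np.zeros((H, W, d_out))
    for i in range(H):
        for j in range(W):
            q = WQ @ x[i, j]                                   # query for pixel ij
            nbhd = xp[i:i + k, j:j + k].reshape(k * k, d_in)   # memory block N_k(i, j)
            keys = nbhd @ WK.T                                 # (k*k, d_out)
            values = nbhd @ WV.T                               # (k*k, d_out)
            logits = keys @ q                                  # q_ij^T k_ab for each ab
            attn = np.exp(logits - logits.max())
            attn /= attn.sum()                                 # softmax over the neighborhood
            y[i, j] = attn @ values                            # convex combination of values
    return y

# Usage with random weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))
WQ, WK, WV = (rng.standard_normal((32, 16)) * 0.1 for _ in range(3))
print(local_self_attention(x, WQ, WK, WV, k=3).shape)  # (8, 8, 32)
```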
Multi-head attention works by partitioning the pixel features xij depthwise into N groups x^n_ij ∈ R^(din/N), computing single-headed attention on each group separately as above with different transforms W^n_Q, W^n_K, W^n_V ∈ R^(dout/N × din/N) per head, and then concatenating the output representations into the final output yij ∈ R^dout.

Figure 3: An example of a local attention layer over spatial extent of k = 3. (Legend: solid line denotes matrix multiplication, dashed line denotes a learned transform.)

Figure 4: An example of relative distance computation. The relative distances are computed with respect to the position of the highlighted pixel. The format of distances is row offset, column offset.

As currently framed, no positional information is encoded in attention, which makes it permutation equivariant, limiting expressivity for vision tasks. Sinusoidal embeddings based on the absolute position of pixels in an image (ij) can be used [25], but early experimentation suggested that using relative positional embeddings [51, 46] results in significantly better accuracies. Instead, attention with 2D relative position embeddings, relative attention, is used. Relative attention starts by defining the relative distance of ij to each position ab ∈ Nk(i, j). The relative distance is factorized across dimensions, so each element ab ∈ Nk(i, j) receives two distances: a row offset a − i and a column offset b − j (see Figure 4). The row and column offsets are associated with embeddings r_{a−i} and r_{b−j} respectively, each with dimension ½ dout. The row and column offset embeddings are concatenated to form r_{a−i,b−j}. This spatial-relative attention is now defined as

yij = Σ_{a,b ∈ Nk(i,j)} softmax_ab(qijᵀ kab + qijᵀ r_{a−i,b−j}) vab (3)

Thus, the logit measuring the similarity between the query and an element in Nk(i, j) is modulated both by the content of the element and the relative distance of the element from the query. Note that by infusing relative position information, self-attention also enjoys translation equivariance, similar to convolutions.

The parameter count of attention is independent of the size of spatial extent, whereas the parameter count for convolution grows quadratically with spatial extent. The computational cost of attention also grows slower with spatial extent compared to convolution with typical values of din and dout. For example, if din = dout = 128, a convolution layer with k = 3 has the same computational cost as an attention layer with k = 19.

# 3 Fully Attentional Vision Models

Given a local attention layer as a primitive, the question is how to construct a fully attentional architecture. We achieve this in two steps:

# 3.1 Replacing Spatial Convolutions

A spatial convolution is defined as a convolution with spatial extent k > 1. This definition excludes 1 × 1 convolutions, which may be viewed as a standard fully connected layer applied to each pixel independently.³ This work explores the straightforward strategy of creating a fully attentional vision model: take an existing convolutional architecture and replace every instance of a spatial convolution with an attention layer. A 2 × 2 average pooling with stride 2 operation follows the attention layer whenever spatial downsampling is required.

³Many deep learning libraries internally translate a 1 × 1 convolution to a simple matrix multiplication.
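Since the attention layer used in this replacement is the relative-position form of Equation (3), the sketch below shows how the content logits are augmented with relative-position logits for a single query pixel; the embedding shapes and initialization are illustrative assumptions.

```python
import numpy as np

def relative_attention_pixel(x, i, j, WQ, WK, WV, row_emb, col_emb, k):
    """Output for one pixel under Equation (3): logits are q^T k + q^T r_{a-i,b-j}.

    row_emb, col_emb: (k, d_out // 2) embeddings indexed by offset + k // 2.
    Assumes the k x k window around (i, j) lies inside the image (no padding).
    """
    d_in = x.shape[-1]
    q = WQ @ x[i, j]
    pad = k // 2
    nbhd = x[i - pad:i + pad + 1, j - pad:j + pad + 1].reshape(k * k, d_in)
    keys = nbhd @ WK.T
    values = nbhd @ WV.T
    # r_{a-i, b-j}: concatenation of a row-offset and a column-offset embedding.
    offsets = [(a, b) for a in range(-pad, pad + 1) for b in range(-pad, pad + 1)]
    rel = np.stack([np.concatenate([row_emb[a + pad], col_emb[b + pad]])
                    for a, b in offsets])
    logits = keys @ q + rel @ q
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    return attn @ values

# Usage with random parameters (illustrative sizes).
rng = np.random.default_rng(0)
k, d_in, d_out = 3, 16, 32
x = rng.standard_normal((8, 8, d_in))
WQ, WK, WV = (rng.standard_normal((d_out, d_in)) * 0.1 for _ in range(3))
row_emb = rng.standard_normal((k, d_out // 2)) * 0.1
col_emb = rng.standard_normal((k, d_out // 2)) * 0.1
print(relative_attention_pixel(x, 4, 4, WQ, WK, WV, row_emb, col_emb, k).shape)  # (32,)
```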
This work applies the transform on the ResNet family of architectures [15]. The core building block of a ResNet is a bottleneck block with a structure of a 1 × 1 down-projection convolution, a 3 × 3 spatial convolution, and a 1 × 1 up-projection convolution, followed by a residual connection between the input of the block and the output of the last convolution in the block. The bottleneck block is repeated multiple times to form the ResNet, with the output of one bottleneck block being the input of the next bottleneck block. The proposed transform swaps the 3 × 3 spatial convolution with a self-attention layer as defined in Equation 3. All other structure, including the number of layers and when spatial downsampling is applied, is preserved. This transformation strategy is simple but possibly suboptimal. Crafting the architecture with attention as a core component, such as with architecture search [54], holds the promise of deriving better architectures.

# 3.2 Replacing the Convolutional Stem

The initial layers of a CNN, sometimes referred to as the stem, play a critical role in learning local features such as edges, which later layers use to identify global objects. Due to input images being large, the stem typically differs from the core block, focusing on lightweight operations with spatial downsampling [11, 15]. For example, in a ResNet, the stem is a 7 × 7 convolution with stride 2 followed by 3 × 3 max pooling with stride 2.

At the stem layer, the content is comprised of RGB pixels that are individually uninformative and heavily spatially correlated. This property makes learning useful features such as edge detectors difficult for content-based mechanisms such as self-attention. Our early experiments verify that using the self-attention form described in Equation 3 in the stem underperforms compared to using the convolution stem of ResNet.

The distance based weight parametrization of convolutions allows them to easily learn edge detectors and other local features necessary for higher layers. To bridge the gap between convolutions and self-attention while not significantly increasing computation, we inject distance based information in the pointwise 1 × 1 convolution (WV) through spatially-varying linear transformations. The new value transformation is v_ab = (Σ_m p(a, b, m) W^m_V) x_ab, where multiple value matrices W^m_V are combined through a convex combination of factors that are a function of the position of the pixel in its neighborhood, p(a, b, m). The position dependent factors are similar to convolutions, which learn scalar weights dependent on the pixel location in a neighborhood. The stem is then comprised of the attention layer with spatially aware value features followed by max pooling. For simplicity, the attention receptive field aligns with the max pooling window. More details on the exact formulation of p(a, b, m) are given in the appendix.

# 4 Experiments

# 4.1 ImageNet Classification

Setup. We perform experiments on the ImageNet classification task [55], which contains 1.28 million training images and 50000 test images. The procedure described in Section 3.1 of replacing the spatial convolution layer with a self-attention layer inside each bottleneck block of a ResNet-50 [15] model is used to create the attention model. The multi-head self-attention layer uses a spatial extent of k = 7 and 8 attention heads. The position-aware attention stem as described above is used. The stem performs self-attention within each 4 × 4 spatial block of the original image, followed by batch normalization and a 4 × 4 max pool operation.
Exact hyperparameters can be found in the appendix.

To study the behavior of these models with different computational budgets, we scale the model either by width or depth. For width scaling, the base width is linearly multiplied by a given factor across all layers. For depth scaling, a given number of layers are removed from each layer group. There are 4 layer groups, each with multiple layers operating on the same spatial dimensions. Groups are delineated by spatial downsampling. The 38 and 26 layer models remove 1 and 2 layers respectively from each layer group compared to the 50 layer model.

Results. Table 1 and Figure 5 show the results of the full attention variant compared with the convolution baseline. Compared to the ResNet-50 baseline, the full attention variant achieves 0.5% higher classification accuracy while having 12% fewer floating point operations (FLOPS)⁴ and 29% fewer parameters. Furthermore, this performance gain is consistent across most model variations generated by both depth and width scaling.

                        ResNet-26                       ResNet-38                       ResNet-50
                        FLOPS (B)  Params (M)  Acc. (%)  FLOPS (B)  Params (M)  Acc. (%)  FLOPS (B)  Params (M)  Acc. (%)
Baseline                   4.7       13.7        74.5       6.5       19.6        76.2       8.2       25.6         –
Conv-stem + Attention      4.5       10.3        75.8       5.7       14.1        77.1       7.0       18.0         –
Full Attention             4.7       10.3        74.8       6.0       14.1        76.9       7.2       18.0         –

Table 1: ImageNet classification results for a ResNet network with different depths. Baseline is a standard ResNet, Conv-stem + Attention uses spatial convolution in the stem and attention everywhere else, and Full Attention uses attention everywhere including the stem. The attention models outperform the baseline across all depths while having 12% fewer FLOPS and 29% fewer parameters.

Figure 5: Comparing parameters and FLOPS against accuracy on ImageNet classification across a range of network widths for ResNet-50. Attention models have fewer parameters and FLOPS while improving upon the accuracy of the baseline. (Panels: Parameters vs. Accuracy and FLOPS vs. Accuracy; series: Baseline, Conv-stem Attention, Full Attention.)

# 4.2 COCO Object Detection

Setup. In this section, we evaluate attention models on the COCO object detection task [56] using the RetinaNet architecture [18]. RetinaNet is an object detection model that consists of a backbone image classification network followed by a Feature Pyramid Network (FPN) [57] and two output networks known as detection heads. We experiment with making the backbone and/or the FPN and detection heads fully attentional. The backbone models are the same models described in Section 4.1. The details of how the FPN and detection heads are made fully attentional are provided in the appendix.

Results. Table 2 shows the object detection results. Using an attention-based backbone in the RetinaNet matches the mAP of using the convolutional backbone but contains 22% fewer parameters.
Furthermore, employing attention across all parts of the model including the backbone, FPN, and detection heads matches the mAP of the baseline RetinaNet while using 34% fewer parameters and 39% fewer FLOPS. These results demonstrate the efficacy of stand-alone attention across multiple vision tasks.

Detection Heads + FPN   Backbone                FLOPS (B)  Params (M)  mAPcoco / 50 / 75    mAPs / m / l
Convolution             Baseline                   182        33.4     36.5 / 54.3 / 39.0   18.3 / 40.6 / 51.7
Convolution             Conv-stem + Attention      173        25.9     36.8 / 54.6 / 39.3   18.4 / 41.1 / 51.7
Convolution             Full Attention             173        25.9     36.2 / 54.0 / 38.7   17.5 / 40.3 / 51.7
Attention               Conv-stem + Attention      111        22.0     36.6 / 54.3 / 39.1   19.0 / 40.7 / 51.1
Attention               Full Attention             110        22.0     36.6 / 54.5 / 39.2   18.5 / 40.6 / 51.6

Table 2: Object detection on COCO dataset with RetinaNet [18]. Mean Average Precision (mAP) is reported at three different IoU values and for three different object sizes (small, medium, large). The fully attentional models achieve similar mAP as the baseline while having up to 39% fewer FLOPS and 34% fewer parameters.

Conv Groups   Attention Groups   FLOPS (B)  Params (M)  Top-1 Acc. (%)
-             1, 2, 3, 4            7.0        18.0         80.2
1             2, 3, 4               7.3        18.1         80.7
1, 2          3, 4                  7.5        18.5         80.7
1, 2, 3       4                     8.0        20.8         80.2
1, 2, 3, 4    -                     8.2        25.6         79.5
2, 3, 4       1                     7.9        25.5         79.7
3, 4          1, 2                  7.8        25.0         79.6
4             1, 2, 3               7.2        22.7         79.9

Table 3: Modifying which layer groups use which primitive. Accuracies computed on validation set. The best performing models use convolutions for early groups and attention for later groups.

Spatial Extent (k × k)   FLOPS (B)  Top-1 Acc. (%)
3 × 3                       6.6        76.4
5 × 5                       6.7        77.2
7 × 7                       7.0        77.4
9 × 9                       7.3        77.7
11 × 11                     7.7        77.6

Table 4: Varying the spatial extent k. Parameter count is constant across all variations. Small k perform poorly, but the improvements of larger k plateau off.

# 4.3 Where is stand-alone attention most useful?

The impressive performance of fully attentional models verifies that stand-alone attention is a viable primitive for vision models. In this section, we study which parts of the network benefit the most from stand-alone attention.

Stem. First, we compare the performance of the attention stem against the convolution stem used in ResNet. All other spatial convolutions are replaced with stand-alone attention. Tables 1 and 2 and Figure 5 show the results on ImageNet classification and COCO object detection. For classification, the convolution stem consistently matches or outperforms the attention stem. For object detection, the convolution stem performs better when the detection heads and FPN are also convolutional, but performs similarly when the entire rest of the network is fully attentional. These results suggest that convolutions consistently perform well when used in the stem.

Full network. Next, we experiment with using convolution and stand-alone attention in different layer groups in a ResNet with a convolution stem. Table 3 shows that the best performing models use convolutions in the early groups and attention in the later groups. These models are also similar in terms of FLOPS and parameters to the fully attentional model. In contrast, when attention is used in the early groups and convolutions are used in the later groups, the performance degrades despite a large increase in the parameter count. This suggests that convolutions may better capture low level features while stand-alone attention layers may better integrate global information.
Taken together, these results suggest that vision practitioners should focus on developing strategies of designing architectures that combine the comparative advantages of convolution and stand-alone attention.

⁴Some prior works define a FLOP as a single atomic Multiply-Add, whereas we treat the Multiply and Add as 2 FLOPS. This causes a 2× discrepancy in the reported number.

Positional Encoding Type   FLOPS (B)  Params (M)  Top-1 Acc. (%)
none                          6.9        18.0         77.6
absolute                      6.9        18.0         78.2
relative                      7.0        18.0         80.2

Table 5: The effect of changing the positional encoding type for attention. Accuracies computed on the validation set. Relative encodings significantly outperform other strategies.

Attention type   FLOPS (B)  Params (M)  Top-1 Acc. (%)
q · r               6.1        16.7         76.9
q · k + q · r       7.0        18.0         77.4

Table 6: The effect of removing the q · k interactions in attention. Using just q · r interactions only drops accuracy by 0.5%.

Attention Stem Type              FLOPS (B)  Top-1 Acc. (%)
stand-alone                         7.1        76.2
spatial convolution for values      7.4        77.2
spatially aware values              7.2        77.6

Table 7: Ablating the form of the attention stem. Spatially-aware value attention outperforms both stand-alone attention and values generated by a spatial convolution.

# 4.4 Which components are important in attention?

This section presents ablations designed to understand the contributions of the various components in the local attention layer. Unless specified, all attention models in the ablations use the convolution stem.

# 4.4.1 Effect of spatial extent of self-attention

The value of the spatial extent k controls the size of the region each pixel can attend to. Table 4 studies the effect of varying the spatial extent. While using small k, such as k = 3, has a large negative impact on performance, the improvements of using a larger k plateau around k = 11. The exact plateau value likely depends on specific settings of hyperparameters such as the feature size and number of attention heads used.

# 4.4.2 Importance of positional information

Table 5 ablates the different types of positional encodings that can be used: no positional encoding, a sinusoidal encoding dependent on the absolute position of a pixel [25], and relative position encodings. Using any notion of positional encoding is beneficial over using none, but the type of positional encoding is also important. Relative position encodings perform 2% better than absolute encodings. Furthermore, Table 6 demonstrates the important role of the content-relative interactions (q · r) in attention. Removing the content-content (q · k) interactions and just using the content-relative interactions drops the accuracy by only 0.5%. The importance of positional information suggests that future work may improve attention by exploring different parameterizations and usages of positional information.

# 4.4.3 Importance of spatially-aware attention stem

Table 7 compares using stand-alone attention in the stem with the attention stem with spatially-aware values proposed in Section 3.2. The proposed attention stem outperforms stand-alone attention by 1.4% despite having a similar number of FLOPS, validating the utility of modifying attention for use in the stem. Furthermore, applying a spatial convolution to the values instead of a spatially-aware mixture of point-wise transformations proposed in Section 3.2 incurs more FLOPS and performs slightly worse. Future work can focus on unifying the spatially-aware attention used in the stem with the attention used in the main trunk of the network.
8 # 5 Discussion In this work, we verified that content-based interactions can indeed serve as the primary primitive of vision models. A fully attentional network based off of the proposed stand-alone local self-attention layer achieves competitive predictive performance on ImageNet classification and COCO object detection tasks while requiring fewer parameters and floating point operations than the corresponding convolution baselines. Furthermore, ablations show that attention is especially effective in the later parts of the network. We see several opportunities for improving the performance of these networks. First, the attention mechanism may be improved by developing better methods for capturing geometries [58, 59]. Second, the architectures employed for image classification and object detection were developed by applying a simple transformation to models designed for the convolutional primitive [13, 19]. It may be possible to achieve improvements by specifically searching for the architecture with an attention layer as a component in the design search space [31, 16, 21, 60]. Finally, additional work on proposing new attention forms that can capture low level features can make attention effective in the early layers of networks [61, 62]. Although the training efficiency and computational demand of an attention based architecture is favorable to a traditional convolution, the resulting network is slower in wall-clock time. The reason for this discrepancy is the lack of optimized kernels available on various hardware accelerators. In principle, depending on the degree to which the field deems that attention provides a viable path, it may be possible to significantly speed up the wall-clock time for training and inference accordingly. While this work primarily focuses on content-based interactions to establish their virtue for vision tasks, in the future, we hope to unify convolution and self-attention to best combine their unique advantages. Given the success of content-based interactions on core computer vision tasks, we expect that future work may explore how attention could be applied to other vision tasks such as semantic segmentation [63], instance segmentation [64], keypoint detection [65], human pose estimation [66, 67] and other tasks currently addressed with convolutional neural networks. # Acknowledgments We thank Blake Hechtman, Justin Gilmer, Pieter-jan Kindermans, Quoc Le, Samy Bengio, and Shibo Wang for fruitful discussions and assistance with implementations as well as the larger Google Brain team for support and assistance. # References [1] R. C. Gonzalez, R. E. Woods, et al., “Digital image processing [m],” Publishing house of electronics industry, vol. 141, no. 7, 2002. [2] K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” Biological cybernetics, vol. 36, no. 4, pp. 193–202, 1980. [3] K. Fukushima, “Neocognitron: A hierarchical neural network capable of visual pattern recogni- tion,” Neural networks, vol. 1, no. 2, pp. 119–130, 1988. [4] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Backpropagation applied to handwritten zip code recognition,” Neural computation, vol. 1, no. 4, pp. 541–551, 1989. [5] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, 1998. [6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. 
Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009. [7] J. Nickolls and W. J. Dally, “The gpu computing era,” IEEE micro, vol. 30, no. 2, pp. 56–69, 2010. 9 [8] A. Krizhevsky, “Learning multiple layers of features from tiny images,” tech. rep., University of Toronto, 2009. [9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing System, 2012. [10] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, p. 436, 2015. [11] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in IEEE Conference on Computer Vision and Pattern Recognition, 2015. [12] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the Inception architec- ture for computer vision,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016. [13] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in European Conference on Computer Vision, 2016. [14] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. [15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016. [16] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning transferable architectures for scalable image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8697–8710, 2018. [17] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. [18] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” in Proceedings of the IEEE international conference on computer vision, pp. 2980–2988, 2017. [19] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, pp. 91–99, 2015. [20] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 4, pp. 834–848, 2018. [21] L.-C. Chen, M. Collins, Y. Zhu, G. Papandreou, B. Zoph, F. Schroff, H. Adam, and J. Shlens, “Searching for efficient multi-scale architectures for dense image prediction,” in Advances in Neural Information Processing Systems, pp. 8713–8724, 2018. [22] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” in Proceedings of the IEEE international conference on computer vision, pp. 2961–2969, 2017. [23] E. P. Simoncelli and B. A. Olshausen, “Natural image statistics and neural representation,” Annual review of neuroscience, vol. 24, no. 1, pp. 1193–1216, 2001. [24] D. L. Ruderman and W. Bialek, “Statistics of natural images: Scaling in the woods,” in Advances in neural information processing systems, pp. 551–558, 1994. [25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. 
Polosukhin, “Attention is all you need,” in Advances in Neural Information Processing Systems, pp. 5998–6008, 2017. 10 [26] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al., “Google’s neural machine translation system: Bridging the gap between human and machine translation,” arXiv preprint arXiv:1609.08144, 2016. [27] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in Advances in neural information processing systems, pp. 577–585, 2015. [28] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4960–4964, IEEE, 2016. [29] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in International conference on machine learning, pp. 2048–2057, 2015. [30] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. [31] M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q. V. Le, “Mnasnet: Platform-aware neural architecture search for mobile,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. [32] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803, 2018. [33] I. Bello, B. Zoph, A. Vaswani, J. Shlens, and Q. V. Le, “Attention augmented convolutional networks,” CoRR, vol. abs/1904.09925, 2019. [34] J. Hu, L. Shen, S. Albanie, G. Sun, and A. Vedaldi, “Gather-excite: Exploiting feature context in convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 9423–9433, 2018. [35] H. Hu, Z. Zhang, Z. Xie, and S. Lin, “Local relation networks for image recognition,” arXiv preprint arXiv:1904.11491, 2019. [36] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “Wavenet: A generative model for raw audio,” arXiv preprint arXiv:1609.03499, 2016. [37] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma, “PixelCNN++: Improving the Pix- elCNN with discretized logistic mixture likelihood and other modifications,” arXiv preprint arXiv:1701.05517, 2017. [38] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin, “Convolutional sequence to sequence learning,” CoRR, vol. abs/1705.03122, 2017. [39] L. Sifre and S. Mallat, “Rigid-motion scattering for image classification,” PhD thesis, Ph. D. thesis, vol. 1, p. 3, 2014. [40] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Learning Representations, 2015. [41] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. [42] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017. [43] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. 
Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520, 2018. 11 [44] S. Bartunov, A. Santoro, B. Richards, L. Marris, G. E. Hinton, and T. Lillicrap, “Assessing the scalability of biologically-motivated deep learning algorithms and architectures,” in Advances in Neural Information Processing Systems, pp. 9368–9378, 2018. [45] D. Bahdanau, K. Cho, and Y. Bengio, “Neural machine translation by jointly learning to align and translate,” in International Conference on Learning Representations, 2015. [46] C.-Z. A. Huang, A. Vaswani, J. Uszkoreit, N. Shazeer, C. Hawthorne, A. M. Dai, M. D. Hoffman, and D. Eck, “Music transformer,” in Advances in Neural Processing Systems, 2018. [47] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised multitask learners,” OpenAI Blog, vol. 1, p. 8, 2019. [48] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: pre-training of deep bidirectional transformers for language understanding,” CoRR, vol. abs/1810.04805, 2018. [49] N. Parmar, A. Vaswani, J. Uszkoreit, Ł. Kaiser, N. Shazeer, A. Ku, and D. Tran, “Image transformer,” in International Conference on Machine Learning, 2018. [50] N. Shazeer, Y. Cheng, N. Parmar, D. Tran, A. Vaswani, P. Koanantakool, P. Hawkins, H. Lee, M. Hong, C. Young, R. Sepassi, and B. A. Hechtman, “Mesh-tensorflow: Deep learning for supercomputers,” CoRR, vol. abs/1811.02084, 2018. [51] P. Shaw, J. Uszkoreit, and A. Vaswani, “Self-attention with relative position representations,” arXiv preprint arXiv:1803.02155, 2018. [52] A. Buades, B. Coll, and J.-M. Morel, “A non-local algorithm for image denoising,” in Pro- ceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05) - Volume 2 - Volume 02, CVPR ’05, (Washington, DC, USA), pp. 60–65, IEEE Computer Society, 2005. [53] Y. Chen, Y. Kalantidis, J. Li, S. Yan, and J. Feng, “Aˆ 2-nets: Double attention networks,” in Advances in Neural Information Processing Systems, pp. 352–361, 2018. [54] B. Zoph and Q. V. Le, “Neural architecture search with reinforcement learning,” in International Conference on Learning Representations, 2017. [55] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li, “Imagenet large scale visual recognition challenge,” CoRR, vol. abs/1409.0575, 2014. [56] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft coco: Common objects in context,” in European Conference on Computer Vision, pp. 740–755, Springer, 2014. [57] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125, 2017. [58] T. S. Cohen, M. Geiger, J. Köhler, and M. Welling, “Spherical cnns,” arXiv preprint arXiv:1801.10130, 2018. [59] T. S. Cohen, M. Weiler, B. Kicanaoglu, and M. Welling, “Gauge equivariant convolutional networks and the icosahedral cnn,” arXiv preprint arXiv:1902.04615, 2019. [60] G. Ghiasi, T.-Y. Lin, R. Pang, and Q. V. Le, “Nas-fpn: Learning scalable feature pyramid architecture for object detection,” arXiv preprint arXiv:1904.07392, 2019. [61] F. Wu, A. Fan, A. Baevski, Y. N. Dauphin, and M. Auli, “Pay less attention with lightweight and dynamic convolutions,” arXiv preprint arXiv:1901.10430, 2019. 
[62] X. Zhu, D. Cheng, Z. Zhang, S. Lin, and J. Dai, “An empirical study of spatial attention mechanisms in deep networks,” arXiv preprint arXiv:1904.05873, 2019. 12 [63] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 4, pp. 834–848, 2017. [64] L.-C. Chen, A. Hermans, G. Papandreou, F. Schroff, P. Wang, and H. Adam, “Masklab: Instance segmentation by refining object detection with semantic and direction features,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4013–4022, 2018. [65] D. DeTone, T. Malisiewicz, and A. Rabinovich, “Superpoint: Self-supervised interest point detection and description,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 224–236, 2018. [66] A. Toshev and C. Szegedy, “Deeppose: Human pose estimation via deep neural networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1653–1660, 2014. [67] A. Newell, K. Yang, and J. Deng, “Stacked hourglass networks for human pose estimation,” in European Conference on Computer Vision, pp. 483–499, Springer, 2016. [68] Y. E. NESTEROV, “A method for solving the convex programming problem with convergence rate o(1/k2),” Dokl. Akad. Nauk SSSR, vol. 269, pp. 543–547, 1983. [69] I. Sutskever, J. Martens, G. Dahl, and G. Hinton, “On the importance of initialization and momentum in deep learning,” in International Conference on Machine Learning, 2013. [70] I. Loshchilov and F. Hutter, “SGDR: Stochastic gradient descent with warm restarts,” arXiv preprint arXiv:1608.03983, 2016. [71] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon, “In-datacenter performance analysis of a tensor processing unit,” SIGARCH Comput. Archit. News, vol. 45, pp. 1–12, June 2017. [72] B. Polyak and A. Juditsky, “Acceleration of stochastic approximation by averaging,” SIAM Journal on Control and Optimization, vol. 30, no. 4, pp. 838–855, 1992. [73] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Computer Vision and Pattern Recognition (CVPR), 2015. 13 # A Appendix # A.1 Attention Stem In this section, we first describe the standard self-attention layer followed by the spatially-aware mixtures in the attention stem. 
For an input with $x_{ij} \in \mathbb{R}^{d_{in}}$ we define a standard single-headed self-attention layer as

$q_{ij} = W_Q x_{ij}$ (4)

$k_{ij} = W_K x_{ij}$ (5)

$v_{ij} = W_V x_{ij}$ (6)

$y_{ij} = \sum_{a,b \in \mathcal{N}_k(i,j)} \mathrm{softmax}_{ab}\!\left(q_{ij}^\top k_{ab}\right) v_{ab}$ (7)

where $W_Q, W_K, W_V \in \mathbb{R}^{d_{out} \times d_{in}}$ and the neighborhood $\mathcal{N}_k(i,j) = \{a, b \,:\, |a-i| \le k/2,\ |b-j| \le k/2\}$, yielding the intermediate per-pixel queries, keys, and values $q_{ij}, k_{ij}, v_{ij} \in \mathbb{R}^{d_{out}}$ and the final output $y_{ij} \in \mathbb{R}^{d_{out}}$.

The attention stem replaces the pointwise values $v_{ij}$ by spatially-aware linear transformations. For simplicity, we align the query, key and value receptive field with the max-pooling receptive field of 4 × 4. Then, to inject distance-aware value features, we use a convex combination of multiple value matrices $W_V^m$ where the combination weights are a function of the absolute position of the value in the pooling window. The functional form is defined in Equation 9, which computes the logit between the absolute position embedding and the mixture embedding $\nu^m$.

$v_{ab} = \left(\sum_m p(a, b, m)\, W_V^m\right) x_{ab}$ (8)

$p(a, b, m) = \mathrm{softmax}_m\!\left(\left(\mathrm{emb}_{row}(a) + \mathrm{emb}_{col}(b)\right)^\top \nu^m\right)$ (9)

where $\mathrm{emb}_{row}(a)$ and $\mathrm{emb}_{col}(b)$ are pooling-window aligned row and column embeddings and $\nu^m$ is a per-mixture embedding. The resulting mixture weights $p(a, b, m)$ are shared across the 4 attention heads for the mixture stem layer.

# A.2 ImageNet Training Details

For tuning, a validation set containing a 4% random subset of the training set is used. Training is performed for 130 epochs using Nesterov's Accelerated Gradient [68, 69] with a learning rate of 1.6 which is linearly warmed up for 10 epochs followed by cosine decay [70]. A total batch size of 4096 is spread across 128 Cloud TPUv3 cores [71]. The setup uses batch normalization [40] with decay 0.9999 and exponential moving average with weight 0.9999 over trainable parameters [72, 73].

# A.3 Object Detection Training Details

The fully attentional object detection architecture uses the fully attentional classification models detailed in Section 4.1 as its backbone network. The rest of the architecture is obtained by replacing the 3 × 3 convolutions in the original RetinaNet architecture with self-attention layers of the same width ($d_{out}$ = 256). We additionally apply 2 × 2 average pooling with stride 2 when replacing a strided convolution. The classification and regression heads share weights across all levels and their $W_V$ matrices are initialized randomly from a normal distribution with standard deviation 0.01 as in the original RetinaNet architecture [18]. Finally, we add an extra pointwise convolution at the end of the classification and box regression heads to mix the attentional heads. All self-attention layers use a spatial extent of k = 7 and 8 heads as for the image classification experiments. We follow a similar training setup as in [18, 33]. All networks are trained for 150 epochs with a batch size of 64. The learning rate is warmed up linearly from 0 to 0.12 for one epoch and then decayed using a cosine schedule. We apply multiscale jitter, crop to a max dimension of 640 during training and randomly flip images horizontally with 50% probability.
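As an illustration of the layer described above, the following is a minimal single-headed NumPy sketch of local self-attention with the spatially-aware value mixture of Equations 8 and 9. It is our own reimplementation rather than the authors' code: the centering of the k × k window, the zero padding at the borders, the tensor names, and the single-head restriction are simplifying assumptions introduced here for clarity.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def local_attention_stem(X, WQ, WK, WV, emb_row, emb_col, nu, k=4):
    """Single-headed local self-attention with spatially-aware value mixtures.

    X:                (H, W, d_in) input feature map
    WQ, WK:           (d_out, d_in) query/key projections
    WV:               (M, d_out, d_in), one value matrix per mixture component
    emb_row, emb_col: (k, d_emb) positional embeddings for the k x k window
    nu:               (M, d_emb) per-mixture embeddings
    """
    H, W_, d_in = X.shape
    M, d_out, _ = WV.shape

    # Mixture weights p(a, b, m) depend only on the position inside the window (Eq. 9).
    pos = emb_row[:, None, :] + emb_col[None, :, :]               # (k, k, d_emb)
    p = softmax(pos @ nu.T, axis=-1)                              # (k, k, M)
    # Position-dependent value transforms: sum_m p(a, b, m) * WV[m]  (Eq. 8).
    WV_pos = np.einsum('abm,moi->aboi', p, WV)                    # (k, k, d_out, d_in)

    pad_lo = k // 2
    pad_hi = k - 1 - pad_lo
    Xp = np.pad(X, ((pad_lo, pad_hi), (pad_lo, pad_hi), (0, 0)))  # zero-pad borders
    Y = np.zeros((H, W_, d_out))
    for i in range(H):
        for j in range(W_):
            q = WQ @ X[i, j]                                      # query for pixel (i, j), Eq. 4
            patch = Xp[i:i + k, j:j + k, :]                       # k x k neighborhood N_k(i, j)
            keys = patch @ WK.T                                   # keys over the window, Eq. 5
            vals = np.einsum('aboi,abi->abo', WV_pos, patch)      # mixed values, Eq. 8
            attn = softmax((keys @ q).reshape(-1))                # softmax over the window, Eq. 7
            Y[i, j] = attn @ vals.reshape(-1, d_out)
    return Y

# Shape-only usage example with arbitrary dimensions:
# rng = np.random.default_rng(0)
# Y = local_attention_stem(rng.normal(size=(8, 8, 16)),
#                          rng.normal(size=(32, 16)), rng.normal(size=(32, 16)),
#                          rng.normal(size=(4, 32, 16)), rng.normal(size=(4, 8)),
#                          rng.normal(size=(4, 8)), rng.normal(size=(4, 8)), k=4)
```

A production version would vectorize the loop over pixels and run several such heads in parallel with a shared mixture; the sketch keeps the per-pixel loop so that the correspondence with Equations 4 to 9 stays visible.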
{ "id": "1801.10130" }
1906.05271
Does Learning Require Memorization? A Short Tale about a Long Tail
State-of-the-art results on image recognition tasks are achieved using over-parameterized learning algorithms that (nearly) perfectly fit the training set and are known to fit well even random labels. This tendency to memorize the labels of the training data is not explained by existing theoretical analyses. Memorization of the training data also presents significant privacy risks when the training data contains sensitive personal information and thus it is important to understand whether such memorization is necessary for accurate learning. We provide the first conceptual explanation and a theoretical model for this phenomenon. Specifically, we demonstrate that for natural data distributions memorization of labels is necessary for achieving close-to-optimal generalization error. Crucially, even labels of outliers and noisy labels need to be memorized. The model is motivated and supported by the results of several recent empirical works. In our model, data is sampled from a mixture of subpopulations and our results show that memorization is necessary whenever the distribution of subpopulation frequencies is long-tailed. Image and text data is known to be long-tailed and therefore our results establish a formal link between these empirical phenomena. Our results allow to quantify the cost of limiting memorization in learning and explain the disparate effects that privacy and model compression have on different subgroups.
http://arxiv.org/pdf/1906.05271
Vitaly Feldman
cs.LG, stat.ML
Significant revision: revised introduction/overview; added formal treatment of noise in the labels and explanation for the disparate effects of limiting memorization
null
cs.LG
20190612
20210110
# Does Learning Require Memorization? A Short Tale about a Long Tail

Vitaly Feldman* Google Research, Brain Team

# Abstract

State-of-the-art results on image recognition tasks are achieved using over-parameterized learning algorithms that (nearly) perfectly fit the training set and are known to fit well even random labels. This tendency to memorize the labels of the training data is not explained by existing theoretical analyses. Memorization of the training data also presents significant privacy risks when the training data contains sensitive personal information and thus it is important to understand whether such memorization is necessary for accurate learning. We provide the first conceptual explanation and a theoretical model for this phenomenon. Specifically, we demonstrate that for natural data distributions memorization of labels is necessary for achieving close-to-optimal generalization error. Crucially, even labels of outliers and noisy labels need to be memorized. The model is motivated and supported by the results of several recent empirical works. In our model, data is sampled from a mixture of subpopulations and our results show that memorization is necessary whenever the distribution of subpopulation frequencies is long-tailed. Image and text data is known to be long-tailed and therefore our results establish a formal link between these empirical phenomena. Our results allow us to quantify the cost of limiting memorization in learning and explain the disparate effects that privacy and model compression have on different subgroups.

*Now at Apple. Part of this work was done while the author was visiting the Simons Institute for the Theory of Computing.

# 1 Introduction

Understanding the generalization properties of learning systems based on deep neural networks (DNNs) is an area of great practical importance and significant theoretical interest. The models used in deep learning are famously overparameterized, that is, contain many more tunable parameters than available data points. This makes it easy to find models that "overfit" to the data by effectively memorizing the labels of all the training examples. The standard theoretical approach to understanding how learning algorithms avoid such overfitting is based on the idea of regularization. Learning algorithms are designed to either explicitly or implicitly balance the level of the model's complexity (and, more generally, its ability to fit arbitrary data) and the empirical error on the training dataset. Fitting each mislabeled point or outlier requires increasing the level of the model's complexity and therefore, by tuning this balance, the learning algorithm can find the patterns in the data without overfitting. A variety of regularization techniques are widely used in practice and have been analyzed theoretically.

Yet, the accepted view of regularization contradicts the empirical evidence from most modern image and text classification datasets. Deep learning algorithms tend to produce models that fit the training data very well, typically achieving 95-100% accuracy, even when the accuracy on the test dataset is much more modest (often in the 50-80% range). Such (near) perfect fitting requires memorization1 of mislabeled data and outliers which are inevitably present in large datasets.
Further, it is known that the same learning algorithms achieve training accuracy of over 90% on the large ImageNet dataset [Den+09] that is labeled completely randomly [Zha+17]. It is therefore apparent that these algorithms are not using regularization that is sufficiently strong to prevent memorization of (the labels of) mislabeled examples and outliers. This captivating disconnect between the classical theory and modern ML practice has attracted a significant amount of research and broad interest in recent years (see Sec. 1.3 for an overview). At the same time the phenomenon is far from new. Random forests [Bre01] and Adaboost [FS97] are known to achieve their optimal generalization error on many learning problems while fitting the training data perfectly [Sch+98; Sch13; Wyn+17]. There is also recent evidence that this holds for kernel methods in certain regimes as well [Zha+17; BMM18; LR18].

Understanding this disconnect is also of significant importance in the context of privacy-preserving machine learning. Privacy is a natural concern when the training data contains sensitive information about individuals such as medical records or private communication. The propensity of deep learning algorithms to memorize training data is known to pose privacy risks when the resulting model is deployed [Sho+17]. This leads to the question of whether such memorization is necessary for learning with high accuracy or is merely an artifact of the current learning methods.

# 1.1 Our contribution

We propose a conceptually simple explanation and supporting theory for why memorization of seemingly useless labels may be necessary to achieve close-to-optimal generalization error. It is based on the view that the primary hurdle to learning an accurate model is not the noise inherent in the labels but rather an insufficient amount of data to predict accurately on rare and atypical instances. Such instances are usually referred to in practice as the "long tail" of the data distribution. It has been widely observed that modern datasets used for visual object recognition and text labeling follow classical long-tailed distributions such as the Zipf distribution (or more general power law distributions).

1In this work we will formalize and quantify this notion of memorization. Informally, we say that a learning algorithm memorizes the label of some example (x, y) in its dataset S if the model output on S predicts y on x whereas the model obtained by training on S without (x, y) is unlikely to predict y on x.

Figure 1: Long tail of class frequencies and subpopulation frequencies within classes. Panel (a): the number of examples by object class in the SUN dataset (log-log scale). Panel (b): distributions of the visibility patterns for bus and person. The figure is taken from [ZAR14] with the authors' permission.

To formalize the notion of having a "long tail" we will model the data distribution of each class (in a multiclass prediction problem) as a mixture of distinct subpopulations. For example, images of birds include numerous different species photographed from different perspectives and under different conditions (such as close-ups, in foliage and in the sky) [VHP17].
Naturally, the subpopulations may have different frequencies (which correspond to mixture coefficients). We model the informal notion of long-tailed data distributions as distributions in which the frequencies of subpopulations are long-tailed. The long-tailed nature of subpopulation frequencies is known in datasets for which additional human annotations are available. A detailed discussion of this phenomenon in the SUN object detection benchmark [Xia+10] can be found in the work of Zhu et al. [ZAR14]. In Fig. 1 we include a plot from the work that demonstrates the long tail of the frequency distribution. Additional evidence that classes can be viewed as long-tailed mixtures of subpopulations comes from extreme multiclass problems. Specifically, these problems often have more than 10, 000 fine-grained labels and the number of examples per class is long-tailed [BS17; WRH17; Kri+17; VHP17; Cui+18; BS19a]. Observe that fine-grained labels in such problems correspond to subcategories of coarser classes (for example, different species of birds all correspond to the “bird” label in a coarse classification problem). We also remark 2 that subpopulations do not have to directly correspond to human-definable categories. They are the artifacts of the representation used by the learning algorithm which are often relatively low-level. It is natural to presume that before seeing the dataset the learning algorithm does not know the frequencies of subpopulations. The second key observation underlying our explanation is that the algorithm may not be able to predict accurately on a subpopulation until at least one example from the subpopulations is observed. Alternatively, the accuracy of the algorithm on a subpopulation is likely to increase noticeably once a representative example from that subpopulation is observed. A dataset of n samples from a long-tailed mixture distribution will have some subpopulations from which just a single example was observed (and some subpopulations from which none at all). To predict more accurately on a subpopulation from which only a single example was observed (and to fit the example) the learning algorithm needs to memorize the label of the example. The question is whether this is necessary for achieving close-to-optimal generalization error. The answer depends on the frequency of the subpopulation. If the unique example from a subpopulation (or singleton) comes from an extremely rare (or “outlier”) subpopulation then memorizing it has no significant benefits. At the same time, if the singleton comes from an “atypical” subpopulation with frequency on the order of 1/n, then memorizing such an example is likely to improve the accuracy on the entire subpopulation and thereby reduce the generalization error by Ω(1/n). The key point of this work is that based on observing a single sample from a subpopulation, it is impossible to distinguish samples from “atypical” subpopulations from those in the “outlier” ones. Therefore an algorithm can only avoid the risk of missing “atypical” subpopulations by also memorizing the labels of singletons from the “outlier” subpopulations. Importantly, in a long-tailed distribution of frequencies, the total weight of frequencies on the order of 1/n is significant enough that ignoring these subpopulations will hurt the generalization error substantially. Thus, for such distributions, an algorithm needs to memorize the labels of outliers in order to achieve close-to-optimal generalization. 
The long tail effect also explains why memorizing mislabeled examples can be necessary. As discussed, a learning algorithm may be unable to infer the label of a singleton example accurately based on the rest of the dataset. Thus as long as the observed label is the most likely to be true and the singleton comes from an "atypical" subpopulation, the algorithm needs to memorize the label. In contrast, if the mislabeled example comes from a subpopulation with many other examples in the dataset, the correct label can be inferred from the other labels and thus memorization is not necessary (and can even be harmful). In most datasets used in machine learning benchmarks only relatively atypical examples are mislabeled and the noise rate is low. Thus learning algorithms for such datasets are tuned to memorize the labels quite aggressively.

# 1.1.1 Overview

On a technical level our primary contribution is turning this intuitive but informal explanation into a formal model that allows us to quantify the trade-offs involved. This model also allows us to quantify the cost of limiting memorization (for example, via regularization or ensuring differential privacy) when learning from natural data distributions.

We start by explaining why achieving close-to-optimal generalization error requires fitting outliers and (some) mislabeled examples since this is the phenomenon observed in practice. We then formalize the claim that such fitting requires label memorization. Our explanation is based on a simple model for classification problems that incorporates the long tail of frequencies in the data distribution. The goal of the model is to isolate the discussion of the effect of memorization on the accuracy from other aspects of modeling subpopulations. More formally, in our model the domain X is unstructured and has size N (each point will correspond to a subpopulation in the more general model). In the base model the true labeling function belongs to some class of functions F known to the learning algorithm. We will be primarily interested in the setting where F is rich (or computationally hard) enough that for a significant fraction of the points the learning algorithm cannot predict the label of a point well without observing it in the dataset. In particular, fitting some of the examples will require memorizing their labels.

Nothing is known a priori about the frequency of any individual point aside from a prior distribution over the frequencies described by a list of N frequencies π = (π1, . . . , πN). Our results are easiest to express when the objective of the learning algorithm is to minimize the expectation of the error over a random choice of the marginal distribution D over X from some meta-distribution D (instead of the more usual worst-case error). In addition, for convenience of notation we will also measure the error with respect to a random choice of the labeling function from some distribution F over F. That is, the objective of a learning algorithm A is defined as:

err(D, F, A) := E_{D∼D, f∼F} E_{S∼(D,f)^n, h∼A(S)} Pr_{x∼D} [h(x) ≠ f(x)] .

Specifically, we consider the following meta-distribution over marginal distributions on X: the frequency of each point in the domain is chosen randomly and independently from the prior π of individual frequencies and then normalized to sum to 1. This process results in a meta-distribution D over marginal distributions that is similar to choosing the frequencies of the elements to be a random permutation of the elements of π.
Models measuring the worst-case error over all the permutations of a list of frequencies underlie the recent breakthroughs in the analysis of density estimation algorithms [OS15; VV16]. We believe that results similar to ours can be obtained in this worst-case model as well and leave such an extension for future work2.

2The extension to measuring the worst-case error over the choice of f ∈ F, on the other hand, is straightforward.

Our main result (Thm. 2.3) directly relates the number of points that an algorithm does not fit to the sub-optimality (or excess error) of the algorithm via a quantity that depends only on the frequency prior π and n. Importantly, excess error is measured relative to the optimal algorithm and not relative to the best model in some class. Formally, we denote by errnS(A, 1) the number of examples that appear once in the dataset S and are mislabeled by the classifier that A outputs on S. A special case of our theorem states:

err(π, F, A) ≥ opt(π, F) + τ1 · E [errnS(A, 1)] . (1)

Here err(π, F, A) refers to the expected generalization error of A and opt(π, F) is the minimum achievable error by any algorithm (expectations are with respect to the meta-distribution over learning problems resulting from the process we described, randomness of the learning algorithm and also sampling of the dataset). The important quantity here is

τ1 = E_{α∼¯πN} [α² · (1 − α)^{n−1}] / E_{α∼¯πN} [α · (1 − α)^{n−1}] ,

where ¯πN is the actual marginal distribution over frequencies that results from our process and is, basically, a slightly smoothed version of π. We note that the optimal algorithm in this case does not depend on π and thus our modeling does not require the learning algorithm to know π to achieve near-optimal generalization error.

The quantity τ1 is easy to compute given π. As a quick numerical example, for the prototypical long-tailed Zipf distribution (where the frequency of the i-th most frequent item is proportional to 1/i) over the universe of size N = 50,000 and n = 50,000 samples, one gets an expected loss of at least ≈ 0.47/n for every example the learner does not fit. For comparison, the worst-case loss (per point) in this setting is determined by the least frequent element and is ≈ 0.09/n. Given that the expected fraction of samples that appear once is ≈ 17%, an algorithm that does not fit them well will be suboptimal by ≈ 7% (with the optimal top-1 error for 10 balanced classes being ≈ 15% in this case). More generally, we show that τ1 can be lower bounded by the total weight of the part of the prior π which has frequency on the order of 1/n and also that the absence of frequencies on this order will imply negligible τ1 (see Sec. 2.5 for more details).
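Both τ1 and the expected fraction of singleton samples are straightforward to evaluate numerically. The snippet below is our own illustration, not code from the paper: for simplicity it uses the normalized Zipf frequencies themselves as a stand-in for the marginal distribution ¯πN (a close approximation, since ¯πN is a slightly smoothed version of π), and it approximately reproduces the ≈ 0.47/n and ≈ 17% figures quoted above.

```python
import numpy as np

def zipf_prior(N):
    """Normalized Zipf frequencies: pi_i proportional to 1/i."""
    freqs = 1.0 / np.arange(1, N + 1)
    return freqs / freqs.sum()

def tau_1(freq, n):
    """tau_1 = E[a^2 (1-a)^(n-1)] / E[a (1-a)^(n-1)], with a uniform over the list freq."""
    w = (1.0 - freq) ** (n - 1)
    return np.mean(freq ** 2 * w) / np.mean(freq * w)

def singleton_fraction(freq, n):
    """Expected fraction of the n samples that appear exactly once."""
    return np.sum(freq * (1.0 - freq) ** (n - 1))

N = n = 50_000
pi = zipf_prior(N)
print("tau_1 * n              ~", tau_1(pi, n) * n)           # roughly 0.47
print("fraction of singletons ~", singleton_fraction(pi, n))  # roughly 0.17
```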
Continuous data distributions: Naturally, our simple setting in which individual points have significant probability does not capture the continuous and high-dimensional ML problems where each individual point has an exponentially small (in the dimension) probability. In this more general setting the prediction on the example itself has negligible effect on the generalization error. To show how the effects we demonstrated in the simple discrete setting extend to continuous distributions, we consider mixture models of subpopulations. In our model, the frequencies of subpopulations (or mixture coefficients) are selected randomly according to the prior π as before. The labeling function is also chosen as before and is assumed to be constant over every subpopulation. The discussion of the relationship between fitting the dataset and generalization makes sense only if one assumes that the prediction on the data point in the dataset will affect the predictions on related points. In our setting it is natural to assume that (with high probability) the learning algorithm’s prediction on a single point from a subpopulation will be correlated with the prediction on a random example from the same subpopulation. We refer to this condition as coupling (Defn. 3.1) and show that eq. (1) still holds up to the adjustment for the strength of the coupling. Intuitively, it is clear that this form of “coupling” is likely to apply to “local” learning rules such as the nearest neighbors algorithm. Indeed, our assumption can be seen as a more abstract version of geometric smoothness conditions on the marginal distribution of the label used in analysis of such methods (e.g. [CD14]). We also show that it applies to linear predictors/SVMs in high dimension provided that distinct subpopulations are sufficiently uncorrelated (see Sec. 3.1). Deep neural networks are known to have some of the properties of both nearest neighbor rules and linear classifiers in the last-hidden-layer representation (e.g. [CSG18]). Thus DNNs are likely to exhibit this type of coupling as well. From fitting to memorization and privacy: The results we described so far demonstrate that an algorithm that does not fit the training data well will be suboptimal on long-tailed data distributions. Fitting of training data was not previously explained only when the learning algorithm fits the training labels much better than the test data, in other words, when the generalization gap is large (often > 20%). Such fitting suggests that the training algorithm memorized a large fraction of the training labels. To make this intuition formal we give a simple definition of what memorizing a label of a point in the dataset means (we are not aware of a prior formal definition of this notion). Formally, for a dataset S = (xi, yi)i∈[n] and i ∈ [n] define mem(A, S, i) := Pr h∼A(S) [h(xi) = yi] − Pr h∼A(S\i) [h(xi) = yi], where S\i denotes the dataset that is S with (xi, yi) removed. This value is typically non-negative and we think of label as memorized when this value is larger than some fixed positive constant (such as 0.5). Namely, 5 the label of an example is memorized if it is fit well by the algorithm despite being hard to predict based on the rest of the dataset. This definition is closely related to the classical leave-one-out notion of stability [DW79; BE02] but focuses on the change in the label and not in the incurred loss. As in the case of stability, our notion of label memorization is directly related to the expected generalization gap. 
Indeed, the expectation over the choice of dataset of the average memorization value is equal to the expectation of the generalization gap. Thus a large generalization gap implies that a significant fraction of labels is memorized. An immediate corollary of this definition is that an algorithm with a limited ability to memorize labels will not fit the singleton data points well whenever the algorithm cannot predict their labels based on the rest of the dataset. Two natural situations in which the algorithm will not be able to predict these labels are learning a complex labeling function (e.g. having large VC dimension) and computational hardness of finding a simple model of the data. In addition, the labels are also hard to predict in the presence of noise. A direct corollary of our results is that limiting memorization (for example via regularization or model compression) and differential privacy have costs in terms of achievable generalization error. The sharp quantitative nature of these results allows us to explain recent empirical findings demonstrating that these costs can be disproportionately higher for less frequent subgroups in the population (see Section 4.4 for details).

# 1.2 Known empirical evidence

The best results (that we are aware of) on modern benchmarks that are achieved without interpolation are those for differentially private (DP) training algorithms [Aba+16; Pap+16; Pap+17; McM+18]. While not interpolating is not the goal, the properties of DP imply that a DP algorithm with the privacy parameter ε = O(1) cannot memorize individual labels (see Sec. 4.3 for more details on why). Moreover, they result in a remarkably low gap between the training and test error that is formally explained by the generalization properties of DP [Dwo+14]. However, the test accuracy achieved in these works is well below the state-of-the-art for similar models and training algorithms. For example, Papernot et al. report accuracy of 98% and 82.7% on MNIST and SVHN as opposed to 99.2% and 92.8%, respectively, when training the same models without privacy. The motivation and inspiration for this work comes in part from attempts to understand why DP algorithms fall short of their non-private counterparts and which examples they are more likely to misclassify. A thorough and recent exploration related to this question can be found in the work of Carlini et al. [CEP19]. They consider different ways to measure how "prototypical" each of the data points is according to several natural metrics and across the MNIST, CIFAR-10, Fashion-MNIST and ImageNet datasets and compare between these metrics. One of those metrics is the highest level of privacy that a DP training algorithm can achieve while still correctly classifying an example that is correctly classified by a non-private model. As argued in that work and as is clear from their comprehensive visualization, the examples on which a DP model errs are either outliers or atypical ones. To illustrate this point, we include the examples for MNIST digit "3" and the CIFAR-10 "plane" class from their work as Fig. 2. In addition, the metric based on DP is well correlated with other metrics of being prototypical such as relative confidence of the (non-private) model and human annotation. Their concepts of most and least prototypical map naturally to the frequency of subpopulation in our model.
Thus their work supports the view that the reason why learning with DP cannot achieve the same accuracy as non-private learning is that it cannot memorize the tail of the mixture distribution. This view also explains the recent empirical results showing that the decrease in accuracy is larger for less well represented subpopulations [BS19b].

Figure 2: Hardest examples for a differentially private model to predict accurately (among those accurately predicted by a non-private model) on the left vs the easiest ones on the right. Top row is for digit "3" from the MNIST dataset and the bottom row is for the class "plane" from the CIFAR-10 dataset. The figure is extracted from [CEP18] with the authors' permission. Details of the training process can be found in the original work.

Another empirical work that provides indirect support for our theory is [Arp+17]. It examines the relationship between memorization of random labels and performance of the network for different types of regularization techniques. The work demonstrates that for some regularization techniques it is possible to reduce the ability of the network to fit random labels without significantly impacting its performance on true labels. The explanation proposed for this finding is that memorization is not necessary for learning. While it may appear to contradict our theory, a closer look at the result suggests the opposite conclusion. On the true labels almost all their regularization techniques still reach near perfect train accuracy with test accuracy of at most 78%. The only two techniques that do not quite interpolate (though still reaching around 97% train accuracy) are exactly the ones that do exhibit clear correlation between ability to fit random labels and test accuracy (see "input binary mask" and "input gaussian" in their Figs. 10 and 11).

We remark (and elaborate in Section 5) that fitting random examples or even interpolation are not necessary conditions for the application of our approach and for memorization being beneficial. In a subsequent work with Chiyuan Zhang [FZ20b] we investigate label memorization and test the predictions of our theory directly. In particular, using an efficiently computable proxy for the memorization score, we discover examples whose labels are memorized in the MNIST, CIFAR-10/100, and ImageNet datasets. Visual inspection of these examples confirms that they are a mix of outlier/mislabeled examples and correctly labeled but atypical examples. We then demonstrate that memorized examples are important for learning as removing them from the training set decreases the accuracy of the resulting model significantly. Further, the long-tail theory in this work predicts that there is a significant fraction of examples whose memorization is necessary for predicting accurately on examples from the same subpopulation in the test set. More formally, there exist examples in the training set such that for each of them (1) the label is memorized by the learning algorithm in the sense defined above; (2) there exists a dependent example in the test set in the following sense: the accuracy of the model on the dependent test example drops significantly when the corresponding example from the training set is removed (with no significant effect on the accuracy on the other test examples). We design an algorithm for testing this prediction efficiently.
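The definition of mem(A, S, i) given earlier translates directly into a brute-force estimator: train with and without example i and compare how often its label is recovered. The sketch below is ours and deliberately naive; it is not the efficient subsampling-based proxy used in [FZ20b], and train_fn is a hypothetical callback standing in for the (randomized) learning algorithm A.

```python
import numpy as np

def memorization_score(train_fn, X, y, i, trials=20, seed=0):
    """Monte Carlo estimate of mem(A, S, i) =
    Pr[h_S(x_i) = y_i] - Pr[h_{S^{-i}}(x_i) = y_i].

    train_fn(X_train, y_train, rng) -> model with a .predict(X) method;
    a hypothetical stand-in for the learning algorithm A.
    """
    rng = np.random.default_rng(seed)
    mask = np.ones(len(y), dtype=bool)
    mask[i] = False                        # S^{-i}: drop example (x_i, y_i)
    hits_with, hits_without = 0, 0
    for _ in range(trials):                # average over A's internal randomness
        model_with = train_fn(X, y, rng)
        model_without = train_fn(X[mask], y[mask], rng)
        hits_with += int(model_with.predict(X[i:i + 1])[0] == y[i])
        hits_without += int(model_without.predict(X[i:i + 1])[0] == y[i])
    return (hits_with - hits_without) / trials

# Usage with a hypothetical train_fn:
# scores = [memorization_score(train_fn, X, y, i) for i in range(len(y))]
# memorized = [i for i, s in enumerate(scores) if s > 0.5]
```

Estimating the score this way costs 2 · trials training runs per example, which is why practical studies rely on cheaper proxies computed from models trained on random subsets of the data.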
The results of this algorithm on MNIST, CIFAR-100, and ImageNet datasets reveal numerous visually similar pairs of relatively atypical examples [FZ20b; FZ20a]. # 1.3 Related work One line of research motivated by the empirical phenomena we discuss here studies implicit regularization in the overparameterized regime (namely, when the parameter space is large enough that the learning algorithm can perfectly fit the dataset). For example, the classical margin theory [Vap82; CV95; Sch+98] for SVMs and boosting suggests that, while the ambient dimension is large, the learning algorithm implicitly maximizes the margin. The generalization gap can then be upper bounded in terms of the margin. Examples of this approach in the context of DNNs can be found in [NTS15; Ney+17a; BFT17; Ney+17b; LMZ18] (and references therein). These notions imply that it is beneficial to overparameterize and suffice for explaining why the 7 training algorithm will select the best model among those that do fit the training set. However implicit regularization does not explain why, despite the regularization, the training error is near zero even when the generalization error is large. Another line of research studies generalization properties of learning algorithms that fit the training data perfectly, often referred to as interpolating [BHM18; BMM18]. For example, a classical work of Cover and Hart [CH67] gives bounds on the generalization error of the 1-nearest neighbor algorithm. Recent wave of interest in such methods has lead to new analyses of existing interpolating methods as well as new algorithmic techniques [Wyn+17; BRT18; BHM18; LR18; BMM18; RZ19; Bar+19; BHX19; Has+19; MVS19]. These works bypass the classical approach to generalization outlined above and demonstrate that interpolating methods can generalize while tolerating some amount of noise. In particular, they show that interpolation can be “harmless” in the sense that interpolating methods can in some cases achieve asymptotically optimal generalization error. At the same time, for the problems studied in these works there also exist non-interpolating algorithms with the same (or better) generalization guarantees. Thus these works do not explain why on many datasets (such as MNIST, CIFAR-10/100, SVHN) state-of-the-art classifiers interpolate the training data. We also remark that while interpolating the training set (with high generalization error) requires memorization, memorization also occurs without interpolation. For example, experiments of Zhang et al. [Zha+17] show that 9% training error is achieved by a standard deep learning algorithm on completely randomly labeled 1000-class ImageNet dataset (with generalization error being 99.9%). It is known that in the convex setting SGD converges faster when all the loss functions have a joint mini- mizer [SST10; NWS14] and therefore it has been suggested that interpolation is the result of computational benefits of optimization via SGD [MBB18]. However this hypothesis is not well supported by empirical evidence since interpolation does not appear to significantly affect the speed with which the neural networks are trained [Zha+17]. In addition, methods like nearest neighbors, boosting, and bagging are not trained via SGD but tend to interpolate the data as well. Algorithmic stability [BE02; Sha+09; HRS16; FV19] is essentially the only general approach that is known to imply generalization bounds beyond those achievable via uniform convergence [Sha+09; Fel16]. 
However it runs into exactly the same conceptual issue as capacity-based bounds: average stability needs to be increased by at least 1/n to fit an arbitrary label. In fact, an interpolating learning algorithm does not satisfy any non-trivial uniform stability (but may still be on-average stable). We focus on interpolation and the importance of label memorization in learning as this is the phenomenon that had no prior explanation. However neural networks are known to memorize much more than just labels [Car+19; Car+20]. Such memorization presents even higher privacy risks and thus requires a more fundamental understanding. Building on the ideas in this work, recent work shows that for some natural data distributions, memorization of information about the entire sample can be necessary for achieving close-to-optimal generalization [Bro+20] # 2 Fitting the Training Data in Unstructured Classification In this section we describe a simple learning setting over an unstructured discrete domain that incorporates a prior over the distribution of frequencies. We demonstrate that in the noiseless setting, a learning algorithm that does not fit the training examples will be suboptimal and express the excess error in terms of the properties of the prior over frequencies. We show that this result also holds in the presence of label noise (although only for the singleton examples). We then show that the excess error is significant if and only if the distribution of frequencies is long-tailed. Finally, we compare the conclusions of our analysis with those of the standard approaches in our setting. 8 # 2.1 Preliminaries For a natural number n, we use [n] to denote the set {1, . . . , n}. For a condition E (which defines a subset of some domain X) we use 1 (E) to denote the indicator function of the condition (from X to {0, 1}). A dataset is specified by an ordered n-tuple of examples S = ((x1, y1), . . . , (xn, yn)) but we will also treat it as the multi-set of examples it includes. Let XS denote the set of all points that appear in S. For a probability distribution D over X, x ∼ D denotes choosing x by sampling it randomly from D. For subset (or condition) E ⊆ X and function F over X, we denote by Dx∼D[F (x) | x ∈ E] the probability distribution of F (x), when x ∼ D and is conditioned on x ∈ E. For two probability distributions D1, D2 over the same domain we use TV(D1, D2) to denote the total variation distance between them. The goal of the learning algorithm is to predict the labels given a dataset S = ((x1,y1),---,(n,Yn)) consisting of i.i.d. samples from some unknown distribution P over X x Y. For any function h: X + Y and distribution P over X x Y, we denote err p(h) := Evx,y)~plh(x) # yj. As usual, for a randomized learning algorithm A we denote its expected generalization error on a dataset S' by errP (A, S) := E [errP (h)] , h∼A(S) where h ∼ A(S) refers to h being the output of a (possibly) randomized algorithm. We also denote by errP (A) := ES∼P n [errP (A, S)] the expectation of the generalization error of A when examples are drawn randomly from P . # 2.2 Problem setup To capture the main phenomenon we are interested in, we start by considering a simple and general prediction problem in which the domain does not have any underlying structure (such as the notion of distance). The domains X and Y are discrete, |X| = N and |Y | = m (for concreteness one can think of X = [N ] and Y = [m]). The prior information about the labels is encoded using a distribution F over functions from X to Y . 
The key assumption is that nothing is known a priori about the frequency of any individual point aside from a prior distribution over the individual frequencies. One natural approach to capturing this assumption is to assume that the frequencies of the elements in X are known up to a permutation. That is, a distribution over X is defined by picking a random permutation of elements of the prior π = (π1, . . . , πN ). Exact knowledge of the entire frequency prior is also a rather strong assumption in most learning problems. We therefore use a related but different way to model the frequencies (which we have not encountered in prior work). In our model the frequency of each point in X is chosen randomly and independently from the list of possible frequencies π and then normalized to sum up to 1. π denote the distribution over probability mass functions on X defined as follows. For every x ∈ X, sample px randomly, independently and uniformly from the elements of π. Define the corresponding probability mass function on X as D(x) = . This definition can be naturally generalized to sampling from a general distribution π over frequencies (instead of just the uniform over a list of frequencies). We also denote by ¯πN the resulting marginal distribution over the frequency of any single element in x. That is, ¯πN (α) := Pr D∼DX π [D(x) = α]. Note that, while π is used to define the process, the actual distribution over individual frequencies the process results in is ¯πN and our bounds will be stated in terms of properties of ¯πN . At the same time, this distinction 9 is not particularly significant for applications of our result since, as we will show later, ¯πN is essentially a slightly smoothed version of π. The key property of this way to generate the frequency distribution is that it allows us to easily express the expected frequency of a sample conditioned on observing it in the dataset (or, equivalently, the mean of the posterior on the frequency). Specifically, in Appendix A we prove the following lemma: Lemma 2.1. For any frequency prior 7, x © X and a sequence of points V = (x1,...,%n) € X™ that includes x exactly £ times, we have Eqna fa‘tt -(_- a)r—4 D(a pape ywpn (7) |U=V] Eq~an [a - (1— a)r-4 An instance of our learning problem is generated by picking a marginal distribution D randomly from π and picking the true labeling function randomly according to F. We refer to the distribution over X × Y π as D whenever the prior We are interested in evaluating the generalization error of a classification algorithm on instances of our learning problem. Our results apply (via a simple adaption) to the more common setup in statistical learning theory where F is a set of functions and worst case error with respect to a choice of f ∈ F is considered. However for simplicity of notation and consistency with the random choice of D, we focus on the expectation of the generalization error on a randomly chosen learning problem: err(π, F, A) := E D∼D,f ∼F [errD,f (A)] . # 2.3 The cost of not fitting We will now demonstrate that for our simple problem there exists a precise relationship between how well an algorithm fits the labels of the points it observed and the excess generalization error of the algorithm. This relationship will be determined by the prior ¯πN and n. Importantly, this relationship will hold even when optimal achievable generalization error is high, a regime not covered by the usual analysis in the “realizable” setting. 
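To make the sampling process D^X_π concrete, here is a short NumPy simulation of our own (the toy prior, domain size and trial count are arbitrary choices, not values from the paper). It draws per-point frequencies i.i.d. from π, normalizes them into a distribution D, samples a dataset, and compares the average frequency of points observed exactly once with the ratio E_{α∼¯πN}[α²(1−α)^{n−1}] / E_{α∼¯πN}[α(1−α)^{n−1}], which is the same ratio that defines τ1 and, for ℓ = 1, appears in Lemma 2.1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = n = 1000                          # toy sizes, chosen only for a fast simulation
pi = 1.0 / np.arange(1, N + 1)        # toy Zipf-shaped frequency prior
pi /= pi.sum()

def sample_D(pi, N, rng):
    """Draw D ~ D^X_pi: each point's frequency is an i.i.d. uniform draw from
    the list pi; the draws are then normalized to sum to 1."""
    p = rng.choice(pi, size=N, replace=True)
    return p / p.sum()

# Frequencies of points that appear exactly once in a sample of size n,
# together with pooled samples of alpha ~ bar-pi^N collected along the way.
singleton_freqs, alphas = [], []
for _ in range(2000):
    D = sample_D(pi, N, rng)
    counts = rng.multinomial(n, D)
    singleton_freqs.extend(D[counts == 1])
    alphas.append(D)
alphas = np.concatenate(alphas)

w = (1.0 - alphas) ** (n - 1)
ratio = np.mean(alphas ** 2 * w) / np.mean(alphas * w)  # E[a^2(1-a)^(n-1)] / E[a(1-a)^(n-1)]
print("simulated E[D(x) | x appears once]:", np.mean(singleton_freqs))
print("closed-form ratio over bar-pi^N   :", ratio)
print("unconditional mean frequency 1/N  :", 1.0 / N)
```

The first two printed values should agree up to Monte Carlo noise, while the comparison with 1/N shows that singletons come, on average, from points whose frequency is below that of a uniformly random domain element.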
In our results the effect of not fitting an example depends on the number of times it occurs in the dataset and therefore we count examples that A does not fit separately for each possible multiplicity. More formally, Definition 2.2. For a dataset S = ((@1, y1),---,(@n,Yn)) € (X x Y)" and ¢ € [nl], let Xg4¢ denote the set of points x that appear exactly £ times in S. For a function h: X — Y let l,. errng(h, £) := z l{i | a; € Xsye & h(ai) F yi}| and let errns(A,f):= E_ [errng(h, 0)]. nn A(S) It is not hard to see (and we show this below) that in this noiseless setting the optimal expected gener- alization error is achieved by memorizing the dataset. Namely, by the algorithm that outputs the function that on the points in the dataset predicts the observed label and on points outside the dataset predicts the most likely label according to the posterior distribution on F. We will now quantify the excess error of any algorithm that does not fit the labels of all the observed data points. Our result holds for every single dataset 10 (and not just in expectation). To make this formal, we define G to be the probability distribution over triples (D, f, S) where D ∼ DX π , f ∼ F and S ∼ (D, f )n. For any dataset Z ∈ (X × Y )n, let G(|Z) denote the marginal distribution over distribution-function pairs conditioned on S = Z. That is: G(|Z) := D [(D, f ) | S = Z]. (D,f,S)∼G We then define the expected error of A conditioned on dataset being equal to Z as err(π, F, A | Z) := E (D,f )∼G(|Z) [errD,f (A, Z)] . We will also define opt (7, F | Z) to be the minimum of err(z, F, A’ | Z) over all algorithms A’. Theorem 2.3. Let π be a frequency prior with a corresponding marginal frequency distribution ¯πN , and F be a distribution over Y X . Then for every learning algorithm A and every dataset Z ∈ (X × Y )n: err(7,F,A|Z) > opt(a,F | Z)+ > TT; errnz(A,?), le [n| where Egan fat! . (1a) 0 Baan lo (1 a)" In particular, erE(7, F, A) > opt(7,F) + E YO te: errns(A, ¢) DDE f~F.S~(D.AY | ET Proof. We denote the marginal distribution of G(|Z) over D by D(|Z) and the marginal distribution over f by F (|Z). We begin by noting that for every f’: X — Y consistent with the examples in Z, the distribution of D conditioned on f = f" is still D(|Z), since D is chosen independently of any labeling. Therefore we can conclude that G(|Z) is equal to the product distribution D(|Z) x F (|Z). To prove the claim we will prove that err(1,F,A|Z) = > tT; errng(A,¢) + > £e[n] LEX zy (h(x) # f(@)]-Ple,Z), 2) Pr hv A(Z),f~F (|Z) 0 where p(2, Z) := Ep~p,\z)[D(2)]. This will imply the claim since the right-hand expression is minimized when for all ¢ € [n], errnz(A, £) = 0 and for all « € Xz40, Pr h(x x)|=min Pr [f(«) #y}. ea PE nya # FC] = min, Pr Use) # ul Moreover, this minimum is achieved by the algorithm A* that fits the examples in Z and predicts the label y that minimizes Pr ¢-¢,\z)[f(x) 4 y] on all the points in Xz49. Namely, no) 4 Fal] p(e.Z) > YD mip, Pr Fle) # uv) Pw. 2) > Pr aw A(Z), fF (|Z. yeY frF(\Z) @rEXZH40 wEXZ 40 = err(1, F,A* | Z) = opt(a,F | Z). 11 (2) Plugging this into eq. (2) gives the first claim. We now prove eq. (2). err(7,F,A|Z)= E lerrp,(h)] (D,f)~G(|Z) hv A(Z) = 1 (h(a x)): D(a coast aacey [eh MO) #1) Ble) = 1 (h(a f(a)) - D(a 3 » wpe rea 2h FF) PO) (3) +> EB [1(h(e) 4 f(2))- D(@)). 4) we Xz 49 PL~GIZ) hr AlZ) Using the fact that G(|Z) = D(|Z) × F(|Z), for every x ∈ XZ#0 we get (L(h(x) # f(x) D@\= Pr fhe) #F(@)-_ EB (D(a)] E (D,f)~G(|Z) hw A(Z) hr A(Z), fF (|Z) D~D(\Z) = Pr h(a x)]- p(x, Z). 
pai er mya le) FLOM Pe Z) Hence we obtain that the term in line (4) is exactly equal to the second term on the right hand side of eq. (2). For the term in line @), we pick an arbitrary € Xz,4¢ for some £ € [n]. We can decompose E 1 (h(a x))- D(x)| = Pr A(x ‘(x))- E [D(a co peaEy nea tO) FIO) DEM = PE yy) ALO, EB (DC) since additional conditioning on h(x) 4 f(a) does not affect the distribution of D(x) (as mentioned, G(|Z) is a product distribution). Let V denote the sequence of points in the dataset Z = ((21, y1),---,(@n.Yn))- The labels of these points do not affect the conditioning of D and therefore by Lemma[2. 1] vy) = r Egan [att (1-a)"™4] BPO = py pB pal P@)1U = V1 = ort ayy Te. For a point x ∈ XZ, we denote by Z(x) the label of x in Z. This label is unique in our setting and is equal to f (x) for every f in the support of F(|Z). Therefore, by combining the above two equalities we obtain that, as claimed in eq.(2), line (3) is equal to 3) = 1 (h(a ‘(a)) D(x ol nee, wpa anata)! MOAT) => > Te: were he) # Za r)| le[n] ceXzHe = > tT]: errnz(A, £). le |n] To obtain the second part of the theorem we denote by S the marginal distribution of G over S. Observe that opt(π, F) = E Z∼S [opt(π, F | Z)] 12 since the optimal algorithm is given Z as an input. The second claim now follows by taking the expectation over the marginal distribution over S: err(7, F, A) lerr(z,F,A| Z)| = E ZS > Bs opt(7,F | Z) + S- Te: errnz(A, £) ~ £E[n] = opt(m,F) + S- ™%]: E [errnz(A, 4]. le[n] 2~8 # 2.4 Extension to label noise A more general way to view Theorem [2.3]is that it translates excess error on points in X5-4¢ into excess generalization error (excess error on points in X'54¢ is the difference between the total error of A on X54¢ and the error of the optimal algorithm on X5,4¢). This view holds even if we allow noise in the labels. In the presence of noise the observed labels are not necessarily correct and therefore the error of A on X54 may no longer be equal to the empirical error errng(A, ¢). At the same time, if for a singleton example (x, y), the posterior probability of label y on x is higher than that of other labels, then fitting label y on x is still the optimal strategy. In this case any algorithm that does not do that will be suboptimal by at least by 7, (for every such example). When the noise level is relatively low and affects primarily hard examples (which is the case in most standard benchmark datasets), the observed label is much more likely to be the correct one than the other labels. Thus on such datasets it is optimal to fit even noisy labels. To make this argument formal we consider a more general setting in which for every true labeling function f the examples are labeled by some ˜f . Formally, we assume that there is a possibly randomized mapping from the support of F to Y X and sampling of f from F also includes ˜f . In particular, in the conditional probability F | Z we include the randomness with respect to generation of ˜f (that labeled Z) from f . Further, it is natural to assume that for a singleton example its label given by ˆf is the most likely to be correct by some margin even conditioned on the rest of the dataset. Formally, we denote the confidence margin in the given label for the given prior F as conf(Z, i, F) := min 0, Pr f ∼F | Z [f (xi) = yi] − max y∈Y \{yi} Pr f ∼F | Z [f (x) = y] . (5) Theorem 2.4. 
Using the notation in (the proof of) Theorem 2.3, we have that err(z,F,A|Z)> opt(7,F | Z)+7- S- conf(Z,i, F) , Pr, ules) F yi ie[n),wieXz41 In particular, err(m,F,A) > opt(™,F)+71- E S- cont (StF): Pr. [h(ei) # wi) D~DX fr F,S~(D,f)” i€[n],rieX sy Proof. As in the proof of Theorem 2.3, the fact that G(|Z) = D(|Z) × F(|Z) implies that 13 err(1,F,A|Z)= Xe ah Pema # f(x)] p(x, Z), (6) where, as before, p(x, Z) := ED∼D(|Z)[D(x)]. This implies that |Z) — opt(m,F | Z) - x (a Phrnl # F(a)] — mip | Bh it (@) 4 v) ‘Pla, Z) > rEeX sy [h(x) A f(«)] — min [f (2 \Aul)on, ( Pr hv A(Z),frF (|Z) yey port2) # err(π, F, A | Z) − opt(π, F | Z) By our definition in eq. ©), for every x; € Xz41, if h(i) A ys then pein) [h(we (vi)] — min poetalt (oa) #yl= max, Pr zt (wi) =y]- pers) [A(ai) = F(ai)] > conf(Z,i, F). Substituting this into eq. (7), we obtain the claimed result. # 2.5 From tails to bounds Given a frequency prior 7, Theorem[2.3]gives a general and easy way to compute the effect of not fitting an example in the dataset. We now spell out some simple and easier to interpret corollaries of this general result and show that the effect can be very significant. The primary case of interest is 2 = 1, namely examples that appear only once in S, which we refer to as singleton examples. In order to fit those, an algorithm needs to memorize their labels whenever F is hard to learn (see Section [4.1] for a more detailed discussion). We first note that the expected number of singleton examples is determined by the weight of the entire tail of frequencies below 1/n in 7’. Specifically, the expected fraction of the distribution D contributed by frequencies in the range [/31, G2] is defined as: weight(7, [a, (]) |e Dl D(x) -1(D(2) € a) ceX =N. ie [a- 1 (a € [B1, 92))}. At the same time the expected number of singleton points is: single(7) := pwvkt pn (|Xv=il] = pep < vebn [ue Xya i _ nm. x _ x n—-1 = E, an D(x)(1 — D(w)) : =r BE, ©\(1 — D(a))""1] TEX =nN. E, [a(1 — a)’ . # α∼¯πN 14 (7) For every α ≤ 1/n we have that (1 − α)n−1 ≥ 1/3 (for sufficiently large n). Therefore: ann n single(7’)>nN- E laa —a)yrh td (« € 0 ‘)| > 5 - weight (=. 0 il) . (8) We will now show that the expected cost of not fitting any of the singleton examples is lower bounded by the weight contributed by frequencies on the order of 1/n. Our bounds will be stated in terms of the properties of ¯πN (as opposed to π itself) and therefore, before proceeding, we briefly explain the relationship between these two. Relationship between π and ¯πN : Before the normalization step, for every x ∈ X, px is distributed exactly according to π (that is uniform over (π1, . . . , πN ). Therefore, it is sufficient to understand the distribution of the normalization factor conditioned on px = πi for some i. Under this condition the normalization factor si is distributed as the sum of n − 1 independent samples from π plus πi. The mean of each sample is exactly 1/N and thus standard concentration results can be used to obtain that si is concentrated around N −1 N + πi. Tightness of this concentration depends on the properties of π, most importantly, the largest value πmax := maxj∈[N ] πj and Var[π] := 1 N )2 ≤ πmax. For πmax = o(1), ¯πN can be effectively N seen as convolving each πi multiplicatively by a factor whose inverse is a Gaussian-like distribution of mean 1 − 1/N + πi and variance Var(π). More formally, using Bernstein’s (or Bennett’s) concentration inequality (e.g. 
[Sri02]) we can easily relate the total weight in a certain range of frequencies under ¯πN to the weight in a similar range under π. Lemma 2.5. Let π = (π1, . . . , πN ) be a frequency prior and ¯πN be the corresponding marginal distribution over frequencies. For any 0 < β1 < β2 < 1 Then for and any γ > 0, weight(¯πN , [β1, β2]) ≥ (1 − δ) N + β2 + γ 1 − 1 · weight π, β1 1 − 1 N + β1 − γ , β2 1 − 1 N + β2 + γ , 2 =m: ; _ ._ 1)2 — BNA Varta} yFmax/3 where Tmax ‘= Maxje[y] 7}, War[r] = DV jepn(™j — qv)? and 6 = 2+ eAN VD Waray max 75 Note that 1 Tr, Tr, Varfr]< yy es a = vo JE[N] JELN] . By taking γ = 1/4, we can ensure that the boundaries of the frequency interval change by a factor of at most (roughly) 4/3. For such γ we will obtain δ ≤ 2e−1/(40πmax) and in particular πmax ≤ 1/200 will suffice for making the correction (1 − δ) at least 99/100 (which is insignificant for our purposes). Bounds for = 1: We now show a simple lower bound on 7; in terms of weight (7%, [1/2n, 1/n]) (similar results hold for other choices of the interval [c,/n, c2/n]). We also do not optimize the constants in the bounds as our goal is to demonstrate the qualitative behavior. Lemma 2.6. For every frequency prior π and sufficiently large n, N , 1 . _y | 1 2 TT > =~: weight | 7", |=—,— : 5n 3n nn 15 If, in addition, πmax ≤ 1/200, then >i ight i} TLE Ty ME7E ™ on nl): Proof. We first observe that the denominator of τ1 satisfies n-1 1 [a(l—a)""]< E [alJ=— ann N [sn 2] and sufficiently large n, # E aniN # Now, by simple calculus, for every a € [sn 3n , 2 Now, by simple calculus, for every a € [sn 2] and sufficiently large n, # n α2(1 − α)n−1 ≥ 1 5n · α. Therefore — EBqnay [a2(1—a)""}] Eqran [a(1 — a)" 1 oH Boe MOS AD] 2 goign (#22). 771 ~ 1 3n’n N . To obtain the second part of the claim we apply Lemma 2.5 for γ = 1/4 (as discussed above). To verify, observe that for sufficiently large n and N , 3 4 . 1 3n 3n −1/4 1− 1 N + 1 ≤ 1 2n and 2 n n +1/4 1− 1 N + 2 ≥ 1 n , and (1−δ) N + 2 1− 1 n +γ ≥ The value of τ1 = Ω(1/n) corresponds to paying on the order of 1/n in generalization error for every example that is not fit by the algorithm. Hence if the total weight of frequencies in the range of 1/n is at least some θ then the algorithm that does not fit them will be suboptimal by θ times the fraction of such examples in the dataset. By eq. (8), the fraction of such examples themselves is determined by the weight of the entire tail weight(¯πN , [0, 1/n]). For example, if π is the Zipf distribution and N ≥ n then τ1 = Ω(1/n) and weight(¯πN , [0, 1/n]) = Ω(1). Thus an algorithm that does not fit most of the singleton examples will be suboptimal by Ω(1). Numerically, for N = n = 50, 000 an algorithm that in a binary prediction problem does no better than random on the singletons will have excess error of 4% (relative to the optimum which is 8.5% in this case). We can contrast this situation with the case where there are no frequencies that are on the order of 1/n. Even when the data distribution has no elements with such frequency, the total weight of the frequencies in the tail and as a result the fraction of singleton points might be large. Still, as we show, in such case the cost of not fitting singleton examples will be negligible. Lemma 2.7. Let 7 be a frequency prior such that for some 0 < os weight (a, [0, +)) = 0, where t = In(1/(08)) +2 for 6 := weight (7%, (0, 0]). Then 71 < 20. Proof. 
We first observe that the numerator of τ1 is at most: y t E [a?(1 - a)’ < max a*(l—a)"!- Pr la > ‘| annN a€l[t/n,1] anit n + E [a?(1 -a)t.1(a< 6)] . annN 16 By Markov’s inequality, Eα∼¯πN [α] = 1 # jy implies Pr α∼¯πN α ≥ t n ≤ n tN . In addition, by our definition of t, max α∈[t/n,1] α2(1 − α)n−1 ≤ t n 1 − t n ≤ tβθ en . Therefore the first term in the numerator is upper bounded by n tN term in the numerator satisfies: tβθ en ≤ βθ eN . At the same time the second E fo? (1 —a)"1!.1(a< 4)| > a1 — 0". B la-1(a<8)| ann ann -1 : =) > a(1 ; y _weight (a, (0, 6]) s 0B n N — 2N° . Therefore the second term is at least as large as the first term and we obtain that: E [e(1-a)""] <2- B [a®(L—a)""!-1(a < 6)] anit ant <20- E fa(1— a)" -1(a< 9)] ann <20- E fa(1— a)""] . annN Thus τ1 ≤ 2θ as claimed. For θ = 1/(2n2), under the conditions of Lemma 2.7 we will obtain that the suboptimality of the algorithm that does not fit any of the singleton examples is at most 1/n. # 2.6 Comparison with standard approaches to generalization We now briefly demonstrate that standard approaches for analysis of generalization error cannot be used to derive the conclusions of this section and do not capture our simple problem whenever N ≥ n. For concreteness, we will use m = 2 with the uniform prior over all labelings. We will also think of π that consists of n/2 frequencies 1/n and n2/2 frequencies 1/n2 (thus N = n2/2 + n/2). Without any structure in the labels, a natural class of algorithms for the problem are algorithms that pick a subset of points whose labels are memorized and predict randomly on the other points in the domain. First of all, it is clear that any approach that does not make any assumption on the marginal distribu- tion D cannot adequately capture the generalization error of such algorithms. A distribution-independent generalization bound needs to apply to the uniform distribution over X. For this distribution the expected generalization error for a randomly chosen labeling function f will be at least (1 − n/N )/2 ≈ 0.5. In particular, for sufficiently large N , the differences in the generalization error of different algorithms will be insignificant and therefore such notion will not be useful for guiding the choice of the algorithm. Notions that are based on the algorithm knowing the input distribution D are not applicable to our setting. Indeed the main difficulty is that the algorithm does not know the exact frequencies of the singleton elements. An algorithm that knows D would not need to fit the points whose frequency is 1/n2. Thus the algorithm 17 would be able to achieve excess generalization error of at most 1/n without fitting the dataset. In contrast, our analysis shows that an algorithm that only knows the prior and fits only 50% of the dataset will be suboptimal by > 13%. Fairly tight data-dependent bounds on the generalization error can be obtained via the notion of empirical Rademacher complexity [Kol01; BM02]. Empirical Rademacher complexity for a dataset S and the class of all Boolean functions on X that memorize k points is ≥ min{k, |XS|}/n. Similar bound can also be obtained via weak notions of stability such as average leave-one-out stability [BE02; RMP05; Muk+06; Sha+10] Pr [h(x;)=yiJ— Pr [h(2;) = yi] 1 LOOstab(P,A):=— )~ E,, wets oe n S i¢[n] , (9) where S\i refers to S with i-th example removed. If we were to use either of these notions to pick k (the number of points to memorize), we would end up not fitting any of the singleton points. 
The simple reason for this is that, just like a learning algorithm cannot distinguish between “outlier” and “atypical” points given S in this setting, neither will any bound. Therefore any true upper bound on the generalization error that is not aware of the prior on the frequencies needs to be correct when all the points that occur once are “outliers”. Fitting any of the outliers does not improve the generalization error at all and therefore such upper bounds on the generalization error cannot be used to correctly guide the choice of k. An additional issue with the standard approaches to analysis of the generalization error is that they bound the excess error of an algorithm relative to the best function in some class of functions or relative to the Bayes optimal predictor (which is the the optimal predictor for the true data distribution). In our model this would mean comparing with the perfect predictor which has generalization error of 0. For the prior π we consider, the optimal algorithm has generalization error of over 25%. Thus theoretical analysis that is not close-to-perfectly tight will not lead to a meaningful bound. For example, standard bounds based on Rademacher complexity are suboptimal by a factor of at least two and thus lead to vacuous bounds. In contrast, our analysis can give a meaningful bound on the generalization error even when used with a relatively crude bound on the excess error. # 3 General Mixture Models Our problem setting in Section [2|considers discrete domains without any structure on X. The results also focus on elements of the domain whose frequency is on the order of 1/n. Naturally, practical prediction problems are often high-dimensional with each individual point having an exponentially small (in the dimension) probability. Therefore direct application of our analysis from Section|2]for the unstructured case makes little sense. Indeed, any learning algorithm A can be modified to a learning algorithm A’ that does not fit any of the points in the dataset and achieves basically the same generalization error as A simply by modifying A’s predictions on the training data to different labels and vice versa (any algorithm can be made to fit the dataset without any effect on its generalization). At the same time in high dimensional settings the points have additional structure that can be exploited by a learning algorithm. Most machine learning algorithms are very likely to produce the same prediction on points that are sufficiently “close” in some representation. The representation itself may be designed based on domain knowledge or derived from data. This is clearly true about k-NN, SVMs/linear predictors and has been empirically observed for neural networks once the trained representation in the last hidden layer is considered. 18 The second important aspect of natural image and text data is that it can be viewed as a mixture of numerous subpopulations. As we have discussed in the introduction, the relative frequency of these subpopulations has been observed to have a long-tailed distribution most obvious when considering the label distribution in extreme multiclass problems [ZAR14; BS17; WRH17; Kri+17; VHP17; Cui+18; VH+18; BS19a] (see also Fig. 1). A natural way to think of and a common way to model subpopulations (or mixture components) is as consisting of points that are similar to each other yet sufficiently different from other points in the domain. 
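Before developing the mixture model further, it is worth making the quantitative claims of Section 2.5 concrete. The following minimal sketch (Python/NumPy) evaluates τ1, the tail weight, the expected singleton fraction, and the resulting excess error of not fitting singletons for a Zipf prior with N = n = 50,000 and binary labels. As in Section 4.4, π is used in place of ¯πN (a close approximation for large N), and all variable names are illustrative; the printed values roughly reproduce the 4% excess error and 8.5% optimum quoted above.

```python
import numpy as np

n = 50_000   # dataset size
N = 50_000   # support size of the frequency prior

# Zipf frequency prior: pi_j proportional to 1/j, normalized to sum to 1.
pi = 1.0 / np.arange(1, N + 1)
pi /= pi.sum()

# (1 - pi_j)^(n-1) and (1 - pi_j)^n, computed in log space for numerical stability.
once = np.exp((n - 1) * np.log1p(-pi))
miss = np.exp(n * np.log1p(-pi))

# tau_1 = E[a^2 (1-a)^(n-1)] / E[a (1-a)^(n-1)] for a drawn uniformly from (pi_1, ..., pi_N).
tau1 = np.sum(pi**2 * once) / np.sum(pi * once)

# Tail weight and expected fraction of singleton examples (cf. eq. (8)).
weight_tail = np.sum(pi[pi <= 1.0 / n])    # weight(pi, [0, 1/n])
singleton_frac = np.sum(pi * once)         # single(pi) / n

# Binary labels with a uniform, unlearnable labeling prior: the optimum errs with
# probability 1/2 exactly on the unseen elements, and the bounds above charge
# roughly tau_1 per unfit singleton; random guessing leaves half of them unfit.
opt = 0.5 * np.sum(pi * miss)
excess = tau1 * 0.5 * singleton_frac * n

print(f"tau_1 = {tau1:.2e}   (1/n = {1/n:.2e})")
print(f"weight(pi, [0, 1/n]) = {weight_tail:.3f}")
print(f"expected singleton fraction = {singleton_frac:.3f}")
print(f"opt = {opt:.3f},  excess from ignoring singletons = {excess:.3f}")
```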
We capture the essence of these two properties using the following model that applies the ideas we developed in Section 2 to mixture models. To keep the main points clear we keep the model relatively simple by making relatively strong assumptions on the structure. (We discuss several ways in which the model’s assumptions can be relaxed or generalized later). We model the unlabeled data distribution as a mixture of a large number of fixed distributions M),..., My. For simplicity, we assume that these distributions have disjoint support, namely M; is supported over X; and XX; X; = 90 for i ¥ 7 (without loss of generality X = Ujetn] Xi). For x € X we denote i,, to be the index of the sub- domain of x and by X,, (or M,,) the sub-domain (or subpopulation, respectively) itself. The unknown marginal distribution M is defined as M(a) := The unknown marginal distribution M is defined as M(a) := Vie] a;M;(x) for some vector of mixture coefficients (ay,...,@,7) that sums up to 1. We describe it as a distribution D(x) over [N] (that is a; = D(i)). As in our unstructured model, we assume that nothing is known a priori about the mixture coefficients aside from (possibly) a prior 7 = (71,..., 7) described by a list of frequencies. The mixture coefficients are generated, as before, by sampling D from De), We denote by Mp the distribution over X defined as Mp(x) := Viel] D(i)M; (a). We assume that the entire subpopulation Xi is labeled by the same label and the label prior is captured via an arbitrary distribution F over functions from [N ] to Y . Note that such prior can be used to reflect a common situation where a subpopulation that is “close” to subpopulations i1 and i2 is likely to have the same label as either i1 or i2. The labeling function L for the entire domain X is sampled by first sampling f ∼ F and defining Lf (x) = f (ix). To model the properties of the learning algorithm we assume that for every point x in a dataset S the distribution over predictions h(x) for a random predictor output by A(S) is close to (or at least not too different) from the distribution over predictions that A produces over the entire subpopulation of x. This follows the intuition that labeling x will have a measurable effect on the prediction over the entire subpopulation. This effect may depend on the number of other points from the same subpopulation and therefore our assumption will be parameterized by n parameters. Definition 3.1. Let X be a domain partitioned into sub-domains { X;} jen) with subpopulations {Mj} ieinj over the sub-domains. For a dataset S, let X54¢ denote the union of subpopulations X; such that points from X;, appear exactly ¢ times in S. For KA = (Aq,...,An), we say that an algorithm A is A-subpopulation- coupled if for every S € (X x Y)", 2% € Xgxe, m( D_ = [h(2)], D ( (we) <1-r». h~A(S) a! Mz hv A(S Note that we do not restrict the algorithm to be coupled in this sense over subpopulations that are not represented in the data. This distinction is important since predictors output by most natural algorithms vary over regions from which no examples were observed. As a result the setting here cannot be derived by simply collapsing points in the sub-domain into a single point and applying the results from the unstructured case. However, the analysis and the results in Sec. 2 still apply essentially verbatim to this more general setup. 
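To make the generative process above concrete, the following sketch draws a dataset from the mixture model. The Zipf profile for the frequency prior, the spherical subpopulations, the noise scale, and the uniform labeling prior are illustrative choices rather than part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

N, n, d, m = 1_000, 5_000, 50, 10   # subpopulations, samples, dimension, classes

# Frequency prior pi: a Zipf profile (illustrative choice).
prior = 1.0 / np.arange(1, N + 1)
prior /= prior.sum()

# Mixture coefficients: draw each p_i uniformly from the list of prior frequencies
# and normalize (the D ~ D^[N]_pi step).
p = rng.choice(prior, size=N, replace=True)
D = p / p.sum()

# Subpopulations M_i: spherical clouds around random unit centers (cf. Section 3.1).
centers = rng.standard_normal((N, d))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)

# Labeling prior F: here, independent uniform labels per subpopulation.
f = rng.integers(0, m, size=N)

# Draw the dataset S ~ (M_D, L_f)^n.
idx = rng.choice(N, size=n, p=D)                    # subpopulation index of each sample
X = centers[idx] + 0.1 * rng.standard_normal((n, d))
y = f[idx]

# How many subpopulations are represented exactly once in S (the X_{S=1} part)?
counts = np.bincount(idx, minlength=N)
print("subpopulations seen exactly once:", np.sum(counts == 1))
print("fraction of data mass they carry:", D[counts == 1].sum())
```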
All 19 we need is to extend the definition of errng(A, @) to look at the multiplicity of sub-domains and not points themselves and count mistakes just once per sub-domain. For a function h: X — Y let 1 errng(h, @) = 7 S- 1 (a; € Xgye and h(x,;) 4 y)- * i€[n] As before, errns(A, 0) = En~4csylerrus(h, ¢)]. With this definition we get the following generalization of Theorem|[2|(we only state the version for the total expectation of the error but the per-dataset version holds as well): Theorem 3.2. Let {Mi}i∈[N ] be subpopulations over sub-domains {Xi}i∈[N ] and let π and F be some frequency and label priors. Then for every Λ-subpopulation-coupled learning algorithm A: err(7,F,A) > opt(m,F) + E S- AeTe: errng(A, £) DxDIW) fAF,SA(Mp,Ly)” te{n] where T, is defined in Thm. We now briefly discuss how the modeling assumptions can be relaxed. We first note that it suffices for subpopulation coupling to hold with high probability over the choice of dataset S from the marginal distribution over the datasets S. Namely, if the property in Definition 3.1 holds with probability 1 − δ over the choice of S ∼ S (where, S is the marginal distribution over the datasets) then the conclusion of the theorem holds up to an additional δ. This follows immediately from the fact that Theorem 3.2 holds for every dataset separately. The assumption that the components of the mixture are supported on disjoint subdomains is potentially quite restrictive as it does not allow for ambiguous data points (for which Bayes optimal error is > 0). Subpopulations are also often modeled as Gaussians (or other distributions with unbounded support). If the probability of the overlap between the subpopulations is sufficiently small, then one can reduce this case to the disjoint one by modifying the components Mi to have disjoint supports while changing the marginal distribution over S by at most δ in the TV distance (and then appealing to the same argument as above). Dealing with a more general case allowing general overlap is significantly messier but the basic insight still applies: observing a single point sampled from some subpopulation increases the expectation of the frequency of the subpopulation under the posterior distribution. That increase can make this expectation significant making it necessary to memorize the label of the point. # 3.1 Examples We will now provide some intuition on why one would expect the Λ-subpopulation-coupling to hold for some natural classes of algorithms. Our goal here is not to propose or justify specific models of data but rather to relate properties of known learning systems (and corresponding properties of data) to subpopulation coupling. Importantly, we aim to demonstrate that the coupling emerges from the interaction between the algorithm and the geometric properties of the data distribution and not from any explicit knowledge of subpopulations. Local algorithms: A simple example of a class of algorithms that will exhibit subpopulation coupling is k-NN-like algorithms and other algorithms that are in some sense locally smooth. If subpopulations are sufficiently “clustered” so that including the example (x, y) in the predictor will affect the prediction in the 20 neighborhood of x and the total weight of affected neighborhood is some fraction λ1 of the subpopulation, then we will obtain subpopulation coupling with λ1. 
In the more concrete (and extreme case), when for every point x ∈ X, the most distant point in Xx is closer than the closest point from the other subpopulations we will get that any example from a subpopulation will cause a 1-NN classifier to predict in the same way over the entire subpopulation. In particular, it would make it Λ-subpopulation-coupled for Λ = (1, . . . , 1). Linear classifiers: A more interesting case to understand is that of linear classifiers and by extension SVMs and (in a limited sense) neural networks. We will examine a high-dimensional setting, where d >> n. We will assume that points within each subpopulation are likely to have relatively large inner product whereas for every subpopulation most points will, with high probability have, a substantially large component that is orthogonal to the span of n random samples from other populations. These conditions are impossible to satisfy when d < n but are easy to satisfy when d is sufficiently large. Formally, we assume that points in most datasets sampled from the data distribution satisfy the following condition: Definition 3.3. Let X ⊂ Rd be a domain partitioned into subdomains {Xi}i∈[N ]. We say that a sequence of points V = (x1, . . . , xn) is (τ, θ)-independent if it holds that ° for all i,j such that x;,x; € X; for some t, (x;,x;) = T||xill2||rj||2 and * for alli such that x; € X1, and any v € span(V \ X4), |(xi, v)| < 4||x|l2|]v]] 2 We consider the performance of linear classifiers that approximately maximize the margin. Here, by “approximately” we will simply assume that they output classifiers that achieve at least 1/2 of the optimal margin achievable when separating the same points in the given dataset. Note that algorithms with this property are easy to implement efficiently via SGD on the cross-entropy loss [Sou+18] and also via simple regularization of the Perceptron algorithm [SSS05]. We will also assume that the linear classifiers output by the algorithm lie in the span of the points in the dataset3 Formally, we define approximately margin- maximizing algorithms in this multi-class setting (for convenience, restricted to the homogeneous case) as follows: Definition 3.4. An algorithm A is an approximately margin maximizing m-class linear classifier if given a dataset S = ((x1, y1), . . . , (xn, yn)) ∈ (X × [m])n it outputs m linear classifiers w1, . . . , wm satisfying: • for every k ∈ [m], wk lies in the span of x1, . . . , xn; * for every x, the prediction of A on x depends only on the predictions of the classifiers sign((x, wx) ) and; * for every k € [mJ, let V_ := {x € Xg | (x, wg) < 0} and V+ := {x € Xg | (x, we) > O}. If V_ can be linearly separated from V4. by a homogeneous linear separator with margin yy, then for all x € X¢g, (x, we)| > Fllelle. We now show that linear classifiers over distributions that produce datasets independent in the sense of Definition[3-3| will have high subpopulation coupling. In order to guarantee strong coupling, we will assume that the set V of points in a random dataset together with the set of points V’ that consists of additional samples from every mixture present in V (namely, V’ ~ Tiectvisu: M;j) satisfy the independence condition with high probability. Formally, we establish the following result (the proof can be found in Appendix[B). 3A linear classifier can always be projected to the span of the points without affecting the margins. 
This assumption allows us to avoid having to separately deal with spurious correlations between unseen parts of subpopulations and the produced classifiers. 21 Theorem 3.5. Let X C R® be a domain partitioned into sub-domains {Xihiein] with subpopulations {Mih icin] over the sub-domains. Let A be any approximately margin maximizing m-class linear classifier and t be a frequency prior. Assume that for D ~ DW andV ~ Mt, Vi~ Tyctvjsy: Mj, with probability at least 1 — 5°, V UV" is (7, 7?/(8,/n))-independent for some t € (0, 1/2]. Then for any labeling prior F, A is A-subpopulation-coupled with probability 1 — 6 and ; > 1-6. √ As asimple example of subpopulations that will produce sets of points that are (r, T?/(8,/7))-independent with high probability we pick each M; to be a spherically-symmetric distribution supported on a ball of radius 1 around some center z; of norm 1. We also pick the centers randomly and independently from the uniform distribution on the unit sphere. It is not hard to see that, by the standard concentration properties of spherically-symmetric distributions, a set V of t samples from an arbitrary mixture of such distributions will be (7, @)-independent with high probability for r > 1/2 — 0(1) and 8 = O(,/t/d). Thus for t < 2n, d = O(n?) suffices to ensure that 0 < 7?/(8,/n). # 4 The Memorization, Privacy and Stability So far we have discussed memorization by learning algorithms informally. In this section we give a simple definition of label memorization and demonstrate that fitting the training data in the setting we consider requires label memorization whenever there is enough (statistical or computational) uncertainty in the labels. This allows us to show that limits on the memorization ability of an algorithm translate into a loss of accuracy (on long-tailed distributions). This result explains a recent empirical finding [BPS19; Hoo+20a; Hoo+20b] that in a dataset that is a mixture of several groups the loss in accuracy due to limited memorization will be higher on less frequent subgroups. Finally, we show that (even relatively weak forms of) differential privacy imply that the algorithm cannot memorize well. To keep the notation cleaner we will discuss these results in the context of our simpler model from Sec.2 but they can be easily adapted to our mixture model setting. For simplicity of notation, we will also focus on memorization of singleton elements. # 4.1 Memorization To measure the ability of an algorithm A to memorize labels we will look at how much the labeled example (x, y) affects the prediction of the model on x. This notion will be defined per specific dataset and example but in our applications we will use the expectation of this value when the dataset is drawn randomly. Definition 4.1. For a dataset S = (xi, yi)i∈[n] and i ∈ [n] define mem(A, S, i) := Pr h∼A(S) [h(xi) = yi] − Pr h∼A(S\i) [h(xi) = yi], where S\i denotes the dataset that is S with (xi, yi) removed. In this definition we measure the effect simply as the total variation distance4 between the distributions of the indicator of the label being y, but other notions of distance could be appropriate in other applications. For this notion of distance our definition of memorization is closely related to the leave-one-out stability of 4Strictly speaking, the memorization value can be negative (in which case it is equal to the negation of the TV distance) but for most practical algorithms we expect this value to be non-negative. 22 the algorithm (see eq. (9)). 
Indeed, it is easy to see from this definition that LOO stability upper bounds the expected memorization: i n gdbn S- mem(A, S,i)} <LOOstab(P, A). i€[n] As in the case of stability label memorization can be related to the generalization gap in the following way (the proof follows immediately from taking the expectation over S). Lemma 4.2. For every distribution P over X × Y and any learning algorithm A we have that i nN Sw E. S- mem( (A, S,i)| = _E [errs(A,S)]— E__ [errp(A,5’)], ie[n Swpe Siwpet where errS(A, S) is the expected empirical error of A on S: 1 errs(A,S) := ~™P ha ne alt xi) # yil- 0 en ] Note that the term Eg. pn-1 [err p(A, S’)] is not exactly equal to the expectation of the generalization error Es~p» [err p(A, S)], but in practice the difference between those is typically negligible (less than 1/n). The immediate implication of Lemma|4.2Jis that a large generalization gap indicates that many labels are memorized and vice versa. An immediate corollary of our definition of memorization is that if A cannot predict the label yi of xi without observing it then it needs to memorize it to fit it. More formally, Lemma 4.3. For every dataset S ∈ (X × Y )n, learning algorithm A and index i ∈ [n], P h xy Yi] = h( i] A, S,2 Brolhed Amd =, Be, thos) # ui — mews, $,i), In particular, errng(A,1) = S- antes) [h(xi) € yi] — mem(A, S, i). i€[n], rEXg41 There can be several reasons why an algorithm A cannot predict the label on xi without observing it. The simplest one is that if there is statistical uncertainty in the label. To measure the uncertainty in a distribution ρ over labels we will simply use 1 minus the maximum probability of any specific label: IlPlloo = mays ply): Note that 1 — || ||. is exactly the error of the Bayes optimal predictor given that the posterior distribution on the label is p. Significant statistical uncertainty conditioned on knowing all the other labeled examples exists only when the labeling prior has high entropy (such as being uniform over a class of functions of VC dimension larger than n). In practice, there might exist a relatively simple model that explains the data well yet the learning algorithm cannot find (or even approximate) this model due to computational limitations. This can be modeled by considering the best accuracy in predicting the label of xi given S\i for the restricted class 23 of algorithms to which A belongs. For example, the uniform prior can be achieved for all polynomial-time algorithms by using a pseudo-random labeling function [GGM86]. More generally, Lemma[f.3]implies that any upper bound on the expected accuracy of a learning algorithm on an unseen singleton example implies the need to memorize the label in order to fit it. Thus the results in the remainder of this section extend directly to computational notions of uncertainty in place of 1 — ||p||,.. We now spell out the properties of this simple statistical notion of uncertainty. Lemma 4.4. Let ρ be an arbitrary distribution over Y . For a dataset S = (xi, yi)i∈[n], i ∈ [n] and y ∈ Y , let Si←y denote the dataset S with (xi, y) in place of example (xi, yi). Then we have: Pr h(x) 4 y] > 1—|lplloo — E [mem(A, S**, i)]. prohetsiey | (x) #y] llell rom ( I In particular, for every distribution D and labeling prior F that also generates the noisy labeling function ˜f for every f (as in Sec. 
2.4) E — ferms(A,DJ> E S> 1 =||F(ai|$\‘|]o0) — mem(A, S, i) f~FS~(D.fy" frFS(DA” | sein), weXsy1 where F(xi|S\i) denotes the conditional distribution over the label of xi after observing all the other examples: F(u|SY)= — D_ [Fe |Vi At F(@j) = y)- [rF,S~(D,f)" Proof. By Definition 4.1, for every y, Pr h∼A(Si←y) [h(x) = y] = Pr h∼A(S\i) [h(x) = y] + mem(A, Si←y, i). Thus, pons») (h(x) =y] = odes = 9] + E imem(A, si) < max Pr[y! =] + B [men(, 9". )], giving the first claim. The second claim follows from the definition of errnS(A, 1) and observing that an expectation is taken on f ∼ F that ensures that for every point the error will be averaged over all labelings of the point according to conditional distribution of the corresponding label. # 4.2 The cost of limited memorization We will now translate Lemma 4.4 into bounds on the excess error of algorithms that cannot memorize the labels well. For this purpose we will use the following definition. Definition 4.5. We say that a learning algorithm A is γ-memorization limited if for all S ∈ (X, Y )n and all i ∈ [n] we have mem(A, S, i) ≤ γ. 24 Bounds on memorization ability result directly from a variety of techniques, such as implicit and explicit regularization and model compression. Somewhat simplistically, one can think of these techniques as minimizing the sum some notion of capacity scaled by a regularization parameter λ and the empirical error. Fitting a label that is not predicted correctly based on the rest of the dataset typically requires increasing the capacity. Therefore a regularized algorithm will not fit the example if the increase in the capacity (scaled by λ) does outweigh the decrease in the empirical error. These decisions are randomized and thus correspond to a bounded probability that the algorithm will memorize a label. Using the definitions and Lemma 4.4, we immediately obtain the following example corollary on the excess error of any γ-memorization limited algorithm. Corollary 4.6. In the setting of Thm. 2.4, let A be any γ-memorization limited algorithm. Then err(n, FA) > opt(n, F)+r1 E Yo cont(,i,F) (1 |F(@il$\ llc — 7) D~Dx ,f~F S~(D,f)” i€[n], e1eX oy The bound in this corollary depends on the expectation of the uncertainty in the label 1 — || F(ai|S''||oo. While, in general, this quantity might be hard to estimate it might be relatively easy to get a sufficiently strong upper bound. For example, if for f ~ F the labeling is uniform and k-wise independent for k that upper-bounds the typical number of distinct points (or subpopulations in the general case) then, with high probability, it will hold that || F(x;|S\‘||.o = 1/|Y|. As discussed in Section 5] for Zipf prior distribution and N > n, any 7-memorization limited algorithm with y < 1—1/|Y| being a constant will have excess error of (1). Equivalently, any algorithm that achieves the optimal generalization error will need to memorize Q(n) labels. In particular, it will have a generalization gap of 2(1). These conclusions hold even in the presence of random noise. Consider, for example, the random classification noise model in which f is defined by replacing the correct label f(x) with a random and uniformly chosen one with probability 1 — «. For this model we will have that for singleton examples conf(S,i, 7) > «. Thus we obtain that even noisy labels need to be memorized as long as K = Q(1). # 4.3 Cost of privacy Memorization of the training data can be undesirable in a variety of settings. 
For example, in the context of user data privacy, memorization is known to lead to ability to mount black-box membership inference attacks (that discover the presence of a specific data point in the dataset) (Sho+17}/LBG17}|Lon+18}/Tru+18] as well as ability to extract planted secrets from language models [Car+19}. The most common approaches toward defending such attacks are based on the notion of differential privacy that are formally known to limit the probability of membership inference by requiring that the output distribution of the learning algorithm is not too sensitive to individual data points. Despite significant recent progress in training deep learning networks with differential privacy, they still lag substantially behind the state-of-the-art results trained without differential privacy [SS15}{Aba+16} [Pap+16}/Wu+17} [Pap+17}|McM+18]. While some of this lag is likely to be closed by improved techniques, our results imply that the some of this gap is inherent due to the data being long-tailed. More formally, we will show that the requirements differential privacy imply a lower bound on the value of errn (for simplicity just for 2 = 1). We will prove that this limitation applies even to algorithms that satisfy a very weak form of privacy: label privacy for predictions. It protects only the privacy of the label as in and also with respect to algorithms that only output a prediction on an (arbitrary) fixed point [DFI8]. Formally, we define: 25 . Definition 4.7. Let A be an algorithm that given a dataset S € (X x Y)” outputs a random predictor h: X + Y. We say that A is (€, 6)-differentially label-private prediction algorithm if for every x € X and datasets S that only differ in a label of a single element we have for any subset of labels Y', Pr [h(xz)€Y']<e%- Pr [h(a)€ Y') +6. h~A(S) h~A(S’) It is easy to see that any algorithm that satisfies this notion of privacy is (e‘ — 1+)-memorization limited. A slightly more careful analysis in this case gives the following analogues of Lemma[4-4]and Corollary {4-6} Theorem 4.8. Let A be an (e, 6)-differentially label-private prediction algorithm and let p be an arbitrary distribution over Y. For a dataset S = (xi, yi)ie{n} 1 € [n] and y € Y, we have: Pro (h(t) =] Se: [Ip lloo +4. yrphrA(S*-¥) In particular, in the setting of Thm. 2.4, for every distribution D and labeling prior F, E — ferms(A,DJ> E So 1 =e: | F(ail$) loo - 5 {°F,8~(D,f)” frF SDL” | scfm), eX s41 and, consequently, err(a, F, A) > opt(a,F)+71- E - S- conf (S, i, F) (1 — €€- ||F(a|S\ hoo DrDE fF S~(D,f)" i€[n], EX gy Proof. By the definition of (¢, 6)-differential label privacy for predictions, for every y, P h(z) =y)<e&- Pr [h(xz) =y] +6. indpnewyh u(r) =y] <e rots)! u(x) = y] Thus, Pr h(x) =y] < e€ Pr h(a) = y) +6 < e€ +06. yophnsi-) [A(z) =y] Se port as) OO) y] +6 < e|[pllo0 The rest of the claim follows as before. This theorem is easy to extend to any subpopulation from which only ¢ examples have been observed using the group privacy property of differential privacy. This property implies that if / labels are changed then the resulting distributions are (Ce, Ze‘—!5)-close (in the same sense) (DR14]. The total weight of subpopulations that have at most ¢ examples for a small value of @ is likely to be significant in most modern datasets. Thus this may formally explain at least some of the gap in the results currently achieved using differentially private training algorithms and those achievable without the privacy constraint. 
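Definition 4.1 also suggests a direct, if expensive, empirical estimate of label memorization: train with and without the example and compare the probabilities of predicting its label. The sketch below does this for a simple randomized 1-NN learner on synthetic data; both the learner and the data are illustrative stand-ins. A learner constrained to be uniformly stable or differentially private would, by the results above, be forced to keep such estimates small on every example, including the atypical one.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_nn_predict(train_X, train_y, x):
    # Predict with the label of the nearest training point.
    j = np.argmin(np.linalg.norm(train_X - x, axis=1))
    return train_y[j]

def fit_and_predict(X, y, x, rng):
    # A randomized learner A: 1-NN on a random 70% subsample of the data.
    keep = rng.random(len(X)) < 0.7
    if not keep.any():
        keep[rng.integers(len(X))] = True
    return one_nn_predict(X[keep], y[keep], x)

def estimate_mem(X, y, i, trials=200, seed=0):
    """Monte-Carlo estimate of mem(A, S, i) from Definition 4.1."""
    rng = np.random.default_rng(seed)
    X_minus, y_minus = np.delete(X, i, axis=0), np.delete(y, i)
    with_i = np.mean([fit_and_predict(X, y, X[i], rng) == y[i]
                      for _ in range(trials)])
    without_i = np.mean([fit_and_predict(X_minus, y_minus, X[i], rng) == y[i]
                         for _ in range(trials)])
    return with_i - without_i

# Synthetic data: two large clusters plus one atypical, singleton-like point.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2)), [[10.0, 10.0]]])
y = np.concatenate([np.zeros(50, int), np.ones(50, int), [0]])

print("mem on a typical point   :", estimate_mem(X, y, 0))
print("mem on the atypical point:", estimate_mem(X, y, 100))
```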
Uniform stability: A related notion of stability is uniform prediction stability [BE02; DF18] that, in the context of prediction, requires that changing any point in the dataset does not change the label distribution on any point by more than γ in total variation distance. This notion is useful in ensuring generalization [BE02; FV19] and as a way to ensure robustness of predictions against data poisoning. In this context, γ-uniform stability implies that the algorithm is γ-memorization limited (and also is (0, γ)-differentially private for predictions). Therefore Corollary 4.6 implies limitations of such algorithms. 26 . # 4.4 Disparate effect of limited memorization Corollary 4.6 and Theorem 4.8 imply that limiting memorization increases the generalization error of an algorithm on long-tailed (and sufficiently hard) learning problems. Moreover, the excess error due to limited memorization depends on the prior π, hardness of the problem and the number of samples n. This implies that if the data distribution consists of several subgroups with different properties, then the cost of limiting memorization can be different for these subgroups. In particular, the cost can be higher for smaller subgroups or those with more distinct subpopulations. These are not hypothetical scenarios. For differential privacy these effects were observed in a concurrent work of Bagdasaryan and Shmatikov [BS19b]. For model compression the differences in the costs have been confirmed and investigated in a subsequent work of Hooker et al. [Hoo+20a; Hoo+20b]. In addition to disparate effects, these works empirically demonstrate that the increase in error is most pronounced on atypical examples. As a concrete example of why our long-tail theory explains the different costs we consider a 10-class classification problem over N = 5, 000 subpopulations, Zipf prior π, and n = 50, 000 samples. We will also assume for simplicity, that the labeling prior is uniform and independent over all subpopulations and there is no noise. Let A be a γ-memorization limited learning algorithm for γ = 1/2. The choice of γ does not affect the comparison as it will scale the excess error for all subgroups in the same way. The labels of all the subpopulations that have not been observed in the sample are completely unpredictable and therefore the expected error of the optimal algorithm in this setting is equal to opt(z, F) = ( - x1) S> a (1-a)”. Je[N].0=7 (5) To compute this value in our setting of parameters we will use π instead of ¯πN as those are very close for large N and it is easier to perform (and verify) computations on π. This gives us opt(π, F) ≈ 0.018. Applying Corollary 4.6, we obtain that cost of limiting memorization to 1/2 is ≈ 0.015. Now, consider the same question but for a sample that only has 10, 000 examples. Then opt(π, F) ≈ 0.113 and the cost of limited memorization ≈ 0.035. Finally, consider the same question but with the number of subpopulations N = 25, 000 and n = 50, 000 (corresponding to a harder learning problem). Then opt(π, F) ≈ 0.107 and the cost of limited memorization is ≈ 0.031. Next, assume that we are given a learning problem that is a mixture of the first and second settings, namely, the population is P = 5 6 P2 and we are given n = 60, 000 examples. Then in each subgroup we still have the same optimums and the same cost of limited memorization. The cost of limited memorization is more than twice higher for the smaller subgroup in this mixture problem. 
Similarly, in the mixture of the first and third settings (P = 1 2 P3 and n = 100, 000) the cost of limited memorization is twice higher for the subgroup with a harder prediction problem. The cost of memorization with 10 classes and 7 = 0.5 is the same as the cost of (label) differential privacy for predictions with « = In 6 and 6 & 0 so the same conclusions follow from Theorem|4.8} Understanding of the causes of such disparate effects can be used to design mitigation strategies. For example, by using different levels of regularization (or compression) on different subgroups the costs can be balanced. Similarly, a different privacy parameter can be used for different subgroups (assuming that the additional risk of privacy violations is justified by the increase in the accuracy). 27 # 5 Discussion Our work provides a natural and simple learning model in which memorization of labels and, in some cases interpolation, are necessary for achieving nearly optimal generalization when learning from a long-tailed data distribution. It suggests that the reason why many modern ML methods reach their best accuracy while (nearly) perfectly fitting the data is that these methods are (implicitly) tuned to handle the long tail of natural data distributions. Our model explicitly incorporates the prior distribution on the frequencies of subpopulations in the data and we argue that such modeling is necessary to avoid the disconnect between the classical view of generalization and the practice of ML. We hope that the insights derived from our approach will serve as the basis for future theoretical analyses of generalization that more faithfully reflect modern datasets and learning techniques. A recent example that such modeling has practical benefits can be found in [Cao+19]. # Acknowledgements Part of the inspiration and motivation for this work comes from empirical observations that differentially private algorithms have poor accuracy on atypical examples. I’m grateful to Nicholas Carlini, Ulfar Erlingsson and Nicolas Papernot for numerous illuminating discussions of experimental work on this topic [CEP19] and to Vitaly Shmatikov for sharing his insights on this phenomenon in the context of language models. I would like to thank my great colleagues Peter Bartlett, Misha Belkin, Olivier Bousquet, Edith Cohen, Roy Frostig, Daniel Hsu, Phil Long, Yishay Mansour, Mehryar Mohri, Tomer Koren, Sasha Rakhlin, Adam Smith, Kunal Talwar, Greg Valiant, and Chiyuan Zhang for insightful feedback and suggestions on this work. I thank the authors of [ZAR14] for the permission to include Figure 1 from their work. # References [Aba+16] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. “Deep learning with differential privacy”. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM. 2016, pp. 308–318. [Arp+17] D. Arpit, S. Jastrzkebski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, T. Maharaj, A. Fischer, A. Courville, Y. Bengio, et al. “A closer look at memorization in deep networks”. In: Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org. 2017, pp. 233–242. P. L. Bartlett, P. M. Long, G. Lugosi, and A. Tsigler. “Benign Overfitting in Linear Regression”. In: arXiv preprint arXiv:1906.11300 (2019). O. Bousquet and A. Elisseeff. “Stability and generalization”. In: JMLR 2 (2002), pp. 499–526. P. L. Bartlett, D. J. Foster, and M. J. Telgarsky. “Spectrally-normalized margin bounds for neural networks”. 
In: Advances in Neural Information Processing Systems. 2017, pp. 6240–6249. [BHM18] M. Belkin, D. J. Hsu, and P. Mitra. “Overfitting or perfect fitting? risk bounds for classification and regression rules that interpolate”. In: Advances in Neural Information Processing Systems. 2018, pp. 2300–2311. [BHX19] M. Belkin, D. Hsu, and J. Xu. “Two models of double descent for weak features”. In: arXiv preprint arXiv:1903.07571 (2019). 28 P. Bartlett and S. Mendelson. “Rademacher and Gaussian Complexities: Risk Bounds and Structural Results”. In: Journal of Machine Learning Research 3 (2002), pp. 463–482. [BMM18] M. Belkin, S. Ma, and S. Mandal. “To Understand Deep Learning We Need to Understand Kernel Learning”. In: ICML. Vol. 80. Proceedings of Machine Learning Research. PMLR, 2018, pp. 541–549. URL: http://proceedings.mlr.press/v80/belkin18a.html. E. Bagdasaryan, O. Poursaeed, and V. Shmatikov. “Differential privacy has disparate impact on model accuracy”. In: Advances in Neural Information Processing Systems. 2019, pp. 15453– 15462. L. Breiman. “Random forests”. In: Machine learning 45.1 (2001), pp. 5–32. G. Brown, M. Bun, V. Feldman, A. Smith, and K. Talwar. “When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?” In: CoRR abs/2012.06421 (2020). arXiv: 2012.06421. URL: https://arxiv.org/abs/2012.06421. M. Belkin, A. Rakhlin, and A. B. Tsybakov. “Does data interpolation contradict statistical optimality?” In: arXiv preprint arXiv:1806.09471 (2018). R. Babbar and B. Sch¨olkopf. “Dismec: Distributed sparse machines for extreme multi-label classification”. In: Proceedings of the tenth ACM international conference on web search and data mining. ACM. 2017, pp. 721–729. R. Babbar and B. Sch¨olkopf. “Data scarcity, robustness and extreme multi-label classification”. In: Machine Learning (2019). E. Bagdasaryan and V. Shmatikov. “Differential Privacy Has Disparate Impact on Model Accuracy”. In: CoRR abs/1905.12101 (2019). arXiv: 1905.12101. URL: http://arxiv. org/abs/1905.12101. [Cao+19] Car+19] K. Cao, C. Wei, A. Gaidon, N. Arechiga, and T. Ma. “Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss”. In: arXiv preprint arXiv:1906.07413 (2019). N. Carlini, C. Liu, J. Kos, ´U. Erlingsson, and D. Song. “The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks”. In: Usenix Security (to appear). 2019. N. Carlini, F. Tram`er, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. B. Brown, D. Song, ´U. Erlingsson, A. Oprea, and C. Raffel. “Extracting Training Data from Large Language Models”. In: CoRR abs/2012.07805 (2020). arXiv: 2012 . 07805. URL: https://arxiv.org/abs/2012.07805. [Car+20] K. Chaudhuri and S. Dasgupta. “Rates of Convergence for Nearest Neighbor Classification”. In: NIPS. 2014, pp. 3437–3445. URL: http://papers.nips.cc/paper/5439-rates- of-convergence-for-nearest-neighbor-classification. [CEP18] [CEP19] N. Carlini, U. Erlingsson, and N. Papernot. “Prototypical Examples in Deep Learning: Metrics, Characteristics, and Utility”. In: (2018). URL: https://openreview.net/forum?id= r1xyx3R9tQ. N. Carlini, ´U. Erlingsson, and N. Papernot. “Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications”. In: arXiv preprint arXiv:1910.13427 (2019). K. Chaudhuri and D. Hsu. “Sample Complexity Bounds for Differentially Private Learning”. In: COLT. 2011, pp. 155–186. 29 T. Cover and P. Hart. “Nearest neighbor pattern classification”. 
In: IEEE transactions on information theory 13.1 (1967), pp. 21–27. G. Cohen, G. Sapiro, and R. Giryes. “DNN or k-NN: That is the Generalize vs. Memorize Question”. In: arXiv preprint arXiv:1805.06822 (2018). Y. Cui, Y. Song, C. Sun, A. Howard, and S. Belongie. “Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning”. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2018. C. Cortes and V. Vapnik. “Support-vector networks”. In: Machine learning 20.3 (1995), pp. 273– 297. [CV95] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. “ImageNet: A Large-Scale Hierarchical Image Database”. In: CVPR 2009. 2009. C. Dwork and V. Feldman. “Privacy-preserving Prediction”. In: Conference On Learning Theory. 2018, pp. 1693–1702. C. Dwork and A. Roth. The Algorithmic Foundations of Differential Privacy. Vol. 9. 3-4. 2014, pp. 211–407. URL: http://dx.doi.org/10.1561/0400000042. L. Devroye and T. J. Wagner. “Distribution-free inequalities for the deleted and holdout error estimates”. In: IEEE Trans. Information Theory 25.2 (1979), pp. 202–207. [Dwo+06] C. Dwork, F. McSherry, K. Nissim, and A. Smith. “Calibrating noise to sensitivity in private data analysis”. In: TCC. 2006, pp. 265–284. [Dwo+14] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth. “Preserving Statistical Validity in Adaptive Data Analysis”. In: CoRR abs/1411.2664 (2014). Extended abstract in STOC 2015. V. Feldman. “Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back”. In: CoRR abs/1608.04414 (2016). Extended abstract in NIPS 2016. URL: http: //arxiv.org/abs/1608.04414. [FS97] Y. Freund and R. Schapire. “A decision-theoretic generalization of on-line learning and an application to boosting”. In: Journal of Computer and System Sciences 55.1 (1997), pp. 119– 139. [FV19] V. Feldman and J. Vondr´ak. “High probability generalization bounds for uniformly stable algorithms with nearly optimal rate”. In: CoRR abs/1902.10710 (2019). arXiv: 1902.10710. URL: http://arxiv.org/abs/1902.10710. V. Feldman and C. Zhang. Visualizations and Pretrained Models for “What Neural Networks Memorize and Why”. https://pluskid.github.io/influence-memorization/. 2020. V. Feldman and C. Zhang. “What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation”. In: CoRR abs/2008.03703 (2020). Extended abstract appears in NeurIPS 2020. arXiv: 2008.03703. URL: https://arxiv.org/abs/2008.03703. O. Goldreich, S. Goldwasser, and S. Micali. “How to construct random functions”. In: Journal of the ACM 33.4 (1986), pp. 792–807. T. Hastie, A. Montanari, S. Rosset, and R. J. Tibshirani. “Surprises in High-Dimensional Ridgeless Least Squares Interpolation”. In: arXiv preprint arXiv:1903.08560 (2019). 30 [Hoo+20a] S. Hooker, A. Courville, G. Clark, Y. Dauphin, and A. Frome. “What Do Compressed Deep Neural Networks Forget?” In: (2020). arXiv: 1911.05248 [cs.LG]. [Hoo+20b] S. Hooker, N. Moorosi, G. Clark, S. Bengio, and E. Denton. “Characterising Bias in Compressed Models”. In: CoRR abs/2010.03058 (2020). arXiv: 2010.03058. URL: https://arxiv. org/abs/2010.03058. M. Hardt, B. Recht, and Y. Singer. “Train faster, generalize better: Stability of stochastic gradient descent”. In: ICML. 2016, pp. 1225–1234. URL: http://jmlr.org/proceedings/ papers/v48/hardt16.html. V. Koltchinskii. “Rademacher penalties and structural risk minimization”. In: IEEE Transactions on Information Theory 47.5 (2001), pp. 1902–1914. [Kri+17] R. Krishna, Y. Zhu, O. Groth, J. 
Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. “Visual genome: Connecting language and vision using crowdsourced dense image annotations”. In: International Journal of Computer Vision 123.1 (2017), pp. 32– 73. Y. Long, V. Bindschaedler, and C. A. Gunter. “Towards Measuring Membership Privacy”. In: CoRR abs/1712.09136 (2017). arXiv: 1712.09136. URL: http://arxiv.org/abs/ 1712.09136. Y. Li, T. Ma, and H. Zhang. “Algorithmic Regularization in Over-parameterized Matrix Sensing and Neural Networks with Quadratic Activations”. In: Conference On Learning Theory. 2018, pp. 2–47. [Lon+18] Y. Long, V. Bindschaedler, L. Wang, D. Bu, X. Wang, H. Tang, C. A. Gunter, and K. Chen. “Understanding Membership Inferences on Well-Generalized Learning Models”. In: CoRR abs/1802.04889 (2018). arXiv: 1802.04889. URL: http://arxiv.org/abs/1802. 04889. T. Liang and A. Rakhlin. “Just interpolate: Kernel” ridgeless” regression can generalize”. In: arXiv preprint arXiv:1808.00387 (2018). S. Ma, R. Bassily, and M. Belkin. “The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning”. In: ICML. 2018, pp. 3331–3340. URL: http://proceedings.mlr.press/v80/ma18a.html. [McM+18] B. McMahan, D. Ramage, K. Talwar, and L. Zhang. “Learning Differentially Private Recurrent Language Models”. In: International Conference on Learning Representations (ICLR). 2018. URL: https://openreview.net/pdf?id=BJ0hF1Z0b. S. Mukherjee, P. Niyogi, T. Poggio, and R. Rifkin. “Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization”. In: Advances in Computational Mathematics 25.1-3 (2006), pp. 161–193. V. Muthukumar, K. Vodrahalli, and A. Sahai. “Harmless interpolation of noisy data in regres- sion”. In: arXiv preprint arXiv:1903.09139 (2019). [Ney+17a] B. Neyshabur, S. Bhojanapalli, D. McAllester, and N. Srebro. “Exploring generalization in deep learning”. In: Advances in Neural Information Processing Systems. 2017, pp. 5947–5956. [Ney+17b] B. Neyshabur, R. Tomioka, R. Salakhutdinov, and N. Srebro. “Geometry of optimization and implicit regularization in deep learning”. In: arXiv preprint arXiv:1705.03071 (2017). 31 [NTS15] B. Neyshabur, R. Tomioka, and N. Srebro. “In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning”. In: ICLR. 2015. URL: http://arxiv.org/ abs/1412.6614. [NWS14] D. Needell, R. Ward, and N. Srebro. “Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm”. In: NIPS. 2014, pp. 1017–1025. URL: http://papers. nips.cc/paper/5355-stochastic-gradient-descent-weighted-sampling- and-the-randomized-kaczmarz-algorithm.pdf. [OS15] Pap+16] Pap+17] A. Orlitsky and A. T. Suresh. “Competitive distribution estimation: Why is good-turing good”. In: NIPS. 2015, pp. 2143–2151. N. Papernot, M. Abadi, ´U. Erlingsson, I. J. Goodfellow, and K. Talwar. “Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data”. In: CoRR abs/1610.05755 (2016). arXiv: 1610.05755. URL: http://arxiv.org/abs/1610.05755. N. Papernot, M. Abadi, ´U. Erlingsson, I. J. Goodfellow, and K. Talwar. “Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data”. In: Proceedings of the 5th International Conference on Learning Representations (ICLR). 2017. A. Rakhlin, S. Mukherjee, and T. Poggio. “Stability Results In Learning Theory”. In: Analysis and Applications 03.04 (2005), pp. 397–417. [RZ19] A. Rakhlin and X. 
Zhai. “Consistency of Interpolation with Laplace Kernels is a High- Dimensional Phenomenon”. In: COLT. Vol. 99. PMLR, 2019, pp. 2595–2623. URL: http: //proceedings.mlr.press/v99/rakhlin19a.html. R. E. Schapire. “Explaining adaboost”. In: Empirical inference. Springer, 2013, pp. 37–52. [Sch13] R. Schapire, Y. Freund, P. Bartlett, and W. Lee. “Boosting the margin: a new explanation for the effectiveness of voting methods”. In: Annals of Statistics 26.5 (1998), pp. 1651–1686. [Sha+09] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. “Stochastic Convex Optimization”. In: COLT. 2009. S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. “Learnability, Stability and Uniform Convergence”. In: Journal of Machine Learning Research 11 (2010), pp. 2635–2670. URL: http://portal.acm.org/citation.cfm?id=1953019. [Sho+17] R. Shokri, M. Stronati, C. Song, and V. Shmatikov. “Membership Inference Attacks Against Machine Learning Models”. In: 2017 IEEE Symposium on Security and Privacy, SP 2017. 2017, pp. 3–18. [Sou+18] D. Soudry, E. Hoffer, M. S. Nacson, S. Gunasekar, and N. Srebro. “The implicit bias of gradient descent on separable data”. In: The Journal of Machine Learning Research 19.1 (2018), pp. 2822–2878. K. Sridharan. A gentle introduction to concentration inequalities. Tech. rep. 2002. [Sri02] R. Shokri and V. Shmatikov. “Privacy-preserving deep learning”. In: Proceedings of the 22nd ACM SIGSAC conference on computer and communications security. ACM. 2015, pp. 1310– 1321. S. Shalev-Shwartz and Y. Singer. “A new perspective on an old perceptron algorithm”. In: International Conference on Computational Learning Theory. Springer. 2005, pp. 264–278. 32 N. Srebro, K. Sridharan, and A. Tewari. “Smoothness, Low Noise and Fast Rates”. In: NIPS. 2010, pp. 2199–2207. URL: http://papers.nips.cc/paper/3894-smoothness- low-noise-and-fast-rates.pdf. S. Truex, L. Liu, M. E. Gursoy, L. Yu, and W. Wei. “Towards demystifying membership inference attacks”. In: arXiv preprint arXiv:1807.09173 (2018). V. N. Vapnik. Estimation of Dependences Based on Empirical Data. New York: Springer-Verlag, 1982. G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. “The inaturalist species classification and detection dataset”. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2018, pp. 8769–8778. [VH+18] G. Van Horn and P. Perona. “The devil is in the tails: Fine-grained classification in the wild”. In: arXiv preprint arXiv:1709.01450 (2017). [VHP17] G. Valiant and P. Valiant. “Instance optimal learning of discrete distributions”. In: STOC. ACM. 2016, pp. 142–155. Y.-X. Wang, D. Ramanan, and M. Hebert. “Learning to model the tail”. In: Advances in Neural Information Processing Systems. 2017, pp. 7029–7039. X. Wu, F. Li, A. Kumar, K. Chaudhuri, S. Jha, and J. F. Naughton. “Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics”. In: SIGMOD. 2017, pp. 1307–1322. [Wyn+17] A. J. Wyner, M. Olson, J. Bleich, and D. Mease. “Explaining the success of adaboost and random forests as interpolating classifiers”. In: The Journal of Machine Learning Research 18.1 (2017), pp. 1558–1590. J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. “Sun database: Large-scale scene recognition from abbey to zoo”. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE. 2010, pp. 3485–3492. X. Zhu, D. Anguelov, and D. Ramanan. “Capturing long-tail distributions of object subcate- gories”. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014, pp. 915–922. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. “Understanding deep learning requires rethinking generalization”. In: ICLR. 2017. URL: https://openreview.net/forum? id=Sy8gdB9xx. # A Proof Lemma 2.1 The key property of our problem definition is that it allows to decompose the probability of a dataset (under the entire generative process) into a probability of seeing one of the points in the dataset and the probability of seeing the rest of the dataset under a similar generative process. Specifically, we prove the following lemma. Lemma A.1. For x € X, a sequence of points V = (x1,...,%) € X"™ that includes x exactly ¢ times, let V \ x be equal to V with all the elements equal to x omitted. Then for any frequency prior 7 and a in the support of 7, we have . Pr DxDX,U~D” Din DX\} yr pn~e Pr [U =V | D(a) =a] =a*- (1-a)” (U'=V]. 33 In particular: Pr (UU =V]= E. la’ -- ay] : [U'=V}. r DwDX ,U~D" a DInDX\} yr pre Proof. We consider the distribution of D ∼ DX an event with positive probability). We denote this distribution by DX DX z ∈ X \ {x}, sampling pz from π and normalizing the results to sum to 1 − α. That is, defining D(z) =(1- Sar zEX\{a} Pe From here we obtain that an equivalent way to generate a random sample from Dx (|D(«) = a) is to sample D! from DX \*} and then multiply the resulting p.m.f. by 1 — a (with D(x) = a as before). Naturally, for any D, Pr U ∼Dn [U = V ] = D(xi). i∈[n] Now we denote by J_; the subset of indices of elements of V that are different from x: I; = {i € [n] | xi # x}. We can now conclude: Pr [(U=V|D(a)=a|= Pr [UU =V] D~DX ,U~D” D~DX (|D(«)=a),U~D" = E I] 2 D~D¥(\D(@)=0) | icin) = E af Il D(x;) D~DX(|D(z)=a) | jez, a nl =a -(l-a : ) DxDX (\D(x)=a) | ,oj_, =a‘-(1—a)"*. E Il D' (xi) piwk\ | ij, £ n-k 1 =a -(l-a : Pr ( ) prapt\ Eh ype! # D(xi) 1 − α 1 U=Vi. | The second part of the claim follows directly from the fact that, by definition of ¯πN , Pr π ,U ∼Dn D∼DX [D(x) = α] = ¯πN (α). We can now prove Lemma 2.1 which we restate here for convenience. 34 Lemma A.2 (Lemma [2.1] restated). For any frequency prior 7, «© € X and a sequence of points V = (a1,...,%n) € X” that includes x exactly £ times, we have Eynwan [aft!- (1- a)? D(x =V|= papeynpn| (x) |U=V] Bawa’ [ae -d- a)r—-4 . Proof. We first observe that by the Bayes rule and Lemma A.1: Pr [D(x) =a|U =V]= Prpwpy v~pn|U = V | D(x) = a]: Prpspy y~p»[D(2) = a] D~DN USD” Prpvpy,u~pn [UU =V] _ af. (1—a)"*. Pry pX\e} yr pn elU =V]- 7% (a) Ewan [8% (1— BP Pry, a xv} yu pn dU! = V \ a] a’. (1—a)"*. #X (a) Egran [BF (1 8)P-4 # D∼DN This leads to the claim: [D(z)|U=V)= SO a [D(x) =a|U=V] . Pr . DXDN U~D" aesupp(7) D~DN .U~D a®-(1—a)"*. #% (a) = a: a_ an [BE- (1 — B)r- aesupp(#) Epean [8 ( 6 ) ] _ Eqnan [att . (1 _ a) Ewes fa (1— ay] # B Proof of Theorem 3.5 Theorem B.1 (Thm.|3.5|restated). Let X C R* be a domain partitioned into sub-domains {Xi}ietn] with subpopulations {M; ie[n] Over the sub-domains. Let A be any approximately margin maximizing m-class linear classifier and 7 be a frequency prior. Assume that for D ~ pl and V ~ M3, Vi ~ Tet oar 45 with probability at least 1 — 5°, V UV" is (1, 7?/(8/7))-independent for some t € (0, 1/2]. Then for any labeling prior F, A is A-subpopulation-coupled with probability 1 — 6 and , > 1-6. Proof. For the given priors 7 and F, let S = ((1, 41), ---,(@n,Yn)) be a dataset sampled from (Mp, Ly)” for D~ pW and f ~ F. Let V = (21,...,%n). 
# B Proof of Theorem 3.5

Theorem B.1 (Thm. 3.5 restated). Let $X \subseteq \mathbb{R}^d$ be a domain partitioned into sub-domains $\{X_i\}_{i \in [N]}$ with subpopulations $\{M_i\}_{i \in [N]}$ over the sub-domains. Let $A$ be any approximately margin maximizing $m$-class linear classifier and $\pi$ be a frequency prior. Assume that for $D \sim \mathcal{D}^{[N]}_\pi$, $V \sim M_D^n$, and $V'$ consisting of one additional independent sample from every subpopulation represented exactly once in $V$, with probability at least $1 - \delta^2$ the sequence $V \cup V'$ is $(\tau, \tau^2/(8\sqrt{n}))$-independent for some $\tau \in (0, 1/2]$. Then for any labeling prior $\mathcal{F}$, $A$ is $\lambda$-subpopulation-coupled with probability $1 - \delta$ for $\lambda \ge 1 - \delta$.

Proof. For the given priors $\pi$ and $\mathcal{F}$, let $S = ((x_1, y_1), \ldots, (x_n, y_n))$ be a dataset of $n$ examples sampled i.i.d. from $M_D$ and labeled by $f$, for $D \sim \mathcal{D}^{[N]}_\pi$ and $f \sim \mathcal{F}$. Let $V = (x_1, \ldots, x_n)$. Let $T := [N]_{S=1}$ be the set of indices of sub-domains from which $S$ contains exactly one point, and let $V' = (x'_j)_{j \in T}$ be sampled from $\prod_{j \in T} M_j$; that is, $V'$ consists of an additional independent sample from every subpopulation with a single sample in $S$.

We will show that for any $V \cup V'$ that is $(\tau, \theta := \tau^2/(8\sqrt{n}))$-independent, the output $w_1, \ldots, w_m$ of any approximately margin maximizing $m$-class linear classifier $A$ gives predictions on $V'$ that are consistent with those on $V$ (which are defined by $S$): if $x_i \in X_t$ for $t \in T$, then for every $k \in [m]$, $\operatorname{sign}(\langle w_k, x'_t \rangle) = \operatorname{sign}(\langle w_k, x_i \rangle)$. By Defn. 3.4, this implies that the prediction of the classifier on $x'_t$ is identical to that on $x_i$.

By our assumption, $V \cup V'$ is not $(\tau, \tau^2/(8\sqrt{n}))$-independent with probability at most $\delta^2$. By Markov's inequality, the probability over the choice of $V$ that the probability over the choice of $V'$ of $V \cup V'$ not being $(\tau, \theta)$-independent exceeds $\delta$ is at most $\delta$. By our definition of $V'$, the marginal distribution of $x'_t$ is exactly $M_t$. This implies that, with probability at least $1 - \delta$ over the choice of the dataset $S$, for every $x \in X_{S=1}$ with $x \in X_t$ and $x' \sim M_t$, we have
\[
\mathrm{TV}\!\left( \operatorname{dist}_{h \sim A(S)}\!\big[h(x)\big],\; \operatorname{dist}_{x' \sim M_t,\, h \sim A(S)}\!\big[h(x')\big] \right) \le \delta
\]
as required by Defn. 3.1 (for $\ell = 1$).

To prove the stated consistency property for $V \cup V'$ that is $(\tau, \theta)$-independent, we will first show that every subset of points in $V$ can be separated from its complement with margin of $\Omega(1/\sqrt{n})$. We will then use the properties of approximately margin maximizing classifiers and, again, independence to obtain consistency.

For any vector $v$, we denote $\bar v := v / \|v\|_2$. To show that the margin is large, we define the weights explicitly by using one representative point from every subpopulation in $V$. Without loss of generality, we can assume that these representatives are $x_1, \ldots, x_r$ for some $r \le n$. Let $z_1, \ldots, z_r \in \{\pm 1\}$ be an arbitrary partition of these representatives into positively and negatively labeled ones. We define $w := \sum_{j \in [r]} z_j \bar x_j$ and consider the linear separator given by $\bar w$.

To evaluate the margin we first observe that $\|w\|_2 \le \sqrt{2r}$. This follows via induction on $r$:
\[
\Big\| \sum_{j \in [r]} z_j \bar x_j \Big\|_2^2
= \Big\| \sum_{j \in [r-1]} z_j \bar x_j \Big\|_2^2 + \|\bar x_r\|_2^2 + 2 \Big\langle \sum_{j \in [r-1]} z_j \bar x_j,\; z_r \bar x_r \Big\rangle
\le 2(r-1) + 1 + 2\theta\sqrt{2(r-1)} \le 2r,
\]
where the first inequality uses the inductive hypothesis together with the $(\tau,\theta)$-independence of $V \cup V'$, and the last one uses $\theta = \tau^2/(8\sqrt{n})$, $\tau \le 1/2$ and $r \le n$.

Now for $i \in [n]$, assume that $x_i \in X_t$ and (without loss of generality) that $x_r$ is the representative of subdomain $X_t$. Then
\[
z_r \langle \bar x_i, \bar w \rangle
= \frac{1}{\|w\|_2}\Big( \langle \bar x_i, \bar x_r \rangle + z_r \Big\langle \bar x_i, \sum_{j \in [r-1]} z_j \bar x_j \Big\rangle \Big)
\ge \frac{1}{\|w\|_2}\Big( \tau - \theta\sqrt{2(r-1)} \Big)
\ge \frac{1}{\sqrt{2n}}\Big( \tau - \frac{\tau^2 \sqrt 2}{8} \Big)
\ge \frac{7\tau}{8\sqrt{2n}}
\ge \frac{\tau}{2\sqrt{n}}.
\]
Thus we obtain that $x_i$ is labeled in the same way as its representative $x_r$ and with margin of at least $\tau/(2\sqrt{n})$. This holds for all $i \in [n]$ and therefore $\bar w$ shows that the desired separation can be achieved with margin of at least $\tau/(2\sqrt{n})$.

Let $w_1, \ldots, w_m$ be the linear separators returned by $A$. Let $w$ be one of them. By our assumptions on $A$, $w$ separates $V$ with margin of at least $\gamma := \tau/(4\sqrt{n})$ and further it lies in the span of $V$. Namely, there exist $\alpha_1, \ldots, \alpha_n$ such that $w = \sum_{i \in [n]} \alpha_i \bar x_i$.

We now pick an arbitrary singleton point from $V$. Without loss of generality we assume that it is $x_n$ and that $\langle \bar x_n, w \rangle \ge \gamma$, and let $x' \in V'$ be the point from the same subdomain $X_t$. Let $v := \sum_{i \in [n-1]} \alpha_i \bar x_i$ be the part of $w$ that excludes $x_n$. By our assumption, $x_n$ is a singleton and therefore the points in $(x_1, \ldots, x_{n-1})$ are from other subdomains. By the independence of $V \cup V'$, this implies that $|\langle \bar x_n, v \rangle| \le \theta \|v\|_2$ and $|\langle \bar x', v \rangle| \le \theta \|v\|_2$.

Now we need to show that the margin condition implies that $\alpha_n$ is sufficiently large. Specifically,
\[
\gamma \le \langle \bar x_n, w \rangle = \alpha_n + \langle \bar x_n, v \rangle \le \alpha_n + \theta \|v\|_2,
\]
and thus
\[
\alpha_n \ge \gamma - \theta \|v\|_2 \ge \gamma - \theta(1 + \alpha_n),
\]
where we used the fact that, by the triangle inequality, $\|v\|_2 \le \|w\|_2 + \|\alpha_n \bar x_n\|_2 \le 1 + \alpha_n$.

This implies that $\alpha_n \ge (\gamma - \theta)/(1 + \theta)$. We can now bound $\langle \bar x', w \rangle$ (using that $\bar x'$ and $\bar x_n$ come from the same subdomain):
\[
\langle \bar x', w \rangle = \alpha_n \langle \bar x', \bar x_n \rangle + \langle \bar x', v \rangle
\ge \alpha_n \tau - \theta \|v\|_2
\ge \alpha_n \tau - \theta(1 + \alpha_n)
= \alpha_n(\tau - \theta) - \theta
\ge \frac{(\gamma - \theta)(\tau - \theta)}{1 + \theta} - \theta
\ge \frac{\tau\gamma}{8} > 0,
\]
where the last inequality uses $\theta = \tau^2/(8\sqrt{n}) = \tau\gamma/2$ and assumes that $n \ge 4$. Thus we obtain that for every $w \in \{w_1, \ldots, w_m\}$, every point in $V' \cap X_{S=1}$ will be classified by $w$ in the same way as the point from the same subpopulation in $S$.
{ "id": "1806.09471" }
1906.05433
Tackling Climate Change with Machine Learning
Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change.
http://arxiv.org/pdf/1906.05433
David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio
cs.CY, cs.AI, cs.LG, stat.ML
For additional resources, please visit the website that accompanies this paper: https://www.climatechange.ai/
null
cs.CY
20190610
20191105
# Tackling Climate Change with Machine Learning

David Rolnick1 ∗, Priya L. Donti2, Lynn H. Kaack3, Kelly Kochanski4, Alexandre Lacoste5, Kris Sankaran6,7, Andrew Slavin Ross9, Nikola Milojevic-Dupont10,11, Natasha Jaques12, Anna Waldman-Brown12, Alexandra Luccioni6,7, Tegan Maharaj6,8, Evan D. Sherwin2, S. Karthik Mukkavilli6,7, Konrad P. Körding1, Carla Gomes13, Andrew Y. Ng14, Demis Hassabis15, John C. Platt16, Felix Creutzig10,11, Jennifer Chayes17, Yoshua Bengio6,7

1University of Pennsylvania, 2Carnegie Mellon University, 3ETH Zürich, 4University of Colorado Boulder, 5Element AI, 6Mila, 7Université de Montréal, 8École Polytechnique de Montréal, 9Harvard University, 10Mercator Research Institute on Global Commons and Climate Change, 11Technische Universität Berlin, 12Massachusetts Institute of Technology, 13Cornell University, 14Stanford University, 15DeepMind, 16Google AI, 17Microsoft Research

# Abstract

Climate change is one of the greatest challenges facing humanity, and we, as machine learning experts, may wonder how we can help. Here we describe how machine learning can be a powerful tool in reducing greenhouse gas emissions and helping society adapt to a changing climate. From smart grids to disaster management, we identify high impact problems where existing gaps can be filled by machine learning, in collaboration with other fields. Our recommendations encompass exciting research questions as well as promising business opportunities. We call on the machine learning community to join the global effort against climate change.

# Introduction

The effects of climate change are increasingly visible.1 Storms, droughts, fires, and flooding have become stronger and more frequent [3]. Global ecosystems are changing, including the natural resources and agriculture on which humanity depends. The 2018 intergovernmental report on climate change estimated that the world will face catastrophic consequences unless global greenhouse gas emissions are eliminated within thirty years [4]. Yet year after year, these emissions rise.

Addressing climate change involves mitigation (reducing emissions) and adaptation (preparing for unavoidable consequences). Both are multifaceted issues. Mitigation of greenhouse gas (GHG) emissions requires changes to electricity systems, transportation, buildings, industry, and land use. Adaptation requires planning for resilience and disaster management, given an understanding of climate and extreme events. Such a diversity of problems can be seen as an opportunity: there are many ways to have an impact.

∗D.R. conceived and edited this work, with P.L.D., L.H.K., and K.K. Authors P.L.D., L.H.K., K.K., A.L., K.S., A.S.R., N.M-D., N.J., A.W-B., A.L., T.M., and E.D.S. researched and wrote individual sections. S.K.M., K.P.K., C.G., A.Y.N., D.H., J.C.P., F.C., J.C., and Y.B. contributed expert advice. Correspondence to [email protected].
1For a layman’s introduction to the topic of climate change, see [1, 2].

In recent years, machine learning (ML) has been recognized as a broadly powerful tool for technological progress. Despite the growth of movements applying ML and AI to problems of societal and global good,2 there remains the need for a concerted effort to identify how these tools may best be applied to tackle climate change. Many ML practitioners wish to act, but are uncertain how. On the other side, many fields have begun actively seeking input from the ML community.
This paper aims to provide an overview of where machine learning can be applied with high impact in the fight against climate change, through either effective engineering or innovative research. The strategies we highlight include climate mitigation and adaptation, as well as meta-level tools that enable other strategies. In order to maximize the relevance of our recommendations, we have consulted experts across many fields (see Acknowledgments) in the preparation of this paper. # Who is this paper written for? We believe that our recommendations will prove valuable to several different audiences (detailed below). In our writing, we have assumed some familiarity with basic terminology in machine learning, but do not assume any prior familiarity with application domains (such as agriculture or electric grids). Researchers and engineers: We identify many problems that require conceptual innovation and can advance the field of ML, as well as being highly impactful. For example, we highlight how climate models afford an exciting domain for interpretable ML (see §7). We encourage researchers and engineers across fields to use their expertise in solving urgent problems relevant to society. Entrepreneurs and investors: We identify many problems where existing ML techniques could have a major impact without further research, and where the missing piece is deployment. We realize that some of the recommendations we offer here will make valuable startups and nonprofits. For example, we highlight techniques for providing fine-grained solar forecasts for power companies (see §1.1), tools for helping re- duce personal energy consumption (see §10.2), and predictions for the financial impacts of climate change (see §13). We encourage entrepreneurs and investors to fill what is currently a wide-open space. Corporate leaders: We identify problems where ML can lead to massive efficiency gains if adopted at scale by corporate players. For example, we highlight means of optimizing supply chains to reduce waste (see §4.1) and software/hardware tools for precision agriculture (see §5.2). We encourage corporate leaders to take advantage of opportunities offered by ML to benefit both the world and the bottom line. Local and national governments: We identify problems where ML can improve public services, help gather data for decision-making, and guide plans for future development. For example, we highlight intel- ligent transportation systems (see §2.4), techniques for automatically assessing the energy consumption of buildings in cities (see §3.1), and tools for improving disaster management (see §8.4). We encourage gov- ernments to consult ML experts while planning infrastructure and development, as this can lead to better, more cost-effective outcomes. We further encourage public entities to release data that may be relevant to climate change mitigation and adaptation goals. 2See the AI for social good movement (e.g. [5, 6]), ML for the developing world [7], the computational sustainability movement (e.g. [8–12], the American Meteorological Society’s Committee on AI Applications to Environmental Science, and the field of Climate Informatics (www.climateinformatics.org) [13], as well as the relevant survey papers [14–16]. 
Table 1: Climate change solution domains, corresponding to sections of this paper, matched with selected areas of ML that are relevant to each. The ML areas considered are: causal inference, computer vision, interpretable models, NLP, RL & control, time-series analysis, transfer learning, uncertainty quantification, and unsupervised learning. The solution domains (and strategies) are: 1 Electricity systems (enabling low-carbon electricity; reducing current-system impacts; ensuring global impact); 2 Transportation (reducing transport activity; improving vehicle efficiency; alternative fuels & electrification; modal shift); 3 Buildings and cities (optimizing buildings; urban planning; the future of cities); 4 Industry (optimizing supply chains; improving materials; production & energy); 5 Farms & forests (remote sensing of emissions; precision agriculture; monitoring peatlands; managing forests); 6 Carbon dioxide removal (direct air capture; sequestering CO2); 7 Climate prediction (uniting data, ML & climate science; forecasting extreme events); 8 Societal impacts (ecology; infrastructure; social systems; crisis); 9 Solar geoengineering (understanding & improving aerosols; engineering a planetary control system; modeling impacts); 10 Individual action (understanding personal footprint; facilitating behavior change); 11 Collective decisions (modeling social interactions; informing policy; designing markets); 12 Education; 13 Finance.

# How to read this paper

The paper is broken into sections according to application domain (see Table 1). To help the reader, we have also included the following flags at the level of individual strategies.

• High Leverage denotes bottlenecks that domain experts have identified in climate change mitigation or adaptation and that we believe to be particularly well-suited to tools from ML. These areas may be especially fruitful for ML practitioners wishing to have an outsized impact, though applications not marked with this flag are also valuable and should be pursued.

• Long-term denotes applications that will have their primary impact after 2040. While extremely important, these may in some cases be less pressing than those which can help act on climate change in the near term.

• Uncertain Impact denotes applications where the impact on GHG emissions is uncertain (for example, the Jevons paradox may apply3) or where there is potential for undesirable side effects (negative externalities).

These flags should not be taken as definitive; they represent our understanding of more rigorous analyses within the domains we consider, combined with our subjective evaluation of the potential role of ML in these various applications. Despite the length of the paper, we cannot cover everything. There will certainly be many applications that we have not considered, or that we have erroneously dismissed. We look forward to seeing where future work leads.

# A call for collaboration

All of the problems we highlight in this paper require collaboration across fields. As the language used to refer to problems often varies between disciplines, we have provided keywords and background reading within each section of the paper. Finding collaborators and relevant data can sometimes be difficult; for additional resources, please visit the website that accompanies this paper: https://www.climatechange.ai/.
Collaboration makes it easier to develop effective strategies. Working with domain experts reduces the chance of using powerful tools when simple tools will do the job, of working on a problem that isn’t actually relevant to practitioners, of overly simplifying a complex issue, or of failing to anticipate risks. Collaboration can also help ensure that new work reaches the audience that will use it. To be impactful, ML code should be accessible and published using a language and a platform that are already popular with the intended users. For maximal impact, new code can be integrated into an existing, widely used tool. We emphasize that machine learning is not a silver bullet. The applications we highlight are impactful, but no one solution will “fix” climate change. There are also many areas of action where ML is inapplicable, and we omit these entirely. Furthermore, technology alone is not enough – technologies that would address climate change have been available for years, but have largely not been adopted at scale by society. While we hope that ML will be useful in reducing the costs associated with climate action, humanity also must decide to act. 3The Jevons paradox in economics refers to a situation where increased efficiency nonetheless results in higher overall demand. For example, autonomous vehicles could cause people to drive far more, so that overall GHG emissions could increase even if each ride is more efficient. In such cases, it becomes especially important to make use of specific policies, such as carbon pricing, to direct new technologies and the ML behind them. See also the literature on rebound effects and induced demand. 4 # Mitigation # 1 Electricity Systems # by Priya L. Donti AI has been called the new electricity, given its potential to transform entire industries [17]. Interestingly, electricity itself is one of the industries that AI is poised to transform. Many electricity systems are awash in data, and the industry has begun to envision next-generation systems (smart grids) driven by AI and ML [18–20]. Electricity systems4 are responsible for about a quarter of human-caused greenhouse gas emissions each year [26]. Moreover, as buildings, transportation, and other sectors seek to replace GHG-emitting fuels (§2- 3), demand for low-carbon electricity will grow. To reduce emissions from electricity systems, society must • Rapidly transition to low-carbon5 electricity sources (such as solar, wind, hydro, and nuclear) and phase out carbon-emitting sources (such as coal, natural gas, and other fossil fuels). • Reduce emissions from existing CO2-emitting power plants, since the transition to low-carbon power will not happen overnight. • Implement these changes across all countries and contexts, as electricity systems are everywhere. ML can contribute on all fronts by informing the research, deployment, and operation of electricity system technologies (Fig. 1). Such contributions include accelerating the development of clean energy technologies, improving forecasts of demand and clean energy, improving electricity system optimization and management, and enhancing system monitoring. These contributions require a variety of ML paradigms and techniques, as well as close collaborations with the electricity industry and other experts to integrate insights from operations research, electrical engineering, physics, chemistry, the social sciences, and other fields. # 1.1 Enabling low-carbon electricity Low-carbon electricity sources are essential to tackling climate change. 
These sources come in two forms: variable and controllable. Variable sources fluctuate based on external factors; for instance, solar panels produce power only when the sun is shining, and wind turbines only when the wind is blowing. On the other hand, controllable sources such as nuclear or geothermal plants can be turned on and off (though not instantaneously6). These two types of sources affect electricity systems differently, and so present distinct opportunities for ML techniques.

# 1.1.1 Variable sources

Most electricity is delivered to consumers using a physical network called the electric grid, where the power generated must equal the power consumed at every moment. This implies that for every solar panel, wind turbine, or other variable electricity generator, there is some mix of natural gas plants, storage, or other controllable sources ready to buffer changes in its output (e.g. when unexpected clouds block the sun or the wind blows less strongly than predicted). Today, this buffer is often provided by coal and natural gas plants run in a CO2-emitting standby mode called spinning reserve. In the future, this role is expected to be played by energy storage technologies such as batteries (§2.3), pumped hydro, or power-to-gas [29].7 ML can both reduce emissions from today’s standby generators and enable the transition to carbon-free systems by helping improve necessary technologies (namely forecasting, scheduling, and control) and by helping create advanced electricity markets that accommodate both variable electricity and flexible demand.

4Throughout this section, we use the term “electricity systems” to refer to the procurement of fuels and raw materials for electric grid components; the generation and storage of electricity; and the delivery of electricity to end-use consumers. For primers on these topics, see [21–25].
5We use the term “low-carbon” here instead of “renewable” because of this paper’s explicit focus on climate change goals. Renewable energy is produced from inexhaustible or easily replenished energy sources such as the sun, wind, or water, but need not necessarily be carbon-free (as in the case of some biomass [27]). Similarly, not all low-carbon energy is renewable (as in the case of nuclear energy).
6Nuclear power plants are often viewed as inflexible since they can take hours or days to turn on or off, and are often left on (at full capacity) to operate as baseload. That said, nuclear power plants may have some flexibility to change their power generation for load-following and other electric grid services, as in the case of France [28].
7It is worth noting that in systems with many fossil fuel plants, storage may increase emissions depending on how it is operated [30, 31].

Figure 1: Selected opportunities to reduce GHG emissions from electricity systems using machine learning. (The figure depicts forecasting supply and demand, improving scheduling and flexible demand, accelerating materials and fusion science, managing existing technologies, modeling emissions, detecting methane leaks, improving clean energy access, and approaching low-data settings, across variable low-carbon power, controllable low-carbon power, fossil fuel power, and the electric grid.)

Forecasting supply and demand High Leverage

Since variable generation and electricity demand both fluctuate, they must be forecast ahead of time to inform real-time electricity scheduling and longer-term system planning.
Better short-term forecasts can allow system operators to reduce their reliance on polluting standby plants and to proactively manage increasing amounts of variable sources. Better long-term forecasts can help system operators (and investors) determine where and when variable plants should be built. While many system operators today use basic forecasting techniques, forecasts will need to become increasingly accurate, span multiple horizons in time and space, and better quantify uncertainty to support these use cases. ML can help on all these fronts. To date, many ML methods have been used to forecast electricity supply and demand. These meth- ods have employed historical data, physical model outputs, images, and even video data to create short- to medium-term forecasts of solar power [32–40], wind power [41–45], “run-of-the-river” hydro power [19], demand [46–49], or more than one of these [50, 51] at aggregate spatial scales. These methods span various types of supervised machine learning, fuzzy logic, and hybrid physical models, and take different approaches to quantifying (or not quantifying) uncertainty. At a more spatially granular level, some work has attempted to understand specific categories of demand, for instance by clustering households [52, 53] or by disaggregating electricity signals using game theory, optimization, regression, and/or online learning [54–56]. While much of this previous work has used domain-agnostic techniques, ML algorithms of the future will need to incorporate domain-specific insights. For instance, since weather fundamentally drives both variable generation and electricity demand, ML algorithms forecasting these quantities should draw from innovations in climate modeling and weather forecasting (§7) and in hybrid physics-plus-ML modeling tech- niques [33–35]. Such techniques can help improve short- to medium-term forecasts, and are also necessary for ML to contribute to longer-term (e.g. year-scale) forecasts since weather distributions shift over time [57]. In addition to incorporating system physics, ML models should also directly optimize for system goals [58–60]. For instance, the authors of [58] use a deep neural network to produce demand forecasts that optimize for electricity scheduling costs rather than forecast accuracy; this notion could be extended to produce forecasts that minimize GHG emissions. In non-automated settings where power system control engineers (partially) determine how much power each generator should produce, interpretable ML and auto- mated visualization techniques could help engineers better understand forecasts and thus improve how they schedule low-carbon generators. More broadly, understanding the domain value of improved forecasts is an interesting challenge. For example, previous work has characterized the benefits of specific solar forecast improvements in a region of the United States [61]; further study in different contexts and for different types of improvements could help better direct ML work in the forecasting space. Improving scheduling and flexible demand When balancing electricity systems, system operators use a process called scheduling and dispatch to de- termine how much power every controllable generator should produce. This process is slow and complex, as it is governed by NP-hard optimization problems such as unit commitment and optimal power flow that must be coordinated across multiple time scales (from sub-second to days ahead). 
Further, scheduling will become even more complex as electricity systems include more storage, variable generators, and flexible demand, since operators will need to manage even more system components while simultaneously solving scheduling problems more quickly to account for real-time variations in electricity production. Schedul- ing processes must therefore improve significantly for operators to manage systems with a high reliance on variable sources. ML can help improve the existing (centralized) process of scheduling and dispatch by speeding up power system optimization problems and improving the quality of optimization solutions. A great deal of work primarily in optimization, but also using techniques such as neural networks, genetic algorithms, and fuzzy 7 logic [62], has focused on improving the tractability of power system optimization problems. ML could also be used to approximate or simplify existing optimization problems [63–65], to find good starting points for optimization [66], or to learn from the actions of power system control engineers [67]. Dynamic scheduling [68, 69] and safe reinforcement learning could also be used to balance the electric grid in real time; in fact, some electricity system operators have started to pilot similar methods at small, test case-based scales. While many modern electricity systems are centrally coordinated, recent work has examined how to (at least partially) decentralize scheduling and dispatch using energy storage, flexible demand, low-carbon generators, and other resources connected to the electric grid. One strategy is to explicitly design local control algorithms; for instance, recent work has controlled energy storage and solar inverters using super- vised learning techniques trained on historical optimization data [70–73]. Another strategy is to let storage, demand, and generation respond to real-time prices8 that reflect (for example) how emissions-intensive elec- tricity currently is. In this case, ML can help both to design real-time prices and to respond to these prices. Previous work has used dynamic programming to set real-time electricity prices [78] and reinforcement learning to set real-time prices in more general settings [79]; similar techniques could be applied to create prices that instead optimize for GHG emissions. Techniques such as agent-based models [80–83], online optimization [84], and dynamic programming [85] can then help maximize profits for decentralized storage, demand, and generation, given real-time prices. In general, much more work is needed to test and scale existing decentralized solutions; barring deployment on real systems, platforms such as PowerTAC [86] can provide large-scale simulated electricity markets on which to perform these tests. Accelerating materials science High Leverage Long-term Scientists are working to develop new materials that can better store or otherwise harness energy from vari- able natural resources. For instance, creating solar fuels (synthetic fuels produced from sunlight or solar heat) could allow us to capture solar energy when the sun is shining and then store this energy for later use. However, the process of discovering new materials can be slow and imprecise; the physics behind materials are not completely understood, so human experts often manually apply heuristics to understand a proposed material’s physical properties [87, 88]. 
ML can automate this process by combining existing heuristics with experimental data, physics, and reasoning to apply and even extend existing physical knowledge. For instance, recent work has used tools from ML, AI, optimization, and physics to figure out a proposed ma- terial’s crystal structure, with the goal of accelerating materials discovery for solar fuels [88–90]. Other work seeking to improve battery storage technologies has combined first-principles physics calculations with support-vector regression to design conducting solids for lithium-ion batteries [91]. (Additional appli- cations of ML to batteries are discussed in §2.3.) More generally in materials science, ML techniques including supervised learning, active learning, and generative models have been used to help synthesize, characterize, model, and design materials, as described in reviews [87, 92] and more recent work [93]. As discussed in [87], novel challenges for ML in materials science include coping with moderately sized datasets and inferring physical principles from trained models In addition to advancing technology, ML can inform policy for accelerated materials science; for [94]. instance, previous work has applied natural language processing to patent data to understand the solar panel innovation process [95]. We note that while our focus here has been on electricity system applications, ML for accelerated science may also have significant impacts outside electricity systems, e.g. by helping design alternatives to cement (§4.2) or create better CO2 sorbents (§6.1). Additional applications There are many additional opportunities for ML to advance variable power generation. For instance, it is important to ensure that low-carbon variable generators produce energy as efficiently and profitably as 8For discussions and examples of different types of advanced electricity markets, see [74–77]. 8 possible. Prior work has attempted to maximize electricity production by controlling movable solar panels [96, 97] or wind turbine blades [98] using reinforcement learning or Bayesian optimization. Other work has used graphical models to detect faults in rooftop solar panels [99] and genetic algorithms to optimally place wind turbines within a wind farm [100]. ML can also help control batteries located at solar and wind farms to increase these farms’ profits, for instance by storing their electricity when prices are low and then selling it when prices are high; prior work has used ML to forecast electricity prices [101, 102] or reinforcement learning to control batteries based on current and historical prices [103]. ML can also help integrate rooftop solar panels into the electric grid, particularly in the United States and Europe. Rooftop solar panels are connected to a part of the electric grid called the distribution grid, which traditionally did not have many sensors because it was only used to deliver electricity “one-way” from centralized power plants to consumers. However, rooftop solar and other distributed energy resources have created a “two-way” flow of electricity on distribution grids. Since the locations and sizes of rooftop solar panels are often unknown to electricity system operators, previous work has used computer vision techniques on satellite imagery to generate size and location data for rooftop solar panels [104, 105]. 
Further, to ensure that the distribution system runs smoothly, recent work has employed techniques such as matrix completion and deep neural networks to estimate the state of the system when there are few sensors [106–108]. # 1.1.2 Controllable sources Controllable low-carbon electricity sources can help achieve climate change goals while requiring very few changes to how the electric grid is run (since today’s fossil fuel power plants are also controllable). ML can support existing controllable technologies while accelerating the development of new technologies such as nuclear fusion power plants. Managing existing technologies Many controllable low-carbon technologies are already commercially available; these technologies include geothermal, nuclear fission, and (in some cases9) dam-based hydropower. ML can provide valuable input in planning where these technologies should be deployed and can also help maintain already-operating power plants. For instance, recent work has proposed to use ML to identify and manage sites for geothermal energy, using satellite imagery and seismic data [110]. Previous work has also used multi-objective optimization to place hydropower dams in a way that satisfies both energy and ecological objectives [111]. Finally, ML can help maintain nuclear fission reactors (i.e., nuclear power plants) by detecting cracks and anomalies from image and video data [112] or by preemptively detecting faults from high-dimensional sensor and simulation data [113]. (The authors of [114] speculate that ML and high performance computing could also be used to help simulate nuclear waste disposal options or even design next-generation nuclear reactors.) Accelerating fusion science High Leverage Long-term Nuclear fusion reactors [115] have the potential to produce safe and carbon-free electricity using a virtu- ally limitless hydrogen fuel supply, but currently consume more energy than they produce [116]. While considerable scientific and engineering research is still needed, ML can help accelerate this work by guid- ing experimental design and monitoring physical processes. Fusion reactors require intelligent experimental design because they have a large number of tunable parameters; ML can help prioritize which parameter con- figurations should be explored during physical experiments. For instance, Google and TAE Technologies have developed a human-in-the-loop experimental design algorithm enabling rapid parameter exploration for TAE’s reactor [117]. 9Dam-based hydropower may produce methane, primarily due to biomass that decomposes when a hydro reservoir floods, but the amount produced varies between power plants [109]. 9 Physically monitoring fusion reactors is also an important application for ML. Modern reactors attempt to super-heat hydrogen into a plasma state and then stabilize it, but during this process, the plasma may experience rapid instabilities that damage the reactor. Prior work has tried to preemptively detect disruptions for tokamak reactors, using supervised learning methods such as support-vector machines, adaptive fuzzy logic, decision trees, and deep learning [118–123] on previous disruption data. While many of these methods are tuned to work on individual reactors, recent work has shown that deep learning may enable insights that generalize to multiple reactors [123]. More generally, rather than simply detecting disruptions, scientists need to understand how plasma’s state evolves over time, e.g. 
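To make the experimental-design idea above more concrete, the following sketch shows a generic surrogate-model-guided parameter search (this is not the human-in-the-loop algorithm of [117]; the objective function, parameter ranges, and settings are hypothetical placeholders, and it assumes scikit-learn and NumPy are available):

```python
# Illustrative sketch of surrogate-guided parameter exploration.
# The "experiment" and parameter bounds are invented for demonstration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_experiment(params):
    """Stand-in for an expensive physical experiment returning a quality score
    (e.g., a plasma performance metric). Replace with real measurements."""
    return -np.sum((params - 0.3) ** 2) + 0.01 * rng.normal()

n_params = 4                                # number of tunable parameters (made up)
X = rng.uniform(0, 1, (5, n_params))        # a few initial random configurations
y = np.array([run_experiment(x) for x in X])

for _ in range(20):                         # budget of additional experiments
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    candidates = rng.uniform(0, 1, (500, n_params))
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + 2.0 * std                  # favor promising or uncertain regions
    x_next = candidates[np.argmax(ucb)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

print("best configuration found:", X[np.argmax(y)], "score:", y.max())
```

In a real facility, each proposed configuration would typically be reviewed by human experts before being run, in the spirit of the human-in-the-loop approach described above.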
by finding the solutions of time-dependent magnetohydrodynamic equations [124]; speculatively, ML could help characterize this evolution and even help steer plasma into safe states through reactor control. ML models for such fusion applications would likely employ a combination of simulated10 and experimental data, and would need to account for the different physical characteristics, data volumes, and simulator speeds or accuracies associated with different reactor types. # 1.2 Reducing current-system impacts While switching to low-carbon electricity sources will be essential, in the meantime, it will also be important to mitigate emissions from the electricity system as it currently stands. Some methods for mitigating current- system impacts include cutting emissions from fossil fuels, reducing waste from electricity delivery, and flexibly managing demand to minimize its emissions impacts. Reducing life-cycle fossil fuel emissions High Leverage Uncertain Impact Reducing emissions from fossil fuels is a necessary stopgap while society transitions towards low-carbon electricity. In particular, ML can help prevent the leakage of methane (an extremely potent greenhouse gas) from natural gas pipelines and compressor stations. Previous and ongoing work has used sensor and/or satellite data to proactively suggest pipeline maintenance [131, 132] or detect existing leaks [133–135], and there is a great deal of opportunity in this space to improve and scale existing strategies. In addition to leak detection, ML can help reduce emissions from freight transportation of solid fuels (§2), identify and manage storage sites for CO2 sequestered from power plant flue gas (§6.2), and optimize power plant parameters to reduce CO2 emissions. In all these cases, projects should be pursued with great care so as not to impede or prolong the transition to a low-carbon electricity system; ideally, projects should be preceded by system impact analyses to ensure that they will indeed decrease GHG emissions. Reducing system waste As electricity gets transported from generators to consumers, some of it gets lost as resistive heat on electric- ity lines. While some of these losses are unavoidable, others can be significantly mitigated to reduce waste and emissions. ML can help prevent avoidable losses through predictive maintenance, i.e., by suggesting proactive electricity grid upgrades. Prior work has performed predictive maintenance using LSTMs [136], bipartite ranking [137], and neural network-plus-clustering techniques [138] on electric grid data, and future work will need to improve and/or localize these approaches to different contexts. Modeling emissions Flexibly managing household, commercial, industrial, and electric vehicle demand (as well as energy stor- age) can help minimize electricity-based emissions (§2, 3, 4, 10), but doing so involves understanding what the emissions on the electric grid actually are at any moment. Specifically, marginal emissions factors 10Plasma simulation frameworks for tokamak reactors include RAPTOR [125, 126], ASTRA [127], CRONOS [128], PTRANSP [129], and IPS [130]. 10 capture the emissions effects of small changes in demand at any given time. To inform consumers about marginal emissions factors, WattTime [139] estimates these factors in real time for the United States using regression-based techniques, and the electricityMap project [140] provides multi-day forecasts for Europe using ensemble models on electricity and weather data. 
Great Britain’s National Grid ESO also uses en- semble models to forecast average emissions factors, which measure the aggregate emissions intensity of all power plants [141]. There is still much room to improve the performance of these methods, as well as to forecast related quantities such as electricity curtailments (i.e. the wasting of usually low-carbon electricity for grid balancing purposes). As most existing methods produce point estimates, it would also be important to quantify the uncertainty of these estimates to ensure that load-shifting techniques indeed decrease (rather than increase) emissions. # 1.3 Ensuring global impact Much of the discussion around electricity systems often focuses on settings such as the United States with near universal electricity access and relatively abundant data. However, many places that do not share these attributes are still integral to tackling climate change [26] and warrant serious consideration. To ensure global impact, ML can help improve electricity access and translate electricity system insights from high- data to low-data contexts. Improving clean energy access Improving access to clean electricity can address climate change while simultaneously improving social and economic development [142, 143]. Specifically, clean electricity provided via electric grids, microgrids, or off-grid methods can displace diesel generators, wood-burning stoves, and other carbon-emitting energy sources. Figuring out what clean electrification methods are best for different areas can require intensive, boots-on-the-ground surveying work, but ML can help provide input to this process in a scalable manner. For instance, previous work has used image processing, clustering, and optimization techniques on satellite imagery to inform electrification initiatives [144]. ML and statistics can also help operate rural microgrids through accurate forecasts of demand and power production [145, 146], since small microgrids are even harder to balance than country-scale electric grids. Generating data to aid energy access policy and better managing energy access strategies are therefore two areas in which ML may have promising applications. Approaching low-data settings High Leverage While ML methods have often been applied to grids with widespread sensors, system operators in many countries do not collect or share system data. Although these data availability practices may evolve, it may meanwhile be beneficial to use ML techniques such as transfer learning to translate insights from high-data to low-data settings (especially since all electric grids share the same underlying system physics). Developing data-efficient ML techniques will likely also be useful in low-data settings; for instance, in [147], the authors enforce physical or other domain-specific constraints on weakly supervised ML models, allowing these models to learn from very little labeled data. ML can also help generate information within low-data settings. For instance, recent work has esti- mated the layout of electricity grids in regions where they are not explicitly mapped, using computer vision on satellite imagery along with graph search techniques [148]. Companies have also proposed to use satel- lite imagery to measure power plant CO2 emissions [149] (also see §5.1). Other recent work has modeled electricity consumption using regression-based techniques on cellular network data [150], which may prove useful in settings with many cellular towers but few electric grid sensors. 
Although low-data settings are generally underexplored by the ML community, electricity systems research in these settings presents opportunities for both innovative ML and climate change mitigation.

# 1.4 Discussion

Data-driven and critical to climate change, electricity systems hold many opportunities for ML. At the same time, applications in this space hold many potential pitfalls; for instance, innovations that seek to reduce GHG emissions in the oil and gas industries could actually increase emissions by making them cheaper to emit [20]. Given these domain-specific nuances, working in this area requires close collaborations with electricity system decision-makers and with practitioners in fields including electrical engineering, the natural sciences, and the social sciences. Interpretable ML may enable stakeholders outside ML to better understand and apply models in real-world settings. Similarly, it will be important to develop hybrid ML models that explicitly account for system physics (see e.g. [147, 151–153]), directly optimize for domain-specific goals [58–60], or otherwise incorporate or scale existing domain knowledge. Finally, since most modern electric grids are not data-abundant (although they may be data-driven), understanding how to apply data-driven insights to these grids may be the next grand challenge for ML in electricity systems.

# 2 Transportation

# by Lynn H. Kaack

Transportation systems form a complex web that is fundamental to an active and prosperous society. Globally, the transportation sector accounts for about a quarter of energy-related CO2 emissions [4]. In contrast to the electricity sector, however, transportation has not made significant progress to lower its CO2 emissions [154] and much of the sector is regarded as hard to decarbonize [155]. This is because of the high energy density of fuels required for many types of vehicles, which constrains low-carbon alternatives, and because transport policies directly impact end-users and are thus more likely to be controversial.

Passenger and freight transportation are each responsible for about half of transport GHG emissions [156]. Both freight and passengers can travel by road, by rail, by water, or by air (referred to as transport modes). Different modes carry vastly different carbon emission intensities.11 At present, more than two-thirds of transportation emissions are from road travel [156], but air travel has the highest emission intensity and is responsible for an increasingly large share. Strategies to reduce GHG emissions12 from transportation include [156]: • Reducing transport activity. • Improving vehicle efficiency. • Alternative fuels and electrification. • Modal shift (shifting to lower-carbon options, like rail).

Each of these mitigation strategies offers opportunities for ML (Fig. 2). While many of us probably think of autonomous vehicles and ride-sharing when we think of transport and ML, these technologies have uncertain impacts on GHG emissions [160], potentially even increasing them. We discuss these disruptive technologies in §2.1 but show that ML can play a role for decarbonizing transportation that goes much further. ML can improve vehicle engineering, enable intelligent infrastructure, and provide policy-relevant information. Many interventions that reduce GHG emissions in the transportation sector require changes in planning, maintenance, and operations of transportation systems, even though the GHG reduction potential of those measures might not be immediately apparent. ML can help in implementing such interventions, for example by providing better demand forecasts. Typically, ML strategies are most effective in tandem with strong public policies. While we do not cover all ML applications in the transportation sector, we aim to include those areas that can conceivably reduce GHG emissions.

# 2.1 Reducing transport activity

A colossal amount of transport occurs each day across the world, but much of this mileage occurs inefficiently, resulting in needless GHG emissions. With the help of ML, the number of vehicle-miles traveled can be reduced by making long trips less necessary, increasing loading, and optimizing vehicle routing. Here, we discuss the first two in depth – for a discussion of ML and routing, see for example [161].

Understanding transportation data

Many areas of transportation lack data, and decision-makers often design infrastructure and policy with uncertain information. In recent years, new types of sensors have become available, and ML can turn this raw data into useful information. Traditionally, traffic is monitored with ground-based counters that are installed on selected roads. A variety of technologies are used, such as inductive loop detectors or pneumatic tubes.

11Carbon intensity is measured in grams of CO2-equivalent per person-km or per ton-km, respectively.
12For general resources on how to decarbonize the transportation sector, see the AR5 chapter on transportation [156], and [157–159].
ML can help in implementing such interventions, for example by providing better demand forecasts. Typically, ML strategies are most effective in tandem with strong public policies. While we do not cover all ML applications in the transportation sector, we aim to include those areas that can conceivably reduce GHG emissions. # 2.1 Reducing transport activity A colossal amount of transport occurs each day across the world, but much of this mileage occurs ineffi- ciently, resulting in needless GHG emissions. With the help of ML, the number of vehicle-miles traveled can be reduced by making long trips less necessary, increasing loading, and optimizing vehicle routing. Here, we discuss the first two in depth – for a discussion of ML and routing, see for example [161]. Understanding transportation data Many areas of transportation lack data, and decision-makers often design infrastructure and policy with un- certain information. In recent years, new types of sensors have become available, and ML can turn this raw data into useful information. Traditionally, traffic is monitored with ground-based counters that are installed on selected roads. A variety of technologies are used, such as inductive loop detectors or pneumatic tubes. 11Carbon intensity is measured in grams of CO2-equivalent per person-km or per ton-km, respectively. 12For general resources on how to decarbonize the transportation sector, see the AR5 chapter on transportation [156], and [157– '2For general resources on how to decarbonize the transportation sector, see the ARS chapter on transportation , and 159). 159]. 13 hie () .d —> Slit = > 3 KR Vehicle efficiency Reducing %& Designing for efficiency transportation activity Passenger Detecting loading inefficiency Analyzing data 3-D printing ; Remote sensing Â¥ i) Autonomous vehicles Forecasting Freight consolidation ( _ ( Alternatives to transport \ GZ il Alternative fuels Research and development YyNY \ Freight . “ye o Modal shift P - N Consumer choices os ae 5D. Coordinating modes Electric vehicles Bike share rebalancing Charging patterns Predictive maintenance Charge scheduling Enforcing regulation Congestion management Vehicle-to-grid algorithms Battery energy management Battery R&D Figure 2: Selected strategies to mitigate GHG emissions from transportation using machine learning. Traffic is sometimes monitored with video systems, in particular when counting pedestrians and cyclists, which can be automated with computer vision [162]. Since counts on most roads are often available only over short time frames, these roads are modeled by looking at known traffic patterns for similar roads. ML methods, such as SVMs and neural networks, have made it easier to classify roads with similar traffic pat- terns [163–165]. As ground-based counters require costly installation and maintenance, many countries do not have such systems. Vehicles can also be detected in high-resolution satellite images with high accuracy [166–169], and image counts can serve to estimate average vehicle traffic [170]. Similarly, ML methods can help with imputing missing data for precise bottom-up estimation of GHG emissions [171] and they are also applied in simulation models of vehicle emissions [172]. Modeling demand High Leverage Modeling demand and planning new infrastructure can significantly shape how long trips are and which transport modes are chosen by passengers and shippers – for example, discouraging sprawl and creating new transportation links can both reduce GHG emissions. 
ML can provide information about mobility patterns, which is directly necessary for agent-based travel demand models, one of the main transport planning tools [173]. For example, ML makes it possible to estimate origin-destination demand from traffic counts [174], and it offers new methods for spatio-temporal road traffic forecasting – which do not always outperform other statistical methods [175] but may transfer well between areas [176]. Also, short-term forecasting of public transit ridership can improve with ML; see for example [177, 178]. ML is particularly relevant for deducing information from novel data – for example, learning about the behavior of public transit users from smart card data [179, 180]. Also, mobile phone sensors provide new means to understand personal travel demand and the urban topology, such as walking route choices [181]. Similarly, ML-based modeling of 14 demand can help mitigate climate change by improving operational efficiency of modes that emit significant CO2, such as aviation. ML can help predict runway demand and aircraft taxi time in order to reduce the excess fuel burned in the air and on the ground due to congestion in airports [182, 183]. Shared mobility Uncertain Impact In the passenger sector, shared mobility (such as on-demand ride services or vehicle-sharing13), is undoubt- edly disrupting the way people travel and think about vehicle ownership, and ML plays an integral part in optimizing these services (e.g. [184]). However, it is largely unclear what the impact of this development will be. For example, shared cars can actually cause more people to travel by car, as opposed to using public transportation. Similarly, on-demand taxi services add mileage when traveling without a customer, possibly negating any GHG emission savings [185]. On the other hand, shared mobility can lead to higher utilization of each vehicle, which means a more efficient use of materials [186]. The use of newer and more efficient vehicles, ideally electric ones, could increase with vehicle-sharing concepts, reducing GHG emissions. Some of the issues raised above could also perhaps be overcome by making taxis autonomous. Such vehicles also might integrate better with public transportation, and offer new concepts for pooled rides, which substantially reduce the emissions per person-mile. ML methods can help to understand the energy impact of shared mobility concepts. For example, they can be used to predict if a customer decides to share a ride with other passengers from an on-demand ride ser- vice [187]. For decision-makers it is important to have access to timely location-specific empirical analysis to understand if a ride share service is taking away customers from low-carbon transit modes and increasing the use of cars. Some local governments are beginning to require data-sharing from these providers (see §3.3). Car-sharing services using autonomous vehicles could yield GHG emission savings when they encour- age people to use public transit for part of the journey [188] or with autonomous electric vehicles [189]. However, using autonomous shared vehicles alone could increase the total vehicle-miles traveled and there- fore do not necessarily lead to lower emissions as long as the vehicles have internal combustion engines (or electrical engines on a “dirty” electrical grid) [190, 191]. We see the intersection of shared mobility, autonomous and electric vehicles, and smart public transit as a path where ML can make a contribution to shaping future mobility. 
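As a toy illustration of the kind of customer-level prediction mentioned above (for example, whether a rider will accept a pooled ride [187]), a standard classifier can be trained on trip features; the features, the synthetic data-generating rule, and all numbers below are invented for demonstration and are not those used in that work:

```python
# Toy sketch: predicting whether a customer chooses a pooled ride.
# Features and the synthetic labeling rule are placeholders for real trip data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
trip_km = rng.uniform(1, 25, n)        # trip length
discount = rng.uniform(0.0, 0.4, n)    # price reduction offered for pooling
detour_min = rng.uniform(0, 15, n)     # estimated extra travel time
peak_hour = rng.integers(0, 2, n)      # 1 if during rush hour

# Synthetic ground truth: riders pool more when discounts are high and detours short.
logits = 4 * discount - 0.15 * detour_min - 0.3 * peak_hour + 0.02 * trip_km
shared = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([trip_km, discount, detour_min, peak_hour])
X_tr, X_te, y_tr, y_te = train_test_split(X, shared, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
print("predicted pooling probability for one ride:",
      model.predict_proba([[10.0, 0.3, 5.0, 0]])[0, 1])
```

Aggregating such per-trip predictions is one way a provider or regulator could estimate how pooling incentives shift overall vehicle-miles traveled.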
See also §2.2 for more on autonomous vehicles. When designing and promoting new mobility services, it is important that industry and public policy prioritize lowering GHG emissions. Misaligned incentives in the early stages of technological development could result in the lock-in to a service with high GHG emissions [192, 193]. Freight routing and consolidation High Leverage Bundling shipments together, which is referred to as freight consolidation, dramatically reduces the number of trips (and therefore the GHG emissions). The same is true for changing routing so that trucks do not have to return empty. As rail and water modes require much larger loads than trucks, consolidation also enables shipments to use these modes for part of the journey [159]. Freight consolidation and routing de- cisions are often taken by third-party logistics service providers and other freight forwarders, such as in the less-than-truckload market, which deals with shipments of smaller sizes. ML offers opportunities to opti- mize this complex interaction of shipment sizes, modes, origin-destination pairs, and service requirements. Many problem settings are addressed with methods from the field of operations research. There is evidence that ML can improve upon these methods, in particular mixed-integer linear programming [194]. Other proposed and deployed applications of ML include predicting arrival times or demand, identifying and plan- ning around transportation disruptions [195], and clustering suppliers by their geographical location and common shipping destinations. Proposed planning approaches include designing allocation algorithms and 13In this section, we discuss shared cars; see §2.4 for bike shares and electric scooters. 15 freight auctions, and ML has for example been shown to help pick good algorithms and parameters to solve auction markets [196]. Alternatives to transport Uncertain Impact Disruptive technologies that are based on ML could replace or reduce transportation demand. For example, additive manufacturing (AM, or 3-D printing) has (limited) potential to reduce freight transport by producing lighter goods and enabling production closer to the consumer [159]. ML can be a valuable tool for improving AM processes [197]. ML can also help to improve virtual communication [198]. If passenger trips are replaced by telepresence, travel demand can be reduced, as has been shown for example in public agencies [199] and for scientific teams [200]. However, it is uncertain to what extent virtual meetings replace physical travel, or if they may actually give rise to more face-to-face meetings [201]. # Improving vehicle efficiency Most vehicles are not very efficient compared to what is technically possible: for example, aircraft car- bon intensity is expected to decline by more than a third with respect to 2012, simply by virtue of newer models replacing aging jets [202]. Both the design of the vehicle and the way it is operated can increase the fuel economy. Here, we discuss how ML can help design more efficient vehicles and the impacts that autonomous driving may have on GHG emissions. Encouraging drivers to adopt more efficient vehicles is also a priority; while we do not focus on this here, ML plays a role in studying consumer preferences in vehicle markets [203]. Designing for efficiency There are many ways to reduce the energy a vehicle uses – such as more efficient engines, improved aero- dynamics, hybrid electric engines, and reducing the vehicle’s weight or tire resistance. 
These different strategies require a broad range of engineering techniques, many of which can benefit from ML. For exam- ple, ML is applied in advanced combustion engine design [204]. Hybrid electric vehicles, which are more efficient than combustion engines alone, rely on power management methods that can be improved with ML [205]. Aerodynamic efficiency improvements need turbulence modeling that is often computationally intensive and relies heavily on ML-based surrogate models [206]. Aerodynamic improvements can not only be made by vehicle design but also by rearranging load. Lai et al. [207] use computer vision to detect aerodynamically inefficient loading on freight trains. Additive manufacturing (3-D printing) can produce lighter parts in vehicles, such as road vehicles and aircraft, that reduce energy consumption [159, 186]. ML is applied to improve those processes, for example through failure detection [208, 209] or material design [210]. Autonomous vehicles Uncertain Impact Machine learning is essential in the development of autonomous vehicles (AVs), including in such basic tasks as following the road and detecting obstacles [211].14 While AVs could reduce energy consumption – for example, by reducing traffic congestion and inducing efficiency through eco-driving – it is also possible that AVs will lead to an increase in overall road traffic that nullifies efficiency gains. (For an overview of possible energy impacts of AVs see [160, 212] and for broader impacts on mobility see [213].) Two advan- tages of AVs in the freight sector promise to cut GHG emissions: First, small autonomous vehicles, such as delivery robots and drones, could reduce the energy consumption of last-mile delivery [214], though they come with regulatory challenges [215]. Second, trucks can reduce energy consumption by platooning (driv- ing very close together to reduce air resistance), thereby alleviating some of the challenges that come with 14Providing details on the general role of ML for AVs is beyond the scope of this paper. 16 electrifying long-distance road freight [216]. Platooning relies on autonomous driving and communication technologies that allow vehicles to brake and accelerate simultaneously. ML can help to develop AV technologies specifically aimed at reducing energy consumption. For ex- ample, Wu et al. [217, 218] develop AV controllers based on reinforcement learning to smooth out traffic involving non-autonomous vehicles, reducing congestion-related energy consumption. ML methods can also help to understand driving practices that are more energy efficient. For example, Jim´enez et al. [219] use data from smart phone sensors to identify driving behavior that leads to higher energy consumption in electric vehicles. # 2.3 Alternative fuels and electrification Electric vehicles High Leverage Electric vehicle (EV) technologies – using batteries, hydrogen fuel cells, or electrified roads and railways – are regarded as a primary means to decarbonize transport. EVs can have very low GHG emissions – de- pending, of course, on the carbon intensity of the electricity. ML is vital for a range of different problems related to EVs. Rigas et al. [220] detail methods by which ML can improve charge scheduling, congestion management, and vehicle-to-grid algorithms. ML methods have also been applied to battery energy man- agement (for example charge estimation [221] or optimization in hybrid EVs [205]), and to detect faults and lateral misalignment in wireless charging of EVs [222]. 
As more people drive EVs, understanding their use patterns will become more important. Modeling charging behavior will be useful for grid operators looking to predict electric load. For this application, it is possible to analyze residential EV charging behavior from aggregate electricity load (energy disaggregation, see also §3.1) [223]. Also, in-vehicle sensors and communication data are increasingly becoming available and offer an opportunity to understand travel and charging behavior of EV owners, which can for example inform the placement of charging stations [224]. Battery electric vehicles are typically not used for more than a fraction of the day, allowing them to act as energy storage for the grid at other times, where charging and discharging is controlled for example by price signals [225] (see §1.1.1,1.2). There is much potential for ML to improve such vehicle-to-grid technology, for example with reinforcement learning [226], which can reduce GHG emissions from electricity genera- tion. Vehicle-to-grid technology comes with private and social financial benefits. However, consumers are expected to be reluctant to agree to such services, as they might not want to compromise their driving range [227]. Finally, ML can also play a role in the research and development of batteries, a decisive technology for EV costs and usability. Work in this area has focused on predicting battery state, degradation, and remaining lifetime using supervised learning techniques, fuzzy logic, and clustering [228–235]. However, many models developed in academia are based on laboratory data that do not account for real-world factors such as environmental conditions [228–230]. By contrast, industry lags behind in ML modeling, but real- world operational data are readily available. Merging these two perspectives could yield significant benefits for the field. Alternative fuels Long-term Much of the transportation sector is highly dependent on liquid fossil fuels. Aviation, long-distance road transportation, and ocean shipping require fuels with high energy density and thus are not conducive to electrification [155]. Electrofuels [236], solar fuels 1.1.1, biofuels [237], hydrogen [238, 239], and perhaps natural gas [240] offer alternatives, but the use of these fuels is constrained by factors such as cost, land- use, and (for hydrogen and natural gas) incompatibility with current infrastructure [155]. Electrofuels and biofuels have the potential to serve as low-carbon drop-in fuels that retain the properties of fossil fuels, such as high energy density, while retaining compatibility with the existing fleet of vehicles and the current fuel 17 infrastructure [159]. Fuels such as electrofuels and hydrogen can be produced using electricity-intensive processes and can be stored at lower cost than electricity. Thus, as a form of energy storage, these fuels could provide services to the electricity grid by enabling flexible power use and balancing variable electricity generators (§1.1.1). Given their relative long-term importance and early stage of development, they present a critical opportunity to mitigate climate change. ML techniques may present opportunities for improvement at various stages of research and development of alternative fuels (similar to applications in §1.1.1). # 2.4 Modal shift Shifting passengers and freight to low carbon-intensity modes is one of the most important means to decar- bonize transport. 
This modal shift in passenger transportation can for example involve providing people with public transit, which requires analyzing mode choice and travel demand data. ML can also make low-carbon freight modes more competitive by helping to coordinate intermodal transport. Passenger preferences ML can improve our understanding about passengers’ travel mode choices, which in turn informs transporta- tion planning, such as where public transit should be built. Some recent studies have shown that supervised ML based on survey data can improve passenger mode choice models [241–243]. Seo et al. propose to conduct long-term travel surveys with online learning, which reduces the demand on respondents, while obtaining high data quality [244]. Sun et al. [245] use SVMs and neural networks for analyzing preferences of customers traveling by high speed rail in China. There is also work on inferring people’s travel modes and destinations from social media or various mobile phone sensors such as GPS (transportation mode de- tection), e.g. [246, 247]. Also in the freight sector, ML has been applied to analyze modal trade-offs, for example by imputing data on counterfactual mode choices [248]. Enabling low-carbon options High Leverage In order to incentivize more users to choose low-carbon transport modes, their costs and service quality can be improved. Many low-carbon modes must be integrated with other modes of transportation to deliver the same level of service. For example, when traveling by train, the trip to and from the station will often be by car, taxi, bus, or bike. There are many opportunities for ML to facilitate a better integration of modes, both in the passenger and freight sectors. ML can also help to improve the operation of low-carbon modes, for example by reducing the operations and maintenance costs of rail [249] and predicting track degradation [250]. Bike sharing and electric scooter services can offer low-carbon alternatives for urban mobility that do not require ownership and integrate well with public transportation. ML studies help to understand how usage patterns for bike stations depend on their immediate urban surroundings [251]. ML can also help solve the bike sharing rebalancing problem, where shared bikes accumulate in one location and are lacking in other locations, by improving forecasts of bike demand and inventory [252]. Singla et al. [253] propose a pricing mechanism based on online learning to provide monetary incentives for bike users to help rebalancing. By producing accurate travel time estimates, ML can provide tools that help to integrate bike shares with other modes of transportation [254]. Many emerging bike and scooter sharing services are dockless, which means that they are parked anywhere in public space and can block sidewalks [255]. ML has been applied to monitor public sentiment about such bike shares via tweets [256]. ML could also provide tools and information for regulators to ensure that public space can be used by everyone [257]. Coordination between modes resulting in faster and more reliable transit times could increase the amount of people or goods traveling on low-carbon modes such as rail. ML algorithms could be applied to make public transportation faster and easier to use. For example, there is a rich literature exploring ML methods to predict bus arrival times and their uncertainty [258, 259]. Often freight is packaged so that it can switch 18 between different modes of transport easily. 
Such intermodal transportation relies on low-carbon modes such as rail and water for part of the journey [159]. ML can contribute by improving predictions of the estimated time of arrival (for example of freight trains [260]) or the weight or volume of expected freight (for example for roll-on/roll-off transport – often abbreviated as Ro-Ro [261]). Intelligent transport systems of different modes could be combined and enable more efficient multimodal freight transportation [159]. Some modes with high GHG emissions, such as trucks, can be particularly cost-competitive in regions with lax enforcement of regulation, as they can benefit from overloading and not obeying labor or safety rules [159]. ML can assist public institutions with enforcing their regulations. For example, image recognition can help law enforcement detect overloading of trucks [262]. # 2.5 Discussion Decarbonizing transport is essential to a low-carbon society, and there are numerous applications where ML can make an impact. This is because transportation causes a large share of GHG emissions, but reducing them has been slow and complex. Solutions are likely very technical, are highly dependent on existing infrastructure, and require detailed understanding of passengers’ and freight companies’ behavior. ML can help decarbonize transportation by providing data, gaining knowledge from data, planning, and automation. Moreover, ML is fundamental to shared mobility, AVs, EVs, and smart public transit, which, with the right incentives, can be used to enable significant reductions in GHG emissions. 19 # 3 Buildings & Cities # by Nikola Milojevic-Dupont and Lynn H. Kaack Buildings offer some of the lowest-hanging fruit when it comes to reducing GHG emissions. While the en- ergy consumed in buildings is responsible for a quarter of global energy-related emissions [4], a combination of easy-to-implement fixes and state-of-the-art strategies15 could reduce emissions for existing buildings by up to 90% [264]. It is possible today for buildings to consume almost no energy [265].16 Many of these energy efficiency measures actually result in overall cost savings [266] and simultaneously yield other ben- efits, such as cleaner air for occupants. This potential can be achieved while maintaining the services that buildings provide – and even while extending them to more people, as climate change will necessitate. For example, with the changing climate, more people will need access to air conditioning in regions where deadly heat waves will become common [267, 268]. Two major challenges are heterogeneity and inertia. Buildings vary according to age, construction, usage, and ownership, so optimal strategies vary widely depending on the context. For instance, buildings with access to cheap, low-carbon electricity may have less need for expensive features such as intelligent light bulbs. Buildings also have very long lifespans; thus, it is necessary both to create new, energy-efficient buildings, and to retrofit old buildings to be as efficient as possible [269]. Urban planning and public policy can play a major role in reducing emissions by providing infrastructure, financial incentives, or energy standards for buildings.17 Machine learning provides critical tools for supporting both building managers and policy makers in their efforts to reduce GHG emissions (Fig. 3). 
At the level of building management, ML can help select strategies that are tailored to individual buildings, and can also contribute to implementing those strategies via smart control systems (§3.1). At the level of urban planning, ML can be used to gather and make sense of data to inform policy makers (§3.2). Finally, we consider how ML can help cities as a whole to transition to low-carbon futures (§3.3).

Figure 3: Selected strategies to mitigate GHG emissions from buildings and cities using machine learning.

15The IPCC classifies mitigation actions in buildings into four categories: carbon efficiency (switching to low-carbon fuels or to natural refrigerants); energy efficiency (reducing energy waste through insulation, efficient appliances, better heating and ventilation, or other similar measures); system and infrastructure efficiency (e.g. passive house standards, urban planning, and district cooling and heating); and service demand reduction (behavioral and lifestyle changes) [263].
16There are even high-rise buildings, e.g. the Tower Raiffeisen-Holding NÖ-Vienna office, or large university buildings, e.g. the Technical University also in Vienna, that achieve such performance.
17For example, see the case of New York City, which mandated that building owners collectively reduce their emissions by 40% by 2040: https://www.nytimes.com/2019/04/17/nyregion/nyc-energy-laws.html.

# 3.1 Optimizing buildings

In designing new buildings and improving existing ones, there are numerous technologies that can reduce GHG emissions, often saving money in the process [263–266, 270]. ML can accelerate these strategies by (i) modeling data on energy consumption and (ii) optimizing energy use (in smart buildings).

Modeling building energy

An essential step towards energy efficiency is making sense of the increasing amounts of data produced by meters and home energy monitors (see for example [271]). This can take the form of energy demand forecasts for specific buildings, which are useful for power companies (§1.1.1) and in evaluating building design and operation strategies [272]. Traditionally, energy demand forecasts are based on models of the physical structure of a building that are essentially massive thermodynamics computations. ML has the potential to speed up these computations greatly, either by ignoring physical knowledge of the building entirely [273, 274], by incorporating it into the computation [275], or by learning to approximate the physical model to reduce the need for expensive simulation (surrogate models) [276]. Learning how to transfer the knowledge gained from modeling one building to another can make it easier to render precise estimations of more buildings. For instance, Mocanu et al. [277] modeled building load profiles with reinforcement learning and deep belief networks using data on commercial and residential buildings; they then used approximate reinforcement learning and transfer learning to make predictions about new buildings, enabling the transfer of knowledge from commercial to residential buildings, and from gas- to power-heated buildings.
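As a minimal illustration of the purely data-driven end of this spectrum, the sketch below forecasts a building's hourly load from time-of-day and outdoor temperature using a random forest. The meter readings and weather series are synthetic stand-ins; a real model would draw on smart-meter data and far richer features, or approximate a physical simulation as a surrogate.

```python
# Minimal data-driven load forecast: predict a building's hourly electricity
# use from time-of-day and outdoor temperature. Data are synthetic stand-ins
# for a smart-meter feed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
hour_of_day = hours % 24
temp = 10 + 12 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)

# Toy load: a daytime occupancy bump plus cooling load on hot hours.
occupied = ((hour_of_day >= 8) & (hour_of_day <= 18)).astype(float)
load = 20 + 30 * occupied + 2.0 * np.clip(temp - 18, 0, None) + rng.normal(0, 3, hours.size)

X = np.column_stack([hour_of_day, temp])
train, test = hours < 24 * 300, hours >= 24 * 300   # train on ~10 months, test on the rest

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[train], load[train])
pred = model.predict(X[test])
print("Mean absolute error (kWh):", round(np.mean(np.abs(pred - load[test])), 2))
```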
Within a single building, understanding which appliances drive energy use (energy disaggregation) is crucial for targeting efficiency measures, and can motivate behavioral changes. Promising ML approaches to this problem include hidden Markov models [278], sparse coding algorithms for structured prediction [279], harmonic analysis that picks out the “signatures” of individual appliances [280], and deep neural networks [281]. To verify the success or failure of energy efficiency interventions, statistical ML offers methods for causal inference. For example, Burlig et al. [282] used Lasso regression on hourly electricity consumption data from schools in California to find that energy efficiency interventions fall short of the expected savings. Such problems could represent a useful application of deep learning methods for counterfactual prediction [283]. Smart buildings High Leverage Intelligent control systems in buildings can decrease the carbon footprint both by reducing the energy con- sumed and by providing means to integrate lower-carbon sources into the electricity mix [284]. Specifically, ML can reduce energy usage by allowing devices and systems to adapt to usage patterns. Further, buildings can respond to signals from the electricity grid, providing flexibility to the grid operator and lowering costs to the consumer (§1.1.1). Many critical systems inside buildings can be made radically more efficient. While this is also true for small appliances such as refrigerators and lightbulbs, we use the example of heating and cooling (HVAC) systems, both because they are notoriously inefficient and because they account for more than half of the energy consumed in buildings [263]. There are several promising ways to enhance HVAC operating per- 21 formance, each providing substantial opportunities for using ML: forecasting what temperatures are needed throughout the system, better control to achieve those temperatures, and fault detection. Forecasting temper- atures, as with modeling energy use in buildings, has traditionally been performed using detailed physical models of the system involved; however, ML approaches such as deep belief networks can potentially in- crease accuracy with less computational expense [285, 286] (see also §4.3). For control, Kazmi et al. [287] used deep reinforcement learning to achieve a scalable 20% reduction of energy while requiring only three sensors: air temperature, water temperature, and energy use (see also §4.3 for similarly substantial gains in datacenter cooling). Finally, ML can automate building diagnostics and maintenance through fault- detection. For example, the energy efficiency of cooling systems can degrade if refrigerant levels are low [288]; ML approaches are well-suited to detect faults in these systems. Wang et al. [289] treated HVAC fault-detection as a one-class classification problem, using only temperature readings for their predictions. Deep autoencoders can be used to simplify information about machine operation so that deep neural net- works can then more easily predict multiple kinds of faults [290]. Many systems within buildings – such as lights and heating – can also adjust how they operate based on whether a building or room is occupied, thereby improving both occupant comfort and energy use [291]. ML can help these systems dynamically adapt to changes in occupancy patterns [292]. 
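As a toy illustration of occupancy-adaptive operation, the sketch below estimates an hourly occupancy probability from historical sensor logs and relaxes the heating setpoint in hours that are rarely occupied. The logs, threshold, and setpoints are invented; a deployed system would also handle uncertainty, comfort constraints, and pre-heating lead times.

```python
# Toy occupancy-adaptive setback: estimate the probability a room is occupied
# at each hour from (synthetic) historical sensor logs, then relax the heating
# setpoint in hours that are rarely occupied.
import numpy as np

rng = np.random.default_rng(0)
days, hours = 60, 24
# Synthetic logs: mostly occupied between 08:00 and 18:00 on weekdays, rarely otherwise.
logs = np.zeros((days, hours), dtype=int)
for d in range(days):
    if d % 7 < 5:  # weekday
        logs[d, 8:18] = rng.random(10) < 0.9
    logs[d] |= rng.random(hours) < 0.05  # occasional off-hours presence

p_occupied = logs.mean(axis=0)                      # empirical occupancy probability per hour
setpoint = np.where(p_occupied > 0.3, 21.0, 17.0)   # comfort vs. setback temperature (deg C)

for h in range(hours):
    print(f"{h:02d}:00  p(occupied)={p_occupied[h]:.2f}  setpoint={setpoint[h]:.0f} C")
```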
Moreover, occupancy detection itself represents an opportunity for ML algorithms, ranging from decision trees [293, 294] to deep neural networks [295] that take input from occupancy sensors [293], WiFi signals [295, 296], or appliance power consumption data [294]. In §1.1.1, we discussed how using variable low-carbon energy can mean that the supply and price of electricity varies over time. Thus, energy flexibility in buildings is increasingly useful to schedule consump- tion when supply is high [297]. For this, automated demand-side response [298] can respond to electricity prices, smart meter signals, or learned user preferences [299]. Edge computing can be used to process data from distributed sensors and other Internet of Things devices, and deep reinforcement learning can then use this data to efficiently schedule energy use [300]. While smart building technologies have the capability to significantly increase efficiency, we should note that there are potential drawbacks [301]. First, smart building devices and connection networks, like wireless sensor networks, consume energy themselves; however, deep neural networks can be used to mon- itor and optimize their operations [302]. Second, rebound effects are likely to happen in certain cases [303], leading to additional building energy consumption typically ranging between 10 and 20% [304]. If control systems optimize for costs, interventions do not necessarily translate into energy efficiency measures or GHG reductions. Therefore, public policies are needed to mandate, support and complement the actions of individual building managers [263]. Another concern in the case of widespread adoption of smart meters is the impact on mineral use and embodied energy use arising from their production [305]. Finally, smart home applications present security and privacy risks [306] that require adequate regulation. # 3.2 Urban planning For many impactful mitigation strategies – such as district heating and cooling, neighborhood planning, and large-scale retrofitting of existing buildings – coordination at the district and city level is essential. Policy makers use instruments such as building codes, retrofitting subsidies, investments in public utilities, and public-private partnerships in order to reduce GHG emissions without compromising equity. Where energy- use data on individual buildings exist, ML can be used to derive higher-level patterns. However, many regions of the world have almost no energy consumption data, which can make it difficult to design tar- geted mitigation strategies. ML is uniquely capable of predicting energy consumption and GHG mitigation potential at scale from other types of available data. 22 Modeling energy use across buildings Urban Building Energy Models provide simplified information on the energy use of all buildings across a city. These are different from individual-building models, which model energy use of only specific buildings, but with finer details and temporal granularity (§3.1). While UBEMs have yet to be adopted at scale, they are expected to become fundamental for enabling localized action by city planners [307].18 UBEMs can for example be used for planning and operating district heating and cooling, where a central plant supplies many households in a district. In turn, district heating and cooling reduces HVAC energy consumption and can provide flexible load [309], but it needs large amounts of data at the district level for implementation and operation. 
UBEMs include features such as the location and geometry of buildings, along with other attributes of interest like footprint, usage, material, roof type, and immediate surroundings. ML can be used to help predict energy consumption from such features. For example, Kolter and Ferreira used Gaussian process regression to predict energy use from features such as property class or the presence of central AC [310]. Based on energy data disclosed by residents of New York City, Kontokosta and colleagues used various ML methods to predict the energy use of the city’s 1.1 million buildings [311], analyzed the effect of energy disclosure on the demand [312], and developed a system for ranking buildings based on energy efficiency [313]. Zhang et al. [314] matched residential energy consumption survey data with public use microdata samples to estimate residential energy consumption at the neighborhood level. Using five commonly accessible features of buildings and climate, Robinson et al. predict commercial building energy use across large American cities [315]. Beyond energy prediction, buildings’ features can be used by ML algorithms to pinpoint which buildings have the highest retrofit potential. Simple building characteristics and surrounding environmental factors – both potentially available at scale – can be used [316, 317]. There have also been attempts to upscale individual-building energy models to the district scale. Using deep neural networks for hybrid ML-physical modelling, Nutkiewicz et al. provided precise energy demand forecasts that account for inter-building energy dynamics and urban microclimate factors for all buildings on a campus [318].

Gathering infrastructure data High Leverage

Specifics about building infrastructure can often be predicted using ML techniques. Remote sensing is key to inferring infrastructure data [105, 319–323], as satellite data19 present a source of information that is globally available and largely consistent worldwide. For example, using remote sensing data, Geiß et al. [325] clustered buildings into types to assess the potential of district heat in a German town. The resolution of infrastructure data ranges from coarse localization of all buildings at the global scale [319], to precise 3D reconstruction of a neighborhood [323]. It is possible to produce a global map of human settlement footprints at meter-level resolution from satellite radar images [319]. For this, Esch et al. used highly automated learners, which make classification at such scale possible by retraining locally. Segmentation of high-resolution satellite images can now generate exact building footprints at a national scale [320]. Energy-relevant building attributes, such as the presence of photovoltaic panels, can also be retrieved from these images [105] (see §1.1.1). To generate 3D models, LiDAR data is often used to retrieve heights or classify buildings at city scale [321, 322], but its collection is expensive. Recent research showed that heights can be predicted even without such elevation data, as demonstrated by [326], who predicted these from real estate records and census data. Studies, which for now are small scale, aim for complete 3D reconstruction with class labels for different components of buildings [323].

18The startup nam.R is developing a database of all school buildings in France to help inform retrofitting decisions, harmonizing vast amounts of open and proprietary data with ML [308].
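Returning to the feature-based energy prediction discussed above, the snippet below sketches the idea in the spirit of the Gaussian process approach of [310], but on entirely synthetic building attributes; a real urban-scale model would use assessor records, audits, or remotely sensed features, and would need far more careful feature engineering and validation.

```python
# Sketch of feature-based building energy prediction, loosely in the spirit of
# the Gaussian process regression mentioned above [310]. Features and targets
# are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n = 300
floor_area = rng.uniform(50, 500, n)       # m^2
building_age = rng.uniform(0, 100, n)      # years
has_central_ac = rng.integers(0, 2, n)

# Toy annual energy use (MWh): grows with area and age, more with central AC.
energy = (0.05 * floor_area + 0.1 * building_age
          + 5.0 * has_central_ac + rng.normal(0, 3, n))

X = np.column_stack([floor_area, building_age, has_central_ac])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[100, 30, 1]) + WhiteKernel(),
                              normalize_y=True).fit(X[:250], energy[:250])

mean, std = gp.predict(X[250:], return_std=True)
print("Typical predictive std (MWh):", round(float(std.mean()), 2))
```

One appeal of the Gaussian process formulation is that it provides predictive uncertainty alongside the point estimate, which matters when rankings or retrofit decisions are made from the predictions.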
19See [324] for a review of different sources of data and deep learning methods for processing them. 23 # 3.3 The future of cities Since most of the resources of the world are ultimately channeled into cities, municipal governments have a unique opportunity to mitigate climate change. City governments regulate (and sometimes operate) trans- portation, buildings, and economic activity. They handle such diverse issues as energy, water, waste, crime, health, and noise. Recently, data and ML have become more common for improving efficiency in such areas, giving rise to the notion of smart city. While the phrase smart city encompasses a wide array of technologies [327], here we discuss only applications that are relevant to reducing GHG emissions. Data for smart cities High Leverage Increasingly, important aspects of city life come with digital information that can make the city function in a more coordinated way. Habibzadeh et al. [328] differentiate between hard-sensing, i.e., fixed-location- dedicated sensors like traffic cameras, and soft-sensing, for example from mobile devices. Hard sensing is the primary data collection paradigm in many smart city applications, as it is adapted to precisely meet the application requirements. However, there is a growing volume of data coming from soft sensing, due to the widespread adoption of personal devices like smartphones that can provide movement data and geotagged pictures.20 Urban computing [330] is an emerging field looking at data analytics in urban spaces, and aiming to yield insights for data-driven policies. For example, clustering anonymized credit card payments makes it possible to model different communities and lifestyles – of which the sustainability can be assessed [331]. Jiang et al. provides a review of urban computing from mobile phone traces [332].21 Relevant information on the urban space can also be learned from social media activity, e.g. on Twitter, as reviewed in [333, 334]. There are, however, numerous challenges in making sense of this wealth of data (see [335]), and privacy considerations are of paramount importance when collecting or working with many of these data sources. First, cities need to obtain relevant data on activities that directly or indirectly consume energy. Data are often proprietary. To obtain these data, the city of Los Angeles now requires all mobility as a service providers, i.e. vehicle-sharing companies, to use an open-source API. Data on location, use, and condition of all those vehicles, which can be useful in guiding regulation, are thus transmitted to the city [336]. ML can also distill information on urban issues related to climate change through web-scraping and text-mining, e.g. [256]. As discussed above (§3.2), ML can also be used to infer infrastructure data. Second, smart city applications must transmit high volumes of data in real-time. ML is key to prepro- cessing large amounts of data in large sensor networks, allowing only what is relevant to be transmitted, instead of all the raw data that is being collected [337–339]. Similar techniques also help to reduce the amount of energy consumed during transmission itself [340]. Third, urban policy-making based on intelligent infrastructure faces major challenges with data man- agement [341]. Smart cities require the integration of multiple large and heterogeneous sources of data, for which ML can be a valuable tool, which includes data matching [342, 343], data fusion [344], and ensemble learning [345]. 
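As a toy version of the lifestyle-clustering idea above, the sketch below groups households by synthetic, pre-aggregated activity profiles. Real studies would work with properly anonymized data and much richer features, and would link the resulting clusters to estimated footprints before drawing any policy conclusions.

```python
# Toy "lifestyle clustering": cluster households by (synthetic, pre-aggregated)
# weekly activity profiles. Feature names and values are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Features: weekly km by car, weekly km by transit, share of spending on delivery.
car_km = np.concatenate([rng.normal(150, 30, n // 2), rng.normal(30, 15, n // 2)])
transit_km = np.concatenate([rng.normal(10, 5, n // 2), rng.normal(60, 20, n // 2)])
delivery_share = rng.beta(2, 8, n)

X = StandardScaler().fit_transform(np.column_stack([car_km, transit_km, delivery_share]))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for k in range(2):
    print(f"cluster {k}: mean car km = {car_km[labels == k].mean():.0f}, "
          f"mean transit km = {transit_km[labels == k].mean():.0f}")
```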
Low-emissions infrastructure When smart city projects are properly integrated into urban planning, they can make cities more sustainable and foster low-carbon lifestyles (see [340, 346, 347] for extensive reviews on this topic). Different types of infrastructure interact, meaning that planning strategies should be coordinated to achieve mitigation goals. For instance, urban sprawl influences the energy use from transport, as wider cities tend to be more car- oriented [348–350]. ML-based analysis has shown that the development of efficient public transportation 20Note that management of any such private data, even if they are anonymized, poses challenges [329]. 21See https://www.microsoft.com/en-us/research/project/urban-computing/ for more applications of urban computing. 24 is dependent on both the extent of urban sprawl and the local development around transportation hubs [351, 352]. Cities can reduce GHG emissions by coordinating between infrastructure sectors and better adapting services to the needs of the inhabitants. ML and AI can help, for example, to coordinate district heating and cooling networks, solar power generation, and charging stations for electric vehicles and bikes [347], and can improve public lighting systems by regulating light intensity based on historical patterns of foot traffic [353]. Due to inherent variability in energy demand and supply, there is a need for uncertainty estimation, e.g. using Markov chain Monte Carlo methods or Gaussian processes [347]. Currently, most smart city projects for urban climate change mitigation are implemented in wealthier regions such as the United States, China, and the EU.22 The literature on city-scale mitigation strategies is also strongly biased towards the Global North [354], while key mitigation challenges are expected to arise from the Global South [355]. Infrastructure models described in §3.2 could be used to plan low-carbon neighborhoods without relying on advanced smart city technologies. To transfer strategies across cities, it is possible to cluster similar cities based on climate-relevant dimensions [356, 357]. Creutzig et al. [349] related the energy use of 300 cities worldwide to historical structural factors such as fuel taxes (which have a strong impact on urban sprawl). Other relevant applications include groupings of transportation systems [356] using a latent class choice model, or of street networks [357] to identify common patterns in urban development using hierarchical clustering. # 3.4 Discussion We have shown many different ways that ML can help to reduce GHG emissions from buildings and cities. A central challenge in this sector is the availability of high-quality data for training the algorithms, which rarely go beyond main cities or represent the full spectrum of building types. Techniques for obtaining these data, however, can themselves be an important application for ML (e.g. via computer vision algorithms to parse satellite imagery). Realizing the potential of data-driven urban infrastructure can advance mitigation goals while improving the well-being of citizens [264, 269, 358]. 22See for example the European Union H2020 smart cities project https://ec.europa.eu/inea/en/horizon- 2020/smart-cities-communities. 25 # 4 Industry # by Anna Waldman-Brown Industrial production, logistics, and building materials are leading causes of difficult-to-eliminate GHG emissions [155]. 
Fortunately for ML researchers, the global industrial sector spends billions of dollars annually gathering data on factories and supply chains [359] – aided by improvements in the cost and accessibility of sensors and other data-gathering mechanisms (such as QR codes and image recognition). The availability of large quantities of data, combined with affordable cloud-based storage and computing, indicates that industry may be an excellent place for ML to make a positive climate impact. ML demonstrates considerable potential for reducing industrial GHG emissions under the following circumstances:

• When there is enough accessible, high-quality data around specific processes or transport routes.
• When firms have an incentive to share their proprietary data and/or algorithms with researchers and other firms.
• When aspects of production or shipping can be readily fine-tuned or adjusted, and there are clear objective functions.
• When firms’ incentives align with reducing emissions (for example, through efficiency gains, regulatory compliance, or high GHG prices).

In particular, ML can potentially reduce global emissions (Fig. 4) by helping to streamline supply chains, improve production quality, predict machine breakdowns, optimize heating and cooling systems, and prioritize the use of clean electricity over fossil fuels [360–363]. However, it is worth noting that greater efficiency may increase the production of goods and thus GHG emissions (via the Jevons paradox) unless industrial actors have sufficient incentives to reduce overall emissions [364].

Figure 4: Selected opportunities to use machine learning to reduce greenhouse gas emissions in industry.

# 4.1 Optimizing supply chains

In 2006, at least two Scottish seafood firms flew hundreds of metric tons of shrimp from Scotland to China and Thailand for peeling, then back to Scotland for sale – because they could save on labor costs [365]. This indicates the complexity of today’s globalized supply chains, i.e., the organizational processes and shipping networks that are required to bring a product from producer to final consumer. ML can help reduce emissions in supply chains by intelligently predicting supply and demand, identifying lower-carbon products, and optimizing shipping routes. (For details on shipping and delivery optimization, see §2.) However, for many of these applications to reduce emissions, firms’ financial incentives must also align with climate change mitigation through carbon pricing or other policy mechanisms.

Reducing overproduction Uncertain Impact

The production, shipment, and climate-controlled warehousing of excess products is a major source of industrial GHG emissions, particularly for time-dependent goods such as perishable food or retail goods that quickly fall out of fashion [366]. Global excess inventory in 2011 amounted to about $8 trillion worth of goods, according to the Council of Supply Chain Management Professionals [367].
This excess may be in part due to mis-estimation of demand, as the same organization noted that corporate sales estimates diverged from actual sales by an average of 40% [367]. ML may be able to mitigate these issues of overproducing and/or overstocking goods by improving demand forecasting [368, 369]. For example, the clothing industry sells an average of only 60% of its wares at full price, but some brands can sell up to 85% due to just-in-time manufacturing and clever intelligence networks [370]. As online shopping and just-in-time manufacturing become more prevalent and websites offer more product types than physical storefronts, better demand forecasts will be needed on a regional level to efficiently distribute inventory without letting unwanted goods travel long distances only to languish in warehouses [371]. Nonetheless, negative side effects can be significant depending on the type of product and regional characteristics; just-in-time manufacturing and online shopping are often responsible for smaller and faster shipments of goods, mostly on road, that lack the energy efficiency of freight aggregation and slower shipping methods such as rail [371, 372]. Recommender systems Recommender systems can potentially direct consumers and purchasing firms toward climate-friendly op- tions, as long as one can obtain information about GHG emissions throughout the entire life-cycle of some product. The challenge here lies in hunting down usable data on every relevant material and production process from metal ore extraction through production, shipping, and eventual use and disposal of a product [373, 374]. One must also convince companies to share proprietary data to help other firms learn from best practices. If these datasets can be acquired, ML algorithms could hypothetically assist in identifying the cleanest options. Reducing food waste High Leverage Globally, society loses or wastes 1.3 billion metric tons of food each year, which translates to one-third of all food produced for human consumption [375]. In developing countries, 40% of food waste occurs between harvest and processing or retail, while over 40% of food waste in industrialized nations occurs at the end of supply chains, in retail outlets, restaurants, and consumers’ homes [375]. ML can help reduce food waste by optimizing delivery routes and improving demand forecasting at the point of sale (see §4.1), as well as improving refrigeration systems [376] (see §4.3). ML can also potentially assist with other issues related to food waste, such as helping develop sensors to identify when produce is about to spoil, so it can be sold quickly or removed from a storage crate before it ruins the rest of the shipment [377]. # Improving materials Climate-friendly construction High Leverage Long-term Cement and steel production together account for over 10% of all global GHG emissions [378]; the cement 27 industry alone emits more GHGs than every country except the US and China [379]. ML can help minimize these emissions by reducing the need for carbon-intensive materials, by transforming industrial processes to run on low-carbon energy, and even by redesigning the chemistry of structural materials. To reduce the use of cement and steel, researchers have combined ML with generative design to develop structural products that require less raw material, thus reducing the resulting GHG emissions [360]. 
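The snippet below gives a schematic of this ML-plus-design-search pattern: a model is fit to (synthetic) concrete mix data and then queried to find the lowest-cement mix predicted to meet a strength target. It is a toy stand-in rather than the generative-design method cited above; the mix variables, coefficients, and strength target are all invented.

```python
# Schematic of ML-plus-design-search: fit a strength model on synthetic
# concrete mix data, then search candidate mixes for the least cement that
# still meets a target strength. Toy stand-in, not the cited method.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
cement = rng.uniform(150, 450, n)       # kg per m^3
water = rng.uniform(140, 210, n)
ash = rng.uniform(0, 150, n)            # fly ash, a partial cement substitute

# Toy strength (MPa): more cement and ash help, a higher water/cement ratio hurts.
strength = 10 + 0.1 * cement + 0.05 * ash - 40 * (water / cement) + rng.normal(0, 2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.column_stack([cement, water, ash]), strength)

# Search a grid of candidate mixes for the least cement predicted to reach 35 MPa.
cand = np.array([[c, w, a] for c in np.arange(150, 451, 10)
                            for w in np.arange(140, 211, 10)
                            for a in np.arange(0, 151, 25)])
feasible = cand[model.predict(cand) >= 35]
best = feasible[np.argmin(feasible[:, 0])]
print("Lowest-cement feasible mix [cement, water, ash]:", best)
```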
Novel manufacturing techniques such as 3D printing allow for the production of unusual shapes that use less material but may be impossible to produce through traditional metal-casting or poured concrete; ML and finite element modeling have been used to simulate the physical processes of 3D printing in order to improve the quality of finished products [380]. Assuming future advances in materials science, ML research could potentially draw upon open databases such as the Materials Project [381] and the UCI Machine Learning Repository [382] to invent new, climate- friendly materials [383]. Using semi-supervised generative models and concrete compression data, for example, Ge et al. proposed novel, low-emission concrete formulas that could satisfy desired structural characteristics [382]. Climate-friendly chemicals High Leverage Long-term Researchers are also experimenting with supervised learning and thermal imaging systems to rapidly iden- tify promising catalysts and chemical reactions [384, 385], as described in §1.1.1. Firms are unlikely to adopt new materials or change existing practices without financial incentives, so widespread adoption might require subsidies for low-carbon alternatives or penalties for high GHG emissions. Ammonia production for fertilizer use relies upon natural gas to heat up and catalyze the reaction, and accounts for around 2% of global energy consumption [386]. To develop cleaner ammonia, chemists may be able to invent electrochemical strategies for lower-temperature ammonia production [386, 387]. Given the potential of ML for predicting chemical reactions [385], ML may also be able to help with the discovery of new materials for electrocatalysts and/or proton conductors to facilitate ammonia production. # 4.3 Production and energy ML can potentially assist in reducing overall electricity consumption; streamlining factories’ heating, ven- tilation, and air conditioning (HVAC) systems; and redesigning some types of industrial processes to run on low-carbon energy instead of coal, oil, or gas. Again, the higher the incentives for reducing carbon emissions, the more likely that firms will optimize for low-carbon energy use. New factory equipment can be very expensive to purchase and set up, so firms’ cost-benefit calculations may dissuade them from retrofitting existing factories to run using low-carbon electricity or to save a few kilowatts [388–390]. Given the heterogeneity across industrial sectors and the secrecy of industrial data, firms will also need to tailor the requisite sensors and data analysis systems to their individual processes. ML will become a much more viable option for industry when factory workers can identify, develop, implement, and monitor their own solutions internally instead of relying upon outside experts [391]. The ML community can assist by building accessible, customizable industry tools tailored for people without a strong background in data science. Adaptive control High Leverage On the production side, ML can potentially improve the efficiency of HVAC systems and other indus- trial control mechanisms—given necessary data about all relevant processes. To reduce GHG emissions from HVAC systems, researchers have suggested combining optimization-based control algorithms with ML techniques such as image recognition, regression trees, and time delay neural networks [392, 393] (see also 3.1). 
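A minimal example in the spirit of the regression-tree and time-delay models mentioned above is sketched below: it predicts the cooling load one step ahead from recent loads and the (assumed known) outdoor temperature, using synthetic sensor logs. A controller could query such a model when choosing setpoints; the variables and noise levels here are purely illustrative.

```python
# Minimal one-step-ahead cooling load prediction from lagged loads and outdoor
# temperature, using a regression tree on synthetic sensor logs.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
T = 24 * 90
t = np.arange(T)
outdoor = 22 + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, T)
load = 50 + 4 * np.clip(outdoor - 20, 0, None) + rng.normal(0, 2, T)

# Time-delay features: the two previous loads and the outdoor temperature at the
# predicted hour (in practice this would be a weather forecast).
X = np.column_stack([load[1:-1], load[:-2], outdoor[2:]])
y = load[2:]

split = int(0.8 * len(y))
model = DecisionTreeRegressor(max_depth=6).fit(X[:split], y[:split])
err = np.abs(model.predict(X[split:]) - y[split:]).mean()
print("One-step-ahead MAE (kW):", round(float(err), 2))
```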
DeepMind has used reinforcement learning to optimize cooling centers for Google’s internal servers by predicting and optimizing the power usage effectiveness (PUE), thus lowering HFC emissions and reducing cooling costs [361, 394]. Deep neural networks could also be used for adaptive control in 28 a variety of industrial networking applications [395], enabling energy savings through self-learning about devices’ surroundings. Predictive maintenance ML could also contribute to predictive maintenance by more accurately modelling the wear and tear of machinery that is currently in use, and interpretable ML could assist factory owners in developing a better understanding of how best to minimize GHG emissions for specific equipment and processes. For example, creating a digital twin model of some industrial equipment or process could enable a manufacturer to identify and prevent undesirable scenarios, as well as virtually test out a new piece of code before uploading it to the actual factory floor – thus potentially increasing the GHG efficiency of industrial processes [396, 397]. Digital twins can also reduce production waste by identifying broken or about-to-break machines before the actual factory equipment starts producing damaged products. Industrial systems can employ similar models to predict which pipes are liable to spring leaks, in order to minimize the direct release of GHGs such as HFCs and natural gas. Using cleaner electricity High Leverage ML may be particularly useful for enabling more flexible operation of industrial electrical loads, through op- timizing a firm’s demand response to electricity prices as addressed in §1. Such optimization can contribute to cutting GHG emissions as long as firms have a financial incentive to optimize for minimal emissions, maximal low-carbon energy, or minimum overall power usage. Demand response optimization algorithms can help firms adjust the timing of energy-intensive processes such as cement crushing [362] and powder- coating [398] to take advantage of electricity price fluctuations, although published work on the topic has to date used relatively little ML. Online algorithms for optimizing demand response can reduce overall power usage for computer servers by dynamically shifting the internet traffic load of data providers to underutilized servers, although most of this research, again, has focused on minimizing costs rather than GHG emissions [84, 399]. Berral et al. proposed a framework that demonstrates how such optimization algorithms might be combined with RL, digitized controls, and feedback systems to enable the autonomous control of industrial processes [363]. # 4.4 Discussion Given the globalized nature of international trade and the urgency of climate change, decarbonizing the industrial sector must become a key priority for both policy makers and factory owners worldwide. As we have seen, there are a number of highly impactful applications where ML can help reduce GHG emissions in industry, with several caveats. First, incentives for cleaner production and distribution are not always aligned with reduced costs, though policies can play a role in aligning these incentives. Second, despite the proliferation of industrial data, much of the information is proprietary, low-quality, or very specific to indi- vidual machines or processes; practitioners estimate that 60-70% of industrial data goes unused [359, 400]. 
Before investing in extensive ML research, researchers should be sure that they will be able to eventually access and clean any data needed for their algorithms. Finally, misjudgments can be very costly for manufacturers and retailers, leading most managers to adopt risk-averse strategies towards relatively untested technologies such as ML [391]. For this reason, ML algorithms that determine industrial activities should be robust enough to guarantee both performance and safety, along with providing both interpretable and reproducible results [401].

# 5 Farms & Forests
# by Alexandre Lacoste

Plants, microbes, and other organisms have been drawing CO2 from the atmosphere for millions of years. Most of this carbon is continually broken down and recirculated through the carbon cycle, and some is stored deep underground as coal and oil, but a large amount of carbon is sequestered in the biomass of trees, peat bogs, and soil. Our current economy encourages practices that are freeing much of this sequestered carbon through deforestation and unsustainable agriculture. On top of these effects, cattle and rice farming generate methane, a greenhouse gas far more potent than CO2 itself. Overall, land use by humans is estimated to be responsible for about a quarter of global GHG emissions [26] (and this may be an underestimate [402]). In addition to this direct release of carbon through human actions, the permafrost is now melting, peat bogs are drying, and forest fires are becoming more frequent as a consequence of climate change itself – all of which release yet more carbon [403].

The large scale of this problem allows for a similar scale of positive impact. According to one estimate [404], about a third of GHG emissions reductions could come from better land management and agriculture. ML can play an important role in some of these areas. Precision agriculture could reduce carbon release from the soil and improve crop yield, which in turn could reduce the need for deforestation. Satellite images make it possible to estimate the amount of carbon sequestered in a given area of land, as well as track GHG emissions from it. ML can help monitor the health of forests and peatlands, predict the risk of fire, and contribute to sustainable forestry (Fig. 5). These areas represent highly impactful applications, in particular, of sophisticated computer vision tools, though care must be taken in some cases to avoid negative consequences via the Jevons paradox.

Figure 5: Selected strategies to mitigate GHG emissions from land use using machine learning.

# 5.1 Remote sensing of emissions # High Leverage

Having real-time maps of GHGs could help us quantify emissions from agriculture and forestry practices, as well as monitor emissions from other sectors (§1.2). Such information would be valuable in guiding regulations or incentives that could lead to better land use practices. For example, data on emissions make it possible to set effective targets, and pinpointing the sources of emissions makes it possible to enforce regulations. While greenhouse gases are invisible to our eyes, they must by definition interact with sunlight. This means that we can observe these compounds with hyperspectral cameras [405, 406].
These cameras can record up to several hundred wavelengths (instead of simply RGB), providing information on the interaction between light and individual chemicals. Many satellites are equipped with such cameras and can perform, 30 to some extent, estimations of CO2, CH4 (methane), H2O, and N2O (nitrous oxide) emissions [407, 408]. While extremely useful for studying climate change, most of these satellites have very coarse spatial resolu- tion and large temporal and spatial gaps, making them unsuitable for precise tracking of emissions. Standard satellite imagery provides RGB images with much higher resolution, which could be used in an ML algo- rithm to fill the gaps in hyperspectral data and obtain more precise information about emissions.23 Some preliminary work [407] has studied this possibility, but there are no clear results as of yet. This is therefore an open problem with high potential impact. # 5.2 Precision agriculture # High Leverage Uncertain Impact Agriculture is responsible for about 14% of GHG emissions [26]. This might come as a surprise, since plants take up CO2 from the air. However, modern industrial agriculture involves more than just growing plants. First, the land is stripped of trees, releasing carbon sequestered there. Second, the process of tilling exposes topsoil to the air, thereby releasing carbon that had been bound in soil aggregates and disrupting organisms in the soil that contribute to sequestration. Finally, because such farming practices strip soil of nutrients, nitrogen-based fertilizers must be added back to the system. Synthesizing these fertilizers consumes massive amounts of energy, about 2% of global energy consumption [386] (see §4.2). Moreover, while some of this nitrogen is absorbed by plants or retained in the soil, some is converted to nitrous oxide,24 a greenhouse gas that is about 300 times more potent than CO2. Such industrial agriculture approaches are ultimately based on making farmland more uniform and pre- dictable. This allows it to be managed at scale using basic automation tools like tractors, but can be both more destructive and less productive than approaches that work with the natural heterogeneity of land and crops. Increasingly, there is demand for sophisticated tools which would allow farmers to work at scale, but adapt to what the land needs. This approach is often known as “precision agriculture.” Smarter robotic tools can help enable precision agriculture. RIPPA [410], a robot under development at the University of Sydney, is equipped with a hyperspectral camera and has the capacity to perform mechan- ical weeding, targeted pesticide application, and vacuuming of pests. It can cover 5 acres per day on solar energy and collect large datasets [411] for continual improvement. Many other robotic platforms25 likewise offer opportunities for developing new ML algorithms. There remains significant room for development in this space, since current robots still sometimes get stuck, are optimized only for certain types of crops, and rely on ML algorithms that may be highly sensitive to changes of environment. There are many additional ways in which ML can contribute to precision agriculture. Intelligent irri- gation systems can save large amounts of water while reducing pests that thrive under excessive moisture [404]. ML can also help in disease detection, weed detection, and soil sensing [412–414]. 
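As a toy version of the crop-monitoring idea, the sketch below classifies vegetation stress from two spectral bands of the kind a multispectral or hyperspectral camera provides. The reflectance values and labels are synthetic; real disease or weed detection would typically use full images or many more bands.

```python
# Toy vegetation-stress classifier: flag stressed plants from red and
# near-infrared reflectance (and the derived NDVI index). Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
stressed = rng.integers(0, 2, n)
# Healthy vegetation reflects strongly in NIR and absorbs red; stress weakens this.
nir = np.where(stressed == 1, rng.normal(0.45, 0.08, n), rng.normal(0.65, 0.08, n))
red = np.where(stressed == 1, rng.normal(0.20, 0.05, n), rng.normal(0.08, 0.03, n))
ndvi = (nir - red) / (nir + red)

X = np.column_stack([red, nir, ndvi])
clf = LogisticRegression(max_iter=1000)
print("Cross-validated accuracy:", round(float(cross_val_score(clf, X, stressed, cv=5).mean()), 3))
```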
ML can guide crop yield prediction [415] and even macroeconomic models that help farmers predict crop demand and decide what to plant at the beginning of the season [416]. These problems often have minimal hardware re- quirements, as devices such as Unmanned Aerial Vehicles (UAVs) with hyperspectral cameras can be used for all of these tasks. Globally, agriculture constitutes a $2.4 trillion industry [417], and there is already a significant economic incentive to increase efficiency. However, efficiency gains do not necessarily translate into reduced GHG emissions (e.g. via the Jevons paradox). Moreover, significantly reducing emissions may require a shift in agricultural paradigms – for example, widespread adoption of regenerative agriculture, silvopasture, and tree 23Microsatellites with higher resolution hyperspectral cameras are expected to launch over the coming years, including a proposal by Bluefield Technologies that would provide methane detection at 20-meter spatial resolution with daily refresh. Even once this technology comes online, ML will remain useful to cover the daily gaps and to estimate emissions of other GHGs. 24Some fertilizer additionally often ends up in waterways, which can contaminate drinking water and induce blooms of toxic algae [409]. 25Examples include sagarobotics.com, ecorobotix.com, and farm.bot. 31 intercropping [404]. ML tools for policy makers and agronomists [418] could potentially encourage climate- positive action: for example, remote sensing with UAVs and satellites could perform methane detection and carbon stock estimation, which could be used to incentivize farmers to sequester more carbon and reduce emissions. # 5.3 Monitoring peatlands # High Leverage Peatlands (a type of wetland ecosystem) cover only 3% of the Earth’s land area, yet hold twice the total carbon in all the world’s forests, making peat the largest source of sequestered carbon on Earth [419]. When peat dries, however, it releases carbon through decomposition and also becomes susceptible to fire [419, 420]. A single peat fire in Indonesia in 1997 is reported to have released emissions comparable to 20-50% of global fossil fuel emissions during the same year [421]. Monitoring peatlands and protecting them from artificial drainage or droughts is essential to preserve the carbon sequestered in them [422, 423]. In [424], ML was applied to features extracted from remote sensing data to estimate the thickness of peat and assess the carbon stock of tropical peatlands. A more precise peatlands map is expected to be made by 2020 using specialized satellites [425]. Advanced ML could potentially help develop precise monitoring tools at low cost and predict the risk of fire. # 5.4 Managing forests Estimating carbon stock High Leverage Modeling (and pricing) carbon stored in forests requires us to assess how much is being sequestered or released across the planet. Since most of a forest’s carbon is stored in above-ground biomass [426], tree species and heights are a good indicator of the carbon stock. The height of trees can be estimated fairly accurately with LiDAR devices mounted on UAVs, but this technology is not scalable and many areas are closed to UAVs. To address this challenge, ML can be used to predict the LiDAR’s outcome from satellite imagery [426, 427]. From there, the learned estimator can perform predictions at the scale of the planet. Despite progress in this area, there is still significant room for improvement. 
For example, LiDAR data is often not equally distributed across regions or seasons. Hence domain adaptation and transfer learning techniques may help algorithms to generalize better. Automating afforestation Long-term Uncertain Impact Planting trees, also called afforestation, can be a means of sequestering CO2 over the long term. According to one estimate, up to 0.9 billion hectares of extra canopy cover could theoretically be added [428] globally. However, care must be taken when planting trees to ensure a positive impact. Afforestation that comes at the expense of farmland (or ecosystems such as peat bogs) could result in a net increase of GHG emissions. Moreover, planting trees without regard for local conditions and native species can reduce the climate impact of afforestation as well as negatively affecting biodiversity. ML can be helpful in automating large-scale afforestation by locating appropriate planting sites, mon- itoring plant health, assessing weeds, and analyzing trends. Startups like BioCarbon Engineering26 and Droneseed27 are even developing UAVs that are capable of planting seed packets more quickly and cheaply than traditional methods [429]. Managing forest fires Besides their potential for harming people and property, forest fires release CO2 into the atmosphere (which # 26www.biocarbonengineering.com 27www.droneseed.co 32 in turn increases the rate of forest fires [430]). On the other hand, small forest fires are part of natural forest cycles. Preventing them causes biomass to accumulate on the ground and increases the chances of large fires, which can then burn all trees to the ground and erode top soil, resulting in high CO2 emissions, biodiversity loss, and a long recovery time [431]. Drought forecasting [432] is helpful in predicting regions that are more at risk, as is estimating the water content in the tree canopy [433]. In [434, 435], reinforcement learning is used to predict the spatial progression of fire. This helps firefighters decide when to let a fire burn and when to stop it [436]. With good tools to evaluate regions that are more at risk, firefighters can perform controlled burns and cut select areas to prevent the progression of fires. Reducing deforestation High Leverage Only 17% of the world’s forests are legally protected [437]. The rest are subject to deforestation, which contributes to approximately 10% of global GHG emissions [26] as vegetation is burned or decays. While some deforestation is the result of expanding agriculture or urban developments, most of it comes from the logging industry. Clearcutting, which has a particularly ruinous effect upon ecosystems and the carbon they sequester, remains a widespread practice across the world. Tools for tracking deforestation can provide valuable data for informing policy makers, as well as law enforcement in cases where deforestation may be conducted illegally. ML can be used to differentiate selective cutting from clearcutting using remote sensing imagery [438–441]. Another approach is to install (old) smartphones powered by solar panels in the forest; ML can then be used to detect and report chainsaw sounds within a one-kilometer radius [442]. Logistics and transport still dominate the cost of wood harvesting, which often motivates clearcutting. Increasingly, ML tools [443] are becoming available to help foresters decide when to harvest, where to fertilize, and what roads to build. 
However, once more, the Jevons paradox is at play; making forestry more efficient can have a negative effect by increasing the amount of wood harvested. On the other hand, developing the right combination of tools for regulation and selective cutting could have a significant positive impact. # 5.5 Discussion Farms and forests make up a large portion of global GHG emissions, but reducing these emissions is chal- lenging. The scope of the problem is highly globalized, but the necessary actions are highly localized. Many applications also involve a diversity of stakeholders. Agriculture, for example, involves a complex mix of large-scale farming interests, small-scale farmers, agricultural equipment manufacturers, and chemical com- panies. Each stakeholder has different interests, and each often has access to a different portion of the data that would be useful for impactful ML applications. Interfacing between these different stakeholders is a practical challenge for meaningful work in this area. 33 # 6 Carbon Dioxide Removal # by Andrew S. Ross and Evan D. Sherwin Even if we could cut emissions to zero today, we would still face significant climate consequences from greenhouse gases already in the atmosphere. Eliminating emissions entirely may also be tricky, given the sheer diversity of sources (such as airplanes and cows). Instead, many experts argue that to meet critical climate goals, global emissions must become net-negative—that is, we must remove more CO2 from the atmosphere than we release [444, 445]. Although there has been significant progress in negative emis- sions research [446–450], the actual CO2 removal industry is still in its infancy. As such, many of the ML applications we outline in this section are either speculative or in the early stages of development or commercialization. Many of the primary candidate technologies for CO2 removal directly harness the same natural pro- cesses which have (pre-)historically shaped our atmosphere. One of the most promising methods is simply allowing or encouraging more natural uptake of CO2 by plants (whose ML applications we discuss in §5). Other plant-based methods include bioenergy with carbon capture and biochar, where plants are grown specifically to absorb CO2 and then burned in a way that sequesters it (while creating energy or fertilizer as a useful byproduct) [446, 451, 452]. Finally, the way most of Earth’s CO2 has been removed over geologic timescales is the slow process of mineral weathering, which also initiates further CO2 absorption in the ocean due to alkaline runoff [453]. These processes can both be massively accelerated by human activity to achieve necessary scales of CO2 removal [446]. However, although these biomass, mineral, and ocean- based methods are all promising enough as techniques to merit mention, they may have drawbacks in terms of land use and potentially serious environmental impacts, and (more relevantly for this paper) they would not likely benefit significantly from ML. # 6.1 Direct air capture Long-term Another approach is to build facilities to extract CO2 from power plant exhaust, industrial processes, or even ambient air [454]. While this “direct air capture” (DAC) approach faces technical hurdles, it requires little land and has, according to current understanding, minimal negative environmental impacts [455]. 
The basic idea behind DAC is to blow air onto CO2 sorbents (essentially like sponges, but for gas), which are either solid or in solution, then use heat-powered chemical processes to release the CO2 in purified form for sequestration [446, 447]. Several companies have recently been started to pilot these methods.28, 29, 30 While CO2 sorbents are improving significantly [456, 457], issues still remain with efficiency and degradation over time, offering potential (though still speculative) opportunities for ML. ML could be used (as in §1.1.1) to accelerate materials discovery and process engineering workflows [87, 92, 93, 458] to maximize sorbent reusability and CO2 uptake while minimizing the energy required for CO2 release. ML might also help to develop corrosion-resistant components capable of withstanding high temperatures, as well as optimize their geometry for air-sorbent contact (which strongly impacts efficiency [459]).

28 https://carbonengineering.com/
29 https://www.climeworks.com/
30 https://globalthermostat.com/

# 6.2 Sequestering CO2
High Leverage Long-term Uncertain Impact

Once CO2 is captured, it must be sequestered or stored, securely and at scale, to prevent re-release back into the atmosphere. The best-understood form of CO2 sequestration is direct injection into geologic formations such as saline aquifers, which are generally similar to oil and gas reservoirs [446]. A Norwegian oil company has successfully sequestered CO2 from an offshore natural gas field in a saline aquifer for more than twenty years [460]. Another promising option is to sequester CO2 in volcanic basalt formations, which is being piloted in Iceland [461].

Machine learning may be able to help with many aspects of CO2 sequestration. First, ML can help identify and characterize potential storage locations. Oil and gas companies have had promising results using ML for subsurface imaging based on raw seismograph traces [462]. These models and the data behind them could likely be repurposed to help trap CO2 rather than release it. Second, ML can help monitor and maintain active sequestration sites. Noisy sensor measurements must be translated into inferences about subsurface CO2 flow and remaining injection capacity [463]; recently, [464] found success using convolutional image-to-image regression techniques for uncertainty quantification in a global CO2 storage simulation study. Additionally, it is important to monitor for CO2 leaks [465]. ML techniques have recently been applied to monitoring potential CO2 leaks from wells [466]; computer vision approaches for emissions detection (see [467] and §5.1) may also be applicable.

# 6.3 Discussion

Given limits on how much more CO2 humanity can safely emit and the difficulties associated with eliminating emissions entirely, CO2 removal may have a critical role to play in tackling climate change. Promising applications for ML in CO2 removal include informing research and development of novel component materials, characterizing geologic resource availability, and monitoring underground CO2 in sequestration facilities. Although many of these applications are speculative, the industry is growing, which will create more data and more opportunities for ML approaches to help.

# Adaptation

# 7 Climate Prediction
by Kelly Kochanski

The first global warming prediction was made in 1896, when Arrhenius estimated that burning fossil fuels could eventually release enough CO2 to warm the Earth by 5◦C.
The fundamental physics underlying those calculations has not changed, but our predictions have become far more detailed and precise. The predominant predictive tools are climate models, known as General Circulation Models (GCMs) or Earth System Models (ESMs).31 These models inform local and national government decisions (see IPCC reports [4, 26, 469]), help people calculate their climate risks (see §10 and §8) and allow us to estimate the potential impacts of solar geoengineering (see §9).

Recent trends have created opportunities for ML to advance the state-of-the-art in climate prediction (Fig. 6). First, new and cheaper satellites are creating petabytes of climate observation data.32 Second, massive climate modeling projects are generating petabytes of simulated climate data.33 Third, climate forecasts are computationally expensive [473] (the simulations in [472] took three weeks to run on NCAR supercomputers), while ML methods are becoming increasingly fast to train and run, especially on next-generation computing hardware. As a result, climate scientists have recently begun to explore ML techniques, and are starting to team up with computer scientists to build new and exciting applications.

[Figure 6: Schematic of a climate model, with selected strategies to improve climate change predictions using machine learning.]

31 Learn about climate modeling from climate.be/textbook [468] or Climate Literacy, youtu.be/XGi2a0tNjOo
32 e.g. NASA's Earth Science Data Systems program, earthdata.nasa.gov, and ESA's Earth Online, earth.esa.int
33 e.g. the Coupled Model Intercomparison Project, cmip.llnl.gov [470, 471] and Community Earth System Model Large Ensemble [472]

# 7.1 Uniting data, ML, and climate science

Climate models represent our understanding of Earth and climate physics. We can learn about the Earth by collecting data. To turn that data into useful predictions, we need to condense it into coherent, computationally tractable models. ML models are likely to be more accurate or less expensive than other models where: (1) there is plentiful data, but it is hard to model systems with traditional statistics, or (2) there are good models, but they are too computationally expensive to use in production.

# 7.1.1 Data for climate models

When data are plentiful, climate scientists build data-driven models. In these areas, ML techniques may solve many problems that were previously challenging. These include black box problems, for instance sensor calibration [474], and classification of observational data, for instance classifying crop cover or identifying pollutant sources in satellite imagery [475, 476]. More applications like these are likely to appear as satellite databases grow. The authors of [13] describe many opportunities for data scientists to assimilate data from diverse field and remote sensing sources, many of which have since been explored by climate informatics researchers.

Numerous authors, such as [477], have identified geoscience problems that would be aided by the development of benchmark datasets. Efforts to develop such datasets include EnviroNet [478], the IS-GEO benchmark datasets [479], and ExtremeWeather [480].
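As a concrete instance of the classification tasks mentioned above (such as mapping crop cover from satellite imagery), the sketch below trains a small convolutional network on labelled image patches. All inputs are placeholders: random tensors stand in for a curated benchmark dataset, and the four-band, 64×64 patch format and five land-cover classes are assumptions made for illustration.

```python
# Illustrative sketch: classify multispectral satellite patches into land-cover classes.
# Random tensors stand in for a labelled benchmark dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

n_patches, n_bands, patch_size, n_classes = 512, 4, 64, 5
X = torch.randn(n_patches, n_bands, patch_size, patch_size)   # placeholder imagery
y = torch.randint(0, n_classes, (n_patches,))                 # placeholder labels
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                       # a real run would train far longer
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

In a real application, the random tensors would be replaced by patches and labels drawn from curated benchmark datasets such as those listed above.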
We expect the collection of curated geoscience datasets to continue to grow; this process might even be accelerated by ML optimizations in data collection systems [477]. We strongly encourage modellers to dive into the data in collaboration with domain experts. We also recommend that modellers who seek to learn directly from data see [481] for specific advice on fitting and over-fitting climate data. # 7.1.2 Accelerating climate models Many climate prediction problems are irremediably data-limited. No matter how many weather stations we construct, how many field campaigns we run, or how many satellites we deploy, the Earth will generate at most one year of new climate data per year. Existing climate models deal with this limitation by relying heavily on physical laws, such as thermodynamics. These models are structured in terms of coupled partial differential equations that represent physical processes like cloud formation, ice sheet flow, and permafrost melt. ML models provide new techniques for solving such systems efficiently. Clouds and aerosols High Leverage Recent work has shown how deep neural networks could be combined with existing thermodynamics knowl- edge to fix the largest source of uncertainty in current climate models: clouds. Bright clouds block sunlight and cool the Earth; dark clouds catch outgoing heat and keep the Earth warm [469, 482]. These effects are controlled by small-scale processes such as cloud convection and atmospheric aerosols (see uses of aerosols for cloud seeding and solar geoengineering in §9). Physical models of these processes are far too compu- tationally expensive to include in global climate models — but ML models are not. Gentine et al. trained a deep neural network to emulate the behavior of a high-resolution cloud simulation, and found that the network gave similar results for a fraction of the cost [483] and was stable in a simplified global model [484]. Existing scientific model structures do not always offer great trade-offs between cost and accuracy. Neural networks trained on those scientific models produce similar predictions, but offer an entirely new set of compromises between training cost, production cost, and accuracy. Replacing select climate model components with neural network approximators may thus improve both the cost and the accuracy of global 37 climate models. Additional work is needed to identify more climate model components that could be re- placed by neural networks (we highlight other impactful components below), to optimize those models, and to automate their training workflows (see examples in [485]). Ice sheets and sea level rise High Leverage The next most important targets for climate model improvements are ice sheet dynamics and sea level rise. The Arctic and Antarctic are warming faster than anywhere else on Earth, and their climates control the future of global sea level rise and many vulnerable ecosystems [4, 26]. Unfortunately, these regions are dark and cold, and until recently they were difficult to observe. In the past few years, however, new satellite campaigns have illuminated them with hundreds of terabytes of data.34 These data could make it possible to use ML to solve some of the field’s biggest outstanding questions. In particular, models of mass loss from the Antarctic ice-sheet are highly uncertain [486] and models of the extent of Antarctic sea ice do not match reality well [487]. 
The most uncertain parts of these models, and thus the best targets for improvement, are snow reflectivity, sea ice reflectivity, ocean heat mixing and ice sheet grounding line migration rates [481, 486, 488]. Computer scientists who wish to work in this area could build models that learn snow and sea ice properties from satellite data, or use new video prediction techniques to predict short-term changes in the sea ice extent.

# 7.1.3 Working with climate models

ML could also be used to identify and leverage relationships between climate variables. Pattern recognition and feature extraction techniques could allow us to identify more useful connections in the climate system, and regression models could allow us to quantify non-linear relationships between connected variables. For example, Nowack et al. demonstrated that ozone concentrations could be computed as a function of temperature, rather than physical transport laws, which led to considerable computational savings [489].

The best climate predictions are synthesized from ensembles of 20+ climate models [490]. Making good ensemble predictions is an excellent ML problem. Monteleoni et al. proposed that online ML algorithms could create better predictions of one or more target variables in a multi-model ensemble of climate models [491]; this idea has been refined in [492, 493]. More recently, Anderson and Lucas used random forests to make high-resolution predictions from a mix of high- and low-resolution models, which could reduce the costs of building multi-model ensembles [494].

In the further future, the Climate Modeling Alliance has proposed to build an entirely new climate model that learns continuously from data and from high-resolution simulations [495]. The proposed model would be written in Julia, in contrast to existing models which are mostly written in C++ and Fortran. At the cost of a daunting translation workload, they aim to build a model that is more accessible to new developers and more compatible with ML libraries.

# 7.2 Forecasting extreme events

For most people, extreme event prediction means the local weather forecast and a few days' warning to stockpile food, go home, and lock the shutters. Weather forecasts are shorter-term than climate forecasts, but they produce abundant data. Weather models are optimized to track the rapid, chaotic changes of the atmosphere; since these changes are fast, tomorrow's weather forecast is made and tested every day. Climate models, in contrast, are chaotic on short time scales, but their long-term trends are driven by slow, predictable changes of ocean, land, and ice (see [496]).35 As a result, climate model output can only be tested against long-term observations (at the scale of years to decades). Intermediate time scales, of weeks to months, are exceptionally difficult to predict, although Cohen et al. [497] argue that machine learning could bridge that gap by making good predictions on four to six week timescales [498]. Thus far, however, weather modelers have had hundreds of times more test data than climate modelers, and began to adopt ML techniques earlier.

34 See e.g. icebridge.gsfc.nasa.gov and pgc.umn.edu/data/arcticdem.
35 This is one of several reasons why climate models produce accurate long-term predictions in spite of atmospheric chaos.

Numerous ML weather models are already running in production. For example, Gagne et al. recently used an ensemble of random forests to improve hail predictions within a major weather model [499].
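To give a flavour of this kind of learned post-processing, the sketch below fits a random forest that maps a few storm-scale variables (as might be extracted from a physical model run) to a probability of hail, in the spirit of, though far simpler than, the system described in [499]. The predictor names, the synthetic rule generating the labels, and the evaluation choice are all assumptions for illustration.

```python
# Illustrative sketch: post-process physical-model output into hail probabilities.
# Synthetic data stand in for archived model runs matched with hail reports.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)
n_storms = 4000

# Hypothetical storm-scale predictors extracted from a weather model.
cape = rng.gamma(2.0, 500.0, n_storms)        # convective available potential energy (J/kg)
shear = rng.normal(15.0, 5.0, n_storms)       # 0-6 km wind shear (m/s)
graupel = rng.exponential(1.0, n_storms)      # simulated graupel mass (arbitrary units)
X = np.column_stack([cape, shear, graupel])

# Synthetic "observed hail" labels: more likely for high CAPE, shear, and graupel.
score = 0.001 * cape + 0.05 * shear + 0.5 * graupel
prob = 1 / (1 + np.exp(-(score - 4.0)))
y = rng.binomial(1, prob)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

p_hail = clf.predict_proba(X_te)[:, 1]
print(f"Brier score: {brier_score_loss(y_te, p_hail):.3f}")
```

Probability calibration matters in this setting because forecasters act on predicted probabilities rather than hard labels, which is why a score such as the Brier score is reported.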
A full review of the applications of ML for extreme weather forecasting is beyond the scope of this article. Fortunately, that review has already been written: see [500]. The authors describe ML systems that correct bias, recognize patterns, and predict storms. Moving forward, they envision human experts working alongside automated forecasts. # 7.2.1 Storm tracking Climate models cannot predict the specific dates of future events, but they can predict changes in long-term trends like drought frequency and storm intensity. Information about these trends helps individuals, corpo- rations and towns make informed decisions about infrastructure, asset valuation and disaster response plans (see also §8.4). Identifying extreme events in climate model output, however, is a classification problem with a twist: all of the available data sets are strongly skewed because extreme events are, by definition, rare. ML has been used successfully to classify some extreme weather events. Researchers have used deep learning to classify [501], detect [480] and segment [502] cyclones and atmospheric rivers, as well as tornadoes [503], in historical climate datasets. Tools for more event types would be useful, as would online tools that work within climate models, labelled datasets for predicting future events, and statistical tools that quantify the uncertainty in new extreme event forecasts. # 7.2.2 Local forecasts # High Leverage Forecasts are most actionable if they are specific and local. ML is widely used to make local forecasts from coarse 10–100 km climate or weather model predictions; various authors have attempted this using support vector machines, autoencoders, Bayesian deep learning, and super-resolution convolutional neural networks (e.g. [504]). Several groups are now working to translate high-resolution climate forecasts into risk scenarios. For example, ML can predict localized flooding patterns from past data [505], which could inform individuals buying insurance or homes. Since ML methods like neural networks are effective at predicting local flooding during extreme weather events [506], these could be used to update local flood risk estimates to benefit individuals. The start-up Jupiter Intelligence is working to make climate predictions more actionable by translating climate forecasts into localised flood and temperature risk scores. # 7.3 Discussion ML may change the way that scientific modeling is done. The examples above have shown that many com- ponents of large climate models can be replaced with ML models at lower computational costs. From an ML standpoint, learning from an existing model has many advantages: modelers can generate new training and test data on-demand, and the new ML model inherits some community trust from the old one. This is an area of active ML research. Recent papers have explored data-efficient techniques for learning dy- namical systems [507], including physics-informed neural networks [508] and neural ordinary differential equations [151]. In the further future, researchers are developing ML approaches for a wide range of scien- tific modeling challenges, including crash prediction [509], adaptive numerical meshing [510], uncertainty quantification [511, 512] and performance optimization [513]. If these strategies are effective, they may solve some of the largest structural challenges facing current climate models. New ML models for climate will be most successful if they are closely integrated into existing scientific models. 
This has been emphasized, again and again, by authors who have laid future paths for artificial intelligence within climate science [477, 484, 485, 495, 500, 514]. New models need to leverage existing knowledge to make good predictions with limited data. In ten years, we will have more satellite data, more interpretable ML techniques, hopefully more trust from the scientific community, and possibly a new climate model written in Julia. For now, however, ML models must be creatively designed to work within existing climate models. The best of these models are likely to be built by close-knit teams including both climate and computational scientists.

# 8 Societal Impacts
by Kris Sankaran

Changes in the atmosphere have impacts on the ground. The expected societal impacts of climate change include prolonged ecological and socioeconomic stresses as well as brief, but severe, societal disruptions. For example, impacts could include both gradual decreases in crop yield and localized food shortages. If we can anticipate climate impacts well enough, then we can prepare for them by asking:

• How do we reduce vulnerability to climate impacts?
• How do we support rapid recovery from climate-induced disruptions?

A wide variety of strategies have been put forward, from robust power grids to food shortage prediction (Fig. 7), and while this is good news for society, it can be overwhelming for an ML practitioner hoping to contribute. Fortunately, a few critical needs tend to recur across strategies – it is by meeting these needs that ML has the greatest potential to support societal adaptation [8, 16, 515]. From a high level, these involve

• Sounding alarms: Identifying and prioritizing the areas of highest risk, by using evidence of risk from historical data.
• Providing annotation: Extracting actionable information or labels from unstructured raw data.
• Promoting exchange: Making it easier to share resources and information to pool and reduce risk.

These unifying threads will appear repeatedly in the sections below, where we review strategies to help ecosystems, infrastructure, and societies adapt to climate change, and explain how ML supports each strategy (Fig. 7).

[Figure 7: Selected strategies to accelerate societal adaptation to climate change using machine learning.]

We note that the projects involved vary in scale from local to global, from infrastructure upgrades and crisis preparedness planning to international ecosystem monitoring and disease surveillance. Hence, we anticipate valuable contributions by researchers who have the flexibility to formulate experimental approaches, by industrial engineers and entrepreneurs who have the expertise to translate prototypes into wide-reaching systems, and by civil servants who lead many existing climate adaptation efforts.

# 8.1 Ecology

Changes in climate are increasingly affecting the distribution and composition of ecosystems. This has profound implications for global biodiversity, as well as agriculture, disease, and natural resources such as wood and fish.
ML can help by supporting efforts to monitor ecosystems and biodiversity.

Monitoring ecosystems
High Leverage

To preserve ecosystems, it is important to know which are most at risk. This has traditionally been done via manual, on-the-ground observation, but the process can be accelerated by annotation of remote sensing data [516–519] (see also §5.1). For example, tree cover can be automatically extracted from aerial imagery to characterize deforestation [520, 521]. At the scale of regions or biomes, analysis of large-scale simulations can illuminate the evolution of ecosystems across potential climate futures [522, 523]. A more direct source of data is offered by environmental sensor networks, made from densely packed but low-cost devices [12, 524, 525]. To monitor ocean ecosystems, marine robots are useful, because they can be used to survey large areas on demand [526, 527].

For a system to have the most real-world impact, regardless of the underlying data source, it is necessary to "personalize" predictions across a range of ecosystems. A model trained on the Sahara would almost certainly fail if deployed in the Amazon. Hence, these applications may motivate ML researchers interested in heterogeneity, data collection, transfer learning, and rapid generalization. In sensor networks, individual nodes fail frequently, but are redundant by design – this is an opportunity for research into anomaly detection and missing data imputation [528, 529]. In marine robotics, improved techniques for sampling regions to explore and automatic summarization of expedition results would both provide value [530, 531]. Finally, beyond aiding adaptation by prioritizing at-risk environments, the design of effective methods for ecosystem monitoring will support the basic science necessary to shape adaptation in the long run [11, 14, 532].

Monitoring biodiversity
High Leverage

Accurate estimates of species populations are the foundation on which conservation efforts are built. Camera traps and aerial imagery have increased the richness and coverage of sampling efforts. ML can help infer biodiversity counts from image-based sensors. For instance, camera traps take photos automatically whenever a motion sensor is activated – computer vision can be used to classify the species that pass by, supporting a real-time, less labor-intensive species count [533–535]. It is also possible to use aerial imagery to estimate the size of large herds [536] or count birds [537]. In underwater ecosystems, ML has been used to identify plankton automatically from underwater cameras [538] and to infer fish populations from the structure of coral reefs [539].

Citizen science can also enable dataset collection at a scale impossible in individual studies [540–543]. For example, by leveraging public enthusiasm for birdwatching, eBird has logged more than 140 million observations [540], which have been used for population and migration studies [544]. Computer vision algorithms that can classify species from photographs have furthered such citizen science efforts by making identifications easier and more accurate [545, 546], though these face challenges such as class imbalances in training data [547]. Work with citizen science data poses the additional challenge that researchers have no control over where samples come from. To incentivize observations from undersampled regions, mechanisms from game theory can be applied [548], and even when sampling biases persist, estimates of dataset shift can minimize their influence [549].
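To illustrate the species-classification and class-imbalance issues raised above, the sketch below trains a weighted logistic-regression classifier on image embeddings, standing in for features produced by a pretrained network applied to camera-trap photos. The species list, class counts, embedding dimension, and use of "balanced" class weights are all assumptions made for the example.

```python
# Illustrative sketch: classify camera-trap species from (assumed) pretrained-CNN
# embeddings, with class weighting to counter the heavy imbalance typical of traps.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
species = ["deer", "boar", "lynx"]            # placeholder label set
counts = [3000, 800, 60]                      # rare species are rarely photographed
dim = 128                                     # embedding size of the assumed backbone

# Synthetic embeddings: each species clustered around its own random centroid.
centroids = rng.normal(0, 1, (len(species), dim))
X = np.vstack([centroids[i] + rng.normal(0, 1.0, (n, dim)) for i, n in enumerate(counts)])
y = np.concatenate([np.full(n, i) for i, n in enumerate(counts)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# class_weight="balanced" re-weights the loss so the rare class is not ignored.
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=species))
```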
Monitoring biodiversity may be paired with interventions to protect rare species or control invasive pests. Machine learning is providing new solutions to assess the impact of ecological interventions [550–552] and prevent poaching [548]. # 8.2 Infrastructure Physical infrastructure is so tightly woven into the fabric of everyday life – like the buildings we inhabit and lights we switch on – that it is easy to forget that it exists (see §3). The fact that something so basic will have to be rethought in order to adapt to climate change can be unsettling, but viewed differently, the sheer necessity of radical redesign can inspire creative thinking. We first consider the impacts of climate change on the built environment. Shifts in weather patterns are likely to put infrastructure under more persistent stress. Heat and wind damage roads, buildings, and power lines. Rising water tables near the coast will lead to faults in pipelines. Urban heat islands will be exacerbated and it is likely that there will be an increased risk of flooding caused by heavy rain or coastal inundations, resulting in property damage and traffic blockages[553]. A clear target is construction of physical defenses – for example, “climate proofing” cities with new coastal embankments and increased storm drainage capacity. However, focusing solely on defending ex- isting structures can stifle proactive thinking about urban and social development – for example, floating buildings are being tested in Rotterdam – and one may alternatively consider resilience and recovery more broadly [554, 555]. From this more general perspective of improving social processes, ML can support two types of activities: Design and maintenance. Designing infrastructure Long-term How can infrastructure be (re)designed to dampen climate impacts? In road networks, it is possible to incorporate flood hazard and traffic information in order to uncover vulnerable stretches of road, especially those with few alternative routes [556]. If traffic data are not directly available, it is possible to construct proxies from mobile phone usage and city-wide CCTV streams – these are promising in rapidly developing urban centers [557, 558]. Beyond drawing from flood hazard maps, it is possible to use data from real-world flooding events [559], and to send localized predictions to those at risk [560]. For electrical, water, and waste collection networks, the same principle can guide investments in resilience – using proxy or historical data about disruptions to anticipate vulnerabilities [561–564]. Robust components can replace those at risk; for example, adaptive islands, parts of an energy grid that continue to provide power even when disconnected from the network, prevent cascading outages in power distribution [565]. Infrastructure is long-lived, but the future is uncertain, and planners must weigh immediate resource costs against future societal risks [566]. One area that urgently needs adaptation strategies is the consistent access to drinking water, which can be jeopardized by climate variability [567, 568]. Investments in water infrastructure can be optimized; for example, a larger dam might cost more up front, but would have a larger storage capacity, giving a stronger buffer against drought. 
To delay immediate decisions, infrastructure can be upgraded in phases – the technical challenge is to discover policies that minimize a combination of long- term resource and societal costs under plausible climate futures, with forecasts being updated as climates evolve [569–571]. Maintaining infrastructure High Leverage What types of systems can keep infrastructure functioning well under increased stress? Two strategies for efficiently managing limited maintenance resources are predictive maintenance and anomaly detection; both can be applied to electrical, water, and transportation infrastructure. In predictive maintenance, operations 43 are prioritized according to the predicted probability of a near-term breakdown [137, 138, 572, 573]. For anomaly detection, failures are discovered as soon as they occur, without having to wait for inspectors to show up, or complaints to stream in [574, 575]. The systems referenced here have required the manual curation of data streams, structured and unstruc- tured. The data are plentiful, just difficult to glue together. Ideas from the missing data, multimodal data, and AutoML communities have the potential to resolve some of these issues. # 8.3 Social systems While less tangible, the social systems we construct are just as critical to the smooth functioning of society as any physical infrastructure, and it is important that they adapt to changing climate conditions. First, consider what changes these systems may encounter. Decreases in crop yield, due to drought and other factors, will pose a threat to food security, as already evidenced by long periods of drought in North America, West Africa and East Asia [576, 577]. More generally, communities dependent on ecosystem resources will find their livelihoods at risk, and this may result in mass migrations, as people seek out more supportive environments. At first, these problems may seem beyond the reach of algorithmic thinking, but investments in social infrastructure can increase resilience. ML can amplify the reach and effectiveness of this infrastructure. See also §11 for perspective on how ML can support the function and analysis of complex social environments. Food security High Leverage Data can be used to monitor the risk of food insecurity in real time, to forecast near-term shortages, and to identify areas at risk in the long-term, all of which can guide interventions. For real-time and near-term systems, it is possible to distill relevant signals from mobile phones, credit card transactions, and social media data [578–580]. These have emerged as low-cost, high-reach alternatives to manual surveying. The idea is to train models that link these large, but decontextualized, data with ground truth consumption or survey information, collected on small representative samples. This process of developing proxies to link small, rich datasets with large, coarse ones can be viewed as a type of semi-supervised learning, and is fertile ground for research. For longer-term warnings, spatially localized crop yield predictions are needed. These can be generated by aerial imagery or meteorological data (see §5.2), if they can be linked with historical yield data [581, 582]. On the ground, it is possible to perform crop-disease identification from plant photos – this can alert communities to disease outbreaks, and enhance the capacity of agricultural inspectors. 
For even longer- run risk evaluation, it is possible to simulate crop yield via biological and ecological models [583–585], presenting another opportunity for blending large scale simulation with ML [586, 587]. Beyond sounding alarms, ML can improve resilience of food supply chains. As detailed in §4, ML can reduce waste along these chains; we emphasize that for adaptation, it is important that supply chains also be made robust to unexpected disruptions [588–591]. Resilient livelihoods Individuals whose livelihoods depend on one activity, and who have less access to community resources, are those who are most at risk [592, 593]. Resilient livelihoods can be promoted through increased diver- sification, cooperation, and exchange, all of which can be facilitated by ML systems. For example, they can guide equipment and information sharing in farming cooperatives, via growers’ social networks [594]. Mobile money efforts can increase access to liquid purchasing power; they can also be used to monitor eco- nomic health [595, 596]. Skill-matching programs and online training are often driven by data, with some programs specifically aiming to benefit refugees [597–599] (see also §12). 44 Supporting displaced people Long-term Uncertain Impact Human populations move in response to threats and opportunities, and ML can be used to predict large- scale migration patterns. Work in this area has relied on accessible proxies, like social media, where users’ often self-report location information, or aerial imagery, from which the extent of informal settlement can be gauged [600–603]. More than quantifying migration patterns, there have been efforts directly aimed at protecting refugees, either through improving rescue operations [604, 605] or monitoring negative public sentiment [606]. It is worth cautioning that immigrants and refugees are vulnerable groups, and systems that surveil them can easily be exploited by bad actors. Designing methodology and governance mechanisms that allow vulnerable populations to benefit from such data, without putting them at additional risk, should be a research priority. Assessing health risks Climate change will affect exposure to health hazards, and machine learning can play a role in measuring and mitigating their impacts across subpopulations. Two of the most relevant expected shifts are (1) heat waves will become more frequent and (2) outdoor and indoor air quality will deteriorate [607, 608]. These exposures have either direct or indirect effects on health. For example, prolonged heat episodes both directly cause heat stroke and can trigger acute episodes in chronic conditions, like heart or respiratory disease [609, 610]. Careful data collection and analysis have played a leading role in epidemiology and public health efforts for generations. It should be no surprise that ML has emerged as an important tool in these disciplines, supporting a variety of research efforts, from increasing the efficiency of disease simulators to supporting the fine-grained measurement of exposures and their health impacts [611, 612]. These disciplines are increasingly focused on the risks posed by climate change specifically. For ex- ample, new sources of data have enabled detailed sensing of urban heat islands [613–615], water quality [616, 617], and air pollution [618, 619]. 
Further, data on health indicators, which are already collected, can quantitatively characterize observed impacts across regions as well as illuminate which populations are most at risk to climate-change induced health hazards [620]. For example, it is known that the young, elderly, and socially isolated are especially vulnerable during heat waves, and finer-grained risk estimates could potentially drive outreach [621, 622]. Across social applications, there are worthwhile research challenges – guiding interventions based on purely observational, potentially unrepresentative data poses risks. In these contexts, transparency is nec- essary, and ideally, causal effects of interventions could be estimated, to prevent feedback loops in which certain subgroups are systematically ignored from policy interventions. # 8.4 Crisis Perhaps counterintuitively, natural disasters and health crises are not entirely unpredictable – they can be prepared for, risks can be reduced, and coordination can be streamlined. Furthermore, while crises may be some of the most distressing consequences of climate change, disaster response and public health are mature disciplines in their own right, and have already benefited extensively from ML methodology [623–625]. Managing epidemics Climate change will increase the range of vector and water-borne diseases, elevating the likelihood that these new environments experience epidemics [607]. Disease surveillance and outbreak forecasting systems can be built from web data and specially-designed apps, in addition to traditional surveys [626–628]. While non-survey proxies are observational and self-reported, current research attempts to address these issues [629, 630]. Beyond surveillance, point-of-care diagnostics have enjoyed a renaissance, thanks in part to ML [515, 631]. These are tools that allow health workers to make diagnoses when specialized lab equipment 45 is inaccessible. An example is malaria diagnosis based on photos of prepared pathology slides taken with a mobile phone [632]. Ensuring that these systems reliably and transparently augment extension workers, guiding data collection and route planning when appropriate, are active areas of study [633, 634]. Disaster response High Leverage In disaster preparation and response, two types of ML tasks have proven useful: creating maps from aerial imagery and performing information retrieval on social media data. Accurate and well-annotated maps can inform evacuation planning, retrofitting campaigns, and delivery of relief [635, 636]. Further, this imagery can assist damage assessment, by comparing scenes immediately pre- and post-disaster [637, 638]. Social media data can contain kernels of insight – places without water, clinics without supplies – which can inform relief efforts. ML can help properly surface these insights, compressing large volumes of social media data into the key takeaways, which can be acted upon by disaster managers [624, 639, 640]. # 8.5 Discussion Climate change will have profound effects on the planet, and the ML community can support efforts to minimize the damage it does to ecosystems and the harm it inflicts on people. This section has suggested areas of research that may help societies adapt more effectively to these ever changing realities. We have identified a few recurring themes, but also emphasized the role of understanding domain-specific needs. 
The use of ML to support societal resilience would be a noble goal at any time, but the need for tangible progress towards it may never have been so urgent as it is today, in the face of the wide-reaching consequences of climate change. 46 # 9 Solar Geoengineering # by Andrew S. Ross Airships floating through the sky, spraying aerosols; robotic boats crisscrossing the ocean, firing vertical jets of spray; arrays of mirrors carefully positioned in space, micro-adjusted by remote control: these images seem like science fiction, but they are actually real proposals for solar radiation management, commonly called solar geoengineering [641–644]. Solar geoengineering, much like the greenhouse gases causing climate change, shifts the balance between how much heat the Earth absorbs and how much it releases. The difference is that it is done deliberately, and in the opposite direction. The most common umbrella strategy is to make the Earth more reflective, keeping heat out, though there are also methods of helping heat escape (besides CO2 removal, which we discuss in §5 and §6). Solar geoengineering generally comes with a host of potential side effects and governance challenges. Moreover, unlike CO2 removal, it cannot simply reverse the effects of climate change (average temperatures may return to pre-industrial levels, but location-specific climates still change), and also comes with the risk of termination shock (fast, catastrophic warming if humanity undertakes solar geoengineering but stops suddenly) [645]. Because of these and other issues, it is not within the scope of this paper to evaluate or recommend any particular technique. However, the potential for solar geoengineering to moderate some of the most catastrophic hazards of climate change is well-established [646], and it has received increasing attention in the wake of societal inaction on mitigation. Although [644] argue that the “hardest and most important problems raised by solar geoengineering are non-technical,” there are still a number of important technical questions that machine learning may be able to help us study. Overview The primary candidate methods for geoengineering are marine cloud brightening [647] (making low-lying clouds more reflective), cirrus thinning [648] (making high-flying clouds trap less heat), and stratospheric aerosol injection [649] (which we discuss below). Other candidates (which are either less effective or harder to implement) include “white-roof” methods [650] and even launching sunshades into space [651]. Injecting sulfate aerosols into the stratosphere is considered a leading candidate for solar geoengineering both because of its economic and technological feasibility [652, 653] and because of a reason that should resonate with the ML community: we have data. (This data is largely in the form of temperature observations after volcanic eruptions, which release sulfates into the stratosphere when sufficiently large [654].) Once injected, sulfates circulate globally and remain aloft for 1 to 2 years. As a result, the process is reversible, but must also be continually maintained. Sulfates come with a well-studied risk of ozone loss [655], and they make sunlight slightly more diffuse, which can impact agriculture [656]. # 9.1 Understanding and improving aerosols Design Long-term The effects and side-effects of aerosols in the stratosphere (or at slightly lower altitudes for cirrus thinning [657]) vary significantly with their optical and chemical properties. 
Although sulfates are the best under- stood due to volcanic eruption data, many others have been studied, including zirconium dioxide, titanium dioxide, calcite (which preserves ozone), and even synthetic diamond [658]. However, the design space is far from fully explored. Machine learning has had recent success in predicting or even optimizing for specific chemical and material properties [87, 92, 93, 458]. Although speculative, it is conceivable that ML could accelerate the search for aerosols that are chemically nonreactive but still reflective, cheap, and easy to keep aloft. Modeling One reason that sulfates have been the focus for aerosol research is that atmospheric aerosol physics is not 47 perfectly captured by current climate models, so having natural data is important for validation. Further- more, even if current aerosol models are correct, their best-fit parameters must still be determined (using historical data), which comes with uncertainty and computational difficulty. ML may offer tools here, both to help quantify and constrain uncertainty, and to manage computational load. As a recent example, [659] use Gaussian processes to emulate climate model outputs based on nine possible aerosol parameter settings, allowing them to establish plausible parameter ranges (and thus much better calibrated error-bars) with only 350 climate model runs instead of >100,000. Although this is important progress, ideally we want uncertainty-aware aerosol simulations with a fraction of the cost of one climate model run, rather than 350. ML may be able to help here too (see §7 for more details). # 9.2 Engineering a planetary control system # High Leverage Long-term Uncertain Impact Efficient emulations and error-bars will be essential for what MacMartin and Kravitz [660] call “The Engi- neering of Climate Engineering.” According to [660], any practical deployment of geoengineering would constitute “one of the most critical engineering design and control challenges ever considered: making real-time decisions for a highly uncertain and nonlinear dynamic system with many input variables, many measurements, and a vast number of internal degrees of freedom, the dynamics of which span a wide range of timescales.” Bayesian and neural network-based approaches could facilitate the fast, uncertainty-aware nonlinear system identification this challenge might require. Additionally, there has been recent progress in reinforcement learning for control [661–663], which could be useful for fine-tuning geoengineering in- terventions such as deciding where and when to release aerosols. For an initial attempt at analyzing strato- spheric aerosol injection as a reinforcement learning problem (using a neural network climate model emu- lator), see [664]. # 9.3 Modeling impacts Long-term Of course, optimizing interventions requires defining objectives, and the choices here are far from clear. Although it is possible to stabilize global mean temperature and even regional temperatures through geo- engineering, it is most likely impossible to preserve all relevant climate characteristics in all locations. Furthermore, climate model outputs do not tell the full story; ultimately, the goal of climate engineering is to minimize harm to people, ecosystems, and society. It is therefore essential to develop robust tools for estimating the extent and distribution of these potential harms. There has been some recent work in applying ML to assess the impacts of geoengineering. 
For example, [665] use deep neural networks to estimate the effects of aerosols on human health, while [666] use them to estimate the effects of solar geoengineering on agriculture. References [667, 668] use relatively simple local and polynomial regression techniques but ap- plied to extensive empirical data to estimate the past and future effects of temperature change on economic production. More generally, the field of Integrated Assessment Modeling [669, 670] aims to map the outputs of a climate model to societal impacts; for a general discussion of potential opportunities for applying ML to IAMs, see §11.2. # 9.4 Discussion Any consideration of solar geoengineering raises many moral questions. It may help certain regions at the expense of others, introduce risks like termination shock, and serve as a “moral hazard”: widespread awareness of its very possibility may undermine mainstream efforts to cut emissions [671]. Because of these issues, there has been significant debate about whether it is ethically responsible to research this topic [672, 673]. However, although it creates new risks, solar geoengineering could actually be a moderat- ing force against the terrifying uncertainties climate change already introduces [646, 674], and ultimately 48 many environmental groups and governmental bodies have come down on the side of supporting further re- search.36, 37, 38 In this section, we have attempted to outline some of the technical challenges in implement- ing and evaluating solar geoengineering. We hope the ML community can help geoengineering researchers tackle these challenges. 36 https://www.edf.org/climate/our-position-geoengineering 37https://www.nrdc.org/media/2015/150210 38https://www.ucsusa.org/sites/default/files/attach/2019/gw-position-Solar- Geoengineering-022019.pdf 49 # Tools for Action # 10 Individual Action # by Natasha Jaques Individuals may worry that they are powerless to affect climate change, or lack clarity on which of their behaviors are most important to change. In fact, there are actions which can meaningfully reduce each person’s carbon footprint, and, if widely adopted, could have a significant impact on mitigating global emissions [404, 675]. AI can help to identify those behaviors, inform individuals, and provide constructive opportunities by modeling individual behavior. # 10.1 Understanding personal carbon footprint We as individuals are constantly confronted with decisions that affect our carbon footprint, but we may lack the data and knowledge to know which decisions are most impactful. Fortunately, ML can help determine an individual’s carbon footprint from their personal and household data.39 For example, natural language processing can be used to extract the flights a person takes from their email, or determine specific grocery items purchased from a bill, making it possible to predict the associated emissions. Systems that combine this information with data obtained from the user’s smartphone (e.g. from a ride-sharing app) can then help consumers who wish to identify which behaviors result in the highest emissions. Given such a ML model, counterfactual reasoning can potentially be used to demonstrate to consumers how much their emissions would be reduced for each behavior they changed. As a privacy-conscious alternative, emissions estimates could be directly incorporated into grocery labels [676] or interfaces for purchasing flights. 
Such information can empower people to understand how they can best help mitigate climate change through behavior change. Residences are responsible for a large share of GHG emissions [4] (see also §3). A large meta-analysis found that significant residential energy savings can be achieved [677], by targeting the right interventions to the right households [678–680]. ML can predict a household’s emissions in transportation, energy, water, waste, foods, goods, and services, as a function of its characteristics [681]. These predictions can be used to tailor customized interventions for high-emissions households [682]. Changing behavior both helps mitigate climate change and benefits individuals; studies have shown that many carbon mitigation strategies also provide cost savings to consumers [681]. Household energy disaggregation breaks down overall electricity consumption into energy use by indi- vidual appliances (see also §3.1) [683], which can help facilitate behavior change [684]. For example, it can be used to inform consumers of high-energy appliances of which they were previously unaware. This alone could have a significant impact, since many devices consume a large amount of electricity even when not in use; standby power consumption accounts for roughly 8% of residential electricity demand [685]. A variety of ML techniques have been used to effectively disaggregate household energy, such as spectral clustering, Hidden Markov Models, and neural networks [683]. ML can also be used to predict the marginal emissions of energy consumption in real time, on a scale of hours,40 potentially allowing consumers to effectively schedule activities such as charging an electric vehicle when the emissions (and prices [686]) will be lowest [687]. Combining these predictions with disaggregated energy data allows for the efficient automation of household energy consumption, ideally through products that present interpretable insights to the consumer (e.g. [688, 689]). Methods like reinforcement learning can be used to learn how to optimally schedule household appliances to consume energy more efficiently and sustainably [690, 691]. Multi-agent learning has also been applied to this problem, to ensure that groups of homes can coordinate to balance energy consumption to keep peak demand low [80, 83]. # 39See e.g. https://www.tmrow.com/ 40https://www.watttime.org/ 50 # 10.2 Facilitating behavior change # High Leverage ML is highly effective at modeling human preferences, and this can be leveraged to help mitigate climate change. Using ML, we can model and cluster individuals based on their climate knowledge, preferences, de- mographics, and consumption characteristics (e.g. [692–696]), and thus predict who will be most amenable to new technologies and sustainable behavior change. Such techniques have improved the enrollment rate of customers in an energy savings program by 2-3x [678]. Other works have used ML to predict how much consumers are willing to pay to avoid potential environmental harms of energy consumption [697], finding that some groups were totally insensitive to cost and would pay the maximum amount to mitigate harm, while other groups were willing to pay nothing. Given such disparate types of consumers, targeting inter- ventions toward particular households may be especially worthwhile; all the more so because data show that the size and composition of household carbon footprints varies dramatically across geographic regions and demographics [681]. 
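To illustrate the kind of consumer segmentation described at the start of this subsection, the following sketch clusters households on a handful of consumption-related features and reports the cluster centers, which could then inform which groups to target with which intervention. The features, their synthetic distributions, and the choice of four clusters are placeholders; real deployments would rely on richer billing, survey, and demographic data.

```python
# Illustrative sketch: segment households by consumption characteristics so that
# behaviour-change interventions can be targeted at the most promising groups.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_households = 2000

# Placeholder features: annual electricity use (kWh), car-km per year,
# flights per year, and home floor area (m^2).
features = np.column_stack([
    rng.normal(4000, 1500, n_households),
    rng.normal(10000, 6000, n_households),
    rng.poisson(2, n_households).astype(float),
    rng.normal(100, 40, n_households),
])

scaler = StandardScaler().fit(features)
X = scaler.transform(features)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Cluster centers back in interpretable units, e.g. to spot a "high-flying,
# low-driving" segment that might respond to different messaging.
centers = scaler.inverse_transform(kmeans.cluster_centers_)
for i, c in enumerate(centers):
    print(f"cluster {i}: kWh={c[0]:.0f}, car_km={c[1]:.0f}, flights={c[2]:.1f}, m2={c[3]:.0f}")
```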
Citizens who would like to engage with policy decisions, or explore different options to reduce their per- sonal carbon footprint, can have difficulty understanding existing laws and policies due to their complexity. They may benefit from tools that make policy information more manageable and relevant to the individual (e.g. based on where the individual lives). There is the potential for natural language processing to derive understandable insights from policy texts for these applications, similar to automated compliance checking [698, 699]. Understanding individual behavior can help signal how it can be nudged. For example, path analysis has shown that an individual’s psychological distance to climate change (on geographic, temporal, social, and uncertainty dimensions) fully mediates their level of climate change concern [700]. This suggests that interventions minimizing psychological distance to the effects of climate change may be most effective. Similarly, ML has revealed that cross-cultural support for international climate programs is not reduced, even when individuals are exposed to information about other countries’ climate behavior [701]. To make the effects of climate change more real for consumers, and thus help motivate those who wish to act, image generation techniques such as CycleGANs have been used to visualize the potential consequences of extreme weather events on houses and cities [702]. Gamification via deep learning has been proposed to further allow individuals to explore their personal energy usage [703]. All of these programs may be an incredibly cost- effective way to reduce energy consumption; behavior change programs can cost as little as 3 cents to save a kilowatt hour of electricity, whereas generating one kWh would cost 5-6 cents with a coal or wind power plant, and 10 cents with solar [704, 705]. # 10.3 Discussion While individuals can sometimes feel that their contributions to climate change are dwarfed by other factors, in reality individual actions can have a significant impact in mitigating climate change. ML can aid this process by empowering consumers to understand which of their behaviors lead to the highest emissions, automatically scheduling energy consumption, and providing insights into how to facilitate behavior change. 51 # 11 Collective Decisions by Tegan Maharaj and Nikola Milojevic-Dupont Addressing climate change requires swift and effective decision-making by groups at multiple levels – communities, unions, NGOs, businesses, governments, intergovernmental organizations, and many more. Such collective decision-making encompasses many kinds of action – for example, negotiating international treaties to reduce GHG emissions, designing carbon markets, building resilient infrastructure, and establish- ing community-owned solar farms. These decisions often involve multiple stakeholders with different goals and priorities, requiring difficult trade-offs. The economic and societal systems involved are often extremely complex, and the impacts of climate-related decisions can play out globally across long time horizons. To address some of these challenges, researchers are using empirical and mathematical methods from fields such as policy analysis, operations research, economics, game theory, and computational social science; there are many opportunities for ML to support and supplement these methods. 
# 11.1 Modeling social interactions When designing climate change strategies, it is critical to understand how organizations and individuals act and interact in response to different incentives and constraints. Agent-based models (ABMs) [706, 707] represent one approach used in simulating the actions and interactions of agents (people, companies, etc.) in their environment. ABMs have been applied to a multitude of problems relevant to climate change, in particular to study low-carbon technology adoption [708–711]. For example, when modeling solar PV adoption [712], agents may represent individuals who act based on factors such as financial interest and the behavior of their peers [713, 714]; the goal is then to study how these agents interact in response to different conditions, such as electricity rates, subsidy programs, and geographical considerations. Other applications of ABMs include modeling how behavior under social norms changes with external pressures [715], how the economy and climate may evolve given a diversity of political and economic beliefs [716], and how individuals may migrate in response to environmental changes [717]. While agent and environment models in ABMs are often hand-designed by experts, ML can help integrate data-driven insights into these models [718], for example by learning rules or models for agents based on observational data [712, 719], or by using unsupervised methods such as VAEs or GANs to discover salient features useful in modeling a complex environment. While the hope of learning or tuning behavior from data is promising for generalization, many data-driven approaches lose the interpretability for which ABMs are valued; work in interpretable ML methods could potentially help with this. In addition to ABMs, techniques from game theory can be valuable in modeling behavior, e.g. to ex- plore cooperation in the face of a depleting resource [720]. Multi-agent reinforcement learning can also be applied to understand the behavior of groups of agents who need to cooperate; see [721] for an overview and [722, 723] for recent examples. Combined with mechanism design,41 such approaches can be used to design methods for cooperation that lead to mutually beneficial outcomes, for example when formalizing procedures around international climate agreements [724, 725]. # 11.2 Informing policy The actions required to address climate change, both in mitigation and adaptation, require making policies42 at the local, national, and international levels [726]. Various institutions act as policy-makers: for instance, governments, international organizations, non-governmental organizations, standards committees, and pro- fessional institutions. Tools from policy analysis – the process of evaluating the outcomes of past policies 41Mechanism design is often called “inverse game theory” – rather than determining optimal strategies for players, mechanism design seeks to design games such that certain strategies are incentivized. 42Policy can refer, for example, to laws, measures, standards, or best practices. 52 and assessing future policy alternatives43 – can help inform the choices these institutions make. Policy anal- ysis uses quantitative tools from statistics, economics, and operations research such as cost-benefit analysis, uncertainty analysis, and multi-criteria decision making to inform the policy-making process; see [727, 728] for an introduction. 
ML can provide data for policy analysis, help improve existing tools for assessing policy options, and provide new tools for evaluating the effects of policies. Gathering data High Leverage When creating policies, decision-makers must often negotiate fundamental uncertainties in the underlying data. ML can help alleviate some of this uncertainty by providing data. For instance, as detailed elsewhere in this paper, ML can help pinpoint sources of emissions (§1.2,5.1), approximate traffic patterns (§2.1), identify infrastructure at risk (§8.2), and mine information from companies’ financial disclosures (§13). Natural language processing, network analysis, and clustering techniques can also be used to analyze social media data to understand public opinions and discourse around climate change [729–731]. These data can then be used to identify areas of intervention, compute the benefits and costs of a project, or evaluate the effectiveness of a policy after it has been implemented. Assessing policy options Decision-makers often construct mathematical models to help them assess or trade off between different policy alternatives. ML is particularly relevant to approaches that model large and complex socio-economic systems to assess outcomes of particular strategies, as well as optimization-based tools that help with navi- gating the decision. Policy-makers often wish to analyze how different policy alternatives may contribute to achieving a particular objective. Computational approaches such as simulation and (partial) equilibrium models can be used to compare different policy options, assess the effects of underlying assumptions, or propose strategies that are consistent with the objectives of decision-makers. Of particular relevance to climate change miti- gation are integrated assessment models (IAMs), which incorporate economic models, climate models, and policy information (see [732] for an overview). IAMs are used to explore future societal pathways that are consistent with climate goals (e.g. 1.5◦C mean global temperature increase), and play a prominent role in the IPCC assessments [733]. While these models can simulate interactions between many variables in great detail, this comes at the cost of computational complexity and presents opportunities for machine learning. Much as with Earth system models (§7), ML can be applied within any of the various sub-models that make up an IAM. One set of applications involves deriving results at the appropriate spatial resolution, since dif- ferent components of an IAM operate at different scales. Outputs with high resolution may be aggregated via clustering methods to provide insights [734], while at coarser resolution, statistical downscaling can help to disaggregate data to an appropriate spatial resolution, as seen in applications such as crop yield [735], wind speed [736] or surface temperature [737]. ML also has the potential to help with sensitivity and uncer- tainty analysis [738], with finding numerical solutions for computational expensive submodels [739, 740], and assessing the validity of the models [741]. In addition to assessing the outcomes of various policies, policy-makers may also employ optimization- based tools to figure out what decisions to make. For example, combinatorial optimization is a powerful tool used widely for decision-making in operations research. See [194] for a survey of how ML can be employed to help solve combinatorial optimization problems. 
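As one concrete pattern from the discussion of computationally expensive submodels above, the sketch below trains a fast surrogate on a small number of runs of a stand-in "expensive" model and then uses it to screen many candidate scenarios cheaply. The function, its inputs, and the choice of regressor are illustrative assumptions, not components of any actual integrated assessment model.

```python
"""Minimal sketch: a learned surrogate (emulator) for an expensive submodel.

'expensive_submodel' is a synthetic stand-in; in practice it would be a costly
simulation component whose evaluations are the bottleneck.
"""
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_submodel(x):
    """Pretend this takes minutes per call; here it is just a nonlinear map."""
    return np.sin(3 * x[..., 0]) + x[..., 1] ** 2 + 0.5 * x[..., 0] * x[..., 2]

# 1. Run the expensive model on a small design of experiments.
X_train = rng.uniform(-1, 1, size=(200, 3))
y_train = expensive_submodel(X_train)

# 2. Fit a fast surrogate to those runs.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

# 3. Use the surrogate to screen many candidate scenarios cheaply.
candidates = rng.uniform(-1, 1, size=(50_000, 3))
pred = surrogate.predict(candidates)
best = candidates[np.argmin(pred)]
print("surrogate-suggested scenario:", best)

# 4. Verify only the shortlisted scenario with the expensive model itself.
print("expensive-model value there:", expensive_submodel(best[None, :]))
```

The same pattern underlies emulator-based sensitivity and uncertainty analysis: the cheap surrogate is queried many times, and only a shortlist of interesting cases is re-run with the full model.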
Tools from the field of multi-criteria decision-making can also help policy-makers manage trade-offs between different policies by reconciling competing objectives and minimizing negative side-effects; in par- ticular, in cases where policy objectives and constraints can be mathematically formalized, multi-objective 43The former is often referred to as ex-post policy analysis and the latter as ex-ante policy analysis. 53 optimization can provide a pragmatic approach to making decisions. Here, a decision-maker would formu- late their decision-making process as an optimization problem by combining multiple optimization objec- tives subject to physical or other types of constraints; the goal is to then find a solution (or set of solutions) that is Pareto-optimal with respect to all of the objective functions. However, finding these solutions is often computationally expensive. Practitioners have applied bio-inspired algorithms such as particle swarm, genetic, or evolutionary algorithms to search for or compute Pareto-optimal solutions that satisfy the con- straints. This approach has been applied in a number of climate change-related fields, including energy and infrastructure planning [111, 742–746], industry [747, 748], land use [749, 750], and more [751–754]. Pre- vious work has also employed parallel surrogate search, assisted by ML, to efficiently solve multi-objective optimization problems [755]. Optimization algorithms which have been successful in the context of hy- perparameter tuning (e.g. Bayesian optimization [756, 757]) or guided search algorithms (e.g. tree search algorithms [758]) could also potentially be applied to this problem. Evaluating policy effects High Leverage When creating new policies, decision-makers may wish to understand previous policies (e.g. from other jurisdictions) and how these policies performed. ML can help analyze previous policy actions automatically and at scale by improving computational text analysis. In particular, natural language processing meth- ods are already used in the field of political science to analyze political texts and legislation [759]; these approaches could be promising for systematically studying climate change policies. Causal inference tech- niques can also help assess the effect of a particular policy or climate-related event from observed outcomes. ML can play a role in causal inference [760–762], including in the context of policy problems [763, 764] and in climate-relevant scenarios such as estimating the effects of temperature on human mortality [765] and the effects of World Bank projects on vegetative cover [766]. # 11.3 Designing markets In economics, GHG emissions can be seen as a negative externality: while a changing climate results in a cost for society, this cost is often not reflected in the market price of goods or services that cause GHG emissions. This is problematic, since organizations and individuals making decisions solely on the basis of market prices will tend to favor cheaper goods, even if those goods emit a large amount of GHGs. Market- based tools44 such as carbon taxes aim to enforce prices reflecting the societal cost of GHGs and thus encourage socially beneficial behavior through market forces. ML can help in understanding the impacts of market instruments; assessing their effectiveness at reducing emissions; and supporting a swift, effective and fair implementation.45 Predicting carbon prices There are several approaches to pricing GHG emissions. 
Carbon taxes and quotas aim to influence the behavior of organizations by shaping supply and demand within an existing market. By contrast, cap-and-trade approaches such as those within the European Union involve a completely new market, an Emissions Trading Scheme, within which companies can buy and sell a limited number of GHG emissions permits. Prices within such cap-and-trade markets are highly sensitive to control elements such as the number of permits released at a given time. ML can be used to analyze prices within these markets, for example by predicting prices via supervised learning [771–774] or analyzing the main drivers of prices via hierarchical clustering [775].

44For background on market-based strategies, see [767–769].
45For a review on ML for energy economics and finance, see [770].

Non-carbon markets

Market design can influence GHG emissions even in settings where such emissions are not directly penalized. For instance, dynamic pricing in electricity markets – varying the price of electricity to consumers based on, e.g., how much wind power is available – can shape demand for low-carbon energy sources (see §1.1.1). Following seminal research on modeling pricing in markets as a bandit problem [776], many works have applied bandit and other reinforcement learning (RL) algorithms to determine prices or other market values. For example, RL has been applied to predict bids [777] and market power [778] in electricity markets, and to set dynamic prices in more general settings [79]. ML can also help solve auctions in supply chains [196].

Assessing market effects

When designing market-based strategies, it is necessary to understand how effectively each strategy will reduce emissions, as well as how the underlying socio-technical system may be affected. Studies have considered effects of carbon pricing on economic growth and energy intensity [779, 780], or on electricity prices [781]. Effects of pricing mechanisms can also be indirect, as companies’ strategic decisions can have longer-term effects. ML can be useful in analyzing these effects. For example, self-organizing maps have been used to analyze how R&D investment in green technologies changes in response to fuel prices [782], while a game theoretical framework using neural networks has been used to study the optimal production strategies for companies under carbon quotas [783].

To ensure that market-based strategies are effective and equitable, it is important to understand their distributional effects, as certain social groups or classes of stakeholders may be affected more than others. For example, a flat carbon tax on gasoline will have a larger effect on lower-income populations, as fuel expenses are a bigger share of their total budget. Here, clustering can help identify permit allocation schemes that maximize social welfare [784], and supervised learning has been used to predict winners and losers from changing electricity tariff schemes [785]. Hedonic pricing can also help identify how much different consumers may be willing to pay for an environmental good or service, which provides a noisy measure of the monetary value of that good or service; these values are typically inferred using regression or ML techniques on historical market data [786–789]. It is also important to analyze which organizations or individuals can actually participate in a given market.
For example, carbon markets can be more flexible if viable offsets exist, including those offered by landowners who sequester carbon through forest conservation and management; ML has been used to examine the factors influencing the financial viability of such projects [790]. # 11.4 Discussion The complexity, scale, and fundamental uncertainty inherent in the problems of climate change can pose challenges for collective decision-making. ML can help supplement existing mathematical frameworks that are employed to alleviate some of these challenges, including agent-based models, integrated assessment models, multi-objective optimization, and market design. Interpretable and fair ML techniques may be of particular importance in this context, as they may enable decision-makers to more effectively and equitably employ insights from ML models. While these quantitative assessment tools can provide useful input to the decision-making process, it is worth noting that decisions regarding climate change may ultimately depend on qualitative discussions around norms, values, or equity considerations that may not be captured in quantitative models. 55 # 12 Education # by Alexandra Luccioni Access to quality education is a key part of sustainable development, with significant benefits for climate and society at large. Education contributes to improving quality of life, helps individuals make informed decisions, and trains the next generation of innovators. Education is also paramount in helping people across societies understand and address the causes and consequences of climate change and provides the skills and tools necessary for adapting to its impacts. For instance, education can both improve the resilience of communities, particularly in developing countries that will be disproportionately affected by climate change [791], and empower individuals, especially from developed countries, to adopt more sustainable lifestyles [792]. As climate change itself may diminish educational outcomes for some populations, due to its negative effects on agricultural productivity and household income [793, 794], this makes providing high-quality educational interventions globally all the more important. AI in Education Long-term There are a number of ways that AI and ML can contribute to education and teaching – for instance by improving access to educational opportunities, helping personalize the teaching process, and stepping in when teachers have limited time. The field of AIED (Artificial Intelligence in EDucation) has existed for over 30 years, and until recently relied on explicitly modeling content, learners, and tutoring strategies based on psychological theories of learning. However, AIED is increasingly incorporating data-driven insights derived from ML techniques. One important area of AIED research has been Intelligent Tutoring Systems (ITSs), which can adapt their behavior in real time according to the needs of individuals or to support collaborative learning [795]. While ITSs have traditionally been defined and constructed by hand, recent approaches have applied ML techniques such as multi-armed bandit techniques to adaptively personalize sequences of learning activ- ities [796], LSTMs to generate questions to evaluate language comprehension [797], and reinforcement learning to improve the strategies used within the ITS [798, 799]. 
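As a toy illustration of the bandit-based sequencing of learning activities mentioned above, the following epsilon-greedy sketch chooses among a handful of hypothetical activities and updates its estimate of each one’s learning gain from simulated outcomes. The activity names, gain probabilities, and reward signal are invented, and the setup is far simpler than what the system of [796] actually uses.

```python
"""Toy sketch: epsilon-greedy selection of learning activities.

Activity names and success probabilities are invented for illustration; a real
intelligent tutoring system would estimate learning gains from student data.
"""
import random

random.seed(0)

activities = ["worked_example", "quiz", "simulation", "peer_discussion"]
true_gain_prob = [0.35, 0.55, 0.45, 0.25]  # hidden from the "tutor"

counts = [0] * len(activities)    # times each activity was assigned
values = [0.0] * len(activities)  # running estimate of its learning gain
epsilon = 0.1

for step in range(2000):
    # Explore occasionally, otherwise pick the activity that seems best so far.
    if random.random() < epsilon:
        a = random.randrange(len(activities))
    else:
        a = max(range(len(activities)), key=lambda i: values[i])

    # Simulated outcome: did the student show a learning gain this round?
    reward = 1.0 if random.random() < true_gain_prob[a] else 0.0

    # Incremental update of the estimated gain for this activity.
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]

for name, c, v in zip(activities, counts, values):
    print(f"{name:16s} chosen {c:4d} times, estimated gain {v:.2f}")
```

A real system would replace the Bernoulli reward with measured learning gains and would typically condition the choice on a student model rather than treating all learners identically.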
However, there remains much work to be done to bridge the performance gap between digital and human tutors, and ML-based approaches have an important role to play in this endeavor – for example, via natural language processing techniques for creating conversational agents [800], learner analytics for classifying student profiles [801], and adaptive learning approaches to propose relevant educational activities and exercises [802].46

While ITSs generally focus on individualized or small-group instruction, AIED can also help provide tools that improve educational outcomes at scale for larger groups of learners. For instance, scalable, adaptive online courses could give hundreds of thousands of learners access to learning resources that they would not usually have in their local educational facilities [806]. Furthermore, giving teachers guidance derived from computational teaching algorithms or heuristics could help them design better educational curricula and improve student learning outcomes [807]. In this context, AIED applications can be used either as a standalone tool for independent learners or as an educational resource that frees up teachers to have more one-on-one time with students. Key considerations for creating AIED tools that can be applied across the globe include adapting to local technological and cultural needs, addressing barriers such as access to electricity and internet [142, 143], and taking into account students’ computing skills, language, and culture [808, 809].

46For further background on this area, see [803–805].

Learning about climate

Research has shown that educational activities centered on climate change and carbon footprints can engage learners in understanding the connection between personal and collective actions and their impact on global climate, and can enable individuals to make climate-friendly lifestyle choices such as reducing energy use [810]. There have also been proposals for interactive websites explaining climate science as well as educational interventions focusing on local and actionable aspects of sustainable development [811]. In these contexts, ML can help create personalized educational tools, for instance by generating images of future impacts of extreme weather events based on a learner’s address [702] or by anchoring an individual’s learning experience in a digital replica of their real-life location and allowing them to explore the way that climate change will impact a specific location [812].

# 13 Finance

# by Alexandra Luccioni

The rise and fall of financial markets is linked to many events, both sporadic (e.g. the 2008 global financial crisis) and cyclical (e.g. the price of gas over the years), with profits and losses that can be measured in the billions of dollars and can have global consequences. Climate change poses substantial financial risks to global assets measured in the trillions of dollars [813], and it is hard to forecast where, how, or when climate change will impact the stock price of a given company, or even the debt of an entire nation. While financial analysts and investors focus on pricing risk and forecasting potential earnings, the majority of the current financial system is based on quarterly or yearly performance.
This fails to incentivize the prediction of medium or long-term risks, which include most climate change-related exposures such as physical impacts on assets or distribution chains, legislative impacts on profit generation, and indirect market consequences such as supply and demand.47 Climate investment Climate investment, the current dominant approach in climate finance, involves investing money in low- carbon assets [817]. The dominant ways in which major financial institutions take this approach are by creating “green” financial indexes that focus on low-carbon energy, clean technology, and/or environmental services [818] or by designing carbon-neutral investment portfolios that remove or under-weight companies with relatively high carbon footprints [819]. This investment strategy is creating major shifts in certain sectors of the market (e.g. utilities and energy) towards renewable energy alternatives, which are seen as having a greater growth potential than traditional energy sources such as oil and gas [820]. While this approach currently does not utilize ML directly, we see the potential in applying deep learning both for portfolio selection (based on features of the stocks involved) and investment timing (using historical patterns to predict future demand), to maximize both the impact and scope of climate investment strategies. Climate analytics High Leverage The other main approach to climate finance is climate analytics, which aims to predict the financial effects of climate change, and is still gaining momentum in the mainstream financial community [817]. Since this is a predictive approach to addressing climate change from a financial perspective, it is one where ML can poten- tially have greater impact. Climate analytics involves analyzing investment portfolios, funds, and companies in order to pinpoint areas with heightened risk due to climate change, such as timber companies that could be bankrupted by wildfires or water extraction initiatives that could see their sources polluted by shifting landscapes. Approaches used in this field include: natural language processing techniques for identifying climate risks and investment opportunities in disclosures made by companies [821] as well as for analyzing the evolution of climate coverage in the media to dynamically hedge climate change risk [822]; economet- ric approaches for developing arbitrage strategies that take advantage of the carbon risk factor in financial markets [823]; and ML approaches for forecasting the price of carbon in emission exchanges48 [825, 826]. To date, the field of climate finance has been largely neglected within the larger scope of financial re- search and analysis. This leaves many directions for improvement, such as (1) improving existing traditional portfolio optimization approaches; (2) in-depth modeling of variables linked to climate risk; (3) designing a statistical climate factor that can be used to project the variation of stock prices given a compound set of events; and (4) identifying direct and indirect climate risk exposure in annual company reports. ML plays a central role in these strategies, and can be a powerful tool in leveraging the financial sector to mitigate climate change and in reducing the financial impacts of climate change on society. 47For further reading regarding the impact of climate change on financial markets, see [814–816]. 48Carbon pricing, e.g. 
via CO2 cap-and-trade or a carbon tax, is a commonly-suggested policy approach for getting firms to price future climate change impacts into their financial calculations. For an introduction to these topics, see [824] and also §11.3. 58 # Conclusion Machine learning, like any technology, does not always make the world a better place — but it can. In the fight against climate change, we have seen that ML has significant contributions to offer across domain areas. ML can enable automatic monitoring through remote sensing (e.g. by pinpointing deforestation, gathering data on buildings, and assessing damage after disasters). It can accelerate the process of scientific discovery (e.g. by suggesting new materials for batteries, construction, and carbon capture). ML can optimize systems to improve efficiency (e.g. by consolidating freight, designing carbon markets, and reducing food waste). And it can accelerate computationally expensive physical simulations through hybrid modeling (e.g. climate models and energy scheduling models). These and other cross-cutting themes are shown in Table 2. We emphasize that in each application, ML is only one part of the solution; it is a tool that enables other tools across fields. Applying machine learning to tackle climate change has the potential both to benefit society and to advance the field of machine learning. Many of the problems we have discussed here highlight cutting- edge areas of ML, such as interpretability, causality, and uncertainty quantification. Moreover, meaningful action on climate problems requires dialogue with fields within and outside computer science and can lead to interdisciplinary methodological innovations, such as improved physics-constrained ML techniques. The nature of climate-relevant data poses challenges and opportunities. For many of the applications we identify, data can be proprietary or include sensitive personal information. Where datasets exist, they may not be organized with a specific task in mind, unlike typical ML benchmarks that have a clear objective. Datasets may include information from heterogeneous sources, which must be integrated using domain knowledge. Moreover, the available data may not be representative of global use cases. For example, forecasting weather or electricity demand in the US, where data are abundant, is very different from doing so in India, where data can be scarce. Tools from transfer learning and domain adaptation will likely prove essential in low-data settings. For some tasks, it may also be feasible to augment learning with carefully simulated data. Of course, the best option if possible is always more real data; we strongly encourage public and private entities to release datasets and to solicit involvement from the ML community. For those who want to apply ML to climate change, we provide a roadmap: • Learn. Identify how your skills may be useful – we hope this paper is a starting point. • Collaborate. Find collaborators, who may be researchers, entrepreneurs, established companies, or policy makers. Every domain discussed here has experts who understand its opportunities and pitfalls, even if they do not necessarily understand ML. • Listen. Listen to what your collaborators and other stakeholders say is needed. Groundbreaking technologies have an impact, but so do well-constructed solutions to mundane problems. • Deploy. Ensure that your work is deployed where its impact can be realized. 
We call upon the machine learning community to use its skills as part of the global effort against climate change.

[Table 2, flattened during text extraction: rows list the application domains and strategies discussed in §1–13 (electricity systems, transportation, buildings and cities, industry, farms & forests, carbon dioxide removal, climate prediction, societal impacts, solar geoengineering, individual action, collective decisions, education, and finance); columns mark which cross-cutting objectives – accelerated experimentation, control, forecasting, human interaction, hybrid models, predictive maintenance, remote sensing, and system optimization – apply to each strategy.]

Table 2: Cross-cutting objectives that are relevant to many climate change domains.

# Acknowledgments

Electricity systems. We thank James Kelloway (National Grid ESO), Jack Kelly (Open Climate Fix), Zico Kolter (CMU), and Henry Richardson (WattTime) for their help and ideas in shaping this section. We also thank Samuel Buteau (Dalhousie University) and Marc Cormier (Dalhousie University) for their inputs on accelerated science and battery storage technologies; Julian Kates-Harbeck (Harvard) and Melrose Roderick (CMU) for their extensive inputs and ideas on nuclear fusion; and Alasdair Bruce (formerly National Grid ESO) for inputs on emissions factor forecasting and automated dispatch. Finally, we thank Lea Boche (EPRI), Carl Elkin (DeepMind), Jim Gao (DeepMind), Muhammad Hasan (DeepMind), Guannan He (CMU), Jeremy Keen (CMU), Zico Kolter (CMU), Luke Lavin (CMU), Sanam Mirzazad (EPRI), David Pfau (DeepMind), Crystal Qian (DeepMind), Juliet Rothenberg (DeepMind), Sims Witherspoon (DeepMind) and Matt Wytock (Gridmatic, Inc.) for helpful comments and feedback.

Transportation. We are grateful for advice from Alan T. Jenn (UC Davis) and Prithvi S. Acharya (CMU) on electric vehicles, Alexandre Jacquillat (CMU) on decarbonizing aviation, Michael Whiston (CMU) on hydrogen fuel cells, Evan Sherwin (CMU) on alternative fuels, and Samuel Buteau (Dalhousie University) on batteries.

Buildings and Cities. We thank Érika Mata (IVL - Swedish Environmental Research Institute, IPCC Lead Author Buildings section), Duccio Piovani (nam.R) and Jack Kelly (Open Climate Fix) for feedback and ideas.

Industry. We appreciate all the constructive feedback from Angela Acocella (MIT), Kevin McCloskey (Google), and Bill Tubbs (University of British Columbia), and we are grateful to Kipp Bradford (Yale) for his recommendations around embodied energy and refrigeration.
Thanks to Allie Schwertner (Rockwell Automation), Greg Kochanski (Google), and Paul Weaver (Abstract) for their suggestions around optimizing industrial processes for low-carbon energy. Farms & Forests. We would like to give thanks to David Marvin (Salo) and Remi Charpentier (Tesselo) on remote sensing for land use. Max Nova (SilviaTerra) provided insight on forestry, Mark Crowley (University of British Columbia) on forest fire management, Benjamin Deleener (ChrysaLabs) on precision agriculture, and Lindsay Brin (Element AI) on soil chemistry. Climate prediction. We thank Ghaleb Abdulla (LLNL), Ben Kravitz (PNNL) and David John Gagne II (UCAR) for enlightening conversations; Goodwin Gibbins (Imperial College London) and Ben Kravitz (PNNL) for detailed editing and feedback; and Claire Monteleoni (CU Boulder) and Prabhat (LBL) for feedback which improved the quality of this manuscript. Societal adaptation. We thank Loubna Benabbou (UQAR), Mike Sch¨afer (University of Zurich), Andrea Garcia Tapia (Stevens Tech), Slava Jankin Mikhaylov (Hertie School Berlin), and Sarah M. Fletcher (MIT) for valuable conversations on the social aspects of climate change. Solar geoengineering. We thank David Keith (Harvard), Peter Irvine (Harvard), Zhen Dai (Harvard), Colleen Golja (Harvard), Ross Boczar (UC Berkeley), Jon Proctor (UC Berkeley), Ben Kravitz (Indiana University), Andrew Lockley (University College London), Trude Storelvmo (University of Oslo), and Si- mon Gruber (University of Oslo) for help and useful feedback. 61 Individual action. We thank Priyanka deSouza (MIT), Olivier Corradi (Tomorrow), Jack Kelly (Open Climate Fix), Ioana Marinescu (UPenn), and Aven Satre-Meloy (Oxford). Collective Decisions. We thank Sebastian Sewerin (ETH Z¨urich), D. Cale Reeves (UT Austin), and Rahul Ladhania (UPenn). Education. We appreciated the constructive feedback received by Jacqueline Bourdeau (T ´ELUQ Univer- sity), who gave us valuable insights regarding the field of AIED. Finance. We thank Himanshu Gupta (ClimateAI), and Bjarne Steffen (ETH Z¨urich) for constructive dis- cussions and the valuable feedback. The authors gratefully acknowledge support from National Science Foundation grant 1803547, the Cen- ter for Climate and Energy Decision Making through a cooperative agreement between the National Sci- ence Foundation and Carnegie Mellon University (SES-00949710), US Department of Energy contract DE- FG02-97ER25308, the Natural Sciences and Engineering Research Council of Canada, and the MIT Media Lab Consortium. 62 # References [1] Joseph Romm. Climate Change: What Everyone Needs to Know. Oxford University Press, 2018. [2] David Archer and Stefan Rahmstorf. The climate crisis: An introductory guide to climate change. Cambridge University Press, 2010. [3] Christopher B Field, Vicente Barros, Thomas F Stocker, and Qin Dahe. Managing the risks of extreme events and disasters to advance climate change adaptation: special report of the intergovernmental panel on climate change. Cambridge University Press, 2012. [4] IPCC. Global warming of 1.5 ◦C. An IPCC special report on the impacts of global warming of 1.5 ◦C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty [V. Masson-Delmotte, P. Zhai, H. O. P¨ortner, D. Roberts, J. Skea, P.R. Shukla, A. Pirani, Y. Chen, S. Connors, M. Gomis, E. Lonnoy, J. B. R. Matthews, W. Moufouma-Okia, C. P´ean, R. 
Pidcock, N. Reay, M. Tignor, T. Waterfield, X. Zhou (eds.)]. 2018. [5] Gregory D Hager, Ann Drobnis, Fei Fang, Rayid Ghani, Amy Greenwald, Terah Lyons, David C Parkes, Jason Schultz, Suchi Saria, Stephen F Smith, et al. Artificial intelligence for social good. Preprint arXiv:1901.05406, 2019. [6] Bettina Berendt. AI for the common good?! pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics, 10(1):44–65, 2019. [7] Maria De-Arteaga, William Herlands, Daniel B Neill, and Artur Dubrawski. Machine learning for the develop- ing world. ACM Transactions on Management Information Systems (TMIS), 9(2):9, 2018. [8] Carla Gomes, Thomas Dietterich, Bistra Dilkina, Ermon Stefano, Fei Fang, Alan Farnsworth, Alan Fern, Xioali Fern, Daniel Fink, Douglas Fisher, Alexander Flecker, Daniel Freund, Angela Fuller, John Gregoire, John Hopcroft, Zico Kolter, Warren Powell, Nicole Santov, John Selker, Bart Selman, Daniel Shelcon, David Shmoys, Milind Tambe, Christopher Wood, Weng-Keen Wong, Xiaojian Wu, Steve Kelling, Yexiang Xue, Amulya Yadav, Aziz Yakubu, and Mary Lou Zeeman. Computational sustainability: Computing for a better world and a sustainable future. Communications of ACM (in the press), 2019. [9] Lucas N Joppa. The case for technology investments in the environment. Nature, pages 325 – 328, 2017. [10] J¨org L¨assig, Kristian Kersting, and Katharina Morik. Computational Sustainability, volume 645. Springer, 2016. [11] Carla P Gomes. Computational sustainability: Computational methods for a sustainable environment, economy, and society. The Bridge, 39(4):5–13, 2009. [12] Thomas G Dietterich. Machine learning in ecosystem informatics and sustainability. In Twenty-First Interna- tional Joint Conference on Artificial Intelligence, 2009. [13] C. Monteleoni, G.A. Schmidt, F. Alexander, A. Niculescu-Mizil, K. Steinhaeuser, M. Tippett, A. Banerjee, M.B. Blumenthal, A.R. Ganguly, J.E. Smerdon, and M. Tedesco. Climate informatic. In T. Yu, N. Chawla, and S. Simoff, editors, Computational Intelligent Data Analysis for Sustainable Development; Data Mining and Knowledge Discovery Series, chapter 4, pages 81–126. CRC Press, Taylor & Francis Group, 2013. [14] James H Faghmous and Vipin Kumar. A big data guide to understanding climate change: The case for theory- guided data science. Big data, 2(3):155–163, 2014. [15] Lynn Helena Kaack. Challenges and Prospects for Data-Driven Climate Change Mitigation. PhD thesis, Carnegie Mellon University, Pittsburgh, PA, 2019. 63 [16] James D Ford, Simon E Tilleard, Lea Berrang-Ford, Malcolm Araos, Robbert Biesbroek, Alexandra C Lesnikowski, Graham K MacDonald, Angel Hsu, Chen Chen, and Livia Bizikova. Opinion: Big data has big potential for applications to climate change adaptation. Proceedings of the National Academy of Sciences, 113(39):10729–10732, 2016. [17] Stanford Graduate School of Business. Andrew Ng: Artificial intelligence is the new electricity. https: //www.youtube.com/watch?v=21EiKfQYZXc, Feb 2017. [18] Sarvapali Ramchurn, Perukrishnen Vytelingum, Alex Rogers, and Nicholas R Jennings. Putting the “smarts” into the smart grid: A grand challenge for artificial intelligence. Communications of the ACM, 55(4):86–97, 2012. [19] Kasun S Perera, Zeyar Aung, and Wei Lee Woon. Machine learning techniques for supporting renewable In International Workshop on Data Analytics for Renewable energy generation and integration: a survey. Energy Integration, pages 81–96. Springer, 2014. 
https: //www.brookings.edu/research/how-artificial-intelligence-will-affect-the- future-of-energy-and-climate/, 2019. [21] T. Bruckner, I.A. Bashmakov, Y. Mulugetta, H. Chum, A. de la Vega Navarro, J. Edmonds, A. Faaij, B. Fung- tammasan, A. Garg, E. Hertwich, D. Honnery, D. Infield, M. Kainuma, S. Khennas, S. Kim, H.B. Nimir, K. Riahi, N. Strachan, R. Wiser, and X. Zhang. Energy Systems, in IPCC, Working Group III contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, Climate Change 2014: Mit- igation of Climate Change, chapter 8. Geneva [O. Edenhofer, R. Pichs-Madruga, Y. Sokona, E. Farahani, S. Kadner, K. Seyboth, A. Adler, I. Baum, S. Brunner, P. Eickemeier, B. Kriemann, J. Savolainen, S. Schl¨omer, C. von Stechow, T. Zwickel, J.C. Minx, (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2014. [22] Federal Energy Regulatory Commission et al. Energy primer: A handbook of energy market basics. Federal Energy Regulatory Commission: Washington, DC, USA, 2015. [23] Alexandra Von Meier. Electric Power Systems: A Conceptual Introduction. Wiley Online Library, 2006. [24] Daniel Sadi Kirschen and Goran Strbac. Fundamentals of Power System Economics, volume 1. Wiley Online Library, 2004. [25] Allen J Wood, Bruce F Wollenberg, and Gerald B Shebl´e. Power Generation, Operation, and Control. John Wiley & Sons, 2013. [26] IPCC. Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [O. Edenhofer, R. Pichs-Madruga, Y. Sokona, E. Farahani, S. Kadner, K. Seyboth, A. Adler, I. Baum, S. Brunner, P. Eickemeier, B. Kriemann, J. Savolainen, S. Schl¨omer, C. von Stechow, T. Zwickel, J.C. Minx, (eds.)]. 2014. [27] Felix Creutzig. Economic and ecological views on climate change mitigation with bioenergy and negative emissions. GCB bioenergy, 8(1):4–10, 2016. [28] Alexey Lokhov. Technical and economic aspects of load following with nuclear power plants. NEA, OECD, Paris, France, 2011. [29] Annette Evans, Vladimir Strezov, and Tim J Evans. Assessment of utility energy storage options for increased renewable energy penetration. Renewable and Sustainable Energy Reviews, 16(6):4141–4147, 2012. [30] Eric S Hittinger and Inˆes ML Azevedo. Bulk energy storage increases United States electricity system emis- sions. Environmental science & technology, 49(5):3203–3210, 2015. 64 [31] Oytun Babacan, Ahmed Abdulla, Ryan Hanna, Jan Kleissl, and David G Victor. Unintended effects of res- idential energy storage on emissions from the electric power system. Environmental science & technology, 52(22):13600–13608, 2018. [32] Johan Mathe, Nina Miolane, Nicolas Sebastien, and Jeremie Lequeux. Pvnet: A lrcn architecture for spatio- temporal photovoltaic powerforecasting from numerical weather prediction. Preprint arXiv:1902.01453, 2019. [33] Utpal Kumar Das, Kok Soon Tey, Mehdi Seyedmahmoudian, Saad Mekhilef, Moh Yamani Idna Idris, Willem Van Deventer, Bend Horan, and Alex Stojcevski. Forecasting of photovoltaic power generation and model optimization: A review. Renewable and Sustainable Energy Reviews, 81:912–928, 2018. [34] Cyril Voyant, Gilles Notton, Soteris Kalogirou, Marie-Laure Nivet, Christophe Paoli, Fabrice Motte, and Alexis Fouilloy. Machine learning methods for solar radiation forecasting: A review. Renewable Energy, 105:569–582, 2017. [35] Can Wan, Jian Zhao, Yonghua Song, Zhao Xu, Jin Lin, and Zechun Hu. 
Photovoltaic and solar power forecast- ing for smart grid energy management. CSEE Journal of Power and Energy Systems, 1(4):38–46, 2015. [36] Yuchi Sun, Gergely Sz˝ucs, and Adam R Brandt. Solar pv output prediction from video streams using convolu- tional neural networks. Energy & Environmental Science, 11(7):1811–1818, 2018. [37] Alasdair Bruce and Lyndon Ruff. Deep learning solar PV and carbon intensity forecasts. http: //powerswarm.co.uk/wp-content/uploads/2018/10/2018.10.18-Bruce-National- Grid-ESO-Deep-Learning-Solar-PV-and-Carbon-Intensity.pdf. [38] Ahmad Alzahrani, Pourya Shamsi, Cihan Dagli, and Mehdi Ferdowsi. Solar irradiance forecasting using deep neural networks. Procedia Computer Science, 114:304–313, 2017. [39] Jiaming Li, John K Ward, Jingnan Tong, Lyle Collins, and Glenn Platt. Machine learning for solar irradiance forecasting of photovoltaic system. Renewable energy, 90:542–553, 2016. [40] Navin Sharma, Pranshu Sharma, David Irwin, and Prashant Shenoy. Predicting solar generation from weather In 2011 IEEE International Conference on Smart Grid Communications forecasts using machine learning. (SmartGridComm), pages 528–533. IEEE, 2011. [41] Aoife M Foley, Paul G Leahy, Antonino Marvuglia, and Eamon J McKeogh. Current methods and advances in forecasting of wind power generation. Renewable Energy, 37(1):1–8, 2012. [42] Machine learning can boost the value of wind energy. https://deepmind.com/blog/machine- learning-can-boost-value-wind-energy/. [43] Can Wan, Zhao Xu, Pierre Pinson, Zhao Yang Dong, and Kit Po Wong. Probabilistic forecasting of wind power generation using extreme learning machine. IEEE Transactions on Power Systems, 29(3):1033–1044, 2014. [44] Da Liu, Dongxiao Niu, Hui Wang, and Leilei Fan. Short-term wind speed forecasting using wavelet transform and support vector machines optimized by genetic algorithm. Renewable Energy, 62:592–597, 2014. [45] Pierre Pinson and GN Kariniotakis. Wind power forecasting using fuzzy neural networks enhanced with on- line prediction risk assessment. In 2003 IEEE Bologna Power Tech Conference Proceedings,, volume 2, pages 8–pp. IEEE, 2003. [46] Tao Hong and Shu Fan. Probabilistic electric load forecasting: A tutorial review. International Journal of Forecasting, 32(3):914–938, 2016. [47] Soliman Abdel-hady Soliman and Ahmad Mohammad Al-Kandari. Electrical load forecasting: modeling and model construction. Elsevier, 2010. [48] Hesham K Alfares and Mohammad Nazeeruddin. Electric load forecasting: literature survey and classification of methods. International journal of systems science, 33(1):23–34, 2002. 65 [49] Henrique Steinherz Hippert, Carlos Eduardo Pedreira, and Reinaldo Castro Souza. Neural networks for short- term load forecasting: A review and evaluation. IEEE Transactions on power systems, 16(1):44–55, 2001. [50] Romain Juban, Henrik Ohlsson, Mehdi Maasoumy, Louis Poirier, and J Zico Kolter. A multiple quantile regression approach to the wind, solar, and price tracks of gefcom2014. International Journal of Forecasting, 32(3):1094–1102, 2016. [51] Matt Wytock and Zico Kolter. Sparse Gaussian conditional random fields: Algorithms, theory, and application to energy forecasting. In International conference on machine learning, pages 1265–1273, 2013. [52] Alexander Kell, A Stephen McGough, and Matthew Forshaw. Segmenting residential smart meter data for short-term load forecasting. In Proceedings of the Ninth International Conference on Future Energy Systems, pages 91–96. ACM, 2018. 
[53] Christian Beckel, Leyna Sadamori, and Silvia Santini. Automatic socio-economic classification of households using electricity consumption data. In Proceedings of the Fourth International Conference on Future Energy Systems, pages 75–86. ACM, 2013. [54] James Anderson, Fengyu Zhou, and Steven H Low. Disaggregation for networked power systems. In 2018 Power Systems Computation Conference (PSCC), pages 1–7. IEEE, 2018. [55] Emre C Kara, Ciaran M Roberts, Michaelangelo Tabone, Lilliana Alvarez, Duncan S Callaway, and Emma M Stewart. Disaggregating solar generation from feeder-level measurements. Sustainable Energy, Grids and Networks, 13:112–121, 2018. [56] Gregory S Ledva, Laura Balzano, and Johanna L Mathieu. Real-time energy disaggregation of a distribution feeder’s demand using online learning. IEEE Transactions on Power Systems, 33(5):4730–4740, 2018. [57] Lynn H Kaack, Jay Apt, M Granger Morgan, and Patrick McSharry. Empirical prediction intervals improve energy forecasting. Proceedings of the National Academy of Sciences, 114(33):8752–8757, 2017. [58] Priya Donti, Brandon Amos, and J Zico Kolter. Task-based end-to-end model learning in stochastic optimiza- tion. In Advances in Neural Information Processing Systems, pages 5484–5494, 2017. [59] Adam N Elmachtoub and Paul Grigas. Smart “predict, then optimize”. Preprint arXiv:1710.08005, 2017. [60] Bryan Wilder, Bistra Dilkina, and Milind Tambe. Melding the data-decisions pipeline: Decision-focused learn- ing for combinatorial optimization. Preprint arXiv:1809.05504, 2018. [61] Carlo Brancucci Martinez-Anido, Benjamin Botor, Anthony R Florita, Caroline Draxl, Siyuan Lu, Hendrik F Hamann, and Bri-Mathias Hodge. The value of day-ahead solar power forecasting improvement. Solar Energy, 129:192–203, 2016. [62] KS Pandya and SK Joshi. A survey of optimal power flow methods. Journal of Theoretical & Applied Infor- mation Technology, 4(5), 2008. [63] Neel Guha, Zhecheng Wang, Matt Wytock, and Arun Majumdar. Machine learning for AC optimal power flow. http://www.neelguha.com/opf.pdf, Jun 2019. [64] Dimitris Bertsimas and Bartolomeo Stellato. Online mixed-integer optimization in milliseconds. Preprint arXiv:1907.02206, 2019. [65] Ahmed Zamzam and Kyri Baker. Learning optimal solutions for extremely fast ac optimal power flow. arXiv preprint arXiv:1910.01213, 2019. [66] Mahdi Jamei, Letif Mones, Alex Robson, Lyndon White, James Requeima, and Cozmin Ududec. Meta-optimization of optimal power flow. https://www.climatechange.ai/CameraReady/43/ CameraReadySubmission/icml_invenia_cameraready.pdf, Jun 2019. 66 [67] Benjamin Donnot, Isabelle Guyon, Marc Schoenauer, Patrick Panciatici, and Antoine Marot. Introducing ma- chine learning for power system operation support. Preprint arXiv:1709.09527, 2017. [68] Andreas Essl, Andr´e Ortner, Reinhard Haas, and Peter Hettegger. Machine learning analysis for a flexibility energy approach towards renewable energy integration with dynamic forecasting of electricity balancing power. In 2017 14th International Conference on the European Energy Market (EEM), pages 1–6. IEEE, 2017. [69] Nicholas Moehle, Enzo Busseti, Stephen Boyd, and Matt Wytock. Dynamic energy management. Preprint arXiv:1903.06230, 2019. [70] Roel Dobbe, Oscar Sondermeijer, David Fridovich-Keil, Daniel Arnold, Duncan Callaway, and Claire Tom- lin. Towards distributed energy services: Decentralizing optimal power flow with machine learning. Preprint arXiv:1806.06790, 2019. 
[71] Stavros Karagiannopoulos, Roel Dobbe, Petros Aristidou, Duncan Callaway, and Gabriela Hug. Data-driven control design schemes in active distribution grids: Capabilities and challenges. In Proceedings of the 2019 IEEE PowerTech conference. IEEE, 2019. [72] Stavros Karagiannopoulos, Petros Aristidou, and Gabriela Hug. Data-driven local control design for active distribution grids using off-line optimal power flow and machine learning techniques. IEEE Transactions on Smart Grid, 2019. [73] Roel Dobbe, David Fridovich-Keil, and Claire Tomlin. Fully decentralized policies for multi-agent systems: An information theoretic approach. In Advances in Neural Information Processing Systems, pages 2941–2950, 2017. [74] Jianming Lian, Wei Zhang, Y Sun, Laurentiu D Marinovici, Karanjit Kalsi, and Steven E Widergren. Transac- tive system: Part i: Theoretical underpinnings of payoff functions, control decisions, information privacy, and solution concepts. Technical report, Pacific Northwest National Lab.(PNNL), Richland, WA (United States), 2018. [75] Jianming Lian, Y Sun, Karanjit Kalsi, Steven E Widergren, Di Wu, and Huiying Ren. Transactive system: Part ii: Analysis of two pilot transactive systems using foundational theory and metrics. Technical report, Pacific Northwest National Lab.(PNNL), Richland, WA (United States), 2018. [76] Lu Zhang, Jianjun Tan, Dan Han, and Hao Zhu. From machine learning to deep learning: progress in machine intelligence for rational drug discovery. Drug discovery today, 22(11):1680–1685, 2017. [77] Camus Energy. https://camus.energy/, 2019. [78] Christian Borgs, Ozan Candogan, Jennifer Chayes, Ilan Lobel, and Hamid Nazerzadeh. Optimal multiperiod pricing with service guarantees. Management Science, 60(7):1792–1811, 2014. [79] Roberto Maestre, Juan Ram´on Duque, Alberto Rubio, and Juan Ar´evalo. Reinforcement learning for fair dynamic pricing. CoRR, abs/1803.09967, 2018. [80] Sarvapali D Ramchurn, Perukrishnen Vytelingum, Alex Rogers, and Nick Jennings. Agent-based control for decentralised demand side management in the smart grid. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pages 5–12. International Foundation for Autonomous Agents and Multiagent Systems, 2011. [81] Sarvapali D Ramchurn, Perukrishnen Vytelingum, Alex Rogers, and Nicholas R Jennings. Agent-based home- ostatic control for green energy in the smart grid. ACM Transactions on Intelligent Systems and Technology (TIST), 2(4):35, 2011. [82] Matthias Deindl, Carsten Block, Rustam Vahidov, and Dirk Neumann. Load shifting agents for automated In 2008 Second IEEE International Conference on Self- demand side management in micro energy grids. Adaptive and Self-Organizing Systems, pages 487–488. IEEE, 2008. 67 [83] Fredrik Ygge, JM Akkermans, Arne Andersson, Marko Krejic, and Erik Boertjes. The homebots system and field test: A multi-commodity market for predictive power load management. In Proceedings Fourth Interna- tional Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, volume 1, pages 363–382, 1999. [84] Niv Buchbinder, Navendu Jain, and Ishai Menache. Online job-migration for reducing the electricity bill in the cloud. In International Conference on Research in Networking, pages 172–185. Springer, 2011. [85] Daniel F Salas and Warren B Powell. Benchmarking a scalable approximate dynamic programming algorithm for stochastic control of grid-level energy storage. INFORMS Journal on Computing, 30(1):106–123, 2018. [86] Powertac. 
https://powertac.org/, 2019. [87] Keith T Butler, Daniel W Davies, Hugh Cartwright, Olexandr Isayev, and Aron Walsh. Machine learning for molecular and materials science. Nature, 559(7715):547, 2018. [88] Junwen Bai, Yexiang Xue, Johan Bjorck, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Santosh K Suram, Robert Bruce van Dover, John M Gregoire, and Carla P Gomes. Phase mapper: Accelerating materials discovery with ai. AI Magazine, 39(1):15–26, 2018. [89] Carla P Gomes, Junwen Bai, Yexiang Xue, Johan Bj¨orck, Brendan Rappazzo, Sebastian Ament, Richard Bern- stein, Shufeng Kong, Santosh K Suram, R Bruce van Dover, et al. Crystal: a multi-agent ai system for automated mapping of materials’ crystal structures. MRS Communications, pages 1–9, 2019. [90] Santosh K Suram, Yexiang Xue, Junwen Bai, Ronan Le Bras, Brendan Rappazzo, Richard Bernstein, Johan Bjorck, Lan Zhou, R Bruce van Dover, Carla P Gomes, et al. Automated phase mapping with agilefd and its application to light absorber discovery in the v–mn–nb oxide system. ACS combinatorial science, 19(1):37–46, 2016. [91] Koji Fujimura, Atsuto Seko, Yukinori Koyama, Akihide Kuwabara, Ippei Kishida, Kazuki Shitara, Craig AJ Fisher, Hiroki Moriwake, and Isao Tanaka. Accelerated materials design of lithium superionic conductors based on first-principles calculations and machine learning algorithms. Advanced Energy Materials, 3(8):980–985, 2013. [92] Yue Liu, Tianlu Zhao, Wangwei Ju, and Siqi Shi. Materials discovery and design using machine learning. Journal of Materiomics, 3(3):159–177, 2017. [93] Rafael G´omez-Bombarelli, Jennifer N Wei, David Duvenaud, Jos´e Miguel Hern´andez-Lobato, Benjam´ın S´anchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Al´an Aspuru-Guzik. Automatic chemical design using a data-driven continuous representation of molecules. ACS central science, 4(2):268–276, 2018. [94] Mitsutaro Umehara, Helge S Stein, Dan Guevarra, Paul F Newhouse, David A Boyd, and John M Gregoire. An- alyzing machine learning models to accelerate generation of fundamental materials insights. npj Computational Materials, 5(1):34, 2019. [95] Subhashini Venugopalan and Varun Rai. Topic based classification and pattern identification in patents. Tech- nological Forecasting and Social Change, 94:236–250, 2015. [96] David Abel, Edward C Williams, Stephen Brawner, Emily Reif, and Michael L Littman. Bandit-based solar panel control. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. [97] Hany Abdelrahman, Felix Berkenkamp, Jan Poland, and Andreas Krause. Bayesian optimization for maximum power point tracking in photovoltaic power plants. In 2016 European Control Conference (ECC), pages 2078– 2083. IEEE, 2016. [98] J Zico Kolter, Zachary Jackowski, and Russ Tedrake. Design, analysis, and learning control of a fully actuated micro wind turbine. In 2012 American Control Conference (ACC), pages 2256–2263. IEEE, 2012. 68 [99] Srinivasan Iyengar, Stephen Lee, Daniel Sheldon, and Prashant Shenoy. Solarclique: Detecting anomalies in residential solar arrays. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, page 38. ACM, 2018. [100] Bistra Dilkina, Jayant R Kalagnanam, and Elena Novakovskaia. Method for designing the layout of turbines in a windfarm, November 17 2015. US Patent 9,189,570. [101] Rafał Weron. Electricity price forecasting: A review of the state-of-the-art with a look into the future. 
[421] Susan E Page, Florian Siegert, John O Rieley, Hans-Dieter V Boehm, Adi Jaya, and Suwido Limin. The amount of carbon released from peat and forest fires in indonesia during 1997. Nature, 420(6911):61, 2002. [422] Hans Joosten, Marja-Liisa Tapio-Bistr¨om, and Susanna Tol. Peatlands: guidance for climate change mitigation through conservation, rehabilitation and sustainable use. Food and Agriculture Organization of the United Nations, 2012. [423] Joseph Holden, PJ Chapman, and JC Labadz. Artificial drainage of peatlands: hydrological and hydrochemical process and wetland restoration. Progress in Physical Geography, 28(1):95–123, 2004. [424] Budiman Minasny, Budi Indra Setiawan, Satyanto Krido Saptomo, Alex B McBratney, et al. Open digital mapping as a cost-effective method for mapping peat thickness and assessing the carbon stock of tropical peatlands. Geoderma, 313:25–40, 2018. [425] Claudia Windeck. A new global peatland map expected for 2020. www.gislounge.com/new-global- peatland-map-expected-2020, 2018. [426] Pedro Rodr´ıguez-Veiga, James Wheeler, Valentin Louis, Kevin Tansey, and Heiko Balzter. Quantifying forest biomass carbon stocks from space. Current Forestry Reports, 3(1):1–18, 2017. 87 [427] Tara O’Shea. Developing the world’s first indicator of forest carbon stocks & emissions. //www.planet.com/pulse/developing-the-worlds-first-indicator-of-forest- carbon-stocks-emissions/, 2019. https: [428] Jean-Francois Bastin, Yelena Finegold, Claude Garcia, Danilo Mollicone, Marcelo Rezende, Devin Routh, Constantin M. Zohner, and Thomas W. Crowther. The global tree restoration potential. Science, 365(6448):76– 79, 2019. [429] Drones planting trees: An interview with BioCarbon engineering. https://medium.com/@ImpakterMag/drones- planting-trees-an-interview-with-biocarbon-engineering-33c536a22d5e. [430] Anthony LeRoy Westerling. Increasing western US forest wildfire activity: sensitivity to changes in the timing of spring. Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1696):20150178, 2016. [431] Claire A Montgomery. An agent and a consequence of land use change. The Oxford Handbook of Land Economics, page 281, 2014. [432] J Rhee, J Im, and S Park. Drought forecasting based on machine learning of remote sensing and long-range forecast data. APEC Climate Center, Republic of Korea, 2016. [433] PG Brodrick, LDL Anderegg, and GP Asner. Forest drought resistance at large geographic scales. Geophysical Research Letters, 2019. [434] Sriram Ganapathi Subramanian and Mark Crowley. Using spatial reinforcement learning to build forest wildfire dynamics models from satellite images. Frontiers in ICT, 5:6, 2018. [435] Sriram Ganapathi Subramanian and Mark Crowley. Combining MCTS and A3C for prediction of spatially spreading processes in forest wildfire settings. In Advances in Artificial Intelligence: 31st Canadian Conference on Artificial Intelligence, Canadian AI 2018, Toronto, ON, Canada, May 8–11, 2018, Proceedings 31, pages 285–291. Springer, 2018. [436] Rachel M Houtman, Claire A Montgomery, Aaron R Gagnon, David E Calkin, Thomas G Dietterich, Sean McGregor, and Mark Crowley. Allowing a wildfire to burn: estimating the effect on future fire suppression costs. International Journal of Wildland Fire, 22(7):871–882, 2013. [437] K MacDicken, ¨O Jonsson, L Pi˜na, S Maulo, V Contessa, Y Adikari, M Garzuglia, E Lindquist, G Reams, and R D’Annunzio. Global forest resources assessment 2015: how are the world’s forests changing? FAO, 2016. 
[438] Matthew G Hethcoat, David P Edwards, Joao MB Carreiras, Robert G Bryant, Filipe M Franca, and Shaun Quegan. A machine learning approach to map tropical selective logging. Remote Sensing of Environment, 221:569–582, 2019. [439] Christopher D Lippitt, John Rogan, Zhe Li, J Ronald Eastman, and Trevor G Jones. Mapping selective logging in mixed deciduous forest. Photogrammetric Engineering & Remote Sensing, 74(10):1201–1211, 2008. [440] AGSJ Baccini, SJ Goetz, WS Walker, NT Laporte, M Sun, D Sulla-Menashe, J Hackler, PSA Beck, R Dubayah, MA Friedl, et al. Estimated carbon dioxide emissions from tropical deforestation improved by carbon-density maps. Nature climate change, 2(3):182, 2012. [441] Ruth S DeFries, Richard A Houghton, Matthew C Hansen, Christopher B Field, David Skole, and John Town- shend. Carbon emissions from tropical deforestation and regrowth based on satellite observations for the 1980s and 1990s. Proceedings of the National Academy of Sciences, 99(22):14256–14261, 2002. [442] Rainforest connection. https://rfcx.org. [443] Silviaterra. https://www.silviaterra.com. 88 [444] Sabine Fuss, Josep G Canadell, Glen P Peters, Massimo Tavoni, Robbie M Andrew, Philippe Ciais, Robert B Jackson, Chris D Jones, Florian Kraxner, Nebosja Nakicenovic, et al. Betting on negative emissions. Nature climate change, 4(10):850, 2014. [445] T Gasser, C´eline Guivarch, K Tachiiri, CD Jones, and P Ciais. Negative emissions physically needed to keep global warming below 2C. Nature communications, 6:7958, 2015. [446] Ocean Studies Board, Engineering National Academies of Sciences, Medicine, et al. Negative Emissions Tech- nologies and Reliable Sequestration: A Research Agenda. National Academies Press, 2019. [447] David Sandalow, Julio Friedmann, and Colin McCormick. Direct air capture of carbon dioxide: ICEF roadmap 2018. 2018. [448] Jan C Minx, William F Lamb, Max W Callaghan, Sabine Fuss, Jerome Hilaire, Felix Creutzig, Thorben Amann, Tim Beringer, Wagner de Oliveira Garcia, Jens Hartmann, et al. Negative emissions part 1: Research landscape and synthesis. Environmental Research Letters, 13(6):063001, 2018. [449] Sabine Fuss, William F Lamb, Max W Callaghan, J´erˆome Hilaire, Felix Creutzig, Thorben Amann, Tim Beringer, Wagner de Oliveira Garcia, Jens Hartmann, Tarun Khanna, et al. Negative emissions part 2: Costs, potentials and side effects. Environmental Research Letters, 13(6):063002, 2018. [450] Gregory F Nemet, Max W Callaghan, Felix Creutzig, Sabine Fuss, Jens Hartmann, J´erˆome Hilaire, William F Lamb, Jan C Minx, Sophia Rogers, and Pete Smith. Negative emissions part 3: Innovation and upscaling. Environmental Research Letters, 13(6):063003, 2018. [451] Felix Creutzig, Nijavalli H Ravindranath, G¨oran Berndes, Simon Bolwig, Ryan Bright, Francesco Cherubini, Helena Chum, Esteve Corbera, Mark Delucchi, Andre Faaij, et al. Bioenergy and climate change mitigation: an assessment. Gcb Bioenergy, 7(5):916–944, 2015. [452] Carmenza Robledo-Abad, Hans-J¨org Althaus, G¨oran Berndes, Simon Bolwig, Esteve Corbera, Felix Creutzig, John Garcia-Ulloa, Anna Geddes, Jay S Gregg, Helmut Haberl, et al. Bioenergy production and sustainable development: science base for policymaking remains limited. Gcb Bioenergy, 9(3):541–556, 2017. [453] RD Schuiling and P Krijgsman. Enhanced weathering: an effective and cheap tool to sequester CO2. Climatic Change, 74(1-3):349–354, 2006. [454] Edward S. Rubin, John E. Davison, and Howard J. Herzog. The cost of CO2 capture and storage. 
International Journal of Greenhouse Gas Control, 40:378–400, September 2015. [455] Felix Creutzig, Christian Breyer, Jerome Hilaire, Jan Minx, Glen Peters, and Robert H Socolow. The mutual dependence of negative emission technologies and energy systems. Energy & Environmental Science, 2019. [456] V Zeleˇn´ak, M Badaniˇcov´a, D Halamova, J ˇCejka, A Zukal, N Murafa, and G Goerigk. Amine-modified ordered mesoporous silica: effect of pore size on carbon dioxide capture. Chemical Engineering Journal, 144(2):336– 342, 2008. [457] Veronica B Cashin, Daniel S Eldridge, Aimin Yu, and Dongyuan Zhao. Surface functionalization and manip- ulation of mesoporous silica adsorbents for improved removal of pollutants: a review. Environmental Science: Water Research & Technology, 4(2):110–128, 2018. [458] Paul Raccuglia, Katherine C Elbert, Philip DF Adler, Casey Falk, Malia B Wenny, Aurelio Mollo, Matthias Zeller, Sorelle A Friedler, Joshua Schrier, and Alexander J Norquist. Machine-learning-assisted materials dis- covery using failed experiments. Nature, 533(7601):73, 2016. [459] Geoffrey Holmes and David W Keith. An air–liquid contactor for large-scale capture of CO2 from air. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370(1974):4380–4403, 2012. 89 [460] M. D. Zoback and S. M. Gorelick. Earthquake triggering and large-scale geologic storage of carbon dioxide. Proceedings of the National Academy of Sciences, 109(26):10164–10168, June 2012. [461] Sandra ´O Snæbj¨ornsd´ottir and Sigurdur R Gislason. CO2 storage potential of basaltic rocks offshore Iceland. Energy Procedia, 86:371–380, 2016. [462] Mauricio Araya-Polo, Joseph Jennings, Amir Adler, and Taylor Dahlke. Deep-learning tomography. The Leading Edge, 37(1):58–66, 2018. [463] MA Celia, S Bachu, JM Nordbotten, and KW Bandilla. Status of CO2 storage in deep saline aquifers with emphasis on modeling approaches and practical simulations. Water Resources Research, 51(9):6846–6892, 2015. [464] Shaoxing Mo, Yinhao Zhu, Nicholas Zabaras, Xiaoqing Shi, and Jichun Wu. Deep convolutional encoder- decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media. Water Resources Research, 55(1):703–728, 2019. [465] Dylan Moriarty, Laura Dobeck, and Sally Benson. Rapid surface detection of CO2 leaks from geologic seques- tration sites. Energy Procedia, 63:3975–3983, 2014. [466] Bailian Chen, Dylan R Harp, Youzuo Lin, Elizabeth H Keating, and Rajesh J Pawar. Geologic co 2 seques- tration monitoring design: A machine learning and uncertainty quantification based approach. Applied energy, 225:332–345, 2018. [467] Jingfan Wang, Lyne P. Tchapmi, Arvind P. Ravikumar, Mike McGuire, Clay S. Bell, Daniel Zimmerle, Silvio Savarese, and Adam R. Brandt. Machine vision for natural gas methane emissions detection using an infrared camera. Applied Energy, 257:113998, 2020. [468] H Goosse, P Barriat, W Lefebvre, M Loutre, and V Zunz. Introduction to climate dynamics and climate modeling. 2008–2010. [469] IPCC. Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, R.K. Pachauri and L.A. Meyer (eds.)]. 2014. [470] K E Taylor, R J Stouffer, and G A Meehl. An overview of CMIP5 and the experiment design. Bulletin of the American Meteorological Society, 93(4):485–498, 2012. [471] V Eyring, S Bony, G A Meehl, C A Senior, R J Stouffer, and K E Taylor. 
Overview of the Coupled Model Inter- comparison Project phase 6 (CMIP6) experimental design and organization. Geoscientific Model Development, 6(LLNL-JRNL-736881), 2016. [472] J Kay, C Deser, A Phillips, A Mai, C Hannay, G Strand, J M Arblaster, S C Bates, G Danabasoglu, J Edwards, M Holland, P Kushner, J-F Lamarque, D Lawrence, K Lindsay, A Middleton, E Munoz, R Neale, K Oleson, L Polvani, and M Vertenstein. The Community Earth System Model (CESM) Large Ensemble project: A community resource for studying climate change in the presence of internal climate variability. Bulletin of the American Meteorological Society, 96(8):1333–1349, 2015. [473] J Carman, T Clune, F Giraldo, M Govett, B Gross, A Kamrathe, T Lee, D McCarren, J Michalakes, S Sandgathe, and T Whitcomb. Position paper on high performance computing needs in Earth system prediction. National Earth System Prediction Capability. Technical report, 2017. [474] D J Lary. Artificial intelligence in geoscience and remote sensing. In Aerospace Technologies Advancements, edited. 2010. [475] David J. Lary, Amir H. Alavi, Amir H. Gandomi, and Annette L. Walker. Machine learning in geosciences and remote sensing. Geoscience Frontiers, pages 1–9, 2015. 90 [476] Nataliia Kussul, Mykola Lavreniuk, Sergii Skakun, and Andrii Shelestov. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geoscience and Remote Sensing Letters, 14(5):778–782, 2017. [477] Y Gil, S. Pierce, Hassan Babaie, Arindam Banerjee, Kirk Borne, Gary Bust, Michelle Cheatham, Imme Ebert- Uphoff, Carla Gomes, Mary Hill, John Horel, Leslie Hsu, Jim Kinter, Craig Knoblock, David Krum, Vipin Kumar, Pierre Lermusiaux, Yan Liu, Chris North, Victor Pankratius, Shanan Peters, Beth Plale, Allen Pope, Sai Ravela, Juan Restrepo, Aaron Ridley, Hanan Samet, and Shashi Shekhar. Intelligent systems for geosciences: An essential research agenda. Communications of the ACM, 62:76–84, January 2019. [478] Surya Karthik Mukkavilli. EnviroNet: ImageNet for environment. In 18th Conference on Artificial and Com- putational Intelligence and its Applications to the Environmental Sciences. American Meteorological Society 99th Annual Meeting, 2019. [479] Imme Ebert-Uphoff, David Thompson, Ibrahim Demir, Yulia Gel, Mary Hill, Anuj Karpatne, Mariana Guereque, Vipin Kumar, Enrique Cabal-Cano, and Padhraic Smyth. A vision for the development of bench- marks to bridge geoscience and data science. 17th International Workshop on Climate Informatics, 2017. [480] Evan Racah, Christopher Beckham, Tegan Maharaj, Samira Ebrahimi Kahou, Prabhat, and Chris Pal. Ex- tremeWeather: A large-scale climate dataset for semi-supervised detection, localization, and understanding of extreme weather events. In Advances in Neural Information Processing Systems 30, pages 3402–3413. 2017. [481] Frederic Hourdin, Thorsten Mauritsen, Andrew Gettelman, Jean-Christophe Golaz, Venkatramani Balaji, Qingyun Duan, Doris Folini, Duoying Ji, Daniel Klocke, Yun Qian, Florian Rauser, Catherine Rio, Lorenzo Tomassini, Masahiro Watanabe, and Daniel Williamson. The art and science of climate model tuning. Bulletin of the American Meteorological Society, 98(3):589–602, 2017. [482] Steven C Sherwood, Sandrine Bony, and Jean-louis Dufresne. Spread in model climate sensitivity traced to atmospheric convective mixing. Nature, 505:37–42, 2014. [483] P Gentine, M Pritchard, S Rasp, G Reinaudi, and G Yacalis. Could machine learning break the convection parameterization deadlock? 
Geophysical Research Letters, 45:5742–5751, 2018. [484] Stephan Rasp, Michael S Pritchard, and Pierre Gentine. Deep learning to represent subgrid processes in climate models. Proceedings of the National Academy of Sciences, 115(39):1–6, 2018. [485] Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, and Prabhat. Deep learning and process understanding for data-driven earth system science. Nature, 566(7743):195– 204, 2019. [486] Robert E Kopp, Robert M Deconto, Daniel A Bader, Carling C Hay, M Radley, Scott Kulp, Michael Oppen- heimer, David Pollard, and Benjamin H Strauss. Evolving understanding of Antarctic ice-sheet physics and ambiguity in probabilistic sea-level projections. Earth’s Future, 5(12):1217–1233, 2017. [487] M.- `E. Gagn´e, N. P. Gillett, and J. C. Fyfe. Observed and simulated changes in Antarctic sea ice extent over the past 50 years. Geophysical Research Letters, 42:90–95, 2015. [488] Edward Hanna, Francisco J Navarro, Frank Pattyn, Catia M Domingues, Xavier Fettweis, Erik R Ivins, Robert J Nicholls, Catherine Ritz, Ben Smith, Slawek Tulaczyk, Pippa L Whitehouse, and H Jay Zwally. Ice-sheet mass balance and climate change. Nature, 498(7452):51–59, 2013. [489] Peer Nowack, Peter Braesicke, Joanna Haigh, Nathan Luke Abraham, and John Pyle. Using machine learning to build temperature-based ozone parameterizations for climate sensitivity simulations. Environmental Research Letters, 13(104016), 2018. [490] Claudia Tebaldi and Reto Knutti. The use of the multi-model ensemble in probabilistic climate projections. Philosophical Transactions of the Royal Society A, 365:2053–2075, 2007. 91 [491] Claire Monteleoni, Gavin A Schmidt, Shailesh Saroha, and Eva Asplund. Tracking climate models. Statistical Analysis and Data Mining, 4:372–392, 2011. [492] Scott Mcquade and Claire Monteleoni. Global climate model tracking using geospatial neighborhoods. Twenty- Sixth AAAI Conference on Artificial Intelligence, 2012. [493] E Strobach and G Bel. Improvement of climate predictions and reduction of their uncertainties using learning algorithms. Atmospheric Chemistry and Physics, 15:8631–8641, 2015. [494] Gemma Anderson and Donald D Lucas. Machine learning predictions of a multiresolution climate model ensemble. Geophysical Research Letters, 45:4273–4280, 2018. [495] Tapio Schneider, Shiwei Lan, Andrew Stuart, and Jo˜ao Teixeira. Earth system modeling 2.0 : a blueprint for models that learn from observations and targeted high-resolution simulations. Geophysical Research Letters, 44:12396–12417, 2017. [496] J Shukla. Predictability in the midst of chaos: a scientific basis for climate forecasting. Science, 282:728–731, 1998. [497] Judah Cohen, Dim Coumou, Jessica Hwang, Lester Mackey, Paulo Orenstein, Sonja Totz, and Eli Tziperman. S2S reboot: an argument for greater inclusion of machine learning in subseasonal to seasonal forecasts. WIREs Climate Change, 10, 2018. [498] Jessica Hwang, Paulo Orenstein, Judah Cohen, Karl Pfeiffer, and Lester Mackey. Improving subseasonal fore- casting in the western U.S. with machine learning. Proceedings of the 25th ACM SIGKDD International Con- ference on Knowledge Discovery & Data Mining, 2019. [499] David John Gagne, Amy McGovern, Sue Ellen Haupt, Ryan A Sobash, John K Williams, and Ming Xue. Storm-based probabilistic hail forecasting with machine learning applied to convection-allowing ensembles. Weather and forecasting, 32(5):1819–1840, 2017. 
[500] Amy McGovern, Kimberly L Elmore, David John Gagne, Sue Ellen Haupt, Christopher D Karstens, Ryan Lagerquist, Travis Smith, and John K Williams. Using artificial intelligence to improve real-time decision- making for high-impact weather. Bulletin of the American Meteorological Society, 98(10):2073–2090, 2017. [501] Yunjie Liu, Evan Racah, Prabhat, Joaquin Correa, Amir Khosrowshahi, David Lavers, Kenneth Kunkel, Michael Wehner, and William Collins. Application of deep convolutional neural networks for detecting extreme weather in climate datasets. International Conference on Advances in Big Data Analytics, 2016. [502] Thorsten Kurth, Sean Treichler, Joshua Romero, Mayur Mudigonda, Nathan Luehr, Everett Phillips, Ankur Mahesh, Michael Matheson, Jack Deslippe, Massimiliano Fatica, Prabhat, and Michael Houston. Exascale In Proceedings of the International Conference for High Performance deep learning for climate analytics. Computing, Networking, Storage, and Analysis, SC ’18, pages 51:1–51:12, Piscataway, NJ, USA, 2018. IEEE Press. [503] Valliappa Lakshmanan and Travis Smith. An objective method of evaluating and devising storm-tracking algo- rithms. Weather and Forecasting, 25:701–709, 2010. [504] Wan Li, Li Ni, Zhao-liang Li, Si-bo Duan, and Hua Wu. Evaluation of machine learning algorithms in spatial downscaling of MODIS land surface temperature. IEEE Journal of Selected Topics in Applied Earth Observa- tions and Remote Sensing, 12:2299–2307, 2019. [505] MC Perignon, P Passalacqua, TM Jarriel, JM Adams, and I Overeem. Patterns of geomorphic processes across deltas using image analysis and machine learning. In AGU Fall Meeting Abstracts, 2018. [506] Muhammed Sit and Ibrahim Demir. Decentralized flood forecasting using deep neural networks. Preprint arXiv:1902.02308, 2019. 92 [507] Maziar Raissi and George Em Karniadakis. Hidden physics models: machine learning of nonlinear partial differential equations. Journal of Computational Physics, 357:125–141, 2018. [508] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (Part I): data- driven solutions of nonlinear partial differential equations. Preprint, 2017. [509] D. D. Lucas, R. Klein, J. Tannahill, D. Ivanova, S. Brandon, D. Domyancic, and Y. Zhang. Failure analysis of parameter-induced simulation crashes in climate models. Geoscientific Model Development, 6:1157–1171, 2013. [510] M Jiang, B Gallagher, J Kallman, and D Laney. A supervised learning framework for arbitrary Lagrangian- Eulerian simulations. In 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA, 2016. [511] J Ling and J Templeton. Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier Stokes uncertainty. Physics of Fluids, 27(085103), 2015. [512] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in Neural Information Processing Systems, 2017. [513] Jayaraman Thiagarajan, Nikhil Jain, Rushil Anirudh, Alfredo Giminez, Rahul Sridhar, Marathe Aniruddha, Tao Wang, Mural Emani, Abhinav Bhatele, and Todd Gamblin. Bootstrapping parameter space exploration for fast tuning. In Proceedings of the 2018 International Conference on Supercomputing, pages 385–395, 2018. [514] D J Lary, G K Zewdie, X Liu, D Wu, E Levetin, Allee R J, Nabin Malakar, A Walker, H Mussa, Mannino A, and Aurin D. Machine learning for applications for Earth observation. 
Earth Observation Open Science and Innovation, (165), 2018. [515] John Quinn, Vanessa Frias-Martinez, and Lakshminarayan Subramanian. Computational sustainability and artificial intelligence in the developing world. AI Magazine, 35(3):36, 2014. [516] Christopher Potter, Shyam Boriah, Michael Steinbach, Vipin Kumar, and Steven Klooster. Terrestrial vegetation dynamics and global climate controls. Climate Dynamics, 31(1):67–78, 2008. [517] Shyam Boriah, Vipin Kumar, Michael Steinbach, Christopher Potter, and Steven Klooster. Land cover change In Proceedings of the 14th ACM SIGKDD international conference on Knowledge detection: a case study. discovery and data mining, pages 857–865. ACM, 2008. [518] Kolya Malkin, Caleb Robinson, Le Hou, Rachel Soobitsky, Jacob Czawlytko, Dimitris Samaras, Joel Saltz, Lucas Joppa, and Nebojsa Jojic. Label super-resolution networks. 2018. [519] Lior Bragilevsky and Ivan V Baji´c. Deep learning for Amazon satellite image analysis. In 2017 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM), pages 1–5. IEEE, 2017. [520] Nate G McDowell, Nicholas C Coops, Pieter SA Beck, Jeffrey Q Chambers, Chandana Gangodagamage, Jef- frey A Hicke, Cho-ying Huang, Robert Kennedy, Dan J Krofcheck, Marcy Litvak, et al. Global satellite moni- toring of climate-induced vegetation disturbances. Trends in plant science, 20(2):114–123, 2015. [521] Duy Huynh and Nathalie Neptune. Annotation automatique d’images: le cas de la d´eforestation. In Actes de la conf´erence Traitement Automatique de la Langue Naturelle, TALN 2018, page 101. [522] Kirk R Klausmeyer and M Rebecca Shaw. Climate change, habitat loss, protected areas and the climate adap- tation potential of species in Mediterranean ecosystems worldwide. PloS one, 4(7):e6392, 2009. [523] Xiaohui Feng, Mar´ıa Uriarte, Grizelle Gonz´alez, Sasha Reed, Jill Thompson, Jess K Zimmerman, and Lora Murphy. Improving predictions of tropical forest response to climate change through integration of field studies and ecosystem modeling. Global change biology, 24(1):e213–e232, 2018. 93 [524] Jane K Hart and Kirk Martinez. Environmental sensor networks: A revolution in the earth system science? Earth-Science Reviews, 78(3-4):177–191, 2006. [525] RW Hut, NC van de Giesen, and JS Selker. The TAHMO project: Designing an unconventional weather station. In EGU General Assembly Conference Abstracts, volume 14, page 8963, 2012. [526] G Griffiths, NW Millard, SD McPhail, P Stevenson, JR Perrett, M Peabody, AT Webb, and DT Meldrum. Towards environmental monitoring with the Autosub autonomous underwater vehicle. In Proceedings of 1998 International Symposium on Underwater Technology, pages 121–125. IEEE, 1998. [527] Matthew Dunbabin and Lino Marques. Robots for environmental monitoring: Significant advancements and applications. IEEE Robotics & Automation Magazine, 19(1):24–39, 2012. [528] Ethan W Dereszynski and Thomas G Dietterich. Probabilistic models for anomaly detection in remote sensor data streams. Preprint arXiv:1206.5250, 2012. [529] David J Hill and Barbara S Minsker. Anomaly detection in streaming environmental sensor data: A data-driven modeling approach. Environmental Modelling & Software, 25(9):1014–1022, 2010. [530] Jnaneshwar Das, Fr´ed´eric Py, Julio BJ Harvey, John P Ryan, Alyssa Gellene, Rishi Graham, David A Caron, Kanna Rajan, and Gaurav S Sukhatme. Data-driven robotic sampling for marine ecosystem monitoring. The International Journal of Robotics Research, 34(12):1435–1452, 2015. 
[531] Genevieve Flaspohler, Nicholas Roy, and Yogesh Girdhar. Feature discovery and visualization of robot mission data using convolutional autoencoders and bayesian nonparametric topic models. In 2017 IEEE/RSJ Interna- tional Conference on Intelligent Robots and Systems (IROS), pages 1–8. IEEE, 2017. [532] Jochem Marotzke, Christian Jakob, Sandrine Bony, Paul A Dirmeyer, Paul A O’Gorman, Ed Hawkins, Sarah Perkins-Kirkpatrick, Corinne Le Quere, Sophie Nowicki, Katsia Paulavets, et al. Climate research must sharpen its view. Nature climate change, 7(2):89, 2017. [533] Project Zamba computer vision for wildlife research & conservation. https://zamba.drivendata. org/. [534] Sara Beery, Yang Liu, Dan Morris, Jim Piavis, Ashish Kapoor, Markus Meister, and Pietro Perona. Synthetic examples improve generalization for rare classes. Preprint arXiv:1904.05916, 2019. [535] Mohammad Sadegh Norouzzadeh, Anh Nguyen, Margaret Kosmala, Alexandra Swanson, Meredith S Palmer, Craig Packer, and Jeff Clune. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proceedings of the National Academy of Sciences, 115(25):E5716–E5725, 2018. [536] Jan C van Gemert, Camiel R Verschoor, Pascal Mettes, Kitso Epema, Lian Pin Koh, and Serge Wich. Nature conservation drones for automatic localization and counting of animals. In European Conference on Computer Vision, pages 255–270. Springer, 2014. [537] Dana M Ghioca-Robrecht, Carol A Johnston, and Mirela G Tulbure. Assessing the use of multiseason quickbird imagery for mapping invasive species in a lake erie coastal marsh. Wetlands, 28(4):1028–1039, 2008. [538] Robin Faillettaz, Marc Picheral, Jessica Y Luo, C´edric Guigand, Robert K Cowen, and Jean-Olivier Irisson. Imperfect automatic image classification successfully describes plankton distribution patterns. Methods in Oceanography, 15:60–77, 2016. [539] Grace Young, Vassileios Balntas, and Victor Prisacariu. Convolutional neural networks predict fish abundance from underlying coral reef texture. MarXiv. August, 31, 2018. [540] Brian L Sullivan, Christopher L Wood, Marshall J Iliff, Rick E Bonney, Daniel Fink, and Steve Kelling. ebird: A citizen-based bird observation network in the biological sciences. Biological Conservation, 142(10):2282– 2292, 2009. 94 [541] PlantSnap. Homepage. https://www.plantsnap.com/. [542] Simone Branchini, Francesco Pensa, Patrizia Neri, Bianca Maria Tonucci, Lisa Mattielli, Anna Collavo, Maria Elena Sillingardi, Corrado Piccinetti, Francesco Zaccanti, and Stefano Goffredo. Using a citizen science program to monitor coral reef biodiversity through space and time. Biodiversity and conservation, 24(2):319– 336, 2015. [543] Sreejith Menon, Tanya Berger-Wolf, Emre Kiciman, Lucas Joppa, Charles V Stewart, Jason Parham, Jonathan Crall, Jason Holmberg, and Jonathan Van Oast. Animal population estimation using flickr images. 2016. [544] Jeffrey F Kelly, Kyle G Horton, Phillip M Stepanian, Kirsten M de Beurs, Todd Fagin, Eli S Bridge, and Phillip B Chilson. Novel measures of continental-scale avian migration phenology related to proximate envi- ronmental cues. Ecosphere, 7(9), 2016. [545] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber, Jessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Belongie. Building a bird recognition app and large scale dataset with citizen scientists: The fine print In Proceedings of the IEEE Conference on Computer Vision and Pattern in fine-grained dataset collection. Recognition, pages 595–604, 2015. 
[546] Eric Ralls. Systems and methods for electronically identifying plant species, November 8 2018. US Patent App. 15/973,660. [547] Grant Van Horn and Pietro Perona. The devil is in the tails: Fine-grained classification in the wild. Preprint arXiv:1709.01450, 2017. [548] Yexiang Xue, Ian Davies, Daniel Fink, Christopher Wood, and Carla P Gomes. Avicaching: A two stage game In Proceedings of the 2016 International Conference on Autonomous for bias reduction in citizen science. Agents & Multiagent Systems, pages 776–785. International Foundation for Autonomous Agents and Multiagent Systems, 2016. [549] Di Chen and Carla P Gomes. Bias reduction via end-to-end shift learning: Application to citizen science. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019. [550] Pushpendra Rana and Daniel C Miller. Machine learning to analyze the social-ecological impacts of natural resource policy: insights from community forest management in the indian himalaya. Environmental Research Letters, 2018. [551] Heidi J Albers, Kim Meyer Hall, Katherine D Lee, Majid Alkaee Taleghan, and Thomas G Dietterich. The role of restoration and key ecological invasion mechanisms in optimal spatial-dynamic management of invasive species. Ecological Economics, 151:44–54, 2018. [552] Andreas Lydakis, Jenica M Allen, Marek Petrik, and Tim Szewczyk. Computing robust strategies for managing invasive plants. [553] Rajendra K Pachauri. Climate Change 2014 Synthesis Report. 2014. [554] Mark Pelling. Adaptation to climate change: from resilience to transformation. Routledge, 2010. [555] Linda Shi, Eric Chu, Isabelle Anguelovski, Alexander Aylett, Jessica Debats, Kian Goh, Todd Schenk, Karen C Seto, David Dodman, Debra Roberts, et al. Roadmap towards justice in urban climate adaptation research. Nature Climate Change, 6(2):131, 2016. [556] Amrita Gupta, Caleb Robinson, and Bistra Dilkina. Infrastructure resilience for climate adaptation. In Pro- ceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, page 28. ACM, 2018. [557] Vanessa Frias-Martinez, Cristina Soguero, and Enrique Frias-Martinez. Estimation of urban commuting pat- terns using cellphone network data. In Proceedings of the ACM SIGKDD international workshop on urban computing, pages 9–16. ACM, 2012. 95 [558] Vipin Jain, Ashlesh Sharma, and Lakshminarayanan Subramanian. Road traffic congestion in the developing world. In Proceedings of the 2nd ACM Symposium on Computing for Development, page 11. ACM, 2012. [559] David Pastor-Escuredo, Alfredo Morales-Guzm´an, Yolanda Torres-Fern´andez, Jean-Martin Bauer, Amit Wad- hwa, Carlos Castro-Correa, Liudmyla Romanoff, Jong Gun Lee, Alex Rutherford, Vanessa Frias-Martinez, et al. Flooding through the lens of mobile phone activity. In IEEE Global Humanitarian Technology Confer- ence (GHTC 2014), pages 279–286. IEEE, 2014. [560] Ami Wiesel, Avinatan Hassidim, Gal Elidan, Guy Shalev, Mor Schlesinger, Oleg Zlydenko, Ran El-Yaniv, Sella Nevo, Yossi Matias, Yotam Gigi, et al. Ml for flood forecasting at scale. 2018. [561] Barak Oshri, Annie Hu, Peter Adelson, Xiao Chen, Pascaline Dupas, Jeremy Weinstein, Marshall Burke, David Lobell, and Stefano Ermon. Infrastructure quality assessment in africa using satellite imagery and deep learning. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 616–625. ACM, 2018. [562] Fitore Muharemi, Doina Logof˘atu, and Florin Leon. 
Machine learning approaches for anomaly detection of water quality on a real-world data set. Journal of Information and Telecommunication, pages 1–14, 2019. [563] Roshanak Nateghi. Multi-dimensional infrastructure resilience modeling: An application to hurricane-prone electric power distribution systems. IEEE Access, 6:13478–13489, 2018. [564] Mathaios Panteli and Pierluigi Mancarella. The grid: Stronger bigger smarter?: Presenting a conceptual frame- work of power system resilience. IEEE Power Energy Mag, 13(3):58–66, 2015. [565] Xi Fang, Satyajayant Misra, Guoliang Xue, and Dejun Yang. Smart grid—the new and improved power grid: A survey. IEEE communications surveys & tutorials, 14(4):944–980, 2012. [566] Sarah Fletcher, Megan Lickley, and Kenneth Strzepek. Learning about climate change uncertainty enables flexible water infrastructure planning. Nature communications, 10(1):1782, 2019. [567] I Delpla, A-V Jung, E Baures, M Clement, and O Thomas. Impacts of climate change on surface water quality in relation to drinking water production. Environment international, 35(8):1225–1233, 2009. [568] The water, peace and security partnership. Institute for Water Education website, 2019. [569] Julianne D Quinn, Patrick M Reed, and Klaus Keller. Direct policy search for robust multi-objective manage- ment of deeply uncertain socio-ecological tipping points. Environmental Modelling & Software, 92:125–141, 2017. [570] Matteo Giuliani, Andrea Castelletti, Francesca Pianosi, Emanuele Mason, and Patrick M Reed. Curses, trade- offs, and scalable management: Advancing evolutionary multiobjective direct policy search to improve water reservoir operations. Journal of Water Resources Planning and Management, 142(2):04015050, 2015. [571] Chaopeng Shen. A trans-disciplinary review of deep learning research for water resources scientists. Preprint arXiv:1712.02162, 2017. [572] Satyam Srivastava, Saikrishna Vaddadi, Pankaj Kumar, and Shashikant Sadistap. Design and development of reverse osmosis (RO) plant status monitoring system for early fault prediction and predictive maintenance. Applied Water Science, 8(6):159, 2018. [573] Otilia Elena Dragomir, Rafael Gouriveau, Florin Dragomir, Eugenia Minca, and Noureddine Zerhouni. Review of prognostic problem in condition-based maintenance. In 2009 European Control Conference (ECC), pages 1587–1592. IEEE, 2009. [574] Zubair A Baig. On the use of pattern matching for rapid anomaly detection in smart grid infrastructures. In 2011 IEEE International Conference on Smart Grid Communications (SmartGridComm), pages 214–219. IEEE, 2011. 96 [575] Djellel Eddine Difallah, Philippe Cudre-Mauroux, and Sean A McKenna. Scalable anomaly detection for smart city infrastructure networks. IEEE Internet Computing, 17(6):39–47, 2013. [576] JR Porter, L Xie, AJ Challinor, K Cochrane, MM Howden, DB Lobell, and MI Travasso. Food security and food production systems. Climate Change 2014: Impacts, Adaptation, Vulnerability., pages 485–533, 2014. [577] Aiguo Dai. Drought under global warming: a review. Wiley Interdisciplinary Reviews: Climate Change, 2(1):45–65, 2011. [578] Adeline Decuyper, Alex Rutherford, Amit Wadhwa, Jean-Martin Bauer, Gautier Krings, Thoralf Gutierrez, Vincent D Blondel, and Miguel A Luengo-Oroz. Estimating food consumption and poverty indices with mobile phone data. Preprint arXiv:1412.2595, 2014. [579] UN Global Pulse. Using mobile phone data and airtime credit purchases to estimate food security. 
New York: UN World Food Programme (WFP), Universit´e Catholique de Louvain, Real Impact Analytics, Pulse Lab New York, 2015. [580] Jaewoo Kim, Meeyoung Cha, and Jong Gun Lee. Nowcasting commodity prices using social media. PeerJ Computer Science, 3:e126, 2017. [581] S. Chakraborty and A. C. Newton. Climate change, plant diseases and food security: an overview. Plant Pathology, 60(1):2–14, jan 2011. [582] Anna X Wang, Caelin Tran, Nikhil Desai, David Lobell, and Stefano Ermon. Deep transfer learning for crop yield prediction with remote sensing data. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, page 50. ACM, 2018. [583] C Tebaldi and DB Lobell. Towards probabilistic projections of climate change impacts on global crop yields. Geophysical Research Letters, 35(8), 2008. [584] Cynthia Rosenzweig, Joshua Elliott, Delphine Deryng, Alex C Ruane, Christoph M¨uller, Almut Arneth, Ken- neth J Boote, Christian Folberth, Michael Glotter, Nikolay Khabarov, et al. Assessing agricultural risks of climate change in the 21st century in a global gridded crop model intercomparison. Proceedings of the Na- tional Academy of Sciences, 111(9):3268–3273, 2014. [585] Venkata Shashank Konduri, Jitendra Kumar, Forrest Hoffman, Udit Bhatia, Tarik Gouthier, and Auroop Gan- guly. Physics-guided data science for food security and climate. KDD Feed Workshop 2019. [586] Michela Paganini, Luke de Oliveira, and Benjamin Nachman. Accelerating science with generative adver- sarial networks: an application to 3D particle showers in multilayer calorimeters. Physical review letters, 120(4):042003, 2018. [587] Max Welling. Are ML and statistics complementary? In IMS-ISBA Meeting on ‘Data Science in the Next 50 Years, 2015. [588] Marie- `Eve Rancourt, Jean-Franc¸ois Cordeau, Gilbert Laporte, and Ben Watkins. Tactical network planning for food aid distribution in Kenya. Computers & Operations Research, 56:68–83, 2015. [589] Gautam Prasad, Upendra Reddy Vuyyuru, and Mithun Das Gupta. Agriculture commodity arrival prediction using remote sensing data: Insights and beyond. KDD Feed Workshop 2019. https://drive.google. com/file/d/1BQ5QH036yifiza8TOKt_8FbimYyQ0SYA/view. [590] DrivenData. Mapping agricultural supply chains from source to shelf. http://drivendata.co/case- studies/mapping-agricultural-supply-chains-from-source-to-shelf/. [591] Ernest Mwebaze, Washington Okori, and John Alexander Quinn. Causal structure learning for famine predic- tion. In 2010 AAAI Spring Symposium Series, 2010. 97 [592] Arun Agrawal and Nicolas Perrin. Climate adaptation, local institutions and rural livelihoods. Adapting to climate change: thresholds, values, governance, pages 350–367, 2009. [593] Daivi Rodima-Taylor. Social innovation and climate adaptation: Local collective action in diversifying Tanza- nia. Applied Geography, 33:128–134, 2012. [594] Solomon Assefa. Hello Tractor pilot agriculture digital wallet based on AI and blockchain. [595] UN Global Pulse. Landscaping study: Digital signals & access to finance in Kenya, Sep 2013. [596] Vanessa Frias-Martinez, Victor Soto, Jesus Virseda, and Enrique Frias-Martinez. Computing cost-effective census maps from cell phone traces. In Workshop on pervasive urban applications, 2012. [597] Vukosi Marivate and Nyalleng Moorosi. Employment relations: a data driven analysis of job markets using online job boards and online professional networks. In Proceedings of the International Conference on Web Intelligence, pages 1110–1113. ACM, 2017. 
[598] Kirk Bansak, Jeremy Ferwerda, Jens Hainmueller, Andrea Dillon, Dominik Hangartner, Duncan Lawrence, and Jeremy Weinstein. Improving refugee integration through data-driven algorithmic assignment. Science, 359(6373):325–329, 2018. [599] UN Global Pulse. Improving professional training in Indonesia with gaming data, 2017. [600] Emilio Zagheni, Ingmar Weber, Krishna Gummadi, et al. Leveraging Facebook’s advertising platform to mon- itor stocks of migrants. Population and Development Review, 43(4):721–734, 2017. [601] Sibren Isaacman, Vanessa Frias-Martinez, Lingzi Hong, and Enrique Frias-Martinez. Climate change induced migrations from a cell phone perspective. NetMob, page 46, 2017. [602] Joshua E Blumenstock. Inferring patterns of internal migration from mobile phone call records: evidence from Rwanda. Information Technology for Development, 18(2):107–125, 2012. [603] John A Quinn, Marguerite M Nyhan, Celia Navarro, Davide Coluccia, Lars Bromley, and Miguel Luengo- Oroz. Humanitarian applications of machine learning with remote-sensing data: review and case study in refugee settlement mapping. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128):20170363, 2018. [604] Katherine Hoffmann Pham, Jeremy Boy, and Miguel Luengo-Oroz. Data fusion to describe and quantify search and rescue operations in the Mediterranean sea. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 514–523. IEEE, 2018. [605] Vincenzo Lomonaco, Angelo Trotta, Marta Ziosi, Juan De Dios Y´a˜nez ´Avila, and Natalia D´ıaz-Rodr´ıguez. Intelligent drone swarm for search and rescue operations at sea. Preprint arXiv:1811.05291, 2018. [606] UN Global Pulse. Social media and forced displacement: Big data analytics & machine learning, Sep 2017. [607] Andy Haines, R Sari Kovats, Diarmid Campbell-Lendrum, and Carlos Corval´an. Climate change and human health: impacts, vulnerability and public health. Public health, 120(7):585–596, 2006. [608] MC Sarofim, Shubhayu Saha, MD Hawkins, DM Mills, Jeremy J Hess, Radley M Horton, Patrick L Kinney, Joel D Schwartz, and Alexis St Juliana. The impacts of climate change on human health in the United States: a scientific assessment. The Impacts of Climate Change on Human Health in the United States: A Scientific Assessment, 2016. [609] Joel Schwartz, Jonathan M Samet, and Jonathan A Patz. Hospital admissions for heart disease: the effects of temperature and humidity. Epidemiology, 15(6):755–761, 2004. [610] Francesca Dominici, Roger D Peng, Michelle L Bell, Luu Pham, Aidan McDermott, Scott L Zeger, and Jonathan M Samet. Fine particulate air pollution and hospital admission for cardiovascular and respiratory diseases. Jama, 295(10):1127–1134, 2006. 98 [611] Muin J Khoury, Tram Kim Lam, John PA Ioannidis, Patricia Hartge, Margaret R Spitz, Julie E Buring, Stephen J Chanock, Robert T Croyle, Katrina A Goddard, Geoffrey S Ginsburg, et al. Transforming epidemiology for 21st century medicine and public health. Cancer Epidemiology and Prevention Biomarkers, 22(4):508–516, 2013. [612] Marcel Salathe, Linus Bengtsson, Todd J Bodnar, Devon D Brewer, John S Brownstein, Caroline Buckee, Ellsworth M Campbell, Ciro Cattuto, Shashank Khandelwal, Patricia L Mabry, et al. Digital epidemiology. PLoS computational biology, 8(7):e1002616, 2012. [613] Nicholas Clinton and Peng Gong. MODIS detected surface urban heat islands and sinks: Global locations and controls. Remote Sensing of Environment, 134:294–304, 2013. 
[614] Hung Chak Ho, Anders Knudby, Paul Sirovyak, Yongming Xu, Matus Hodul, and Sarah B Henderson. Mapping maximum urban air temperature on hot summer days. Remote Sensing of Environment, 154:38–45, 2014. [615] Jackson Voelkel, Vivek Shandas, and Brendon Haggerty. Peer reviewed: Developing high-resolution descrip- tions of urban heat islands: A public health imperative. Preventing chronic disease, 13, 2016. [616] Sidrah Hafeez, Man Sing Wong, Hung Chak Ho, Majid Nazeer, Janet Nichol, Sawaid Abbas, Danling Tang, Kwon Ho Lee, and Lilian Pun. Comparison of machine learning algorithms for retrieval of water quality indicators in case-II waters: a case study of Hong Kong. Remote Sensing, 11(6):617, 2019. [617] Nikhil Kumar Koditala and Purnendu Shekar Pandey. Water quality monitoring system using IoT and machine learning. In 2018 International Conference on Research in Intelligent and Computing in Engineering (RICE), pages 1–5. IEEE, 2018. [618] Qian Di, Petros Koutrakis, Christine Choirat, Francesca Dominici, and Joel D Schwartz. Machine learning In ISEE approach for spatially and temporally resolved PM2.5 exposures in the continental United States. Conference Abstracts, 2018. [619] Jie Chen, Kees de Hoogh, Maciek Strak, Jules Kerckhoffs, Roel Vermeulen, Bert Brunekreef, and Gerard Hoek. OP III–4 exposure assessment models for NO2 and PM2.5 in the elapse study: a comparison of supervised linear regression and machine learning approaches, 2018. [620] Nick Watts, W Neil Adger, Sonja Ayeb-Karlsson, Yuqi Bai, Peter Byass, Diarmid Campbell-Lendrum, Tim Colbourn, Peter Cox, Michael Davies, Michael Depledge, et al. The Lancet Countdown: tracking progress on health and climate change. The Lancet, 389(10074):1151–1164, 2017. [621] Alex Pentland, David Lazer, Devon Brewer, and Tracy Heibeck. Using reality mining to improve public health and medicine. Stud Health Technol Inform, 149:93–102, 2009. [622] Sherri Rose. Mortality risk score prediction in an elderly population using machine learning. American journal of epidemiology, 177(5):443–452, 2013. [623] Patrick Meier. Human computation for disaster response. In Handbook of human computation, pages 95–104. Springer, 2013. [624] Carlos Castillo. Big crisis data: Social media in disasters and time-critical situations. Cambridge University Press, 2016. [625] William A Yasnoff, Patrick W O Carroll, Denise Koo, Robert W Linkins, and Edwin M Kilbourne. Public health informatics: improving and transforming public health in the information age. Journal of Public Health Management and Practice, 6(6):67–75, 2000. [626] Fahad Pervaiz, Mansoor Pervaiz, Nabeel Abdur Rehman, and Umar Saif. FluBreaks: early epidemic detection from Google flu trends. Journal of medical Internet research, 14(5):e125, 2012. 99 In Joint European conference on machine learning and knowledge discovery in databases, pages 599–602. Springer, 2010. [628] Michael A Johansson, Nicholas G Reich, Aditi Hota, John S Brownstein, and Mauricio Santillana. Evaluating the performance of infectious disease forecasts: A comparison of climate-driven and seasonal dengue forecasts for Mexico. Scientific reports, 6:33707, 2016. [629] David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani. The parable of Google Flu: traps in big data analysis. Science, 343(6176):1203–1205, 2014. [630] Sudhakar V Nuti, Brian Wayda, Isuru Ranasinghe, Sisi Wang, Rachel P Dreyer, Serene I Chen, and Karthik Murugiah. The use of google trends in health care research: a systematic review. PloS one, 9(10):e109583, 2014. 
[631] Charles C Onu, Innocent Udeogu, Eyenimi Ndiomu, Urbain Kengni, Doina Precup, Guilherme M Sant’Anna, Edward Alikor, and Peace Opara. Ubenwa: Cry-based diagnosis of birth asphyxia. Preprint arXiv:1711.06405, 2017. [632] John A Quinn, Alfred Andama, Ian Munabi, and Fred N Kiwanuka. Automated blood smear analysis for mobile malaria diagnosis. Mobile Point-of-Care Monitors and Diagnostic Device Design, 31:115, 2014. [633] Joel Robertson and Del J DeHart. An agile and accessible adaptation of Bayesian inference to medical diag- nostics for rural health extension workers. In 2010 AAAI Spring Symposium Series, 2010. [634] Emma Brunskill and Neal Lesh. Routing for rural health: optimizing community health worker visit schedules. In 2010 AAAI Spring Symposium Series, 2010. [635] Jigar Doshi, Saikat Basu, and Guan Pang. From satellite imagery to disaster insights. arXiv:1812.07033, 2018. Preprint [636] Favyen Bastani, Songtao He, Sofiane Abbar, Mohammad Alizadeh, Hari Balakrishnan, Sanjay Chawla, and In Proceedings of the 26th ACM SIGSPATIAL International Sam Madden. Machine-assisted map editing. Conference on Advances in Geographic Information Systems, pages 23–32. ACM, 2018. [637] Stefan Voigt, Thomas Kemper, Torsten Riedlinger, Ralph Kiefl, Klaas Scholte, and Harald Mehl. Satellite image IEEE transactions on geoscience and remote sensing, analysis for disaster and crisis-management support. 45(6):1520–1528, 2007. [638] Ritwik Gupta, Bryce Goodman, Nirav Patel, Ricky Hosfelt, Sandra Sajeev, Eric Heim, Jigar Doshi, Keane Lu- cas, Howie Choset, and Matthew Gaston. Creating xBD: A dataset for assessing building damage from satellite In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, imagery. pages 10–17, 2019. [639] Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Sarah Vieweg. CrisisLex: A lexicon for collecting and filtering microblogged communications in crises. In Eighth International AAAI Conference on Weblogs and Social Media, 2014. [640] Muhammad Imran, Carlos Castillo, Fernando Diaz, and Sarah Vieweg. Processing social media messages in mass emergency: A survey. ACM Computing Surveys (CSUR), 47(4):67, 2015. [641] David W Keith. Geoengineering the climate: History and prospect. Annual review of energy and the environ- ment, 25(1):245–284, 2000. [642] John G Shepherd. Geoengineering the climate: science, governance and uncertainty. Royal Society, 2009. [643] Peter J Irvine, Ben Kravitz, Mark G Lawrence, and Helene Muri. An overview of the Earth system science of solar geoengineering. Wiley Interdisciplinary Reviews: Climate Change, 7(6):815–833, 2016. 100 [644] David Keith and Peter Irvine. The science and technology of solar geoengineering: A compact summary. Governance of the Deployment of Solar Geoengineering, page 1, 2018. [645] Andy Parker and Peter J Irvine. The risk of termination shock from solar geoengineering. Earth’s Future, 6(3):456–467, 2018. [646] Peter Irvine, Kerry Emanuel, Jie He, Larry W Horowitz, Gabriel Vecchi, and David Keith. Halving warming with idealized solar geoengineering moderates key climate hazards. Nature Climate Change, page 1, 2019. [647] Andy Jones, Jim Haywood, and Olivier Boucher. Climate impacts of geoengineering marine stratocumulus clouds. Journal of Geophysical Research: Atmospheres, 114(D10), 2009. [648] Trude Storelvmo, WR Boos, and N Herger. Cirrus cloud seeding: a climate engineering mechanism with reduced side effects? 
{ "id": "1806.06094" }
1906.03193
Fighting Quantization Bias With Bias
Low-precision representation of deep neural networks (DNNs) is critical for efficient deployment of deep learning application on embedded platforms, however, converting the network to low precision degrades its performance. Crucially, networks that are designed for embedded applications usually suffer from increased degradation since they have less redundancy. This is most evident for the ubiquitous MobileNet architecture which requires a costly quantization-aware training cycle to achieve acceptable performance when quantized to 8-bits. In this paper, we trace the source of the degradation in MobileNets to a shift in the mean activation value. This shift is caused by an inherent bias in the quantization process which builds up across layers, shifting all network statistics away from the learned distribution. We show that this phenomenon happens in other architectures as well. We propose a simple remedy - compensating for the quantization induced shift by adding a constant to the additive bias term of each channel. We develop two simple methods for estimating the correction constants - one using iterative evaluation of the quantized network and one where the constants are set using a short training phase. Both methods are fast and require only a small amount of unlabeled data, making them appealing for rapid deployment of neural networks. Using the above methods we are able to match the performance of training-based quantization of MobileNets at a fraction of the cost.
http://arxiv.org/pdf/1906.03193
Alexander Finkelstein, Uri Almog, Mark Grobman
cs.LG, stat.ML
Accepted to ECV workshop at CVPR2019
null
cs.LG
20190607
20190607
# Fighting Quantization Bias With Bias

Alexander Finkelstein∗, Uri Almog∗, Mark Grobman
Hailo Technologies
alex.finkelstein,uri.almog,[email protected]

# Abstract

Low-precision representation of deep neural networks (DNNs) is critical for efficient deployment of deep learning application on embedded platforms, however, converting the network to low precision degrades its performance. Crucially, networks that are designed for embedded applications usually suffer from increased degradation since they have less redundancy. This is most evident for the ubiquitous MobileNet architecture [10, 20] which requires a costly quantization-aware training cycle to achieve acceptable performance when quantized to 8-bits. In this paper, we trace the source of the degradation in MobileNets to a shift in the mean activation value. This shift is caused by an inherent bias in the quantization process which builds up across layers, shifting all network statistics away from the learned distribution. We show that this phenomenon happens in other architectures as well. We propose a simple remedy - compensating for the quantization induced shift by adding a constant to the additive bias term of each channel. We develop two simple methods for estimating the correction constants - one using iterative evaluation of the quantized network and one where the constants are set using a short training phase. Both methods are fast and require only a small amount of unlabeled data, making them appealing for rapid deployment of neural networks. Using the above methods we are able to match the performance of training-based quantization of MobileNets at a fraction of the cost.

# 1. Introduction

In the last years, an increasing amount of effort is invested into executing DNN inference in low-precision arithmetic. While very effective in cutting down on memory, compute and power usage, this comes at a price of degraded network performance caused by weight- and activation-rounding errors. Quantization, the conversion of a net to its low-precision version, is performed according to a "scheme", a conceptual model of the hardware execution environment. In this paper we follow the 8-bit integer quantization scheme used in [12] which is widely employed across both cloud- and edge-devices due to its easy and efficient implementation on hardware. For most network architectures, 8-bit quantization carries a minimal degradation penalty [7], leading recent research to double-down on more aggressive schemes, using 4 bits or less for activations, weights or both [14, 1, 4, 13]. Nonetheless, some architectures, such as Mobilenet [10, 20], Inception-V1 [23] and Densenet [11] still exhibit significant degradation when quantized to 8 bits. Mobilenets are of particular interest as they were designed specifically for embedded applications and as such are often deployed in their quantized form. In order to successfully deploy these networks a quantization-aware training phase [22, 7, 12] is usually introduced to regain the network accuracy. This phase is costly in time and requires access to the original dataset on which the network was trained. Moreover, while effective, the training offers little insight into why there was significant degradation in the first place.
Recently, there has been more focus on methods that do not rely on an additional training phase [15, 1, 27, 16], but an exhaustive 8-bit quantization scheme that excludes training remains an open challenge.

In this paper, we begin by showing that some networks exhibit a significant shift in the mean activation value following quantization, which we henceforth denote as MAS (mean activation shift). We note that in order for such a shift to have an impact on the performance of the network it must be of significant magnitude with respect to the mean activation itself. The shift is the result of quantization rounding errors being highly unbalanced - a 'small numbers effect' that is statistically implausible for layers with many parameters, but can become the main error source for layers with a small amount of parameters (e.g. depthwise convolution with 9 parameters per channel). Note that the frugal use of parameters is especially prevalent in network architectures aimed at embedded applications [10, 20, 26], making them more susceptible to quantization induced shifts. We then show that the shift introduced by the quantization can be canceled by using the bias parameter of the channels. Since the bias term is additive, any constant added to it will shift the activation distribution of the channel. Once the task is defined as fixing the shifts, the problem then becomes how to estimate them. To this end, we develop two procedures - one based on direct estimation over a set of test images and the other performing a fine-tuning phase involving only the bias terms of the network.

∗ Equal contribution

The main contributions of this paper are:

• Mean activation shift (MAS): We explore both the statistical origins and the impact of MAS on layer-level quantization errors. We show that MAS can arise and be responsible for a large component of the error when quantizing network architectures relying on layers with a very small number of parameters (e.g. MobileNet's depthwise layers).

• Shift compensation using bias terms: We show MAS can be compensated by using the bias terms of the network layers, and this alone can drastically reduce degradation, further establishing the previous claim. We propose two algorithms to that effect - Iterative Bias Correction (IBC) and Bias Fine Tune (BFT) - and we analyze their performance on a variety of Imagenet trained classifiers.

While our experiments use convolutional neural networks (CNNs) for classification, we expect both the techniques and the analysis to extend into other tasks (e.g. Object-Detection) building on these classifiers as their feature extractors.

# 2. Previous Work

# 2.1. Quantization Procedures

A slew of ideas on DNN quantization were published in recent years, with widely and subtly diverging assumptions about the capabilities of the underlying deployment hardware. For instance, support for non-uniform quantization [21, 2, 8], fine-grained mixed precision [18] or per-channel ("channelwise") quantization [7, 1, 5, 6]. Leaving the batch-normalization layer unfolded [14] is another nuance since, similarly to channelwise quantization, it enables per-channel scaling at the cost of additional computations. In this work, we restrict ourselves to the simplest setting (folded batch-normalization, layerwise, UINT8 activations, INT8 weights) that was introduced by [12]. The attractiveness of this simple setting is that it is supported efficiently by most existing hardware on which deep-learning applications are deployed.
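To make this setting concrete, the following minimal NumPy sketch (our own illustration, not the gemmlowp or TensorFlow implementation; all function names and the min/max calibration choice are ours) shows the kind of per-layer grids the scheme implies: symmetric INT8 for weights and an asymmetric UINT8 grid for activations derived from simple calibration statistics.

```python
import numpy as np

def quantize_weights_int8(w):
    # Symmetric, per-layer INT8 grid: scale only, zero-point fixed at 0.
    scale = max(np.max(np.abs(w)), 1e-8) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def activation_qparams_uint8(calib_acts):
    # Asymmetric, per-layer UINT8 grid from naive min/max statistics on a calibration batch.
    lo = min(float(calib_acts.min()), 0.0)   # the grid must contain zero exactly
    hi = max(float(calib_acts.max()), 0.0)
    scale = max(hi - lo, 1e-8) / 255.0
    zero_point = int(round(-lo / scale))
    return scale, zero_point

def fake_quantize_activations(x, scale, zero_point):
    # Round onto the UINT8 grid and map back to float (what the quantized net "sees" downstream).
    q = np.clip(np.round(x / scale) + zero_point, 0, 255)
    return (q - zero_point) * scale
```

The weight rounding step above is the sole source of the per-weight errors analyzed in section 3.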
The literature can also be dissected by the cost and complexity of the quantization procedures, falling into two main categories:

Post-training quantization: these methods work directly on a pre-trained network without an additional training phase to compensate for the effects of quantization. Generally lean, these methods have minimal requirements in terms of data, expertise and effort (of both man and machine). Most works are focused on how to best estimate the dynamic range of a layer given its activation distribution. These range from simple heuristics for outlier removal [7, 14], to more advanced methods employing statistical analysis [16, 19, 4, 1, 5, 6]. In this work we use naive min/max statistics (without clipping) to determine dynamic range in order to keep the comparative experiments on IBC/BFT as clean as possible; however, it can be combined with more advanced methods. Other approaches try to lower the dynamic range of the layer by reducing the intra-layer channel range. In [27] it was proposed to improve layerwise quantization by splitting channels with outliers, improving quantization at the cost of additional computations. Finally, in [15] it was proposed to employ inversely-proportional factorization to equalize the dynamic ranges of channels within a layer.

Quantization-aware training involves optimizing the quantized weights using a large training set, with net-specific hyperparameters [14], usually taking the pre-trained net as a starting point. The optimization target may use the ground-truth labels [12] as in the normal training, and/or an adaptation of knowledge distillation [9] loss, namely using the full-precision net as the target ("Teacher"), with the quantized ("Student") net punished for the distance of its output to the Teacher's [19, 17]. Rendering the training "quantization-aware" is non-trivial since the rounding and clipping operations are non-differentiable. The simplest and standard approach is that of a Straight-Through-Estimator (STE) [3, 19], which essentially consists of using the quantized version of the net on the forward-pass while using the full-precision network on the backward-pass. One of the methods developed in this work (BFT) employs knowledge-distillation and a STE within a very short training procedure (micro-training) restricted to the network biases only. Concurrently to our work, in [5] a micro-training of scaling factors (multiplicative rather than additive as in ours) was proposed. Like us, they categorize this as post-training quantization since very little data and training time is used.

# 2.2. MobileNets Quantization

Mobilenets [10, 20] are CNNs based on "separable" convolutions - intermittent depthwise and pointwise convolution layers. Their compact and low-redundancy design makes quantization challenging as they have less resilience to noise. The basic layerwise 8-bit quantization of Mobilenet-v1 leads to complete loss of accuracy [22, 7]. A major source of the degradation is due to issues caused by the folding of batch-normalization onto degenerate channels (i.e. channels with constant zero output) [22]. Zeroing out these channels reduces degradation to 10% and we employ this technique as well. Quantization-aware training methods [12, 7] are able to reduce degradation to 1% for both Mobilenet-v1 and v2. Current post-training quantization methods can achieve similarly good results (i.e. < 1%) only by using channelwise quantization [7], which is not supported on all hardware.
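For reference, a minimal sketch of the batch-normalization folding assumed throughout, together with detection of the degenerate channels mentioned above, could look as follows. This is our own illustration with assumed (Kh, Kw, Cin, Cout) weight layout; whether such channels are dropped or simply zeroed out before computing quantization ranges is an implementation choice.

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-3):
    # Fold y = gamma * (conv(x) + b - mean) / sqrt(var + eps) + beta into the conv itself.
    # w: (Kh, Kw, Cin, Cout) kernel; b, gamma, beta, mean, var: (Cout,) vectors.
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale                      # broadcast over the output-channel axis
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded

def find_degenerate_channels(w_folded, tol=0.0):
    # Channels whose folded kernel is identically zero produce a constant output and
    # would otherwise distort the per-layer dynamic range used for quantization.
    return np.all(np.abs(w_folded) <= tol, axis=(0, 1, 2))
```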
Authors of [22] achieve 2.5% degradation in a layerwise post-training quantization setting but resort to remodeling the network architecture into a "quantization-friendly" version by changing the activation function and modifying the separable-convolution building block. In [15] a 3% degradation is achieved without either remodelling or retraining by using inversely-proportional factorization to equalize the dynamic range of channels; this further reduces to 0.6% when the BFT method described in this paper is applied on top, setting the new state-of-the-art for Mobilenet-v1 and v2 "basic-gemmlowp" quantization while being easy and fast to run.

# 3. Problem Statement and Analysis

A general sentiment underlying quantization of neural networks is that [24]:

"...as long as the errors generally cancel each other out, they'll just appear as the kind of random noise that the network is trained to cope with and so not destroy the overall accuracy by introducing a bias..."

Contrary to the above assumption, in this work we show that when comparing the activations of a feature map in a full-precision and quantized net (QNN), a significant "DC" shift between the distributions can be observed (Fig. 1). This conceptual gap could arise from thinking on the level of a single tensor, whose rounding error distribution is assumed to be uniform and symmetric, then assuming this property to propagate through the net. However, the law of large numbers doesn't always hold, and small asymmetries might get amplified and accumulate across layers.

In the rest of this section, we first (3.1) define the quantization error and its "DC" component (denoted as MAS). Then in 3.2 we quantify in precise terms the MAS contribution to the error, empirically demonstrating its significance. Finally, in 3.3 we explore further the statistical mechanism of MAS generation and its dependence on layer structure.

# 3.1. Quantization Error Measures

For a given layer l and feature ch we define the quantization error as the element-wise difference between the activations in the original DNN (x) and those in the quantized DNN (QNN) (x^{(q)}):

e_{l,ch} = x^{(q)}_{l,ch} - x_{l,ch}   (1)

Here e_{l,ch}, x^{(q)}_{l,ch}, x_{l,ch} are vectors (for brevity we sometimes drop the subscripts where they are understood from context). Some works make a distinction between rounding "noise" and clipping "distortion"; in this work both are treated indivisibly and we use the terms "noise" and "error" interchangeably. For every channel we define the signal energy, E(x^2), as:

E(x^2_{l,ch}) = \frac{1}{N_{l,ch}} \sum_i x^2_{l,ch,i}   (2)

The expectations E() here and below are to be understood as taken across all pixels of the testing set. We use the mean rather than the sum (as in the standard L2 norm definition) so that our analysis will be invariant to the number of elements within a layer. Similarly, we define E(e^2) as the energy of the quantization error. Next, we define the Mean Activation Shift (MAS) as:

\Delta_{l,ch} = E(x^{(q)}_{l,ch} - x_{l,ch}) = E(e_{l,ch})   (3)

For convenience of exposition we also define the following two quantities, the inverse root quantization-to-noise ratio (rQNSR) and the Mean activation Shift to Signal Ratio (MSSR):

rQNSR_{l,ch} = \sqrt{ E(e^2_{l,ch}) / E(x^2_{l,ch}) }   (4)

MSSR_{l,ch} = |\Delta_{l,ch}| / \sqrt{ E(x^2_{l,ch}) }   (5)

MSSR measures how significant the MAS is compared to the average activation. rQNSR is a measure of the overall noise level in a channel. Note that MSSR and rQNSR scale similarly, allowing for easy comparison.
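For concreteness, per-channel estimates of these quantities over a sampled batch could be computed with a sketch like the following (our own illustration, not the authors' tooling; the NHWC shapes and the small epsilon are assumptions):

```python
import numpy as np

def channel_shift_stats(x_fp, x_q, eps=1e-12):
    # x_fp, x_q: activations of one layer sampled on the same image batch in the
    # full-precision and quantized nets, shaped (batch, H, W, channels).
    e = x_q - x_fp                                  # quantization error e_{l,ch}
    axes = (0, 1, 2)                                # average over batch and spatial positions
    mas = e.mean(axis=axes)                         # Delta_{l,ch} = E(e_{l,ch})
    sig_energy = np.mean(x_fp ** 2, axis=axes)      # E(x^2_{l,ch})
    err_energy = np.mean(e ** 2, axis=axes)         # E(e^2_{l,ch})
    rqnsr = np.sqrt(err_energy / (sig_energy + eps))
    mssr = np.abs(mas) / np.sqrt(sig_energy + eps)
    return mas, rqnsr, mssr
```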
All definitions above being random variables' expectations, in practice we use estimates generated by sampling the activations of the network across inferences on a batch of images.

# 3.2. The Effect of Mean Activation Shifts

One view of MAS as defined in eq. (3) is as the activations' distribution shift between the original and the quantized net (see fig. 1). Another analysis avenue relates it to the MSE decomposition:

MSE = E(error^2) = Mean^2(error) + Var(error)   (6)

In this framework, the question of the MAS's significance can be formulated as gauging the relative contribution of the error mean to the error magnitude, which in our notation can be written as E^2(e_{l,ch}) / E(e^2_{l,ch}). To answer this question we estimate these quantities on data samples obtained by running the full precision and the quantized nets over batches of a few tens of images each to obtain signal and error vectors; see figs. 2, 3 (curves for different choices of batch are very similar; we drop all but one from the plot for visual brevity).

Figure 1: Mean activation shift in two different layers of Mobilenet v1 1.0 224. Left: weight rounding errors (vertical axis) vs. full precision (FP) weights (horizontal axis). An ideal transformation would place all the dots on the null horizontal line. Black indicates a weight whose quantized value is greater than the FP, and magenta indicates weights whose quantized value is smaller or equal to the FP value. Right: 32-image activation histograms for the channel corresponding to the weights on the left. Top row is for the conv1/channel1 kernel slice, with 27 weight elements (1.a) and a small (0.029) relative mean kernel shift, producing a small MAS (1.b). Bottom row is for the depthwise1 layer (channel 12), with 9 weight elements (1.c); the relative mean kernel error is large (0.239), resulting in a large MAS (1.d). The vertical blue and red lines mark the mean activation values for the FP and quantized networks, with a clear shift in 1.d and an indiscernible shift in 1.b.

We can see that for many layers the contribution is significant, sometimes dominating the error energy. Fortunately, the degradation-complicit \Delta_{l,ch} lends itself to a convenient correction, one that doesn't entail changing the NN's computational graph but only adjusting its parameters - namely, subtracting the MAS from the (per-channel) biases B_{l,ch} to be added pre-activation.¹ For a single linear (F_act = I) layer, this bias-fixing operation can indeed be proven to be the optimal error reduction achievable by changing biases alone - effectively removing the Mean contribution in eq. (6). Note also that in contrast to the weights (and their potential adjustments), the bias is quantized at high precision in most implementations (the cost for this being rather low), so the above correction can be done relatively precisely. The natural mitigation, taken together with the significance of the MAS contribution, strikes a great cost-benefit balance compared to e.g. quantization-aware training methods (and actually performs on-par with those in some cases, as we shall see in section 5).

¹ Referring to a generic convolutional layer with kernel of size K: X_{l,ch} = F_{act}( \sum^{K}_{ch',dx,dy} X_{l-1,ch'} W^{dx,dy}_{ch',ch} + B_{l,ch} )

Note that for a deep network the situation is more complex, since the MAS of different channels in the first layers are mixed in deeper layers, causing input errors that may be comparable with (or larger than) the MAS of the deeper layers. We will see how these issues are tackled by our optimization methods in section 4.

Figure 2: 32-image estimates of (a) E(e_{l,ch}) (the MAS) and (b),(c) \sqrt{E(x^2_{l,ch})}, \sqrt{E(e^2_{l,ch})} (L2 norms of the activation signal and of its quantization error), per channel, for typical layers of two nets strongly differing in quantization-friendliness. When the quantization error is large, the mean-shift tends to be larger even in relative terms, becoming the major contribution to q-error. [Panels shown: mobilenet_v2_1.0/conv14 and resnet_v1_50/conv14, with channels sorted by error norm.]

# 3.3. MAS Generation Mechanism in MobileNets

We now wish to connect the number of weight elements in a layer and the typical magnitude of the MSSR in that layer, and show that small kernels tend to increase MAS. Let us consider layer l in which the calculation of an output channel activation involves k kernel elements and k input activations. We define the weight rounding errors as \delta w_i, where i runs over the weight elements in the set of size k. The weight rounding errors are i.i.d. and follow the distribution:

f(\delta w_i) = U[ -\max(|W|)/2^{N-1}, \max(|W|)/2^{N-1} ]   (7)

The mean of the sum of the rounding errors is zero:

E( \sum_{i=1}^{k} \delta w_i ) = 0   (8)
X (wit bw; ) - Lin X wi (10) i=1 i=1 or, In the first layer of the network, the first term in the right- hand side is negligible since when the input rounding errors are small such as in the first layer. We consider only the second term in the following analysis. Averaging over input data set and the spatial dimensions of layer l to achieve the numerator in 5, we get: k E (eter) = ECS atin X bw.) = E(ain) x So Sw, (12) i=l where we made the assumption that the input data xi are i.i.d. The separation of terms in the mean is possible since for a given channel the δwi are constant and independent of the data. Let us rewrite M SSRl,ch as: We regard the first term of the product as the ’data term’. If we treat the convolution as a matched filter it can be ap- proximated as: (14) out) Using the above approximation we express the mean and variance of M SSRl,ch with regards to possible kernel val- ues in layer l as: k E(MSSRien) = C1 x 1/k x EQ) bw,) =0 (15) i=l k o(MSSRien) = Col x 1/k x oS. 5w,) i=l (16) ≈ # 1/Vk k/k = 1/ # k Equation 16 tells us that layers with small kernels will tend to have larger MAS energy to activation energy ra- tio. This conclusion is in agreement with our observation that Mobilenet architecture exhibits strong mean activation shifts, since it incorporates many depthwise (dw) convolu- tions, involving small (9 elements) kernel slices. In contrast, conventional 2D convolutions typically involve hundreds of elements for the calculation of each output channel. This is only one possible source of MAS and the occur- rence of significant mean activation shift in Inception v1 de- spite the absence of depthwise layers suggests other possi- ble sources for MAS. One such source can be the neglected term in 11 which becomes significant once the input of the layer exhibits large noise. # 4. Optimization Methods We now present two methods that compensate for the QNN MAS (sec. 3) by updating the bias terms. While the first (IBC) is very fast and requires a very small dataset, the second (BFT) is more general and utilizes a stronger optimizer. # 4.1. Iterative Bias Correction (IBC) Estimating the MAS of all the layers and channels and updating the bias terms of the network in one pass cannot be done, since correcting the bias terms of layer l changes the input to layer l + 1, which, in turn, gives a different MAS than the one calculated before. Therefore, the correc- tion process has to be performed iteratively according to algorithm 1, where actorig are the activation values of layer l, channel ch in the original and quantized networks, respectively. The inner ’for’ loop in the algorithm can be vectorized and done on all of the channels of a layer l simultaneously. The set of input images used for the Itera- tive Bias Correction (short-named IBC-batch), has a typical size of 8-64 images, and can be drawn from the same set used for the QNN calibration. We find that the Iterative Bias Correction effectiveness can vary with IBC-batch size, and larger batches do not necessarily produce better results. The strength of IBC lies in its simplicity and high speed. With as little as 8 unlabeled images (that can be just the quanti- zation batch) and just a few minutes computation we were able to achieve the results in table 1. # 4.2. Bias Fine Tuning (BFT) The observations discussed in section 3.2 and in fig. 
2 suggest that modifications of the QNN's biases, reducing or otherwise modulating the MAS, have the potential to reduce noise energy, impact the downstream QNSR and eventually the final layer's output accuracy. This line of thought suggests attempting a generic optimization of the biases alone, with the eventual objective of reducing the overall QNN's loss and accuracy degradation. To this end, we use a quantization-aware training procedure, which starts from the pre-trained weights and biases, but restricts the trainable variables to be the biases only. This restriction enables fast training on a minuscule part of the training set, as small as 1K images, without significant overfit, since the number of trainable variables is several orders of magnitude less than with a full training. Our procedure uses global optimization in contrast to the local estimation procedure used in IBC. Drawing inspiration from [17, 19], we use a pure teacher-student distillation loss, namely the cross-entropy between the logits of the full-precision and the quantized net. This enables label-free training. To make the training quantization-aware, we use quantized weights for the trainable net, and add FakeQuant operations of the TensorFlow framework on the activations' path, implementing the standard Straight-Through Estimator (STE) approach [3, 25]. Since the weights are fixed for the procedure, our method can also be viewed as a miniature variant of Incremental Training [28, 29], in the sense that biases are optimized to compensate for the error generated by quantization of the weights. Finalizing our procedure, the fine-tuned full-precision biases are re-quantized to 16 bits, which is precise enough for this final step to have no accuracy impact.

Algorithm 1: Iterative Bias Correction (IBC)

    Result: Corrected QNN checkpoint
    evaluate act^{orig}_{l,ch} for all l, ch in the original net;
    l ← 0;
    while not end of original net do
        evaluate act^{quant}_{l,ch};
        for ch ∈ l do
            ∆_{l,ch} ← <act^{orig}_{l,ch}> − <act^{quant}_{l,ch}>;
            bias^{quant}_{l,ch} ← bias^{quant}_{l,ch} + ∆_{l,ch};
        end
        l ← l + 1;
    end

The inputs to the IBC algorithm are the original and quantized networks and a topologically sorted list of layers, where 0 is the input layer and the last layer is the output layer. In each iteration the output of the current layer for both the original and quantized networks is evaluated and averaged over n images and the spatial dimensions to give a single value per output channel ch. The MAS is computed and added to the layer bias.

# 5. Experiments

For our tests we used the following setup and procedure (all experiments were done with the TensorFlow framework):

1. Publicly available pre-trained full-precision models from the tf-slim collection are ingested; any BatchNorm ops are folded onto the preceding layer's weights.

2. For mobilenets: dead channels (channels with constant output / zero variance) are dropped.
Activations are quantized (at run-time) onto a fixed, uniform, asymmetric 8-bit (UINT8), per-layer grid over a range defined by simple min, max statistics on a 64-image calibration batch. 5. QNN accuracy is evaluated on the standard 50K Ima- geNet validation set. Table 1 summarizes the results of simple-case tests over several neural nets. The second column gives the Top-1 ac- curacy of the original, floating-point representation net. The third column gives the top-1 accuracy degradation using the basic quantization described in [12]. The last 2 columns give the top-1 accuracy degradation of the quantized nets after applying the IBC and BFT optimization tools. # 5.1. Experiments With IBC Testing IBC with different sizes of IBC batches between 8 to 64 images shows a variance of a few 0.1% in results. Out of the cases we tested, the best results were achieved using an IBC batch of 8 images from the calibration batch. While MobileNet and Inception architectures benefit from employing IBC, Resnets show large degradation by approx- imately 10% when utilizing IBC. We were unable to under- stand the source of this degradation. The IBC algorithm discussed throughout this paper com- pares post-activation means in the original and quantized nets. In Mobilenet v1 the activation function is Relu6, clip- ping the output at 0 and 6, which results in a loss of in- formation regarding the convolution output distribution. It would be reasonable to assume that updating the bias terms according to pre-activation distribution would utilize the full information available and produce even better results than those shown in table 1. Indeed, testing a variant of algorithm 1 in which ∆l,ch is calculated pre-activation on Mobilenet v1, yields a degradation of 0.76% - 0.16% better than the post-activation degradation given in table 1). # 5.2. Experiments With BFT We trained using the standard Adam optimizer, with a learning rate schedule scanning a wide range as rec- ommended in [14]: in our default schedule we use 10−3, 10−4, 10−5, 10−6 rates for 16 mini-epochs each, for a total of 64 mini-epochs using the same 1K images. This schedule is used across all nets presented here (see ta- ble 1), proving a high degree of robustness. That stands in contrast to the normally high appetite of training-based methods for expertise and data. Given the above, and es- pecially the relaxed input requirement of only 1K label- less images, (similar in size to calibration sets typically used [16, 7]), we claim that the procedure should be seen as a post-training quantization method despite sharing the ”backprop+STE+SGD” approach with of quantization- aware training methods. The results (table 1) are quite similar to IBC (suggest- ing that both utilize well a common ”resource” discussed above) - with the exception of ResNet which isn’t degraded with BFT. On both Mobilenet-v1 and v2, we perform on par with the state-of-the-art of 1% degradation reached with full quantization-aware training [7]. The restriction to bi- ases apparently enables harvesting the low-hanging fruits of quantization-aware training - which happen to include the lion’s share of the degradation to be reduced - using a fraction of the cost. When combining ChannelEqualiza- tion [15] with BFT on the Mobilenet-v2 nets, we achieve state-of-the-art quantized net accuracy of 71.1% (v2-1.0) and 74.3% (v2-1.4). # 6. Discussion Mean activation shift (MAS) occurs in several neural net architectures after quantization. 
One progenitor for this shift was demonstrated to be a non-symmetrical distribution of the rounding error of weights, more pronounced when the weight kernels involved are small, such as in depthwise convolutions. The phenomenon was observed in Mobilenet v1 and v2 which incorporate depthwise convolutions. It was also observed in Inception v1 which incorporates 2D convo- lutions and concatenation layers, and no depthwise layers, leading to the conclusion that there are more sources for MAS. We presented two methods that compensate for mean ac- tivation shift of nets by modifying biases - Iterative Bias Correction (IBC), and Bias Fine Tuning (BFT). Employ- ing either one significantly improves the accuracy of Mo- bilenet v1, v2 and Inception v1, bringing their degrada- tion down by an order of magnitudes relative to the base- line quantization. For Mobilenet-v1/v2, post-BFT degrada- tion is on par with the 1% state-of-the-art [7] achieved by resource-intensive quantization-aware training, tested with the same 8-bit quantization scheme. Both tools require very little in terms of tuning effort, run-time and data. BFT per- forms a robust micro-training procedure, incorporating a strong optimizer (gradient descent) and requiring ˜1000 un- labeled images; typically taking ˜20min on a single GPU. IBC uses a simpler direct-estimation, runs even faster (2- 3min is typical) and requires as little as 8 unlabeled images. We expect these methods to readily extend to mobilenet- based nets used as a base for other tasks (detection, segmen- tation, etc.) and at other quantization schemes (e.g. 4-bit weights). Both may permit further improvement by using more data and tuning their parameters in a scheme- and net- specific way; however, we refrain from it here, emphasiz- ing instead the ”without-bells-and-whistles” effectiveness of the methods, which highlights our basic insight (sec. 3.2) about the underlying issue. The default setting seems to be enough for sub-1% degradation on most presented nets (ta- ble 1). On the other hand, in yet more challenging cases (e.g. quantization to n < 8 bits) the prospect for the meth- ods described is to be used as a part of a wider post-training quantization toolset. We leave that to future work. # References [1] R. Banner, Y. Nahshan, E. Hoffer, and D. Soudry. Post training 4-bit quantization of convolution networks for rapid- deployment. CoRR, abs/1810.05723, 2018. 1, 2 [2] C. Baskin, E. Schwartz, E. Zheltonozhskii, N. Liss, R. Giryes, A. M. Bronstein, and A. Mendelson. Uniq: uni- form noise injection for the quantization of neural networks. arXiv preprint arXiv:1804.10969, 2018. 2 [3] Y. Bengio, N. L´eonard, and A. C. Courville. Estimating or propagating gradients through stochastic neurons for condi- tional computation. CoRR, abs/1308.3432, 2013. 2, 6 [4] J. Choi, Z. Wang, S. Venkataramani, P. I.-J. Chuang, V. Srini- vasan, and K. Gopalakrishnan. Pact: Parameterized clip- ping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018. 1, 2 [5] Y. Choukroun, E. Kravchik, and P. Kisilev. Low-bit quantiza- tion of neural networks for efficient inference. arXiv preprint arXiv:1902.06822, 2019. 2 [6] A. Goncharenko, A. Denisov, S. Alyamkin, and E. Terentev. Fast adjustable threshold for uniform neural network quanti- zation. arXiv preprint arXiv:1812.07872, 2018. 2 [7] R. K. (Google). Quantizing deep convolutional networks for efficient inference: A whitepaper. CoRR, abs/1806.08342, 2018. 
1, 2, 3, 7 Deep compres- sion: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. 2 [9] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 2 [10] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Ef- ficient Convolutional Neural Networks for Mobile Vision Applications. arXiv e-prints, page arXiv:1704.04861, Apr 2017. 1, 2 [11] G. Huang, Z. Liu, and K. Q. Weinberger. Densely connected convolutional networks. CoRR, abs/1608.06993, 2016. 1 [12] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only In- ference. 2017. 1, 2, 3, 7 [13] S. Jung, C. Son, S. Lee, J. Son, Y. Kwak, J. Han, and C. Choi. Joint training of low-precision neural network with quantiza- tion interval parameters. CoRR, abs/1808.05779, 2018. 1 [14] J. L. McKinstry, S. K. Esser, R. Appuswamy, D. Bablani, J. V. Arthur, I. B. Yildiz, and D. S. Modha. Discovering low- precision networks close to full-precision networks for effi- cient embedded inference. arXiv preprint arXiv:1809.04191, 2018. 1, 2, 7 [15] E. Meller, A. Finkelstein, U. Almog, and M. Grobman. Same, same but different - recovering neural network quantization error through weight factorization. CoRR, abs/1902.01917, 2019. 1, 2, 3, 7 [16] S. Migacz. 8-bit Inference with TensorRT. 2017. 1, 2, 7 [17] A. K. Mishra and D. Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network ac- curacy. CoRR, abs/1711.05852, 2017. 2, 6 [18] E. Park, S. Yoo, and P. Vajda. Value-aware quantization for In Proceedings training and inference of neural networks. of the European Conference on Computer Vision (ECCV), pages 580–595, 2018. 2 [19] A. Polino, R. Pascanu, and D. Alistarh. Model compression via distillation and quantization. CoRR, abs/1802.05668, 2018. 2, 6 [20] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. MobileNetV2: Inverted Residuals and Linear Bottle- necks. arXiv e-prints, page arXiv:1801.04381, Jan 2018. 1, 2 [21] S. O. Settle, M. Bollavaram, P. D’Alberto, E. De- laye, O. Fernandez, N. Fraser, A. Ng, A. Sirasao, and M. Wu. Quantizing convolutional neural networks for low- power high-throughput inference engines. arXiv preprint arXiv:1805.07941, 2018. 2 [22] T. Sheng, C. Feng, S. Zhuo, X. Zhang, L. Shen, and M. Alf2018arXiv180104381Seksic. A quantization- CoRR, friendly separable convolution for mobilenets. abs/1803.08607, 2018. 1, 2, 3 [23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. CoRR, abs/1409.4842. 1 [24] P. Warder. What Ive learned about neural network quantiza- tion. petewarden.com, 2008. 3 [25] P. Yin, J. Lyu, S. Zhang, S. J. Osher, Y. Qi, and J. Xin. Un- derstanding straight-through estimator in training activation quantized neural nets. In International Conference on Learn- ing Representations, 2019. 6 [26] X. Zhang, X. Zhou, M. Lin, and J. Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. CoRR, abs/1707.01083, 2017. 1 [27] R. Zhao, Y. Hu, J. Dotzel, C. De Sa, and Z. Zhang. Improving neural network quantization using outlier channel splitting. arXiv preprint arXiv:1901.09504, 2019. 1, 2 [28] A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen. 
Incremen- tal network quantization: Towards lossless cnns with low- precision weights. arXiv preprint arXiv:1702.03044, 2017. 6 [29] B. Zhuang, C. Shen, M. Tan, L. Liu, and I. Reid. Towards ef- fective low-bitwidth convolutional neural networks. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7920–7928, 2018. 6
{ "id": "1804.10969" }
1906.02448
Bridging the Gap between Training and Inference for Neural Machine Translation
Neural Machine Translation (NMT) generates target words sequentially in the way of predicting the next word conditioned on the context words. At training time, it predicts with the ground truth words as context while at inference it has to generate the entire sequence from scratch. This discrepancy of the fed context leads to error accumulation among the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground truth sequence which leads to overcorrection over different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground truth sequence but also from the predicted sequence by the model during training, where the predicted sequence is selected with a sentence-level optimum. Experiment results on Chinese->English and WMT'14 English->German translation tasks demonstrate that our approach can achieve significant improvements on multiple datasets.
http://arxiv.org/pdf/1906.02448
Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu
cs.CL, cs.LG, stat.ML
10 pages, 7 figures
null
cs.CL
20190606
20190617
9 1 0 2 n u J 7 1 ] L C . s c [ 2 v 8 4 4 2 0 . 6 0 9 1 : v i X r a Bridging the Gap between Training and Inference for Neural Machine Translation Wen Zhang1,2 Yang Feng1,2∗ Fandong Meng3 Di You4 Qun Liu5 1Key Laboratory of Intelligent Information Processing Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS) 2University of Chinese Academy of Sciences, Beijing, China {zhangwen,fengyang}@ict.ac.cn 3Pattern Recognition Center, WeChat AI, Tencent Inc, China [email protected] 4Worcester Polytechnic Institute, Worcester, MA, USA [email protected] 5Huawei Noah’s Ark Lab, Hong Kong, China [email protected] # Abstract Neural Machine Translation (NMT) generates target words sequentially in the way of pre- dicting the next word conditioned on the con- text words. At training time, it predicts with the ground truth words as context while at in- ference it has to generate the entire sequence from scratch. This discrepancy of the fed con- text leads to error accumulation among the way. Furthermore, word-level training re- quires strict matching between the generated sequence and the ground truth sequence which leads to overcorrection over different but rea- sonable translations. In this paper, we ad- dress these issues by sampling context words not only from the ground truth sequence but also from the predicted sequence by the model during training, where the predicted sequence is selected with a sentence-level optimum. Experiment results on Chinese English and → German translation tasks WMT’14 English demonstrate that our approach can achieve sig- nificant improvements on multiple datasets. # Introduction Neural Machine Translation has shown promising results and drawn more attention recently. Most NMT models fit in the encoder-decoder frame- work, including the RNN-based (Sutskever et al., 2014; Bahdanau et al., 2015; Meng and Zhang, 2019), the CNN-based (Gehring et al., 2017) and the attention-based (Vaswani et al., 2017) mod- els, which predict the next word conditioned on the previous context words, deriving a language model over target words. The scenario is at train- ing time the ground truth words are used as context ∗Corresponding author. while at inference the entire sequence is generated by the resulting model on its own and hence the previous words generated by the model are fed as context. As a result, the predicted words at train- ing and inference are drawn from different dis- tributions, namely, from the data distribution as opposed to the model distribution. This discrep- ancy, called exposure bias (Ranzato et al., 2015), leads to a gap between training and inference. As the target sequence grows, the errors accumulate among the sequence and the model has to predict under the condition it has never met at training time. Intuitively, to address this problem, the model should be trained to predict under the same con- dition it will face at inference. Inspired by DATA AS DEMONSTRATOR (DAD) (Venkatraman et al., 2015), feeding as context both ground truth words and the predicted words during training can be a solution. NMT models usually optimize the cross-entropy loss which requires a strict pairwise matching at the word level between the predicted sequence and the ground truth sequence. Once the model generates a word deviating from the ground truth sequence, the cross-entropy loss will correct the error immediately and draw the re- maining generation back to the ground truth se- quence. However, this causes a new problem. 
A sentence usually has multiple reasonable transla- tions and it cannot be said that the model makes a mistake even if it generates a word different from the ground truth word. For example, reference: We should comply with the rule. cand1: cand2: cand3: once the model generates “abide” as the third target word, the cross-entropy loss would force the model to generate “with” as the fourth word (as cand1) so as to produce larger sentence-level likelihood and be in line with the reference, although “by” is the right choice. Then, “with” will be fed as context to generate “the rule”, as a result, the model is taught to generate “abide with the rule” which actually is wrong. The translation cand1 can be treated as overcorrection phenomenon. Another potential error is that even the model predicts the right word “by” following “abide”, when generating subsequent translation, it may produce “the law” improperly by feeding “by” (as cand2). Assume the references and the the model memorize the training criterion let pattern of the phrase “the rule” always following the word “with”, to help the model recover from the two kinds of errors and create the correct translation like cand3, we should feed “with” as context rather than “by” even when the previous predicted phrase is “abide by”. We refer to this solution as Overcorrection Recovery (OR). In this paper, we present a method to bridge the gap between training and inference and improve the overcorrection recovery capability of NMT. Our method first selects oracle words from its pre- dicted words and then samples as context from the oracle words and ground truth words. Meanwhile, the oracle words are selected not only with a word- by-word greedy search but also with a sentence- level evaluation, e.g. BLEU, which allows greater flexibility under the pairwise matching restriction of cross-entropy. At the beginning of training, the model selects as context ground truth words at a greater probability. As the model converges grad- ually, oracle words are chosen as context more In this way, the training process changes often. from a fully guided scheme towards a less guided scheme. Under this mechanism, the model has the chance to learn to handle the mistakes made at in- ference and also has the ability to recover from overcorrection over alternative translations. We verify our approach on both the RNNsearch model and the stronger Transformer model. The results show that our approach can significantly improve the performance on both models. # 2 RNN-based NMT Model Our method can be applied in a variety of NMT models. Without loss of generality, we take the RNN-based NMT (Bahdanau et al., 2015) as an example to introduce our method. Assume the source sequence and the observed translation are and y∗ = x = Encoder. A bidirectional Gated Recurrent Unit (GRU) (Cho et al., 2014) is used to acquire two sequences of hidden states, the annotation of xi is hi = [−→h i; ←−h i]. Note that exi is employed to represent the embedding vector of the word xi. y∗ 1, { , x|x|} |y∗|} { · · · · −→h i = GRU(exi, −→h i−1) (1) ←−h i = GRU(exi, ←−h i+1) (2) Attention. The attention is designed to extract source information (called source context vector). At the j-th step, the relevance between the target word y∗ j and the i-th source word is evaluated and normalized over the source sequence rij = vT (3) # tanh (Wasj-1 + Uahi) an exp (ri) an exp (ri) (4) Gy The source context vector is the weighted sum of all source annotations and can be calculated by |x| C= an aighs (5) Decoder. 
The decoder employs a variant of GRU to unroll the target information. At the j-th step, the target hidden state sj is given by sj = GRU(ey∗ j−1, sj−1, cj) (6) The probability distribution Pj over all the words in the target vocabulary is produced conditioned on the embedding of the previous ground truth word, the source context vector and the hidden state tj = g ey∗ j−1, cj, sj (7) # oj = Wotj Pj = softmax (oj) (8) (9) where g stands for a linear transformation, Wo is used to map tj to oj so that each target word has one corresponding dimension in oj. # 3 Approach The main framework (as shown in Figure 1) of our method is to feed as context either the ground truth words or the previous predicted words, i.e. oracle Figure 1: The architecture of our method. words, with a certain probability. This potentially can reduce the gap between training and inference by training the model to handle the situation which will appear during test time. We will introduce two methods to select the oracle words. One method is to select the oracle words at the word level with a greedy search algorithm, and another is to select a oracle sequence at the sentence-level optimum. The sentence-level oracle provides an option of n- gram matching with the ground truth sequence and hence inherently has the ability of recovering from overcorrection for the alternative context. To pre- dict the j-th target word yj, the following steps are involved in our approach: 1. Select an oracle word yoracle j−1 sentence level) at the 1 j } − Oracle Word Selection) { (at word level or -th step. (Section j−1 with a probability of p or from the oracle word yoracle (Section j−1 with a probability of 1 Sampling with Decay) 3. Use the sampled word as yj−1 and replace the y∗ j−1 in Equation (6) and (7) with yj−1, then perform the following prediction of the attention-based NMT. # 3.1 Oracle Word Selection Generally, at the j-th step, the NMT model needs the ground truth word y∗ j−1 as the context word to predict yj, thus, we could select an oracle word yoracle to simulate the context word. The oracle j−1 word should be a word similar to the ground truth or a synonym. Using different strategies will pro- duce a different oracle word yoracle j−1 . One option is that word-level greedy search could be employed to output the oracle word of each step, which is called Word-level Oracle (called WO). Besides, we can further optimize the oracle by enlarging the search space with beam search and then re- ranking the candidate translations with a sentence- level metric, e.g. BLEU (Papineni et al., 2002), ‘Logistic regressic classifier Figure 2: Word-level oracle without noise. GLEU (Wu et al., 2016), ROUGE (Lin, 2004), etc, the selected translation is called oracle sentence, the words in the translation are Sentence-level Or- acle (denoted as SO). # Word-Level Oracle For the -th decoding step, the direct way to select the word-level oracle is to pick the word with the highest probability from the word dis- tribution Pj−1 drawn by Equation (9), which is shown in Figure 2. The predicted score in oj−1 is the value before the softmax operation. In prac- tice, we can acquire more robust word-level or- acles by introducing the Gumbel-Max technique (Gumbel, 1954; Maddison et al., 2014), which provides a simple and efficient way to sample from a categorical distribution. 
The Gumbel noise, treated as a form of regular- ization, is added to oj−1 in Equation (8), as shown in Figure 3, then softmax function is performed, the word distribution of yj−1 is approximated by (10) η = log ( log u) ˜oj−1 = (oj−1 + η) /τ ˜Pj−1 = softmax (˜oj−1) − − (11) (12) where η is the Gumbel noise calculated from a uni- form random variable u (0, 1), τ is tempera- ture. As τ approaches 0, the softmax function is similar to the argmax operation, and it becomes uniform distribution gradually when τ . → ∞ Similarly, according to ˜Pj−1, the 1-best word is selected as the word-level oracle word j−1 = yWO yoracle j−1 = argmax ˜Pj−1 (13) Note that the Gumbel noise is just used to select the oracle and it does not affect the loss function for training. # Sentence-Level Oracle The sentence-level oracle is employed to allow for more flexible translation with n-gram matching re- quired by a sentence-level metric. In this paper, gistic regres: classifier Figure 3: Word-level oracle with Gumbel noise. we employ BLEU as the sentence-level metric. To select the sentence-level oracles, we first perform beam search for all sentences in each batch, as- suming beam size is k, and get k-best candidate In the process of beam search, we translations. also could apply the Gumbel noise for each word generation. We then evaluate each translation by calculating its BLEU score with the ground truth sequence, and use the translation with the highest BLEU score as the oracle sentence. We denote it as yS = (yS |yS|), then at the j-th decoding step, we define the sentence-level oracle word as j−1 = ySO yoracle j−1 = yS j−1 (14) But a problem comes with sentence-level oracle. As the model samples from ground truth word and the sentence-level oracle word at each step, the two sequences should have the same number of words. However we can not assure this with the naive beam search decoding algorithm. Based on the above problem, we introduce force decoding to make sure the two sequences have the same length. Force Decoding. As the length of the ground , the goal of force decod- truth sequence is | y∗ ing is to generate a sequence with words fol- | lowed by a special end-of-sentence (EOS) symbol. Therefore, in beam search, once a candidate trans- lation tends to end with EOS when it is shorter or y∗ y∗ longer than | | words, that is, e If the candidate translation gets a word distri- bution P; at the j-th step where j < |y*| and EOS is the top first word in P;, then we select the top second word in P; as the j-th word of this candidate translation. If the candidate translation gets a word distri- bution P|y∗|+1 at the -th step where } {| EOS is not the top first word in P|y∗|+1, then we select EOS as the -th word of {| this candidate translation. In this way, we can make sure that all the k can- words, then re-rank didate translations have y∗ | | the k candidates according to BLEU score and se- lect the top first as the oracle sentence. For adding Gumbel noise into the sentence-level oracle selec- tion, we replace the Pj with ˜Pj at the j-th decod- ing step during force decoding. # 3.2 Sampling with Decay In our method, we employ a sampling mechanism to randomly select the ground truth word y∗ j−1 or the oracle word yoracle as yj−1. At the beginning of training, as the model is not well trained, us- ing yoracle as yj−1 too often would lead to very slow convergence, even being trapped into local optimum. 
On the other hand, at the end of train- ing, if the context yj−1 is still selected from the ground truth word y∗ j−1 at a large probability, the model is not fully exposed to the circumstance which it has to confront at inference and hence can not know how to act in the situation at inference. In this sense, the probability p of selecting from the ground truth word can not be fixed, but has to decrease progressively as the training advances. At the beginning, p=1, which means the model is trained entirely based on the ground truth words. As the model converges gradually, the model se- lects from the oracle words more often. Borrowing ideas from but being different from Bengio et al. (2015) which used a schedule to decrease p as a function of the index of mini-batch, we define p with a decay function dependent on the index of training epochs e (starting from 0) p = µ µ + exp (e/µ) (15) where µ is a hyper-parameter. The function is strictly monotone decreasing. As the training pro- ceeds, the probability p of feeding ground truth words decreases gradually. # 3.3 Training After selecting yj−1 by using the above method, we can get the word distribution of yj according to Equation (6), (7), (8) and (9). We do not add the Gumbel noise to the distribution when calcu- lating loss for training. The objective is to maxi- mize the probability of the ground truth sequence based on maximum likelihood estimation (MLE). Thus following loss function is minimized: =->O" ae ny log P? [y?] (16) where N is the number of sentence pairs in the indicates the length of the n-th training data, # yn | | ground truth sentence, P7' refers to the predicted probability distribution at the j-th step for the n-th sentence, hence P? Ly is the probability of gen- erating the ground truth word yy’ at the j-th step. # 4 Related Work Some other researchers have noticed the prob- lem of exposure bias in NMT and tried to solve it. Venkatraman et al. (2015) proposed DATA AS DEMONSTRATOR (DAD) which initialized the training examples as the paired two adjacent ground truth words and at each step added the pre- dicted word paired with the next ground truth word as a new training example. Bengio et al. (2015) further developed the method by sampling as con- text from the previous ground truth word and the previous predicted word with a changing probabil- ity, not treating them equally in the whole training process. This is similar to our method, but they do not include the sentence-level oracle to relieve the overcorrection problem and neither the noise perturbations on the predicted distribution. Another direction of attempts is the sentence- level training with the thinking that the sentence- level metric, e.g., BLEU, brings a certain de- gree of flexibility for generation and hence is more robust to mitigate the exposure bias problem. To avoid the problem of exposure bias, Ranzato et al. (2015) presented a novel algorithm Mixed Incremental Cross-Entropy Reinforce (MIXER) for sequence-level training, which directly op- timized the sentence-level BLEU used at infer- ence. Shen et al. (2016) introduced the Minimum Risk Training (MRT) into the end-to-end NMT model, which optimized model parameters by minimizing directly the expected loss with respect to arbitrary evaluation metrics, e.g., sentence-level BLEU. Shao et al. (2018) proposed to eliminate the exposure bias through a probabilistic n-gram matching objective, which trains NMT NMT un- der the greedy decoding strategy. 
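Before turning to the experiments, the sampling scheme of Sections 3.1 and 3.2 can be summarized in a short sketch. The following is a minimal, illustrative NumPy implementation under stated assumptions (per-step pre-softmax scores o_{j-1} given as a NumPy array, integer token ids, µ=12 and the Gumbel-Max selection of Equations (10)-(13)); it is not the authors' released code and it covers only the word-level oracle. Note that the temperature τ only matters when the softened distribution itself is used (e.g., during beam search for the sentence-level oracle); it does not change the argmax used here.

```python
import numpy as np

def decay_probability(epoch, mu=12.0):
    # Equation (15): p = mu / (mu + exp(e / mu)), the probability of feeding
    # the ground-truth word; it decays as training proceeds.
    return mu / (mu + np.exp(epoch / mu))

def word_level_oracle(prev_scores, rng):
    # Gumbel-Max sampling: add Gumbel noise to the pre-softmax scores o_{j-1}
    # and take the argmax, yielding the word-level oracle id.
    u = rng.uniform(low=1e-9, high=1.0, size=prev_scores.shape)
    gumbel_noise = -np.log(-np.log(u))
    return int(np.argmax(prev_scores + gumbel_noise))

def sample_context_word(gt_prev_id, prev_scores, epoch, mu=12.0, rng=None):
    # Feed the ground-truth word with probability p, otherwise the oracle word.
    rng = rng or np.random.default_rng()
    p = decay_probability(epoch, mu)
    if rng.uniform() < p:
        return gt_prev_id
    return word_level_oracle(prev_scores, rng)
```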
# 5 Experiments We Chinese English carry out experiments the NIST En) and the WMT’14 De) translation tasks. on English (Zh German (En → → → → # 5.1 Settings For Zh En, the training dataset consists of 1.25M sentence pairs extracted from LDC corpora1. We choose the NIST 2002 (MT02) dataset as the val- idation set, which has 878 sentences, and the NIST 2003 (MT03), NIST 2004 (MT04), NIST 2005 (MT05) and NIST 2006 (MT06) datasets as the test sets, which contain 919, 1788, 1082 De, and 1664 sentences respectively. For En we perform our experiments on the corpus pro- vided by WMT’14, which contains 4.5M sentence pairs2. We use the newstest2013 as the validation set, and the newstest2014 as the test sets, which containing 3003 and 2737 sentences respectively. We measure the translation quality with BLEU En, case- scores (Papineni et al., 2002). For Zh insensitive BLEU score is calculated by using the mteval-v11b.pl script. For En De, we tokenize the references and evaluate the performance with case-sensitive BLEU score by the multi-bleu.pl script. The metrics are exactly the same as in pre- vious work. Besides, we make statistical signifi- cance test according to the method of Collins et al. (2005). In training the NMT model, we limit the source and target vocabulary to the most frequent 30K words for both sides in the Zh-En translation task, covering approximately 97.7% and 99.3% words of two corpus respectively. For the En—De translation task, sentences are encoded using byte- pair encoding (BPE) (Sennrich et al., 2016) with 37k merging operations for both source and tar- get languages, which have vocabularies of 39418 and 40274 tokens respectively. We limit the length of sentences in the training datasets to 50 words for Zh-En and 128 subwords for En—De. For RNNSearch model, the dimension of word em- bedding and hidden layer is 512, and the beam size in testing is 10. All parameters are initialized by the uniform distribution over [—0.1, 0.1]. The mini-batch stochastic gradient descent (SGD) al- gorithm is employed to train the model parameters with batch size setting to 80. Moreover, the learn- ing rate is adjusted by adadelta optimizer (Zeiler, 2012) with p=0.95 and e=1e-6. Dropout is applied on the output layer with dropout rate being 0.5. For Transformer model, we train base model with 1These sentence pairs are mainly extracted from LDC2002E18, LDC2003E07, LDC2003E14, Hansards por- tion of LDC2004T07, LDC2004T08 and LDC2005T06 2http://www.statmt.org/wmt14/ translation-task.html Systems Architecture MT03 MT04 MT05 | MT06 | Average Existing end-to-end NMT systems Tu et al. (2016) Coverage 33.69 38.05 35.01 34.83 35.40 Shen et al. (2016) | MRT 37.41 39.87 37.45 36.80 37.88 Zhang et al. (2017) | Distortion 37.93 40.40 36.81 35.77 37.73 Our end-to-end NMT systems RNNsearch 37.93 40.53 36.65 35.80 37.73 + SS-NMT 38.82 41.68 37.28 37.98 38.94 + MIXER 38.70 40.81 37.59 38.38 38.87 this work + OR-NMT 40.40'** | 42.63%1* | 38.871* | 38.44! | 40.09 Transformer 46.89 47.88 47.40 46.66 47.21 + word oracle 47.42 48.34 47.89 47.34 47.75 + sentence oracle || 48.31* 49.40* 48.72* | 48.45* 48.72 Table 1: Case-insensitive BLEU scores (%) on Zh significant difference (p<0.01) from RNNsearch, SS-NMT, MIXER and Transformer, respectively. default settings (fairseq3). # 5.2 Systems The following systems are involved: RNNsearch: Our implementation of an im- proved model as described in Section 2, where the decoder employs two GRUs and an attention. 
Specifically, Equation 6 is substituted with: ˜sj = GRU1(ey∗ j−1, sj−1) (17) sj = GRU2(cj, ˜sj) (18) Besides, in Equation 3, sj−1 is replaced with ˜sj−1. SS-NMT: Our implementation of the scheduled sampling (SS) method (Bengio et al., 2015) on the basis of the RNNsearch. The decay scheme is the same as Equation 15 in our approach. # 5.3 Results on Zh En Translation → We verify our method on two baseline models with the NIST Zh → Results on the RNNsearch As shown in Table 1, Tu et al. (2016) propose to model coverage in RNN-based NMT to improve the adequacy of translations. Shen et al. (2016) propose minimum risk training (MRT) for NMT to directly optimize model parameters with respect to BLEU scores. Zhang et al. (2017) model dis- tortion to enhance the attention model. Compared with them, our baseline system RNNsearch 1) out- performs previous shallow RNN-based NMT sys- tem equipped with the coverage model (Tu et al., 2016); and 2) achieves competitive performance with the MRT (Shen et al., 2016) and the Distor- tion (Zhang et al., 2017) on the same datasets. We hope that the strong shallow baseline system used in this work makes the evaluation convincing. MIXER: Our implementation of the mixed in- cremental cross-entropy reinforce (Ranzato et al., 2015), where the sentence-level metric is BLEU and the average reward is acquired according to its offline method with a 1-layer linear regressor. OR-NMT: Based on the RNNsearch, we intro- duced the word-level oracles, sentence-level ora- cles and the Gumbel noises to enhance the over- correction recovery capacity. For the sentence- level oracle selection, we set the beam size to be 3, set τ =0.5 in Equation (11) and µ=12 for the decay function in Equation (15). OR-NMT is the abbre- viation of NMT with Overcorrection Recovery. We also compare with the other two related methods that aim at solving the exposure bias problem, including the scheduled sampling (Ben- gio et al., 2015) (SS-NMT) and the sentence- level training (Ranzato et al., 2015) (MIXER). From Table 1, we can see that both SS-NMT and MIXER can achieve improvements by taking mea- sures to mitigate the exposure bias. While our approach OR-NMT can outperform the baseline system RNNsearch and the competitive compar- ison systems by directly incorporate the sentence- level oracle and noise perturbations for relieving the overcorrection problem. Particularly, our OR- NMT significantly outperforms the RNNsearch by +2.36 BLEU points averagely on four test datasets. Comparing with the two related models, # 3https://github.com/pytorch/fairseq Systems RNNsearch + word oracle + noise + sentence oracle + noise Average 37.73 38.94 39.50 39.56 40.09 Table 2: Factor analysis on Zh sults are average BLEU scores on MT03 En translation, the re- 06 datasets. → ∼ our approach further gives a significant improve- ments on most test sets and achieves improvement by about +1.2 BLEU points on average. # Results on the Transformer The methods we propose can also be adapted to the stronger Transformer model. The evalu- ated results are listed in Table 1. Our word-level method can improve the base model by +0.54 BLEU points on average, and the sentence-level method can further bring in +1.0 BLEU points im- provement. # 5.4 Factor Analysis We propose several strategies to improve the per- formance of approach on relieving the overcorrec- tion problem, including utilizing the word-level oracle, the sentence-level oracle, and incorporat- ing the Gumbel noise for oracle selection. 
To in- vestigate the influence of these factors, we conduct the experiments and list the results in Table 2. When only employing the word-level oracle, the translation performance was improved by +1.21 BLEU points, this indicates that feeding pre- dicted words as context can mitigate exposure bias. When employing the sentence-level oracle, we can further achieve +0.62 BLEU points im- provement. It shows that the sentence-level oracle performs better than the word-level oracle in terms of BLEU. We conjecture that the superiority may come from a greater flexibility for word genera- tion which can mitigate the problem of overcor- rection. By incorporating the Gumbel noise dur- ing the generation of the word-level and sentence- level oracle words, the BLEU score are further im- proved by 0.56 and 0.53 respectively. This indi- cates Gumbel noise can help the selection of each oracle word, which is consistent with our claim that Gumbel-Max provides a efficient and robust way to sample from a categorical distribution. RNNsearch wo 4 —— SO (r=0.5) Training Loss 0 2 4 6 8 10 12 14 16 Epoch 18 20 22 Figure 4: Training loss curves on Zh En translation with different factors. The black, blue and red colors represent the RNNsearch, RNNsearch with word-level oracle and RNNsearch with sentence-level oracle sys- tems respectively. 40 wR? 3 35 RNNsearch fo) wo 2 WO (r=0.1) a, {it sa WO (r=0.5) 304 woes WO (7=1.0) i -- so I —— §0 (r=0.5) 25 i) 4 6 8 10 12 14 16 18 20 22 Epoch Figure 5: Trends of BLEU scores on the validation set with different factors on the Zh → # 5.5 About Convergence In this section, we analyze the influence of differ- ent factors for the convergence. Figure 4 gives the training loss curves of the RNNsearch, word-level oracle (WO) without noise and sentence-level or- acle (SO) with noise. In training, BLEU score on the validation set is used to select the best model, a detailed comparison among the BLEU score curves under different factors is shown in Figure 5. RNNsearch converges fast and achieves the best result at the 7-th epoch, while the train- ing loss continues to decline after the 7-th epoch until the end. Thus, the training of RNNsearch may encounter the overfitting problem. Figure 4 and 5 also reveal that, integrating the oracle sam- pling and the Gumbel noise leads to a little slower convergence and the training loss does not keep decreasing after the best results appear on the val- idation set. This is consistent with our intuition that oracle sampling and noises can avoid overfit- 40 38 36 oO 6 34 ---- RNNsearch B 39 ~~ WO zon at --Â¥- WO (r=0.1) a soa WO (r=0.5) 28 sve WO (7=1.0) 26 -- so 24 —— SO (r=0.5) 2 4 6 8 10 12 14 16 18 20 22 Figure 6: Trends of BLEU scores on the MT03 test set with different factors on the Zh → ting despite needs a longer time to converge. Figure 6 shows the BLEU scores curves on the MT03 test set under different factors4. When sam- pling oracles with noise (τ =0.5) on the sentence level, we obtain the best model. Without noise, our system converges to a lower BLEU score. This can be understood easily that using its own re- sults repeatedly during training without any reg- ularization will lead to overfitting and quick con- vergence. In this sense, our method benefits from the sentence-level sampling and Gumbel noise. # 5.6 About Length Figure 7 shows the BLEU scores of generated translations on the MT03 test set with respect to In partic- the lengths of the source sentences. 
ular, we split the translations for the MT03 test set into different bins according to the length of source sentences, then test the BLEU scores for translations in each bin separately with the results reported in Figure 7. Our approach can achieve big improvements over the baseline system in all bins, especially in the bins (10,20], (40,50] and (70,80] of the super-long sentences. The cross- entropy loss requires that the predicted sequence is exactly the same as the ground truth sequence which is more difficult to achieve for long sen- tences, while our sentence-level oracle can help recover from this kind of overcorrection. # 5.7 Effect on Exposure Bias To validate whether the improvements is mainly obtained by addressing the exposure bias prob- lem, we randomly select 1K sentence pairs from 4Note that the “SO” model without noise is trained based on the pre-trained RNNsearch model (as shown by the red dashed lines in Figure 5 and 6). aN a i) lm RNNsearch (BLEU: 37.93) mmm OR-NMT (BLEU: 40.40) BLEU Score wo b a 2: cS wo i o ie} be ° S HS H GS HS COG EF EES & Source Sentence Length Figure 7: Performance comparison on the MT03 test set with respect to the different lengths of source sen- En translation task. tences on the Zh → the Zh En training data, and use the pre-trained RNNSearch model and proposed model to de- code the source sentences. The BLEU score of RNNSearch model was 24.87, while our model produced +2.18 points. We then count the ground truth words whose probabilities in the predicted distributions produced by our model are greater than those produced by the baseline model, and . There are totally 28, 266 mark the number as gold words in the references, and =18, 391. The proportion is 18, 391/28, 266=65.06%, which could verify the improvements are mainly ob- tained by addressing the exposure bias problem. # 5.8 Results on En De Translation → Systems RNNsearch + SS-NMT + MIXER + OR-NMT Transformer (base) + SS-NMT + MIXER + OR-NMT Table 3: Case-sensitive BLEU scores (%) on En task. The “ ter (p<0.01) than RNNsearch and Transformer. We also evaluate our approach on the WMT’14 De translation task. From benchmarks on the En the results listed in Table 3, we conclude that the proposed method significantly outperforms the competitive baseline model as well as related ap- En task, proaches. Similar with results on the Zh both scheduled sampling and MIXER could im- prove the two baseline systems. Our method im- proves the RNNSearch and Transformer baseline models by +1.59 and +1.31 BLEU points respec- tively. These results demonstrate that our model works well across different language pairs. # 6 Conclusion The end-to-end NMT model generates a transla- tion word by word with the ground truth words as context at training time as opposed to the pre- vious words generated by the model as context at inference. To mitigate the discrepancy be- tween training and inference, when predicting one word, we feed as context either the ground truth word or the previous predicted word with a sam- pling scheme. The predicted words, referred to as oracle words, can be generated with the word- level or sentence-level optimization. Compared to word-level oracle, sentence-level oracle can fur- ther equip the model with the ability of overcor- rection recovery. To make the model fully ex- posed to the circumstance at reference, we sam- ple the context word with decay from the ground truth words. 
We verified the effectiveness of our method with two strong baseline models and re- lated works on the real translation tasks, achieved significant improvement on all the datasets. We also conclude that the sentence-level oracle show superiority over the word-level oracle. # Acknowledgments We thank the three anonymous reviewers for their valuable suggestions. This work was sup- ported by National Natural Science Foundation of China (NO. 61662077, NO. 61876174) and National Key R&D Program of China (NO. YS2017YFGH001428). # References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. ICLR 2015. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural net- In C. Cortes, N. D. Lawrence, D. D. Lee, works. M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1171–1179. Curran Associates, Inc. Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Learning Schwenk, and Yoshua Bengio. 2014. phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine In Proceedings of the 43rd Annual translation. Meeting of the Association for Computational Lin- guistics (ACL’05), pages 531–540, Ann Arbor, Michigan. Association for Computational Linguis- tics. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional In Proceedings sequence to sequence learning. of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1243–1252, International Convention Centre, Sydney, Australia. PMLR. Emil Julius Gumbel. 1954. Statistical theory of ex- treme valuse and some practical applications. Nat. Bur. Standards Appl. Math. Ser. 33. Chin-Yew Lin. 2004. Rouge: A package for automatic In Text Summarization evaluation of summaries. Branches Out: Proceedings of the ACL-04 Work- shop, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Chris J Maddison, Daniel Tarlow, and Tom Minka. 2014. A* sampling. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3086–3094. Curran Associates, Inc. Fandong Meng and Jinchao Zhang. 2019. Dtmt: A novel deep transition architecture for neural ma- In Proceedings of the Thirty- chine translation. Third AAAI Conference on Artificial Intelligence, AAAI’19. AAAI Press. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- In Proceedings of uation of machine translation. the 40th annual meeting on association for compu- tational linguistics, pages 311–318. Association for Computational Linguistics. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level train- ing with recurrent neural networks. arXiv preprint arXiv:1511.06732. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computa- tional Linguistics. Chenze Shao, Xilin Chen, and Yang Feng. 2018. Greedy search with probabilistic n-gram matching In Proceedings of for neural machine translation. the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 4778–4784. Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum risk training for neural machine translation. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1683–1692. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- In Z. Ghahramani, M. Welling, C. Cortes, works. N. D. Lawrence, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In Proceedings of ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998–6008. Curran As- sociates, Inc. Arun Venkatraman, Martial Hebert, and J. Andrew Improving multi-step prediction of Bagnell. 2015. In Proceedings of the learned time series models. Twenty-Ninth AAAI Conference on Artificial Intelli- gence, AAAI’15, pages 3024–3030. AAAI Press. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural ma- chine translation system: Bridging the gap between arXiv preprint human and machine translation. arXiv:1609.08144. Matthew D Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701. Jinchao Zhang, Mingxuan Wang, Qun Liu, and Jie Zhou. 2017. Incorporating word reordering knowl- edge into attention-based neural machine transla- tion. In Proceedings of ACL.
{ "id": "1511.06732" }
1906.02361
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world-knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations (CoS-E). We use CoS-E to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation (CAGE) framework. CAGE improves the state-of-the-art by 10% on the challenging CommonsenseQA task. We further study commonsense reasoning in DNNs using both human and auto-generated explanations including transfer to out-of-domain tasks. Empirical results indicate that we can effectively leverage language models for commonsense reasoning.
http://arxiv.org/pdf/1906.02361
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher
cs.CL
Accepted at ACL, 11 pages total
In Proceedings of the Association for Computational Linguistics (ACL), 2019. Florence, Italy
cs.CL
20190606
20190606
9 1 0 2 n u J 6 ] L C . s c [ 1 v 1 6 3 2 0 . 6 0 9 1 : v i X r a # Explain Yourself! Leveraging Language Models for Commonsense Reasoning Nazneen Fatema Rajani Bryan McCann Caiming Xiong Richard Socher Salesforce Research Palo Alto, CA, 94301 {nazneen.rajani,bmccann,cxiong,rsocher}@salesforce.com # Abstract Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world- knowledge or reasoning over information not immediately present in the input. We collect human explanations for commonsense reason- ing in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations (CoS-E). We use CoS-E to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Expla- nation (CAGE) framework. CAGE improves the state-of-the-art by 10% on the challeng- ing CommonsenseQA task. We further study commonsense reasoning in DNNs using both human and auto-generated explanations in- cluding transfer to out-of-domain tasks. Em- pirical results indicate that we can effectively leverage language models for commonsense reasoning. 1 Commonsense reasoning is a challenging task for modern machine learning methods (Zhong et al., 2018; Talmor et al., 2019). Explanations are a way to verbalize the reasoning that the models learn during training. Common sense Question Answer- ing (CQA) is a multiple-choice question answer- ing dataset proposed for developing natural lan- guage processing (NLP) models with commons- sense reasoning capabilities (Talmor et al., 2019). Although these efforts have led to progress, it is still unclear how these models perform reasoning and to what extent that reasoning is based on world knowledge. We collect human explanations for commonsense reasoning built on top of CQA and introduce them as Common Sense Explanations (CoS-E)1. CoS-E contains human explanations in Question: While eating a hamburger with friends, Choices: CoS-E: what are people trying to do? have fun, tasty, or indigestion Usually a hamburger with friends indicates a good time. Question: After getting drunk people couldn’t Choices: CoS-E: understand him,it was because of his what? lower standards,slurred speech, or falling down People who are drunk have difficulty speaking. Question: Choices: CoS-E: People do what during their time off from work? take trips, brow shorter, or become hysterical People usually do something relaxing, such as taking trips,when they don’t need to work. Table 1: Examples from our CoS-E dataset. the form of both open-ended natural language ex- planations as well as highlighted span annotations that represent words selected by humans as impor- tant for predicting the right answer (see Table 1). Talmor et al. (2019) show that using Google search to extract context from top 100 result snip- pets for each of the question and answer choices does not help much in improving the accuracy on CQA trained using even the state-of-the-art read- ing comprehension model BiDAF++ (Seo et al., 2017) augmented with a self-attention layer and ELMo representations (Peters et al., 2018). In contrast, we leverage a pretrained language model to generate explanations that are useful for commonsense reasoning. We propose Common- sense Auto-Generated Explanations (CAGE) as a framework for generating explanations for CQA. We break down the task of commonsense reason- ing into two phases. 
In the first phase, we pro- vide a CQA example alongside the corresponding CoS-E explanation to a language model. The lan- guage model conditions on the question and an- swer choices from the example and is trained to generate the CoS-E explanation. In the second phase, we use the language model # 1https://github.com/nazneenrajani/CoS-E (a) One time-step of training a CAGE language model to gen- erate explanations from CoS-E. It is conditioned on the ques- tion tokens Q concatenated with the answer choice tokens A1, A2, A3 and previously generated tokens E1, . . . , Ei−1. It is trained to generate token Ei. (b) A trained CAGE language model is used to generate ex- planations for a downstream commonsense reasoning model (CSRM), which itself predicts one of the answer choices. # Figure 1: An overview of CAGE trained on CoS-E and CQA. to generate explanations for each example in the training and validation sets of CQA. These CAGE explanations are provided to a second common- sense reasoning model by concatenating it to the end of the original question, answer choices, and output of the language model. The two-phase CAGE framework obtains state-of-the-art results outperforming the best reported baseline by 10% and also produces explanations to justify its pre- dictions. Figure 1 shows an overview of our pro- posed approach. In summary, we introduce a new Common Sense Explanations (CoS-E) dataset to study neu- ral commonsense reasoning and provide a new method, CAGE for automatically generating ex- planations that achieve a state-of-the-art accuracy of approximately 65% on CQA v1.0. We demon- strate explanation transfer on two out-of-domain datasets. Note that before our final submission, the organizers released a more challenging v1.11 of CQA with 5 answer choices instead of 3 and so we also included the new version in our results and discussions. 2 Background and Related Work Commonsense reasoning Datasets that require models to learn to predict relations between situ- ations or events in natural language have been in- troduced in the recent past. The Story Cloze (also referred to as ROC Stories) involves predicting the correct story ending from a set of plausible end- ings (Mostafazadeh et al., 2016) while the Situ- ations with Adversarial Generations (SWAG) in- volves predicting the next scene based on an initial event (Zellers et al., 2018). Language Modeling based techniques such as the GPT and BERT mod- els get human-level performance on these datasets (Radford et al., 2018; Devlin et al., 2019). They have been less successful on tasks that require clear understanding of how pronouns resolve be- tween sentences and how that interacts with world knowledge. For example, the Winograd Schemas (Winograd, 1972) and challenges derived from that format (Levesque et al., 2012; McCann et al., 2018; Wang et al., 2018) have proven difficult for even the most modern machine learning methods (Trinh and Le, 2018) to achieve near-human per- formance, but the emphasis on pronoun resolution in those challenges leaves room for exploration of other kinds of commonsense reasoning. CQA is a new dataset that consists of 9500 questions with one correct answer and two distractor an- swers (Talmor et al., 2019). The authors claim that because all the answer choices are drawn from the same source concept, the dataset requires models to actually infer from the question rather than take advantage of distributional biases. 
However, we observed that the current version of this dataset has a gender disparity, with a higher proportion of feminine pronouns used in negative contexts.

The authors show that state-of-the-art language models perform very poorly compared to human participants on their dataset. Although CQA introduces a benchmark for evaluating the commonsense reasoning capabilities of models, it is still unclear how and to what extent models actually do commonsense reasoning. CoS-E, on the other hand, builds on top of their benchmark and provides data in the form of explanations that can be used to study and analyze, as well as evaluate, a model's reasoning capabilities.

Natural language explanations Lei et al. (2016) proposed an approach for rationale generation for sentiment analysis by highlighting complete phrases in the input text that are by themselves sufficient to predict the desired output. Human-generated natural language explanations for classification data have been used in the past to train a semantic parser that in turn generates more noisy labeled data, which can be used to train a classifier (Hancock et al., 2018). Camburu et al. (2018) generate explanations and predictions for the natural language inference problem. However, the authors report that interpretability comes at the cost of a loss in performance on the popular Stanford Natural Language Inference (Bowman et al., 2015) dataset. We find that, unlike for e-SNLI, explanations for CQA lead to improved performance in what Camburu et al. (2018) would call the explain-predict setting. In the multi-modal setting, Rajani and Mooney (2018) showed that visual explanations can be leveraged to improve performance of VQA (Antol et al., 2015) and that an ensemble explanation is significantly better than individual explanations using both automated and human evaluations (Rajani and Mooney, 2017).

Knowledge Transfer in NLP Natural language processing has often relied on the transfer of world-knowledge through pretrained word vectors like Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). Contextualized word vectors (McCann et al., 2017; Peters et al., 2018) refined these representations for particular inputs by using different forms of general encoding. Language models trained from scratch on large amounts of data have made groundbreaking success in this direction by carefully fine-tuning for specific tasks (Dai and Le, 2015; Radford et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019). These models have the advantage that only a few parameters need to be learned from scratch and thus perform surprisingly well even on small amounts of supervised data. Fine-tuned language models do not, however, work as well for directly predicting answers for CQA (Talmor et al., 2019). In our work, we show how these fine-tuned language models are more effective when leveraged to generate explanations, and we empirically show that they also linguistically capture common sense.

3 Common Sense Explanations (CoS-E)

We used Amazon Mechanical Turk (MTurk) to collect explanations for our Common Sense Explanations (CoS-E) dataset. The CQA dataset consists of two splits – the question token split and the random split. Our CoS-E dataset and all our experiments use the more difficult random split, which is the main evaluation split according to Tal-

Figure 2: Analysis of the CoS-E v1.0 dataset.
Percent of the dataset that contains the answer, a distractor, ei- ther, at least one bigram from the question, and at least one trigram from the question. mor et al. (2019). We also release CoS-E for CQA v1.11. Human participants are given the question and answer choices along with the ground-truth an- swer choice. Turkers are prompted with the fol- lowing question: “Why is the predicted output the most appropriate answer?” Annotators were in- structed to highlight relevant words in the question that justifies the ground-truth answer choice and to provide a brief open-ended explanation based on the highlighted justification could serve as the commonsense reasoning behind the question. We collected these explanations for the CQA train- random-split and dev-random-split, which have a size of 7610 and 950 for v1.0 and 9741 and 1221 for v1.11 respectively. Table 1 shows a random sample of examples from our CoS-E dataset with both free-form explanations and highlighted text. From here on, we refer to the highlighted words as CoS-E-selected and the free-form explanation as CoS-E-open-ended. In MTurk, it is difficult to control the quality of open-ended annotations. So, we do some in- browser checks to avoid obviously bad explana- tions. Annotators cannot move forward if they do not highlight any relevant words in the question or if the length of explanations is less than 4 words. We also check that the explanation is not a sub- string of the question or the answer choices with- out any other extra words. We collect these ex- planations from only one annotator per example, so we also perform some post-collection checks to catch examples that are not caught by our previ- ous filters. We filter out explanations that could be classified as a template. For example, expla- nations of the form “<answer> is the only option that is [correct|obvious]” are deleted and then re- annotated. Figure 2 shows the distribution of explanations collected in the CoS-E v1.0 dataset. 58% of expla- nations from CoS-E contain the ground truth, but the effectiveness of CoS-E is not constrained only to those examples. Our model obtains state-of-the- art results by using CoS-E only during training. Empirical results show that even when using only those explanations that do not have any word over- lap with any of the answer choices, performance exceeds that of baselines that do not use CoS-E at all. We also observed that a significant pro- portion of the distractor choices are also present in the CoS-E dataset and on further analysis we found that for those examples, annotators resorted to explaining by eliminating the wrong choices. This indicates that it is difficult even for humans to reason about many of the examples in CQA. Be- cause CoS-E uses crowd-sourcing, it also adds di- versity of perspective and in particular diverse rea- soning on world knowledge to the CQA dataset. Even though many explanations remain noisy af- ter quality-control checks, we find that they are of sufficient quality to train a language model that generates commonsense reasoning. We refer to Section 5 for more details on empirical results and ablation analysis on CoS-E. 4 Algorithm We present Commonsense Auto-Generated Expla- nations (CAGE) and apply it to the CQA task. CAGE are generated by a language model and are used aas supplementary inputs to a classification model. Each example in CQA consists of a ques- tion, q, three answer choices, c0, c1, c2, and a la- beled answer a. 
Our CoS-E dataset adds a human explanation e_h for why a is the most appropriate choice. The output of CAGE is a language-model-generated explanation e that is trained to be close to e_h.

# 4.1 Commonsense Auto-Generated Explanations (CAGE)

In order to supply CAGE to a classification model, we fine-tune a language model (LM) to generate explanations from our CoS-E dataset. Our LM is the large, pre-trained OpenAI GPT (Radford et al., 2018), which is a multi-layer transformer (Vaswani et al., 2017) decoder. GPT is fine-tuned on the combination of the CQA and CoS-E datasets, as shown in the left half of Figure 1. We explore explanation generation in two settings – 1) explain-and-then-predict (reasoning) (Figure 1) and 2) predict-and-then-explain (rationalization).

Reasoning This is our main approach: the LM is fine-tuned conditioned on the question, the answer choices, and the human-generated explanation, but not on the actual predicted label. The input context during training is defined as follows:

C_RE = "q, c0, c1, or c2? commonsense says "

The model is trained to generate explanations e according to a conditional language modeling objective. The objective is to maximize

$$\sum_{i} \log P(e_i \mid e_{i-k}, \ldots, e_{i-1}, C_{RE}; \Theta)$$

where k is the size of the context window (in our case k is always greater than the length of e so that the entire explanation is within the context). The conditional probability P is modeled by a neural network with parameters Θ conditioned on C_RE and the previous explanation tokens. We call this kind of explanation reasoning because the explanations can be automatically generated during inference to provide additional context for commonsense question answering. In Section 5, we show that this approach outperforms the reported state-of-the-art on CQA by 10%. For the sake of completeness, we also experimented with the reverse of this approach, wherein the model first makes the predictions and then generates explanations based on those labels, which we call rationalization and discuss below.

Rationalization In rationalization, the LM conditions on the predicted labels along with the input to generate post-hoc rationalizations. The input context contains the output label and is constructed as follows:

C_RA = "q, c0, c1, or c2? a because "

The training objective for the LM in rationalization is similar to that in reasoning, except that in this case the model has access to the ground-truth labels for the input questions during training. Because the language model is conditioned on the predicted label, the explanations cannot be considered common-sense reasoning. Instead, they offer a rationalization that makes the model more accessible and interpretable. We find that this approach outperforms the current best model by 6% and also produces interestingly good quality explanations, as discussed in Section 5.

For CAGE, we generate sequences of maximum length 20, use a batch size of 36, and train for a maximum of 10 epochs, selecting the best model based on validation BLEU and perplexity scores. The learning rate was set to 1e-6, warmed up linearly with proportion 0.002, and weight decay was 0.01.

# 4.2 Commonsense Predictions with Explanations

Given either a human explanation from CoS-E or reasoning from a language model, we can then learn to perform predictions on the CQA task. For the classification module of our proposed approach, we adopt the widely popular BERT model (Devlin et al., 2019), which we refer to as just BERT.
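Before turning to the classification module, here is a minimal sketch of the explanation-generation fine-tuning step just described. It uses the Hugging Face GPT-2 implementation purely as a convenient stand-in for the original OpenAI GPT used in the paper, and the example fields are illustrative.

```python
# Sketch of one training step for CAGE reasoning-style explanation generation.
# GPT-2 (Hugging Face transformers) stands in for the OpenAI GPT used in the
# paper; the example below is illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

question = "People do what during their time off from work?"
choices = ["take trips", "brow shorter", "become hysterical"]
explanation = ("People usually do something relaxing, such as taking trips, "
               "when they don't need to work.")

# Conditioning context C_RE = "q, c0, c1, or c2? commonsense says "
context = "{} {}, {}, or {}? commonsense says ".format(question, *choices)

context_ids = tokenizer.encode(context)
explanation_ids = tokenizer.encode(explanation)
input_ids = torch.tensor([context_ids + explanation_ids])

# Conditional LM objective: the loss is computed only on explanation tokens,
# so the context positions are masked out with the ignore index -100.
labels = torch.tensor([[-100] * len(context_ids) + explanation_ids])
outputs = model(input_ids, labels=labels)
outputs.loss.backward()  # an optimizer step (e.g. Adam) would follow
```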
BERT can be fine-tuned for multiple choice question answering by adding a simple binary classifier that takes as input the final state corre- sponding to the the special [CLS] token placed at the start of all inputs to BERT models (Devlin et al., 2019). We apply this same approach to the CQA task. For each example in the dataset, we construct three input sequences for fine-tuning BERT. Each sequence is the concatenation of the question, a separator token [SEP], and one of the answer choices. If the approach requires expla- nation from either CoS-E or automatically gener- ated as in the CAGE, we concatenate the question, [SEP], the explanation, [SEP], and an answer choice. For BERT, the explanations share the same input representation as that of the questions. We also experimented with the explanation sharing the same representation as that of the answer choice but found that the performance decreased slightly. When explanations are used only during train- ing, the explanation variable is optional and the answer choices directly follow the question dur- ing evaluation. For all our experiments we used a train batch size of 24, test batch size of 12, 10 training epochs and maximum sequence length of 50 for the baseline and 175 for all experiments in- volving explanations. The right part of Figure 1 gives an overview of the classification module of our proposed approach. 4.3 Transfer to out-of-domain datasets Transfer without fine-tuning to out-of-domain NLP datasets is known to exhibit poor perfor- mance. For example, for the comparatively eas- ier natural langauge inference task with fixed la- bels, Bowman et al. (2015) show that the accuracy dropped by 25% when training on SNLI and eval- uating on SICK-E (Marelli et al., 2014). We study transfer of natural language explanations from the CQA to SWAG (Zellers et al., 2018) and Story Cloze Test (Mostafazadeh et al., 2016). Both the datasets are multiple-choice like CQA and the au- thors publicize them as commonsense reasoning and inference tasks. We use the GPT language model fine-tuned on CQA train and dev sets to generate explanations on the SWAG train and val sets (with 73546 and 20006 instances respectively) and the Story Cloze Spring 2016 val and test sets (with 1870 instances each). We then train a BERT classifier using the input instances and generated explanations and evaluate on the SWAG and Story Cloze test sets. 5 Experimental Results We present results on the CQA dataset using variations of our proposed Commonsense Auto- Generated Explanations (CAGE). All our models are based on BERT, which also serves as our base- line without any CoS-E or CAGE. All our ablation analysis is conducted on the CQA dev-random- split. We also show results for key models on the final test split.2 Method Accuracy (%) BERT (baseline) CoS-E-open-ended CAGE-reasoning 63.8 65.5 72.6 Table 2: Results on CQA dev-random-split with CoS-E used during training. Table 2 shows results that compare a BERT baseline that uses only the CQA inputs and the same architecture but trained using inputs that contain explanations from CoS-E during train- ing. The BERT baseline model reaches 64% accu- racy and adding open-ended human explanations (CoS-E-open-ended) alongside the questions dur- ing training results in a 2% boost in accuracy. By generating explanations as described in Sec- tion 4.1, we can give the commonsense question answering model access to an explanation that is not conditioned on the ground truth. 
These expla- nations (CAGE-reasoning) can be provided during both training and validation and increases the ac- curacy to 72%. Table 3 shows the results obtained on the CQA test split. We report our two best models that represent using human explanations (CoS-E-open- ended) for training only and using language model explanations (CAGE-reasoning) during both train and test. We compare our approaches to the best reported models for the CQA task (Talmor et al., # 2https://www.tau-nlp.org/csqa-leaderboard Method Accuracy (%) RC (Talmor et al., 2019) GPT (Talmor et al., 2019) CoS-E-open-ended CAGE-reasoning Human (Talmor et al., 2019) 47.7 54.8 60.2 64.7 95.3 Table 3: Test accuracy on CQA v1.0. The addition of CoS-E-open-ended during training dramatically im- proves performance. Replacing CoS-E during training with CAGE reasoning during both training and infer- ence leads to an absolute gain of 10% over the previous state-of-the-art. Method Accuracy (%) CoS-E-selected w/o ques CoS-E-limited-open-ended CoS-E-selected CoS-E-open-ended w/o ques CoS-E-open-ended* 53.0 67.6 70.0 84.5 89.8 Table 4: Oracle results on CQA dev-random-split using different variants of CoS-E for both training and valida- tion. * indicates CoS-E-open-ended used during both training and validation to contrast with CoS-E-open- ended used only during training in Table 2. 2019). We observe that using CoS-E-open-ended during training improves the state-of-the-art by ap- proximately 6%. Talmor et al. (2019) experimented with using Google search of “question + answer choice” for each example in the dataset and collected 100 top snippets per answer choice to be used as context for their Reading Comprehension (RC) model. They found that providing such extra data does not improve accuracy. On the other hand, us- ing CAGE-reasoning resulted in a gain of 10% accuracy over the previous state-of-the-art. This suggests that our CoS-E-open-ended and CAGE- reasoning explanations provide far more useful in- formation than what can be achieved through sim- ple heuristics like using Google search to find rel- evant snippets. We observed that our models’ per- formance on test is lower than those on validation and this trend was confirmed by the organizers of the task. To establish an oracle upper-bound on the per- formance, we also explored an experimental set- ting in which human-generated explanations from CoS-E are provided during both training and val- idation. These results are summarized in Table 4. We note that this is an unfair setting because the human that provided the explanation had access to the ground truth answer; these results merely serve as an oracle for how much potential benefit can come from using CoS-E-open-ended. If the open- ended human explanations (CoS-E-open-ended) are provided at inference time, performance jumps to approximately 90%. These results also motivate an attempt to automatically generate explanations that establish the world knowledge needed to solve CQA. CAGE-reasoning is our attempt towards this goal. Table 4 also contains results that use only the explanation and exclude the original question from CQA denoted by ‘w/o question’. These variants also use explanation during both train and valida- tion. For these experiments we give the explana- tion in place of the question followed by the an- swer choices as input to the model. 
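To make the input construction behind these variants concrete: it is plain string concatenation. The sketch below shows the explanation-augmented input, using the Hugging Face BertForMultipleChoice head as a stand-in for the paper's classifier (the pretrained weights are not fine-tuned here, so the printed choice is arbitrary).

```python
# Sketch of the classifier input construction (Section 4.2); the
# explanation-only ("w/o question") variant simply drops the question.
# BertForMultipleChoice stands in for the paper's BERT classifier.
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

question = "People do what during their time off from work?"
choices = ["take trips", "brow shorter", "become hysterical"]
explanation = ("People usually do something relaxing, such as taking trips, "
               "when they don't need to work.")

# Standard input:        question [SEP] explanation   paired with each choice.
# "w/o question" input:  explanation                  paired with each choice.
first_segment = [f"{question} [SEP] {explanation}"] * len(choices)

enc = tokenizer(first_segment, choices, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)

logits = model(**inputs).logits               # shape (1, 3)
print(choices[logits.argmax(dim=-1).item()])  # arbitrary until fine-tuned
```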
When the explanation consists of words humans selected as justification for the answer (CoS-E-selected), the model was able to obtain 53% in contrast to the 85% achieved by the open-ended human explana- tions (CoS-E-open-ended). Adding the question boosts performance for CoS-E-selected to 70%, again falling short of almost 90% achieved by CoS-E-open-ended. We conclude then that our full, open-ended CoS-E thus supply a significant source of information beyond simply directing the model towards the most useful information al- ready in the question. Method Accuracy (%) CAGE-reasoning BERT baseline CoS-E-open-ended 55.7 56.7 58.2 Table 5: Test results on CQA v1.11. We experimented with one final setting in which we only used open-ended explanations that did not contain any word from any answer choices (23%. In this setting, we call these “CoS-E-limited-open- ended” explanations because these explanations are limited in the choice of words allowed. We observe that even using these limited kind of ex- planations improves over the BERT baseline in Ta- ble 4, which suggests that the explanations are pro- viding useful information beyond just mentioning the correct or incorrect answers. We also evaluated our key models – CoS-E- open-ended used during training only and the CAGE reasoning on the v1.11 of CQA that was re- leased before the final submission. Table 5 shows the results obtained on the more challenging CQA v1.11. Camburu et al. (2018) empirically show that transferring explanations on the natural language inference (NLI) problem from SNLI to MultiNLI performs very poorly and is still an open challeng- ing problem. We study transfer of explanations on commonsense reasoning tasks. The NLI problem has a small fixed set of pre-defined labels unlike the commonsense reasoning tasks such as CQA, SWAG and Story Cloze. Table 6 shows the results obtained by the BERT baseline without explana- tions and using our transferred explanations from CQA to SWAG and Story Cloze. We observed that adding explanations led to a very small de- crease (< 0.6%) in the performance compared to the baseline for both tasks. Method SWAG Story Cloze BERT + expl transfer 84.2 83.6 89.8 89.5 Table 6: Results for explanation transfer from CQA to out-of-domain SWAG and Sotry Cloze tasks. 6 Analysis and Discussion In Table 2, using CAGE-reasoning at both train and validation resulted in an accuracy of 72%, but Table 4 shows that if CAGE-reasoning truly captured all information provided in CoS-E-open- ended, performance would be 90%. This gap be- tween CAGE and CoS-E prompted further analy- sis. We measure quality of CAGE using human evaluation and automated metrics. One of the met- rics is the BLEU score (Papineni et al., 2002), which measures syntactical precision by n-gram overlap. We also report perplexity, which pro- vides a token-level measure of how well the lan- guage models predict the next word. We ob- tained a peak BLEU score of 4.1 between CAGE- reasoning and CoS-E-open-ended and perplexity of 32. Language models that are not fine-tuned achieve BLEU score of only 0.8. Though it is clearly beneficial to fine-tune the LM and empiri- cal results suggested that CAGE increased perfor- mance, these scores suggest that humans and LMs have widely varying ways of providing useful ex- planations. Error analysis on the baseline BERT model that does not use any explanations indicates that the model performs poorly on questions that are longer on an average and are more compositional. 
The average length of such questions is 14 words, as opposed to the average length of 13 words for questions that the model using CAGE predicts incorrectly. Therefore, we can conclude that explanations help elucidate the longer and more complicated compositional questions.

Question: What could people do that involves talking?
Choices: confession, carnival, state park
CoS-E: confession is the only vocal action.
Reason: people talk to each other
Rationale: people talk to people

Question: A child wants to play, what would they likely want?
Choices: play tag, breathe, fall down
CoS-E: A child to play tag
Reason: Children want to play tag, and they want to play tag with their friends.
Rationale: Children want to play tag, what would they want to do?

Question: They were getting ready for a really long hike, he put the food in his what?
Choices: recycling center, house, backpack
CoS-E: Backpacks are used on hikes
Reason: a backpack is a place to store food and supplies.
Rationale: a backpack is used to carry food and supplies

Question: You can do knitting to get the feeling of what?
Choices: relaxation, your, arthritis
CoS-E: Your are focusing on a repetitive task.
Reason: knitting is the only thing that is relaxing.
Rationale: you can do knitting to get the feeling of what?

Table 7: Random sample of explanations generated by humans from CoS-E and our CAGE framework's reasoning and rationalization approaches. Boldface indicates the gold label. All the typos and grammatical errors are as they appear in the actual output sequence.

Table 7 shows a collection of examples from CQA, CoS-E, and CAGE samples. We observe that CAGE-reasoning typically employs a much simpler construction than CoS-E-open-ended. Nonetheless, this simple declarative mode can sometimes be more informative than CoS-E-open-ended. CAGE achieves this by either providing more explicit guidance (as in the final example of Table 7) or by adding meaningful context (as in the third example by introducing the word 'friends'). We observe that CAGE-reasoning contains at least one of the answer choices 43% of the time, out of which it contains the model's actual predicted answer choice 21% of the time. This suggests that there is more to the effectiveness of CAGE-reasoning than directly pointing to the answer.

Question: What is the main purpose of having a bath?
Choices: cleanness, use water, exfoliation, hygiene, wetness
Explanation: the only purpose of having a bath is to clean yourself.

Question: Where can you store you spare linens near your socks?
Choices: cabinet, chest, hospital, dresser drawers, home
Explanation: dresser drawer is the only place that you can store linens.

Question: Where do you find the most amount of leafs?
Choices: forrest, floral arrangement, compost pile, field, ground
Explanation: the most likely place to find leafs is in a garden.

Table 8: Random sample of incorrectly predicted instances by CAGE-reasoning on the CQA v1.11 dev set. Bold indicates the ground truth and underline indicates our CAGE's prediction.

We also carried out human evaluations to compare 400 examples of CoS-E and CAGE-reasoning. We asked human participants on Mechanical Turk to guess the most appropriate answer choice based on only the explanation, without the question. This tests whether the explanation by itself is sufficient for a human to arrive at the same answer as the neural network. We found that Turkers were able to arrive at the same answer as the model based on CAGE-reasoning 42% of the time.
This initially seemed low, but Turkers could only arrive at the same answer as humans using only CoS-E-open-ended 52% of the time From Table 7, we observed that CAGE- rationalization and CAGE-reasoning were often identical or differed only in word ordering or by replacing one of the answer choices with an- other. Humans could predict the answer based on just CAGE-rationalization 42% of the time, same as CAGE-reasoning. Although CAGE- rationalizations seem to be better than CAGE- reasoning, we find that it does not drastically im- prove the model’s language generating behavior which is what humans judge while trying to guess the right answer without the actual question. Even though CoS-E and CAGE are noisy, they empirically perform well when used by down- stream models for CQA, but this is not the case for misleading explanations. If we manually changed a random sample of 50 examples to have adversar- ial misleading explanations, performance dropped from 60% to 30%, well below the baseline of 50% validation accuracy. For example, we changed the explanation from “being able to use“ to “buying more will alleviate stress“ for the question “If a couple is having financial issues, buying products can lead to what“ with answer choices “economic boom”, “disagreements”, “being able to use”. Of the 70% of the errors made by a model trained on misleading explanations, 57% of them were instead correctly answered by our model trained with true CoS-E explanations. This demonstrates the effectiveness of having well-informing expla- nations. Camburu et al. (2018) use human explanations to train a neural network model on the SNLI dataset (Bowman et al., 2015). However, they obtain explanations at the cost of accuracy. The authors use the InferSent (Conneau et al., 2017) model for classification and add a one-layer LSTM as the explanation decoder. They report a slight drop in performance (< 1%) when training on human explanations and testing by first predict- ing an answer and then generating explanations. There is a further drop of approximately 2% ac- curacy when their model generates explanations prior to predicting an answer based only on that explanations. However, they also show that a bidirectional encoder with MLP-classifier obtains 96.83% accuracy when given only human expla- nations. CQA experiences a lift from explana- tions when e-SNLI performance appears to de- grade with explanations. For CQA, humans are able to predict the right answer only about 52% of the time using only human explanations from CoS-E. On the more challenging CQA v1.11, we ob- served that our CoS-E model trained on human explanations but evaluated without explanations obtains state-of-the-art performance, beating the BERT baseline by 1.5%. Surprisingly, we found that our CAGE-reasoning model performs slightly worse than the baseline. However, during error analysis we found that the language model expla- nations do not exhibit any obvious problems. Ta- ble 8 shows some samples that CAGE predicts incorrectly. We observed that many of the in- correctly predicted instances had the correct an- swer in the generated explanation, such as “dresser drawer” and “cleanness” in the first two exam- ples, but this information is not properly used by the BERT classifier. A more explicit method of guiding attention towards the relevant information in the explanations might be necessary for such cases. The model also frequently errs when the choices seem semantically close such as “forest” and “compost pile” in the third example. 
In these cases, the classifier often predicts the incorrect choice on v1.11, but was able to predict the cor- rect choice on v1.0 when only 3 choices were pre- sented. This suggests that simply concatenating explanations is unable to make sufficiently clear the more difficult cases of the newer version of CQA. Transferring the language model used to gener- ate commonsense explanations to out-of-domain datasets, SWAG and Story Cloze, led to slight decrease in performance. Upon inspection, the generated explanations exhibited little grammati- cal or syntactical errors and often contained appar- ently relevant information. Table 9 shows exam- ples from both datasets and the corresponding gen- SWAG Question: Choices: Explanation: Men are standing on motorbikes getting ready for a motocross competition. man places the ladders onto a fence and winds up a marching wall, high with hammer and a stone., man is talking to the camera and standing on a podium., man stands outside in the field going at arms of people and leading a long jumping calf in front., man drops the javelin to the ground and jumps it very high. man is talking to the camera and not the crowd. Question: Choices: Explanation: The man examines the instrument in his hand. The person studies a picture of the man playing the violin., The person holds up the violin to his chin and gets ready., The person stops to speak to the camera again., The person puts his arm around the man and backs away. the person is holding the instrument in his hand. Question: Choices: Explanation: The woman is seated facing the camera while another woman styles her hair. The woman in purple is wearing a blue dress and blue headband, using the pits to style her hair., The woman begins to cut the hair with her hair then serves it and begins brushing her hair and styling it., The woman puts some right braids on his., The woman continues to have her hair styled while turned away from the camera. the woman is using the braids to trim her hair. Story Cloze (ROCStories) Question: Choices: Explanation: My friends all love to go to the club to dance. They think it’s a lot of fun and always invite. I finally decided to tag along last Saturday. I danced terribly and broke a friend’s toe. My friends decided to keep inviting me out as I am so much fun., The next weekend, I was asked to please stay home. the next weekend, i would be asked to stay home Question: Choices: Explanation: Ari spends $20 a day on pickles. He decides to make his own to save money. He puts the pickles in brine. Ari waits 2 weeks for his pickles to get sour. Ari opens the jar to find perfect pickles., Ari’s pickles are sweet. pickles are the only thing that can be found in a jar. Question: Choices: Explanation: Gina sat on her grandpa’s bed staring outside. It was winter and his garden was dead until spring. Her grandpa had passed away so there would be no one to tend it. The weeds would take over and strangle the flowers. Gina asked her grandpa what kind of flowers he liked best., Gina decided to go outside and pick some of the weeds. the weeds would take over and strangle the flowers. Table 9: Random sample of explanations generated by the language model fine-tuned on CQA and transferred without further training to SWAG and Story Cloze. Bold indicates ground-truth. In the SWAG dataset, each erated explanations. question is a video caption from activity recogni- tion videos with choices about what might happen next and the correct answer is the video caption of the next scene. 
Generated explanations for SWAG appear to be grounded in the given images even though the language model was not at all trained on SWAG. Similarly, we found that for the Story Cloze dataset, the explanations had information pointing to the correct ending. Nonetheless, the classifier was unable to make use of this information to improve performance.

7 Conclusion and Future Work

We introduced the Common Sense Explanations (CoS-E) dataset built on top of the existing CommonsenseQA dataset. We also proposed the novel Commonsense Auto-Generated Explanations (CAGE) framework that trains a language model to generate useful explanations when fine-tuned on the problem input and human explanations. These explanations can then be used by a classifier model to make predictions. We empirically show that such an approach not only results in state-of-the-art performance on a difficult commonsense reasoning task, but also opens further avenues for studying explanation as it relates to interpretable commonsense reasoning. We also performed comprehensive error analyses of language model explanations and evaluated explanation transfer to out-of-domain datasets.

While CAGE focuses on generating explanations prior to predicting an answer, language models for explanation might also be jointly trained to predict the answer. They might also be extended to a broader set of tasks. With a sufficient dataset of explanations (analogous to CoS-E) for many tasks, it might be possible to fine-tune a more general explanatory language model that generates more useful explanations for unseen tasks.

With deferral of explanation to neural models, it will be crucial in the future to study the ethical implications of biases that are accumulated during pretraining or fine-tuning. Explanations must be carefully monitored to ensure that they do not reinforce negative or otherwise harmful reasoning that might then propagate into downstream models. For example, in CQA we observed significant gender disparity and bias, with a higher proportion of female pronouns used in negative contexts. This kind of bias has inevitably propagated into CoS-E, and we advise that these datasets and trained models be used with that in mind.

# Acknowledgements

We would like to thank Melvin Gruesbeck for the illustration of CAGE in Figure 1. We also thank the anonymous reviewers for their feedback.

# References

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answering. In International Conference on Computer Vision (ICCV).

Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP2015), pages 632–642.

Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural Language Inference with Natural Language Explanations. In Advances in Neural Information Processing Systems (NeurIPS2018), pages 9560–9572.

Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP2017), pages 670–680.

Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Proceedings of the 28th
International Conference on Neural Information Processing Systems (NIPS2015), pages 3079–3087. MIT Press. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Braden Hancock, Paroma Varma, Stephanie Wang, Martin Bringmann, Percy Liang, and Christopher R´e. 2018. Training classifiers with natural language In Proceedings of the 56th Annual explanations. Meeting of the Association for Computational Lin- guistics (ACL2018), volume 1, pages 1884–1895. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (ACL2018), pages 328–339. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. In Proceedings Rationalizing neural predictions. of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP2016), pages 107–117. Hector Levesque, Ernest Davis, and Leora Morgen- In stern. 2012. The winograd schema challenge. Thirteenth International Conference on the Princi- ples of Knowledge Representation and Reasoning. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 216–223, Reykjavik, Iceland. European Lan- guage Resources Association (ELRA). Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems, pages 6294–6305. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language de- cathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- Efficient estimation of word arXiv preprint frey Dean. 2013. representations in vector space. arXiv:1301.3781. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of In Proceedings of the 2016 commonsense stories. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL2016), pages 839– 849, San Diego, California. Association for Compu- tational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual meeting on Association for Computa- tional Linguistics (ACL2002), pages 311–318. As- sociation for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP2014), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. 
arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving lan- guage understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openai-assets/ research-covers/language-unsupervised/ language understanding paper.pdf. Nazneen Fatema Rajani and Raymond Mooney. 2018. Stacking with auxiliary features for visual question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2217–2226. Nazneen Fatema Rajani and Raymond J. Mooney. Ensembling visual explanations for vqa. 2017. the NIPS 2017 workshop In Proceedings of on Visually-Grounded Interaction and Language (ViGIL). Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional at- tention flow for machine comprehension. CoRR, abs/1611.01603. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems (NIPS2017), pages 5998–6008. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Terry Winograd. 1972. Understanding natural lan- guage. Cognitive psychology, 3(1):1–191. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceed- ings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP2018), pages 93–104. Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2018. Improving ques- tion answering by commonsense-based pre-training. arXiv preprint arXiv:1809.03568.
{ "id": "1806.08730" }
1906.02569
Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild
Accessibility is a major challenge of machine learning (ML). Typical ML models are built by specialists and require specialized hardware/software as well as ML experience to validate. This makes it challenging for non-technical collaborators and endpoint users (e.g. physicians) to easily provide feedback on model development and to gain trust in ML. The accessibility challenge also makes collaboration more difficult and limits the ML researcher's exposure to realistic data and scenarios that occur in the wild. To improve accessibility and facilitate collaboration, we developed an open-source Python package, Gradio, which allows researchers to rapidly generate a visual interface for their ML models. Gradio makes accessing any ML model as easy as sharing a URL. Our development of Gradio is informed by interviews with a number of machine learning researchers who participate in interdisciplinary collaborations. Their feedback identified that Gradio should support a variety of interfaces and frameworks, allow for easy sharing of the interface, allow for input manipulation and interactive inference by the domain expert, as well as allow embedding the interface in iPython notebooks. We developed these features and carried out a case study to understand Gradio's usefulness and usability in the setting of a machine learning collaboration between a researcher and a cardiologist.
http://arxiv.org/pdf/1906.02569
Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, James Zou
cs.LG, cs.HC, stat.ML
Presented at 2019 ICML Workshop on Human in the Loop Learning (HILL 2019), Long Beach, USA
null
cs.LG
20190606
20190606
# Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild

# Abubakar Abid * 1 2 Ali Abdalla * 2 Ali Abid * 2 Dawood Khan * 2 Abdulrahman Alfozan 2 James Zou 3

*Equal contribution. 1Department of Electrical Engineering, Stanford University, Stanford, California, USA. 2Gradio Inc., Mountain View, California, USA. 3Department of Biomedical Data Science, Stanford University, Stanford, California, USA. Correspondence to: Abubakar Abid <[email protected]>.

2019 ICML Workshop on Human in the Loop Learning (HILL 2019), Long Beach, USA. Copyright by the author(s).

# Abstract

Accessibility is a major challenge of machine learning (ML). Typical ML models are built by specialists and require specialized hardware/software as well as ML experience to validate. This makes it challenging for non-technical collaborators and endpoint users (e.g. physicians) to easily provide feedback on model development and to gain trust in ML. The accessibility challenge also makes collaboration more difficult and limits the ML researcher's exposure to realistic data and scenarios that occur in the wild. To improve accessibility and facilitate collaboration, we developed an open-source Python package, Gradio, which allows researchers to rapidly generate a visual interface for their ML models. Gradio makes accessing any ML model as easy as sharing a URL. Our development of Gradio is informed by interviews with a number of machine learning researchers who participate in interdisciplinary collaborations. Their feedback identified that Gradio should support a variety of interfaces and frameworks, allow for easy sharing of the interface, allow for input manipulation and interactive inference by the domain expert, as well as allow embedding the interface in iPython notebooks. We developed these features and carried out a case study to understand Gradio's usefulness and usability in the setting of a machine learning collaboration between a researcher and a cardiologist.

# 1. Introduction

Machine learning (ML) researchers are increasingly part of interdisciplinary collaborations in which they work closely with domain experts, such as doctors, physicists, geneticists, and artists (Bhardwaj et al., 2017; Radovic et al., 2018; Zou et al., 2018; Hertzmann, 2018). In a typical work flow, the domain experts will provide the data sets that the ML researcher analyzes, and will provide high-level feedback on the progress of a project. However, the domain expert is usually very limited in their ability to provide direct feedback on model performance, since, without a background in ML or coding, they are unable to try out the ML models during development.

This causes several problems during the course of the collaboration. First, the lack of an accessible model for collaborators makes it very difficult for domain experts to understand when a model is working well and communicate relevant feedback to improve model performance. Second, it makes it difficult to build models that will be reliable when deployed in the real world, since they were only trained on a fixed dataset and not tested with the domain shifts present in the real world ("in the wild"). Real-world data often includes artifacts that are not present in fixed training data; domain experts are usually aware of such artifacts and if they could access the model, they may be able to expose the model to such data, and gather additional data as needed (Thiagarajan et al., 2018). Lack of end-user engagement in model testing can lead to models that are biased or particularly inaccurate on certain kinds of samples. Finally, end-user domain experts who have not engaged with the model as it was being developed tend to exhibit a general distrust of the model when it is deployed.

In order to address these issues, we have developed an open-source python package, Gradio1, which allows researchers to rapidly generate a web-based visual interface for their ML models. This visual interface allows domain experts to interact with the model without writing any code. The package includes a library of common interfaces to support a wide variety of models, e.g. image, audio, and text-based models. Additionally, Gradio makes it easy for researchers to securely share public links to their models so that collaborators can try out the model directly from their browsers without downloading any software, and also lets the domain expert provide feedback on individual samples, fully enabling the feedback loop between domain experts and ML researchers.

1The name is an abbreviation of gradient input output.

Figure 1. An illustration of a web interface generated by Gradio, which allows users to drag and drop their own images (left), and get predicted labels (right). Gradio can provide an interface wrapper around any machine learning model (InceptionNetv3 is shown in the example here). The web interface can be shared with others using the share link button (center top), and collaborators can provide feedback by flagging particular input samples (bottom right).

In the rest of this paper, we begin by discussing related works and their limitations, which led to the development of Gradio (Section 2). We then detail the implementation of Gradio in Section 3. We have carried out a preliminary pilot study that includes an ML researcher and a clinical collaborator, which we describe in Section 4. We conclude with a discussion of the next steps for Gradio in Section 5.

# 2. Motivation

# 2.1. Related Works

The usefulness of visual interfaces in interdisciplinary collaborations has been observed by many prior researchers, who have typically created highly customized tools for specific use cases. For example, Xu et al. (2018) created an interactive dashboard to visualize ECG data and classify heart beats.
The authors found that the visualization significantly increased adoption of the ML method and improved clinical effectiveness at detecting arrhythmias.

However, visual interfaces that have been developed by prior researchers have been tightly restricted to a specific machine learning framework (Klemm et al., 2018) or to a specific application domain (Muthukrishna et al., 2019). When we interviewed our users with regards to such tools, they indicated that the limited scope of these tools would make them unsuitable for their particular work.

# 2.2. Design Requirements

We interviewed 12 machine learning researchers who participate in interdisciplinary collaborations. Based on the feedback gathered during these interviews, we identified the following key design requirements.

R1: Support a variety of interfaces and frameworks. Our users reported working with different kinds of models, where the input (or output) could be text, image, or even audio. To support the majority of models, Gradio must be able to offer developers a range of interfaces to match their needs. Each of these interfaces must be intuitive enough so that domain users can use them without a background in machine learning. In addition, ML researchers did not want to be restricted in which ML framework to use: Gradio needed to work with at least Scikit-Learn, TensorFlow, and PyTorch models.

R2: Easily share a machine learning model. Our users indicated that deploying a model so that it can be used by domain experts is very difficult. They said that Gradio should allow developers to easily create a link that can be shared with researchers, domain experts, and peers, ideally without having to package the model in a particular way or having to upload it to a hosting server.

R3: Manipulate input data. To support exploration and improvement of models, the domain expert needs the ability to manipulate the input: for example, the ability to crop an image, occlude certain parts of the image, edit the text, add noise to an audio recording, or trim a video clip. This helps the domain expert detect which features affect the model, and what kind of additional data needs to be collected in order to increase the robustness of the model.

R4: Running in iPython notebooks & embedding. Finally, our users asked that the interfaces be run from and embedded in Jupyter and Google's Colab notebooks, as well as embedded in websites. A use case for many of our researchers was to expose machine learning models publicly after being trained. This would allow their models to be tested by many people, e.g. in a citizen data science effort, or as part of a tutorial. They needed Gradio to allow sharing of models not only directly with collaborators, but also widely with the general public.

# 3. Implementation

Gradio is implemented as a python library, and can be installed from PyPi2. Once installed, running a Gradio interface requires minimal change to an ML developer's existing workflow. After the model is trained, the developer creates an Interface object with four required parameters (Fig. 2a). The first and second parameters are inputs and outputs, which take as arguments the input/output interfaces to be used. The developer can choose any of the subclasses of Gradio.AbstractInput and Gradio.AbstractOutput, respectively. Currently this includes a library of standard interfaces for handling image, text, and audio data. The next parameter is model_type, which is a string representing the type of model being passed in; this may be keras, pytorch, or sklearn – or it may be pyfunc, which handles arbitrary python functions. The final parameter is model, where the developer passes in the actual model to use for processing. Due to the common practice of pre-processing or post-processing the input and output of a specific model, we implemented a feature to instantiate Gradio.Input/Gradio.Output objects with custom parameters or alternatively supply custom pre-processing and post-processing functions. We give the developer the option of how the interface should be launched, as illustrated in the sketch below.
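As a rough illustration of this workflow — using the 2019-era API reported in this paper, which differs from later Gradio releases — a trained Keras image classifier could be wrapped and launched as follows. The model here is a placeholder, and the launch flags follow the paper's description of the four boolean options discussed next.

```python
# Sketch of wrapping a trained Keras model with the Gradio API described in
# this paper (circa 2019); later Gradio versions expose a different API.
import tensorflow as tf
import gradio

# Placeholder model: any trained tf.keras classifier would do here.
mdl = tf.keras.applications.MobileNetV2(weights="imagenet")

io = gradio.Interface(inputs="imageupload",   # drag-and-drop image input
                      outputs="label",        # predicted-label output
                      model_type="keras",     # also: "sklearn", "pytorch", "pyfunc"
                      model=mdl)

# share=True creates a public link served through an SSH tunnel, so a
# collaborator only needs a browser; the other flags are described below.
io.launch(inbrowser=True, inline=False, validate=True, share=True)
```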
The launch function accepts four boolean variables, which allow for displaying the model inbrowser (whether to display the model in a new browser window; Fig. 2b), displaying it inline (whether to display the model embedded in an interactive python environment such as Jupyter or Colab notebooks), attempting to validate (whether to validate the interface-model compatibility before launching), and creating a share link (whether to create a public link to the model interface). If Gradio creates a share link to the model, then the model continues running on the host machine, and an SSH tunnel is created allowing collaborators to pass data into the model remotely and observe the output. This allows the developer to keep running the model on the same machine, with the same hardware and software dependencies. The collaborator does not need any specialized hardware or software: just a browser running on a computer or mobile phone (the user interfaces are mobile-friendly).

The user of the interface can input any data and also manipulate the input by, for example, cropping an image (Fig. 2c). The data from the input is encrypted and then passed securely through the SSH tunnel to the developer's computer, which is actually running the model, and the output is passed back to the end user to display (www.gradio.app also serves as a coordinator service between public links and the SSH tunnels). The amount of time until the end user receives the output is simply the amount of time it takes for model inference plus any network latency in sending data. The collaborator can additionally flag data where the output was false, which sends the inputs and outputs (along with a message) to the ML researcher's computer, closing the feedback loop between researcher and domain expert (Fig. 2d).

2 pip install Gradio

# 4. Pilot Study: Echocardiogram Classification

We carried out a user study to understand the usefulness of Gradio in the setting of an ML collaboration. The participants were an ML researcher and a cardiologist who had collaboratively developed an ultrasound classification model that could determine whether a pacemaker was present in a patient from a single frame in an ultrasound video. The model scored an area under the receiver-operating characteristic curve (AUC) of 0.93 on the binary classification task.

We first asked participants a series of questions to record their typical workflow without using the Gradio library. We then taught the participants the Gradio library and let them use it for collaborative work. After being shown instructions about the Gradio library, the ML researcher was able to set up Gradio on a lab server that was running his model. The process of setting up Gradio took about 10 minutes, as some additional python dependencies needed to be installed on the lab server. After the installation, the researcher was able to copy and adapt the standard code from the Gradio documentation and did not run into any bugs. The ML researcher was advised to share the model with the cardiologist. He did so, and the cardiologist automatically began to test the robustness of the model by inputting his own images and observing the model's response. After using Gradio, the cardiologist gained more confidence and trust in the performance of this particular ML model.

We observed the researchers while they carried out these tasks, as we sought to answer four research questions:

# Q1: How do researchers share data and models with and without Gradio?
Before Gradio, the cardiologist provided the entire dataset of videos to the machine learning researcher in monthly batches. These batches were the latest set of videos that were available to the cardiologist. The ML researcher would train the model on increasingly larger datasets and report metrics such as classification accuracy and AUC to the cardiologist. Beyond this, there was very little data sharing from the cardiologist, and the researcher did not ever share the model with the cardiologist.

[Figure 2, panel (a) "Introducing Gradio", shows the code: import tensorflow as tf; import gradio; mdl = tf.keras.models.Sequential(); # ... define and train the model as you would normally; io = gradio.Interface(inputs="imageupload", outputs="label", model_type="keras", model=mdl); io.launch(). Panel (b) shows the host machine and a remote user.] Figure 2. A diagram of the steps to share a machine learning model using Gradio. Steps: (a) The machine learning researcher defines the input and output interface types, and launches the interface either inline or in a new browser tab. (b) The interface launches, and optionally, a public link is created that allows remote collaborators to input their own data into the model. (c) The users of the interface can also manipulate the model in natural ways, such as cropping images or obscuring parts of the image. (d) All of the model computation is done by the host (i.e. the computer that called Gradio). The collaborator or user can interact with the model on their browser without local computation, and can provide real-time feedback (e.g. flagging incorrect answers) which is sent to the host.

With Gradio, the cardiologist opened the link to the model sent by the ML researcher. Even though it was his first time using the model, the cardiologist immediately began to probe the model by inputting an ultrasound image from his desktop into the Gradio model. He chose an image which clearly contained a pacemaker, see Fig. 3(a). The model correctly predicted that a pacemaker was present in the patient. The cardiologist then occluded the pacemaker using the paint tool built into Gradio, see Fig. 3(b). After completely occluding the pacemaker, the cardiologist resubmitted the image; the model switched its prediction to "no pacemaker," which elicited an audible sigh of relief from the ML researcher and cardiologist.

The cardiologist proceeded to choose more difficult images, generally finding that the model correctly determined when a pacemaker was and was not present in the image. He also occluded different regions in the image to serve as a comparison to occluding the pacemaker. The model performance was generally found to be accurate and robust; a notable exception was the case of flipping over the vertical axis, which would generally significantly affect the prediction accuracy. When this happened, the cardiologist flagged the problematic images, sending them to the ML researcher's computer for further analysis.

# Q2: What features of Gradio are most used and most unused by developers and collaborators?

We found that the machine learning researcher quickly understood the different interfaces available to him for his model. He selected the appropriate interface for his model, and set share=True to generate a unique publicly accessible link for the model. When he shared the link with the cardiologist, the cardiologist spent a great deal of time trying different ways to manipulate a few sample images to affect the model prediction.
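Purely as a hypothetical illustration of the kind of probing the cardiologist carried out through the browser's editing tools, an occlusion test can also be expressed in a few lines of code; `model` and `frame` below are assumed to be the trained classifier and one ultrasound frame.

```python
# Hypothetical sketch of an occlusion test: zero out a rectangular region of
# the input image and compare the model's predictions before and after.
import numpy as np

def occlude(image, top, left, height, width, fill=0.0):
    """Return a copy of `image` (H, W, C) with one rectangle filled in."""
    probe = image.copy()
    probe[top:top + height, left:left + width, :] = fill
    return probe

def predict(model, image):
    # Assumes a Keras-style model that takes a batch of images.
    return model.predict(image[np.newaxis, ...])[0]

# Example usage (illustrative coordinates; `model` and `frame` are assumed):
# baseline = predict(model, frame)
# masked   = predict(model, occlude(frame, top=40, left=80, height=32, width=32))
# A large change between `baseline` and `masked` suggests the region matters.
```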
The cardiologist treated this as a chal- lenge and tried to cause the model to make a mistake in an adversarial manner. The cardiologist also used the flagging feature in the cases where the model did make a mistake. The “share” button which appears at the top of the interface was not used; instead, the ML researcher simply copied and pasted the URL to share it with his collaborator. And de- spite the general interest in running model interfaces inside iPython notebooks, our users did not use that feature. # Q3: What kind of model feedback do collaborators pro- vide to developers through the Gradio interface? The collaborator tested various transformations on the test images, including changing the orientation of the image to occluding parts of the image. Whenever this would cause the model to make a mistake on an image, the cardiologist would flag the image, but would usually pass a blank mes- sage. Thus, it seemed that the collaborator would only send images that were misclassified back to the ML researcher. Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild INPUT OUTPUT pacemaker (a) INPUT OUTPUT no pacemaker pacemaker (b) Figure 3. In our pilot study, the clinician used Gradio to test a model that classified echocardiograms based on the presence of a pacemaker. (a) The clinician submitted his own image of an echocardiagram, similar to the one shown here, and the model correctly predicted that a pacemaker was present. (b) The clinician used Gradio’s built-in tools to obscure the pacemaker and the model correctly predicted the absence of a pacemaker. (The white arrows are included to point out the location of the pacemaker to the reader; they were not present in the original images). # Q4: What additional features are requested by the de- velopers and collaborators? Our users verbally requested two features as they were us- ing the model. First, the cardiologist asked if it would be possible for the ML developer to pre-supply images to the interface. This way, he would not need to find an ultrasound image from his computer, but would be able to choose one from a set of images already displayed to him. Second, the collaborator was used to seeing saliency maps for ultrasound images that the ML researcher had gener- ated in previous updates. The collaborator expressed that it would be very helpful for him to see these saliency maps, especially as he was choosing what areas inside of the image to occlude. # 5. Discussion & Next Steps In this paper, we describe a Python package that allows ma- chine learning researchers to easily create visual interfaces for their machine learning models, and share them with col- laborators. Collaborators are then able to interact with the machine learning models without writing code, and provide feedback to the machine learning researchers. In this way, collaborators and end users can test machine learning mod- els in settings that are realistic and can provide new data to build models that work reliably in the wild. We think this will lower the barrier of accessibility for do- main experts to use machine learning and take a stronger part in the development cycle of models. At a time when machine learning is becoming more and more ubiquitous, the barrier to accessibility is still very high. We carried out a case study to evaluate the usability and use- fulness of Gradio within an existing collaboration between an ML researcher and a cardiologist working on detecting pacemakers in ultrasounds. 
We were surprised to see that both the ML researcher and the domain expert seemed to be relieved when the model worked, as though they expected it not to. We think because researchers can not manipulate inputs the way a domain expert would, they generally have less confidence in the model’s robustness in the wild. At Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild the same time, because domain experts have not interacted with the model or used it, they feel the same doubt in its robustness. This study was however limited in scope to one pair of users, and a short time. We plan to conduct more us- ability studies and quantitative measures of trust in machine learning models to get a better holistic view of the usability and usefulness of Gradio. Similarly, quantitative measures of user satisfaction on both the end of machine learning researcher and domain expert can be used to evaluate the product and guide its further development. The next steps in the development of the package would be creating features for saliency, handling other types of inputs (ex: tabular data), handling bulk inputs, as well as helping ML researchers reach domain experts even if they don’t already have access to them. clinical models handle real-world domain shifts? arXiv preprint arXiv:1809.07806, 2018. Xu, K., Guo, S., Cao, N., Gotz, D., Xu, A., Qu, H., Yao, Z., and Chen, Y. Ecglens: Interactive visual exploration of large scale ecg data for arrhythmia detection. In Proceed- ings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 663. ACM, 2018. Zou, J., Huss, M., Abid, A., Mohammadi, P., Torkamani, A., and Telenti, A. A primer on deep learning in genomics. Nature genetics, pp. 1, 2018. Additional documentation about Gradio and example code can be found at: www.gradio.app. # Acknowledgments We thank all of the machine learning researchers who talked with us to help us understand the current difficulties in sharing machine learning models with collaborators, and gave us feedback during the development of Gradio. In particular, we thank Amirata Ghorbani and David Ouyang for participating in our pilot study and for sharing their echocardiogram models using Gradio. # References Bhardwaj, R., Nambiar, A. R., and Dutta, D. A study of machine learning in healthcare. In 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), volume 2, pp. 236–241. IEEE, 2017. Hertzmann, A. Can computers create art? In Arts, volume 7, pp. 18. Multidisciplinary Digital Publishing Institute, 2018. Klemm, S., Scherzinger, A., Drees, D., and Jiang, X. Barista- a graphical tool for designing and training deep neural networks. arXiv preprint arXiv:1802.04626, 2018. Muthukrishna, D., Parkinson, D., and Tucker, B. Dash: Deep learning for the automated spectral classifica- arXiv preprint tion of supernovae and their hosts. arXiv:1903.02557, 2019. Radovic, A., Williams, M., Rousseau, D., Kagan, M., Bona- corsi, D., Himmel, A., Aurisano, A., Terao, K., and Wongjirad, T. Machine learning at the energy and in- tensity frontiers of particle physics. Nature, 560(7716): 41, 2018. Thiagarajan, J. J., Rajan, D., and Sattigeri, P. Can deep
{ "id": "1903.02557" }
1906.02738
Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading
Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous. We present a new end-to-end approach to contentful neural conversation that jointly models response generation and on-demand machine reading. The key idea is to provide the conversation model with relevant long-form text on the fly as a source of external knowledge. The model performs QA-style reading comprehension on this text in response to each conversational turn, thereby allowing for more focused integration of external knowledge than has been possible in prior approaches. To support further research on knowledge-grounded conversation, we introduce a new large-scale conversation dataset grounded in external web pages (2.8M turns, 7.4M sentences of grounding). Both human evaluation and automated metrics show that our approach results in more contentful responses compared to a variety of previous methods, improving both the informativeness and diversity of generated output.
http://arxiv.org/pdf/1906.02738
Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao
cs.CL, cs.AI, cs.LG
ACL 2019 long paper
null
cs.CL
20190606
20190607
9 1 0 2 n u J 7 ] L C . s c [ 2 v 8 3 7 2 0 . 6 0 9 1 : v i X r a # Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading Lianhui Qin†, Michel Galley‡, Chris Brockett‡, Xiaodong Liu‡, Xiang Gao‡, Bill Dolan‡, Yejin Choi† and Jianfeng Gao‡ † University of Washington, Seattle, WA, USA ‡ Microsoft Research, Redmond, WA, USA {lianhuiq,yejin}@cs.washington.edu {mgalley,Chris.Brockett,xiaodl,xiag,billdol,jfgao}@microsoft.com # Abstract Although neural conversation models are ef- fective in learning how to produce fluent re- sponses, their primary challenge lies in know- ing what to say to make the conversation con- tentful and non-vacuous. We present a new end-to-end approach to contentful neural con- versation that jointly models response gener- ation and on-demand machine reading. The key idea is to provide the conversation model with relevant long-form text on the fly as a source of external knowledge. The model performs QA-style reading comprehension on this text in response to each conversational turn, thereby allowing for more focused inte- gration of external knowledge than has been possible in prior approaches. To support fur- ther research on knowledge-grounded conver- sation, we introduce a new large-scale conver- sation dataset grounded in external web pages (2.8M turns, 7.4M sentences of grounding). Both human evaluation and automated metrics show that our approach results in more con- tentful responses compared to a variety of pre- vious methods, improving both the informa- tiveness and diversity of generated output. A woman fell 30,000 feet from an airplane and survived. The page states that a 2009 report found the plane only fell several hundred meters. Well if she only fell a few hundred meters and survived then I 'm not impressed at all. Still pretty incredible , but quite a bit different that 10,000 meters. She holds the Guinness world record for surviving the highest fall without a parachute: 10,160 metres (33,330 ft). In 2005, Vulovie's fall was recreated by the American television MythBusters. Four years later, [...] two Prague- based journalists, claimed that Flight 367 had been mistaken for an enemy aircraft and shot down by the Czechoslovak Air Force at an altitude of 800 metres (2,600 ft). Figure 1: Users discussing a topic defined by a Wikipedia article. In this real-world example from our Reddit dataset, information needed to ground responses is distributed throughout the source document. # Introduction While end-to-end neural conversation models (Shang et al., 2015; Sordoni et al., 2015; Vinyals and Le, 2015; Serban et al., 2016; Li et al., 2016a; Gao et al., 2019a, etc.) are effective in learning how to be fluent, their responses are often vacu- ous and uninformative. A primary challenge thus lies in modeling what to say to make the conver- sation contentful. Several recent approaches have attempted to address this difficulty by condition- ing the language decoder on external information sources, such as knowledge bases (Agarwal et al., 2018; Liu et al., 2018a), review posts (Ghazvinine- jad et al., 2018; Moghe et al., 2018), and even im- ages (Das et al., 2017; Mostafazadeh et al., 2017). However, empirical results suggest that condition- ing the decoder on rich and complex contexts, while helpful, does not on its own provide suffi- cient inductive bias for these systems to learn how to achieve deep and accurate integration between external knowledge and response generation. 
We posit that this ongoing challenge demands a more effective mechanism to support on-demand knowledge integration. We draw inspiration from how humans converse about a topic, where peo- ple often search and acquire external information as needed to continue a meaningful and informa- tive conversation. Figure 1 illustrates an example human discussion, where information scattered in separate paragraphs must be consolidated to com- pose grounded and appropriate responses. Thus, the challenge is to connect the dots across differ- ent pieces of information in much the same way that machine reading comprehension (MRC) sys- tems tie together multiple text segments to provide a unified and factual answer (Seo et al., 2017, etc.). We introduce a new framework of end-to- end conversation models that jointly learn re- sponse generation together with on-demand ma- chine reading. We formulate the reading com- prehension task as document-grounded response generation: given a long document that supple- ments the conversation topic, along with the con- versation history, we aim to produce a response that is both conversationally appropriate and in- formed by the content of the document. The key idea is to project conventional QA-based reading comprehension onto conversation response gener- ation by equating the conversation prompt with the question, the conversation response with the answer, and external knowledge with the con- text. The MRC framing allows for integration of long external documents that present notably richer and more complex information than rela- tively small collections of short, independent re- view posts such as those that have been used in prior work (Ghazvininejad et al., 2018; Moghe et al., 2018). to facili- tate research on knowledge-grounded conversa- tion (2.8M turns, 7.4M sentences of grounding) that is at least one order of magnitude larger than existing datasets (Dinan et al., 2019; Moghe et al., 2018). This dataset consists of real-world conver- sations extracted from Reddit, linked to web doc- uments discussed in the conversations. Empirical results on our new dataset demonstrate that our full model improves over previous grounded response generation systems and various ungrounded base- lines, suggesting that deep knowledge integration is an important research direction.1 # 2 Task We propose to use factoid- and entity-rich web documents, e.g., news stories and Wikipedia pages, as external knowledge sources for an open- ended conversational system to ground in. Formally, we are given a conversation history 1Code for reproducing our models and data is made publicly available at https://github.com/qkaren/ converse_reading_cmr. of turns X = (x1, . . . , xM ) and a web docu- ment D = (s1, . . . , sN ) as the knowledge source, where si is the ith sentence in the document. With the pair (X, D), the system needs to generate a natural language response y that is both conversa- tionally appropriate and reflective of the contents of the web document. # 3 Approach Our approach integrates conversation generation with on-demand MRC. Specifically, we use an MRC model to effectively encode the conversation history by treating it as a question in a typical QA task (e.g., SQuAD (Rajpurkar et al., 2016)), and encode the web document as the context. We then replace the output component of the MRC model (which is usually an answer classification mod- ule) with an attentional sequence generator that generates a free-form response. 
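To make the input/output contract of the task concrete, here is a minimal sketch; the class and function names are illustrative and are not taken from the released code:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GroundedConversationInstance:
    """One example for document-grounded response generation."""
    history: List[str]   # conversation turns x_1 ... x_M
    document: List[str]  # sentences s_1 ... s_N of the grounding web page
    response: str        # gold response y

def generate_response(history: List[str], document: List[str]) -> str:
    """Return a response that is conversationally appropriate and
    reflects the content of the document (the model's job)."""
    raise NotImplementedError
```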
We refer to our approach as CMR (Conversation with on-demand Machine Reading). In general, any off-the-shelf MRC model could be applied here for knowledge comprehension. We use Stochastic Answer Net- works (SAN)2 (Liu et al., 2018b), a performant machine reading model that until very recently held state-of-the-art performance on the SQuAD benchmark. We also employ a simple but effec- tive data weighting scheme to further encourage response grounding. # 3.1 Document and Conversation Reading We adapt the SAN model to encode both the in- put document and conversation history and for- ward the digested information to a response gen- erator. Figure 2 depicts the overall MRC architec- ture. Different blocks capture different concepts of representations in both the input conversation his- tory and web document. The leftmost blocks rep- resent the lexicon encoding that extracts informa- tion from X and D at the token level. Each token is first transformed into its corresponding word embedding vector, and then fed into a position- wise feed-forward network (FFN) (Vaswani et al., 2017) to obtain the final token-level representa- tion. Separate FFNs are used for the conversation history and the web document. The next block is for contextual encoding. The aforementioned token vectors are concate- nated with pre-trained 600-dimensional CoVe vec- tors (McCann et al., 2017), and then fed to a BiL- # 2https://github.com/kevinduh/san_mrc G&D Conversation history = 1. Lexicon Encoding 2. Contextual Encoding Model Output So he’s the CEO of Apple. Le L Steve Jobs was a mediocre programmer ' Emb 11 Bi and one of the greatest designers [...] H Poor J ! FEN] |! yoru Generator ' |» <BOS> Apple 5 | | soe Cross-Attn PX ov, \CBO, 5 OF, \APPIC, SEOS> Document 1 tot 1 \ \ \ \ ‘ 2 3. Memory \ rhea tia foo t <title> Steve Jobs </title> <p> : es -Memory Steven Paul Jobs was an American i poy i - =0+0+0—+O entrepreneur, businessman, inventor, ' 1! Bie Self- Bi- ' Vy tay t y) 4 and industrial designer. He was the ‘Emb | | FFN| | | | stm Attn Ls™| | i van y y chairman, chief executive officer (CEO), ' = - _ =| im =| eS and co-founder of Apple Inc.; [...] 1 1 1 FCoVe 1 \ ot ' Figure 2: Model Architecture for Response Generation with on-demand Machine Reading: The first blocks of the MRC-based encoder serve as a lexicon encoding that maps words to their embeddings and transforms with position-wise FFN, independently for the conversation history and the document. The next block is for contextual encoding, where BiLSTMs are applied to the lexicon embeddings to model the context for both conversation history and document. The last block builds the final encoder memory, by sequentially applying cross-attention in order to integrate the two information sources, conversation history and document, self-attention for salient information retrieval, and a BiLSTM for final information rearrangement. The response generator then attends to the memory and generates a free-form response. each decoding step t with a hidden state ht, we generate a token yt based on the distribution: STM that is shared for both conversation history and web document. The step-wise outputs of the BiLSTM carry the information of the tokens as well as their left and right context. p(yt) = softmax((W1ht + b)/τ ), (1) The last block builds the memory that sum- marizes the salient information from both X and D. The block first applies cross-attention to in- tegrate information from the conversation history X into the document representation. 
Each contex- tual vector of the document D is used to compute attention (similarity) distribution over the contex- tual vectors of X, which is concatenated with the weighted average vector of X by the resulting dis- tribution. Second, a self -attention layer is applied to further ingest and capture the most salient in- formation. The output memory, M ∈ Rd×n, is obtained by applying another BiLSTM layer for final information rearrangement. Note that d is the hidden size of the memory and n is the length of the document. where τ > 0 is the softmax temperature. The hid- den state ht is defined as follows: ht = W2[zt ++fattention(zt, M )]. (2) Here, [· ++·] indicates a concatenation of two vec- tors; fattention is a dot-product attention (Vaswani et al., 2017); and zt is a state generated by GRU(et−1, ht−1) with et−1 being the embedding of the word yt−1 generated at the previous (t − 1) step. In practice, we use top-k sample decoding to draw yt from the above distribution p(yt). Sec- tion 5 provides more details about the experimen- tal configuration. # 3.3 Data Weighting Scheme We further propose a simple data weighting scheme to encourage the generation of grounded responses. The idea is to bias the model train- ing to fit better to those training instances where the ground-truth response is more closely relevant to the document. More specifically, given a train- ing instance (X, D, y), we measure the closeness score c ∈ R between the document D and the gold response y (e.g., with the NIST (Doddington, 2002) or BLEU (Papineni et al., 2002) metrics). In each training data batch, we normalize the close- ness scores of all the instances to have a sum of # 3.2 Response Generation Having read and processed both the conversation history and the extra knowledge in the document, the model then produces a free-form response y = (y1, . . . , yT ) instead of generating a span or per- forming answer classification as in MRC tasks. We use an attentional recurrent neural network decoder (Luong et al., 2015) to generate response tokens while attending to the memory. At the be- ginning, the initial hidden state h0 is the weighted sum of the representation of the history X. For Train Valid Test # dialogues # utterances # documents # document sentences 1.2k 28.4k 0.12M 0.34M 2.36M 28.4k 1.2k 15.18M 0.58M 1.68M 3.1k 3.1k Average length (# words): utterances document sentences 18.74 13.72 18.84 14.17 18.48 14.15 Table 1: Our grounded conversational dataset. 1, and weight each of the instances with its cor- responding normalized score when evaluating the training loss. This training regime promotes in- stances with grounded responses and thus encour- ages the model to better encode and utilize the in- formation in the document. # 4 Dataset To create a grounded conversational dataset, we extract conversation threads from Reddit, a popu- lar and large-scale online platform for news and discussion. In 2015 alone, Reddit hosted more than 73M conversations.3 On Reddit, user sub- missions are categorized by topics or “subreddits”, and a submission typically consists of a submis- sion title associated with a URL pointing to a news or background article, which initiates a discus- sion about the contents of the article. This ar- ticle provides framing for the conversation, and this can naturally be seen as a form of ground- ing. 
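Schematically, each grounded training instance pairs a Reddit thread with the article linked from its submission. The sketch below is illustrative only and is not the released extraction pipeline, which is described in the remainder of this section:

```python
from dataclasses import dataclass
from typing import Iterator, List, Tuple

@dataclass
class RedditThread:
    title: str            # submission title
    url: str              # link to the news or background article
    comments: List[str]   # a linear chain of discussion turns

def make_instances(thread: RedditThread,
                   article_sentences: List[str]
                   ) -> Iterator[Tuple[List[str], List[str], str]]:
    """Yield (history, document, response) triples: each turn becomes a
    target response, grounded in the article linked by the submission."""
    turns = [thread.title] + thread.comments
    for i in range(1, len(turns)):
        yield turns[:i], article_sentences, turns[i]
```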
Another factor that makes Reddit conversa- tions particularly well-suited for our conversation- as-MRC setting is that a significant proportion of these URLs contain named anchors (i.e., ‘#’ in the URL) that point to the relevant passages in the document. This is conceptually quite similar to MRC data (Rajpurkar et al., 2016) where typically only short passages within a larger document are relevant in answering the question. We reduce spamming and offensive language by manually curating a list of 178 relatively “safe” subreddits and 226 web domains from which the web pages are extracted. To convert the web page of each conversation into a text document, we ex- tracted the text of the page using an html-to-text converter,4 while retaining important tags such as <title>, <h1> to <h6>, and <p>. This means the 3https://redditblog.com/2015/12/31/ reddit-in-2015/ # 4https://www.crummy.com/software/ BeautifulSoup entire text of the original web page is preserved, but these main tags retain some high-level struc- ture of the article. For web URLs with named an- chors, we preserve that information by indicating the anchor text in the document with tags <an- chor> and </anchor>. As the whole documents in the dataset tend to be lengthy, anchors offer im- portant hints to the model about which parts of the documents should likely be focused on in order to produce a good response. We considered it sensi- ble to keep them as they are also available to the human reader. After filtering short or redacted turns, or which quote earlier turns, we obtained 2.8M conversa- tion instances respectively divided into train, vali- dation, and test (Table 1). We used different date ranges for these different sets: years 2011-2016 for train, Jan-Mar 2017 for validation, and the rest of 2017 for test. For the test set, we select con- versational turns for which 6 or more responses were available, in order to create a multi-reference test set. Given other filtering criteria such as turn length, this yields a 6-reference test set of size 2208. For each instance, we set aside one of the 6 human responses to assess human performance on this task, and the remaining 5 responses serve as ground truths for evaluating different systems.5 Table 1 provides statistics for our dataset, and Fig- ure 1 presents an example from our dataset that also demonstrates the need to combine conversa- tion history and background information from the document to produce an informative response. To enable reproducibility of our experiments, we crawled web pages using Common Crawl (http://commoncrawl.org), a service that crawls web pages and makes its historical crawls available to the public. We also release the code (URL redacted for anonymity) to recreate our dataset from both a popular Reddit dump6 and Common Crawl, and the latter service ensures that anyone reproducing our data extraction exper- iments would retrieve exactly the same web pages. We made a preliminary version of this dataset available for a shared task (Galley et al., 2019) at Dialog System Technology Challenges (DSTC) (Yoshino et al., 2019). Back-and-forth with partic- 5While this is already large for a grounded dataset, we could have easily created a much bigger one given how abun- dant Reddit data is. We focused instead on filtering out spam- ming and offensive language, in order to strike a good balance between data quality and size. # 6http://files.pushshift.io/reddit/ ipants helped us iteratively refine the dataset. 
The code to recreate this dataset is included.7 # 5 Experiments # 5.1 Systems We evaluate our systems and several competitive baselines: SEQ2SEQ (Sutskever et al., 2014) We use a stan- dard LSTM SEQ2SEQ model that only exploit the conversation history for response generation, without any grounding. This is a competitive base- line initialized using pretrained embeddings. MEMNET: We use a Memory Network designed for grounded response generation (Ghazvinine- jad et al., 2018). An end-to-end memory net- work (Sukhbaatar et al., 2015) encodes conversa- tion history and sentences in the web documents. Responses are generated with a sequence decoder. CMR-F : To directly measure the effect of incor- porating web documents, we compare to a base- line which omits the document reading component of the full model (Figure 2). As with the SEQ2SEQ approach, the resulting model generates responses solely based on conversation history. CMR: To measure the effect of our data weighting scheme, we compare to a system that has identical architecture to the full model, but is trained with- out associating weights to training instances. CMR+W: As described in section 3, the full model reads and comprehends both the conversa- tion history and document using an MRC compo- nent, and sequentially generates the response. The model is trained with the data weighting scheme to encourage grounded responses. Human: To get a better sense of the systems’ performance relative to an upper bound, we also evaluate human-written responses using different metrics. As described in Section 4, for each test instance, we set aside one of the 6 human refer- ences for evaluation, so the ‘human’ is evaluated against the other 5 references for automatic eval- uation. To make these results comparable, all the systems are also automatically evaluated against the same 5 references. 7We do not report on shared task systems here, as these systems do not represent our work and some of these sys- tems have no corresponding publications. Along with the data described here, we provided a standard SEQ2SEQ base- line to the shared task, which we improved for the purpose of this paper (improved BLEU, NIST and METEOR). Our new SEQ2SEQ baseline is described in Section 5. # 6 Experiment Details For all the systems, we set word embedding di- mension to 300 and used the pretrained GloVe8 for initialization. We set hidden dimensions to 512 and dropout rate to 0.4. GRU cells are used for SEQ2SEQ and MEMNET (we also tested LSTM cells and obtained similar results). We used the Adam optimizer for model training, with an ini- tial learning rate of 0.0005. Batch size was set to 32. During training, all responses were truncated to have a maximum length of 30, and maximum query length and document length were set to 30, 500, respectively. we used regular teacher-forcing decoding during training. For inference, we found that top-k random sample decoding (Fan et al., 2018) provides the best results for all the systems. That is, at each decoding step, a token was drawn from the k most likely candidates according to the distribution over the vocabulary. Similar to recent work (Fan et al., 2018; Edunov et al., 2018), we set k = 20 (other common k values like 10 gave similar results). We selected key hyperparameter configurations on the validation set. # 6.1 Evaluation Setup Table 2 shows automatic metrics for quantitative evaluation over three qualities of generated texts. 
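Before turning to the evaluation metrics, the top-k sample decoding used at inference time (k = 20) can be sketched as follows; this is an illustrative PyTorch-style snippet, not the authors' implementation:

```python
import torch

def top_k_sample(logits: torch.Tensor, k: int = 20,
                 temperature: float = 1.0) -> int:
    """Draw one token id from the k most likely candidates at a decoding
    step (Fan et al., 2018). `logits` has shape [vocab_size]."""
    topk_logits, topk_ids = torch.topk(logits, k)
    probs = torch.softmax(topk_logits / temperature, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return topk_ids[choice].item()
```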
We measure the overall relevance of the generated responses given the conversational history by using standard Machine Translation (MT) metrics, comparing generated outputs to ground-truth responses. These metrics include BLEU-4 (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), and NIST (Doddington, 2002). The latter metric is a variant of BLEU that weights n-gram matches by their information gain, effectively penalizing uninformative n-grams (such as "I don't know"), which makes it a relevant metric for evaluating systems aiming at diverse and informative responses. MT metrics may not be particularly adequate for our task (Liu et al., 2016), given its focus on the informativeness of responses, and for that reason we also use two other types of metrics to measure the level of grounding and diversity.

As a diversity metric, we count all n-grams in the system output for the test set, and measure: (1) Entropy-n as the entropy of the n-gram count distribution, a metric proposed in (Zhang et al., 2018b); (2) Distinct-n as the ratio between the number of n-gram types and the total number of n-grams, a metric introduced in (Li et al., 2016a).

For the grounding metrics, we first compute '#match,' the number of non-stopword tokens in the response that are present in the document but not present in the context of the conversation. Excluding words from the conversation history means that, in order to produce a word of the document, the response generation system is very likely to be effectively influenced by that document. We then compute both precision as '#match' divided by the total number of non-stop tokens in the response, and recall as '#match' divided by the total number of non-stop tokens in the document. We also compute the respective F1 score to combine both. Looking only at exact unigram matches between the document and response is a major simplifying assumption, but the combination of the three metrics offers a plausible proxy for how greatly the response is grounded in the document. It seems further reasonable to assume that these can serve as a surrogate for less quantifiable forms of grounding such as paraphrase (e.g., US → American) when the statistics are aggregated on a large test dataset.

8 https://nlp.stanford.edu/projects/glove/

          Appropriateness             Grounding                   Diversity
          NIST    BLEU    METEOR      Precision  Recall  F1       Entropy-4  Distinct-1  Distinct-2  Len
Human     2.650   3.13%   8.31%       2.89%      0.45%   0.78%    10.445     0.167       0.670       18.757
SEQ2SEQ   2.223   1.09%   7.34%       1.20%      0.05%   0.10%    9.745      0.023       0.174       15.942
MEMNET    2.185   1.10%   7.31%       1.25%      0.06%   0.12%    9.821      0.035       0.226       15.524
CMR-F     2.260   1.20%   7.37%       1.68%      0.08%   0.15%    9.778      0.035       0.219       15.471
CMR       2.213   1.43%   7.33%       2.44%      0.13%   0.25%    9.818      0.046       0.258       15.048
CMR+W     2.238   1.38%   7.46%       3.39%      0.20%   0.38%    9.887      0.052       0.283       15.249

Table 2: Automatic Evaluation results (higher is better for all metrics). Our best models (CMR+W and CMR) considerably increase the quantitative measures of Grounding, and also slightly improve Diversity. Automatic measures of Quality (e.g., BLEU-4) give mixed results, but this is reflective of the fact that we did not aim to improve response relevance with respect to the context, but instead its level of grounding. The human evaluation results in Table 3 indeed suggest that our best system (CMR+W) is better.
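For reference, the diversity and grounding metrics defined above can be computed with a short script like the one below; it is a simplified sketch (whitespace tokenization, word types rather than token counts, user-supplied stopword list), not the authors' evaluation code:

```python
from collections import Counter
from math import log

def _ngram_counts(responses, n):
    counts = Counter()
    for r in responses:
        toks = r.split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return counts

def distinct_n(responses, n):
    """Ratio of unique n-gram types to total n-grams (Li et al., 2016a)."""
    counts = _ngram_counts(responses, n)
    total = sum(counts.values())
    return len(counts) / total if total else 0.0

def entropy_n(responses, n):
    """Entropy of the n-gram count distribution (Zhang et al., 2018b)."""
    counts = _ngram_counts(responses, n)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum(c / total * log(c / total) for c in counts.values())

def grounding_prf(response, document, history, stopwords):
    """Precision/recall/F1 over response word types that appear in the
    document but not in the conversation history (non-stopwords only)."""
    resp = {t for t in response.split() if t not in stopwords}
    doc = {t for t in " ".join(document).split() if t not in stopwords}
    hist = {t for t in " ".join(history).split() if t not in stopwords}
    match = resp & (doc - hist)
    p = len(match) / len(resp) if resp else 0.0
    r = len(match) / len(doc) if doc else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```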
# 6.2 Automatic Evaluation

Table 2 shows automatic evaluation results for the different systems. In terms of appropriateness, the different variants of our models outperform the SEQ2SEQ and MEMNET baselines, but differences are relatively small and, in the case of one of the metrics (NIST), the best system does not use grounding. Our goal, we would note, is not to specifically improve response appropriateness, as many responses that completely ignore the document (e.g., I don't know) might be perfectly appropriate. Our systems fare much better in terms of Grounding and Diversity: our best system (CMR+W) achieves an F1 score that is more than three times (0.38% vs. 0.12%) higher than the most competitive non-MRC system (MEMNET).

Human judges preferred:
Our best system       Neutral    Comparator
CMR+W    *44.17%      26.27%     SEQ2SEQ    29.56%
CMR+W    *40.93%      25.80%     MEMNET     33.27%
CMR+W     37.67%      27.53%     CMR        34.80%
CMR+W     30.37%      16.27%     Human     *53.37%

Table 3: Human Evaluation results, showing preferences (%) for our model (CMR+W) vs. baseline and other comparison systems. Distributions are skewed towards CMR+W. The 5-point Likert scale has been collapsed to a 3-point scale. *Differences in mean preferences are statistically significant (p ≤ 0.0001).

# 6.3 Human Evaluation

We sampled 1000 conversations from the test set. Filters were applied to remove conversations containing ethnic slurs or other offensive content that might confound judgments. Outputs from systems to be compared were presented pairwise to judges from a crowdsourcing service. Four judges were asked to compare each pair of outputs on Relevance (the extent to which the content was related to and appropriate to the conversation) and Informativeness (the extent to which the output was interesting and informative). Judges were asked to agree or disagree with a statement that one of the pair was better than the other on the above two parameters, using a 5-point Likert scale.9 Pairs of system outputs were randomly presented to the judges in random order in the context of short snippets of the background text. These results are presented in summary form in Table 3, which shows the overall preferences for the two systems expressed as a percentage of all judgments made. Overall inter-rater agreement measured by Fleiss' Kappa was 0.32 ("fair"). Nevertheless, the differences between the paired model outputs are statistically significant (computed using 10,000 bootstrap replications).

9The choices presented to the judges were Strongly Agree, Agree, Neutral, Disagree, and Strongly Disagree.

# 6.4 Qualitative Study

Table 4 illustrates how our best model (CMR+W) tends to produce more contentful and informative responses compared to the other systems. In the first example, our system refers to a particular episode mentioned in the article, and also uses terminology that is more consistent with the article (e.g., series). In the second example, "humorous song" seems to positively influence the response, which is helpful as the input doesn't mention singing at all. In the third example, the CMR+W model clearly grounds its response to the article as it states the fact (Steve Jobs: CEO of Apple) retrieved from the article. The outputs by the other two baseline models are instead not relevant in the context.

Figure 3 displays the attention map of the generated response and (part of) the document from our full model. The model successfully attends to the key words (e.g., 36th, episode) of the document. Note that the attention map is unlike what is typical in machine translation, where target words tend to attend to different portions of the input text. In our task, where alignments are much less one-to-one compared to machine translation, it is common for the generator to retain focus on the key information in the external document to produce semantically relevant responses.

[Figure 3 appears here: a heatmap of attention weights whose axes list the words of a generated response and of the document excerpt (e.g., "<title> Investigations </title>", "star trek", "series").]

Figure 3: Attention weights between words of the documents and words of the response. Dark (blue) cells represent probabilities closer to 1.

# 7 Related Work

Dialogue: Traditional dialogue systems (see (Jurafsky and Martin, 2009) for an historical perspective) are typically grounded, enabling these systems to be reflective of the user's environment. The lack of grounding has been a stumbling block for the earliest end-to-end dialogue systems, as various researchers have noted that their outputs tend to be bland (Li et al., 2016a; Gao et al., 2019b), inconsistent (Zhang et al., 2018a; Li et al.,
In our task, where alignments are much less one- to-one compared to machine translation, it is com- mon for the generator to retain focus on the key information in the external document to produce semantically relevant responses. # 7 Related Work Dialogue: Traditional dialogue systems (see (Jurafsky and Martin, 2009) for an historical per- spective) are typically grounded, enabling these systems to be reflective of the user’s environment. The lack of grounding has been a stumbling block for the earliest end-to-end dialogue systems, as various researchers have noted that their outputs tend to be bland (Li et al., 2016a; Gao et al., 2019b), inconsistent (Zhang et al., 2018a; Li et al., a perfect episode but fink saw the episode where i | ia series <p> star trek Ll cog g a 32a es 8 < ne 2 > 2 2 stitle> investigations <ititle> 29 g§ 8 82 =o EB fy investigations Figure 3: Attention weights between words of the doc- uments and words of the response. Dark (blue) cells represent probabilities closer to 1. 2016b; Zhang et al., 2019), and lacking in fac- tual content (Ghazvininejad et al., 2018; Agarwal et al., 2018). Recently there has been growing interest in exploring different forms of ground- ing, including images, knowledge bases, and plain texts (Das et al., 2017; Mostafazadeh et al., 2017; Agarwal et al., 2018; Yang et al., 2019). A recent survey is included in Gao et al. (2019a). Prior work, e.g, (Ghazvininejad et al., 2018; Zhang et al., 2018a; Huang et al., 2019), uses grounding in the form of independent snippets of text: Foursquare tips and background information about a given speaker. Our notion of grounding is different, as our inputs are much richer, encom- passing the full text of a web page and its underly- ing structure. Our setting also differs significantly from relatively recent work (Dinan et al., 2019; Moghe et al., 2018) exploiting crowdsourced con- versations with detailed grounding labels: we use Reddit because of its very large scale and bet- ter characterization of real-world conversations. We also require the system to learn grounding di- rectly from conversation and document pairs, in- stead of relying on additional grounding labels. Moghe et al. (2018) explored directly using a span-prediction QA model for conversation. Our framework differs in that we combine MRC mod- els with a sequence generator to produce free-form responses. Machine Reading Comprehension: MRC models such as SQuAD-like models, aim to extract answer spans (starting and ending indices) # Doc <title> Investigations </title> <p> "Investigations" is the 36th episode of the American science fiction televi- sion series Star Trek: Voyager which aired on the UPN network. It is the 36th episode of the second season. </p> Set in the 24th century, the series follows the adventures of the Federation starship Voyager during its journey home to Earth, having been stranded tens of thousands of light-years away. [...] [...] King Abdullah II of Jordon once cameo’d as an unnamed ensign on Star Trek Voyager I think it should be noted that he is like a good guy who just killed her and her son. Input MEMNET SEQ2SEQ well , i didn’t know , but i was just saying that it wasn’t a joke. CMR+W I am not sure if it was a “perfect” episode, but I think I saw the episode where he was the voice of the Star Trek series. 
<title> “Weird Al” Yankovic </title> <p> Alfred Matthew “Weird Al” Yankovic is an American singer, songwriter, parodist, record producer, satirist, actor, voice actor, music video director, film producer, and author. He is known for his humorous songs that make light of popular culture and often parody specific songs by contemporary musical acts, original songs that are style pastiches of the work of other acts, and polka medleys of several popular songs, featuring his favored instrument, the accordion.</p> (Prince and West mentioned several times in the rest of the article.) [...] That’s fair, I believe Weird Al always got permission from artists before he parodied them which is nice. # Input MEMNET Well then you don’t need to make a sequel to the front page of the movie. SEQ2SEQ CMR+W No, because his song was a parody of the artist formerly known as "Kanye West". If I remember correctly. I had no idea who had any good chance in history class. # Doc # Input <title> Steve Jobs </title> <p> Steven Paul Jobs was an American entrepreneur, businessman, inventor, and industrial designer. He was the chairman, chief executive officer (CEO), and co-founder of Apple Inc.; chairman and majority shareholder of Pixar; a member of The Walt Disney Company’s board of directors following its acquisition of Pixar; and the founder, chairman, and CEO of next. [...] </p> Steve Jobs was a mediocre programmer and one of the greatest designers and marketers in human history. But I prefer Bill Gates if we want to talk about hero worship myself. [...] MEMNET What if you don’t want to see this movie? SEQ2SEQ No ... the whole point is that this is a good way to make a difference. CMR+W So he’s the CEO of Apple. Table 4: Sample output comparing our best system (CMR+W) against Memory Networks and a SEQ2SEQ base- line. The source documents were manually shortened to fit in the table, without significantly affecting meaning. from a given document for a given question (Seo et al., 2017; Liu et al., 2018b; Yu et al., 2018). These models differ in how they fuse information between questions and documents. We chose SAN (Liu et al., 2018b) because of its representa- tive architecture and competitive performance on existing MRC tasks. We note that other off-the- shelf MRC models, such as BERT (Devlin et al., 2018), can also be plugged in. We leave the study of different MRC architectures for future work. Questions are treated as entirely independent in these “single-turn” MRC models, so recent work (e.g., CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018)) focuses on multi-turn MRC, modeling sequences of questions and answers in a conversation. While multi-turn MRC aims to answer complex questions, that body of work is restricted to factual questions, whereas our work—like much of the prior work in end-to-end dialogue—models free-form dialogue, which also encompasses chitchat and non-factual responses. # 8 Conclusions We have demonstrated that the machine reading comprehension approach offers a promising step to generating, on the fly, contentful conversation exchanges that are grounded in extended text cor- pora. The functional combination of MRC and neural attention mechanisms offers visible gains over several strong baselines. We have also for- mally introduced a large dataset that opens up in- teresting challenges for future research. The CMR (Conversation with on-demand ma- chine reading) model presented here will help con- nect the many dots across multiple data sources. 
One obvious future line of investigation will be to explore the effect of other off-the-shelf machine reading models such as BERT (Devlin et al., 2018) within the CMR framework. # Acknowledgements We are grateful to the anonymous reviewers, as well as to Vighnesh Shiv, Yizhe Zhang, Chris Quirk, Shrimai Prabhumoye, and Ziyu Yao for helpful comments and suggestions on this work. This research was supported in part by NSF (IIS- 1524371), DARPA CwC through ARO (W911NF- 15-1-0543), and Samsung AI Research. # References Shubham Agarwal, Ondrej Dusek, Ioannis Konstas, and Verena Rieser. 2018. A knowledge-grounded multimodal search-based conversational agent. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 59–66, Brussels, Belgium. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. QuAC: Question answering in con- text. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2174–2184. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In CVPR. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In ICLR. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In Proc. of HLT. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proc. of EMNLP. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. In Proc. of ACL. Michel Galley, Chris Brockett, Xiang Gao, Jianfeng Gao, and Bill Dolan. 2019. Grounded response gen- In AAAI Dialog System eration task at DSTC7. Technology Challenges Workshop. Jianfeng Gao, Michel Galley, and Lihong Li. 2019a. Neural approaches to conversational ai. Founda- tions and Trends in Information Retrieval, 13(2- 3):127–298. Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019b. Jointly optimizing diversity and relevance in neural response generation. In NAACL-HLT 2019. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Proc. of AAAI. Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2019. Challenges in building intelligent open-domain dia- log systems. arXiv preprint arXiv:1905.05709. Dan Jurafsky and James H Martin. 2009. Speech & language processing. Prentice Hall. Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for mt evaluation with high levels In Proc. of of correlation with human judgments. the Second Workshop on Statistical Machine Trans- lation, StatMT ’07, pages 228–231. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting ob- jective function for neural conversation models. In Proc. of NAACL-HLT. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural con- versation model. In Proc. of ACL. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. 
How not to evaluate your dialogue system: An em- pirical study of unsupervised evaluation metrics for dialogue response generation. In Proc. of EMNLP. Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018a. Knowledge In Pro- diffusion for neural dialogue generation. ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1489–1498. Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018b. Stochastic Answer Networks for ma- chine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1694–1704, Melbourne, Australia. Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based In Proceedings of the neural machine translation. 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412–1421, Lisbon, Portugal. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems, pages 6297–6308. Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M Khapra. 2018. Towards exploiting back- ground knowledge for building conversation sys- tems. In Proc. of EMNLP. Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Jianfeng Gao, Georgios Sp- Michel Galley, ithourakis, and Lucy Vanderwende. 2017. Image- grounded conversations: Multimodal context for natural question and response generation. In Proc. of IJCNLP. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic In Proceedings evaluation of machine translation. of the 40th annual meeting on association for com- putational linguistics, pages 311–318. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383–2392, Austin, Texas. Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association of Com- putational Linguistics (TACL). Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR. Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hier- archical neural network models. AAAI. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. In Proc. of ACL-IJCNLP. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive gen- In Proc. of eration of conversational responses. NAACL-HLT. Sainbayar Sukhbaatar, Jason Weston, and Rob Fergus. In Proc. of 2015. End-to-end memory networks. NIPS. Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Se- quence to sequence learning with neural networks. In Proc. of NIPS, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998–6008. Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. 
In ICML Deep Learning Workshop. Liu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jian- feng Gao, W Bruce Croft, Xiaodong Liu, Ye- long Shen, and Jingjing Liu. 2019. A hy- brid retrieval-generation neural conversation model. arXiv preprint arXiv:1904.09068. Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fer- nando D’Haro, Lazaros Polymenakos, R. Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kum- merfeld, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan, Xiang Gao, Huda AlAmri, Tim K. Marks, Devi Parikh, and Dhruv Batra. 2019. Dia- log system technology challenge 7. In In NeurIPS Conversational AI Workshop. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehen- sion. In ICLR. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018a. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers). Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018b. Generating informative and diverse conversational responses via adversarial information maximization. In Proc. of NeurIPS. Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Consistent dialogue generation with self-supervised feature learning. arXiv preprint arXiv:1903.05759.
{ "id": "1904.09068" }
1906.02243
Energy and Policy Considerations for Deep Learning in NLP
Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.
http://arxiv.org/pdf/1906.02243
Emma Strubell, Ananya Ganesh, Andrew McCallum
cs.CL
In the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Florence, Italy. July 2019
null
cs.CL
20190605
20190605
# Energy and Policy Considerations for Deep Learning in NLP

# Emma Strubell Ananya Ganesh Andrew McCallum College of Information and Computer Sciences University of Massachusetts Amherst {strubell, aganesh, mccallum}@cs.umass.edu

# Abstract

Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.

Consumption                              CO2e (lbs)
Air travel, 1 passenger, NY↔SF           1984
Human life, avg, 1 year                  11,023
American life, avg, 1 year               36,156
Car, avg incl. fuel, 1 lifetime          126,000

Training one model (GPU)
NLP pipeline (parsing, SRL)              39
  w/ tuning & experimentation            78,468
Transformer (big)                        192
  w/ neural architecture search          626,155

Table 1: Estimated CO2 emissions from training common NLP models, compared to familiar consumption.1

# 1 Introduction

Advances in techniques and hardware for training deep neural networks have recently enabled impressive accuracy improvements across many fundamental NLP tasks (Bahdanau et al., 2015; Luong et al., 2015; Dozat and Manning, 2017; Vaswani et al., 2017), with the most computationally-hungry models obtaining the highest scores (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019; So et al., 2019). As a result, training a state-of-the-art model now requires substantial computational resources which demand considerable energy, along with the associated financial and environmental costs. Research and development of new models multiplies these costs by thousands of times by requiring retraining to experiment with model architectures and hyperparameters. Whereas a decade ago most NLP models could be trained and developed on a commodity laptop or server, many now require multiple instances of specialized hardware such as GPUs or TPUs, therefore limiting access to these highly accurate models on the basis of finances.

Even when these expensive computational resources are available, model training also incurs a substantial cost to the environment due to the energy required to power this hardware for weeks or months at a time. Though some of this energy may come from renewable or carbon credit-offset resources, the high energy demands of these models are still a concern since (1) energy is not currently derived from carbon-neutral sources in many locations, and (2) when renewable energy is available, it is still limited to the equipment we have to produce and store it, and energy spent training a neural network might better be allocated to heating a family's home. It is estimated that we must cut
carbon emissions by half over the next decade to deter escalating rates of natural disaster, and based on the estimated CO2 emissions listed in Table 1, 1Sources: (1) Air tion: https://bit.ly/2Hw0xWc; https://bit.ly/2Qbr0w1. travel and per-capita consump- (2) car lifetime: model training and development likely make up a substantial portion of the greenhouse gas emis- sions attributed to many NLP researchers. To heighten the awareness of the NLP commu- nity to this issue and promote mindful practice and policy, we characterize the dollar cost and carbon emissions that result from training the neural net- works at the core of many state-of-the-art NLP models. We do this by estimating the kilowatts of energy required to train a variety of popular off-the-shelf NLP models, which can be converted to approximate carbon emissions and electricity costs. To estimate the even greater resources re- quired to transfer an existing model to a new task or develop new models, we perform a case study of the full computational resources required for the development and tuning of a recent state-of-the-art NLP pipeline (Strubell et al., 2018). We conclude with recommendations to the community based on our findings, namely: (1) Time to retrain and sen- sitivity to hyperparameters should be reported for NLP machine learning models; (2) academic re- searchers need equitable access to computational resources; and (3) researchers should prioritize de- veloping efficient models and hardware. # 2 Methods To quantify the computational and environmen- tal cost of training deep neural network mod- els for NLP, we perform an analysis of the en- ergy required to train a variety of popular off- the-shelf NLP models, as well as a case study of the complete sum of resources required to develop LISA (Strubell et al., 2018), a state-of-the-art NLP model from EMNLP 2018, including all tuning and experimentation. We measure energy use as follows. We train the models described in §2.1 using the default settings provided, and sample GPU and CPU power con- sumption during training. Each model was trained for a maximum of 1 day. We train all models on a single NVIDIA Titan X GPU, with the excep- tion of ELMo which was trained on 3 NVIDIA GTX 1080 Ti GPUs. While training, we repeat- edly query the NVIDIA System Management In- terface2 to sample the GPU power consumption and report the average over all samples. To sample CPU power consumption, we use Intel’s Running Average Power Limit interface.3 2nvidia-smi: https://bit.ly/30sGEbi 3RAPL power meter: https://bit.ly/2LObQhV Consumer Renew. Gas Coal Nuc. China 22% 3% 65% 4% Germany 40% 7% 38% 13% United States 17% 35% 27% 19% Amazon-AWS 17% 24% 30% 26% Google 56% 14% 15% 10% Microsoft 32% 23% 31% 10% Table 2: Percent energy sourced from: Renewable (e.g. hydro, solar, wind), natural gas, coal and nuclear for the top 3 cloud compute providers (Cook et al., 2017), compared to the United States,4 China5 and Germany (Burger, 2019). We estimate the total time expected for mod- els to train to completion using training times and hardware reported in the original papers. We then calculate the power consumption in kilowatt-hours (kWh) as follows. Let pc be the average power draw (in watts) from all CPU sockets during train- ing, let pr be the average power draw from all DRAM (main memory) sockets, let pg be the aver- age power draw of a GPU during training, and let g be the number of GPUs used to train. 
We estimate total power consumption as combined GPU, CPU and DRAM consumption, then multiply this by Power Usage Effectiveness (PUE), which accounts for the additional energy required to support the compute infrastructure (mainly cooling). We use a PUE coefficient of 1.58, the 2018 global average for data centers (Ascierto, 2018). Then the total power pt required at a given instance during training is given by:

pt = 1.58 t (pc + pr + g pg) / 1000    (1)

The U.S. Environmental Protection Agency (EPA) provides average CO2 produced (in pounds per kilowatt-hour) for power consumed in the U.S. (EPA, 2018), which we use to convert power to estimated CO2 emissions:

CO2e = 0.954 pt    (2)

This conversion takes into account the relative proportions of different energy sources (primarily natural gas, coal, nuclear and renewable) consumed to produce energy in the United States. Table 2 lists the relative energy sources for China, Germany and the United States compared to the top three cloud service providers. The U.S. breakdown of energy is comparable to that of the most popular cloud compute service, Amazon Web Services, so we believe this conversion to provide a reasonable estimate of CO2 emissions per kilowatt hour of compute energy used.

4U.S. Dept. of Energy: https://bit.ly/2JTbGnI
5China Electricity Council; trans. China Energy Portal: https://bit.ly/2QHE5O3

# 2.1 Models

We analyze four models, the computational requirements of which we describe below. All models have code freely available online, which we used out-of-the-box. For more details on the models themselves, please refer to the original papers.

Transformer. The Transformer model (Vaswani et al., 2017) is an encoder-decoder architecture primarily recognized for efficient and accurate machine translation. The encoder and decoder each consist of 6 stacked layers of multi-head self-attention. Vaswani et al. (2017) report that the Transformer base model (65M parameters) was trained on 8 NVIDIA P100 GPUs for 12 hours, and the Transformer big model (213M parameters) was trained for 3.5 days (84 hours; 300k steps). This model is also the basis for recent work on neural architecture search (NAS) for machine translation and language modeling (So et al., 2019), and the NLP pipeline that we study in more detail in §4.2 (Strubell et al., 2018). So et al. (2019) report that their full architecture search ran for a total of 979M training steps, and that their base model requires 10 hours to train for 300k steps on one TPUv2 core. This equates to 32,623 hours of TPU or 274,120 hours on 8 P100 GPUs.

ELMo. The ELMo model (Peters et al., 2018) is based on stacked LSTMs and provides rich word representations in context by pre-training on a large amount of data using a language modeling objective. Replacing context-independent pre-trained word embeddings with ELMo has been shown to increase performance on downstream tasks such as named entity recognition, semantic role labeling, and coreference. Peters et al. (2018) report that ELMo was trained on 3 NVIDIA GTX 1080 GPUs for 2 weeks (336 hours).

BERT. The BERT model (Devlin et al., 2019) provides a Transformer-based architecture for building contextual representations similar to ELMo, but trained with a different language modeling objective. BERT substantially improves accuracy on tasks requiring sentence-level representations such as question answering and natural language inference. Devlin et al.
(2019) report that the BERT base model (110M parameters) was trained on 16 TPU chips for 4 days (96 hours). NVIDIA reports that they can train a BERT model in 3.3 days (79.2 hours) using 4 DGX-2H servers, totaling 64 Tesla V100 GPUs (Forster et al., 2019). GPT-2. is the latest edition of This model OpenAI’s GPT general-purpose token encoder, also based on Transformer-style self-attention and trained with a language modeling objective (Rad- ford et al., 2019). By training a very large model on massive data, Radford et al. (2019) show high zero-shot performance on question answering and language modeling benchmarks. The large model described in Radford et al. (2019) has 1542M pa- rameters and is reported to require 1 week (168 hours) of training on 32 TPUv3 chips. 6 # 3 Related work There is some precedent for work characterizing the computational requirements of training and in- ference in modern neural network architectures in the computer vision community. Li et al. (2016) present a detailed study of the energy use required for training and inference in popular convolutional models for image classification in computer vi- sion, including fine-grained analysis comparing different neural network layer types. Canziani et al. (2016) assess image classification model ac- curacy as a function of model size and gigaflops required during inference. They also measure av- erage power draw required during inference on GPUs as a function of batch size. Neither work an- alyzes the recurrent and self-attention models that have become commonplace in NLP, nor do they extrapolate power to estimates of carbon and dol- lar cost of training. Analysis of hyperparameter tuning has been performed in the context of improved algorithms for hyperparameter search (Bergstra et al., 2011; Bergstra and Bengio, 2012; Snoek et al., 2012). To our knowledge there exists to date no analysis of the computation required for R&D and hyperpa- rameter tuning of neural network models in NLP. 6Via the authors on Reddit. 7GPU lower bound computed using pre-emptible P100/V100 U.S. resources priced at $0.43–$0.74/hr, upper bound uses on-demand U.S. resources priced at $1.46– $2.48/hr. We similarly use pre-emptible ($1.46/hr–$2.40/hr) and on-demand ($4.50/hr–$8/hr) pricing as lower and upper bounds for TPU v2/3; cheaper bulk contracts are available. 26 192 262 1438 — $2074–$6912 626,155 — $44,055–$146,848 — $12,902–$43,008 $942,973–$3,201,722 Table 3: Estimated cost of training a model in terms of CO2 emissions (lbs) and cloud compute cost (USD).7 Power and carbon footprint are omitted for TPUs due to lack of public information on power draw for this hardware. # 4 Experimental results # 4.1 Cost of training Table 3 lists CO2 emissions and estimated cost of training the models described in §2.1. Of note is that TPUs are more cost-efficient than GPUs on workloads that make sense for that hardware (e.g. BERT). We also see that models emit substan- tial carbon emissions; training BERT on GPU is roughly equivalent to a trans-American flight. So et al. (2019) report that NAS achieves a new state- of-the-art BLEU score of 29.7 for English to Ger- man machine translation, an increase of just 0.1 BLEU at the cost of at least $150k in on-demand compute time and non-trivial carbon emissions. 
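The estimates in Tables 3 and 4 follow directly from the methodology of Section 2. As a minimal illustration, the sketch below applies Eqs. 1 and 2 and a rough on-demand price range; the PUE coefficient (1.58), the EPA conversion factor (0.954 lbs CO2e per kWh), and the per-hour GPU price bounds ($0.43 and $2.48, footnote 7) come from the paper, while the example power draws, training duration, and hardware configuration are hypothetical placeholders, not measured values.

```python
# Sketch of the emissions/cost estimate from Section 2 (Eqs. 1-2).
# Constants are taken from the paper; the example inputs below are illustrative.

PUE = 1.58                 # 2018 global average data center Power Usage Effectiveness
LBS_CO2E_PER_KWH = 0.954   # EPA average for U.S. power generation

def kilowatt_hours(hours, cpu_watts, dram_watts, gpu_watts, num_gpus):
    """Eq. 1: total energy in kWh for a training run lasting `hours` hours."""
    total_watts = cpu_watts + dram_watts + num_gpus * gpu_watts
    return PUE * hours * total_watts / 1000.0

def co2e_lbs(kwh):
    """Eq. 2: estimated CO2-equivalent emissions in pounds."""
    return LBS_CO2E_PER_KWH * kwh

def cloud_cost_range(hours, num_accelerators, usd_low=0.43, usd_high=2.48):
    """Rough cloud cost range (USD), using the GPU price bounds of footnote 7."""
    unit_hours = hours * num_accelerators
    return unit_hours * usd_low, unit_hours * usd_high

if __name__ == "__main__":
    # Hypothetical run: 1 day on one GPU drawing ~250 W, ~100 W CPU, ~25 W DRAM.
    kwh = kilowatt_hours(hours=24, cpu_watts=100, dram_watts=25,
                         gpu_watts=250, num_gpus=1)
    print(f"{kwh:.1f} kWh, {co2e_lbs(kwh):.1f} lbs CO2e")
    print("cloud cost range (USD):", cloud_cost_range(24, 1))
```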
# 4.2 Cost of development: Case study To quantify the computational requirements of R&D for a new model we study the logs of all training required to develop Linguistically- Informed Self-Attention (Strubell et al., 2018), a multi-task model that performs part-of-speech tag- ging, labeled dependency parsing, predicate detec- tion and semantic role labeling. This model makes for an interesting case study as a representative NLP pipeline and as a Best Long Paper at EMNLP. training associated with the project spanned a period of 172 days (approx. 6 months). During that time 123 small hyperparameter grid searches were performed, resulting in 4789 jobs in total. Jobs varied in length ranging from a min- imum of 3 minutes, indicating a crash, to a maxi- mum of 9 days, with an average job length of 52 hours. All training was done on a combination of NVIDIA Titan X (72%) and M40 (28%) GPUs.8 The sum GPU time required for the project totaled 9998 days (27 years). This averages to Models Hours 1 24 4789 120 2880 239,942 $5 $118 $9870 Table 4: Estimated cost in terms of cloud compute and electricity for training: (1) a single model (2) a single tune and (3) all models trained during R&D. about 60 GPUs running constantly throughout the 6 month duration of the project. Table 4 lists upper and lower bounds of the estimated cost in terms of Google Cloud compute and raw electricity re- quired to develop and deploy this model.9 We see that while training a single model is relatively in- expensive, the cost of tuning a model for a new dataset, which we estimate here to require 24 jobs, or performing the full R&D required to develop this model, quickly becomes extremely expensive. # 5 Conclusions # Authors should report training time and sensitivity to hyperparameters. Our experiments suggest that it would be benefi- cial to directly compare different models to per- form a cost-benefit (accuracy) analysis. To ad- dress this, when proposing a model that is meant to be re-trained for downstream use, such as re- training on a new domain or fine-tuning on a new task, authors should report training time and com- putational resources required, as well as model sensitivity to hyperparameters. This will enable direct comparison across models, allowing subse- quent consumers of these models to accurately as- sess whether the required computational resources 8We approximate cloud compute cost using P100 pricing. 9Based on average U.S cost of electricity of $0.12/kWh. are compatible with their setting. More explicit characterization of tuning time could also reveal inconsistencies in time spent tuning baseline mod- els compared to proposed contributions. Realiz- (1) a standard, hardware- ing this will require: independent measurement of training time, such as gigaflops required to convergence, and (2) a standard measurement of model sensitivity to data and hyperparameters, such as variance with re- spect to hyperparameters searched. # Academic researchers need equitable access to computation resources. Recent advances in available compute come at a high price not attainable to all who desire access. Most of the models studied in this paper were de- veloped outside academia; recent improvements in state-of-the-art accuracy are possible thanks to in- dustry access to large-scale compute. Limiting this style of research to industry labs hurts the NLP research community in many ways. First, it stifles creativity. 
Researchers with good ideas but without access to large-scale compute will simply not be able to execute their ideas, instead constrained to focus on different prob- lems. Second, it prohibits certain types of re- search on the basis of access to financial resources. This even more deeply promotes the already prob- lematic “rich get richer” cycle of research fund- ing, where groups that are already successful and thus well-funded tend to receive more funding due to their existing accomplishments. Third, the prohibitive start-up cost of building in-house re- sources forces resource-poor groups to rely on cloud compute services such as AWS, Google Cloud and Microsoft Azure. While these services provide valuable, flexi- ble, and often relatively environmentally friendly compute resources, it is more cost effective for academic researchers, who often work for non- profit educational institutions and whose research is funded by government entities, to pool resources to build shared compute centers at the level of funding agencies, such as the U.S. National Sci- ence Foundation. For example, an off-the-shelf GPU server containing 8 NVIDIA 1080 Ti GPUs and supporting hardware can be purchased for approximately $20,000 USD. At that cost, the hardware required to develop the model in our case study (approximately 58 GPUs for 172 days) would cost $145,000 USD plus electricity, about half the estimated cost to use on-demand cloud GPUs. Unlike money spent on cloud compute, however, that invested in centralized resources would continue to pay off as resources are shared across many projects. A government-funded aca- demic compute cloud would provide equitable ac- cess to all researchers. # Researchers should prioritize computationally efficient hardware and algorithms. We recommend a concerted effort by industry and academia to promote research of more computa- tionally efficient algorithms, as well as hardware that requires less energy. An effort can also be made in terms of software. There is already a precedent for NLP software packages prioritizing efficient models. An additional avenue through which NLP and machine learning software de- velopers could aid in reducing the energy asso- ciated with model tuning is by providing easy- to-use APIs implementing more efficient alterna- tives to brute-force grid search for hyperparameter tuning, e.g. random or Bayesian hyperparameter search techniques (Bergstra et al., 2011; Bergstra and Bengio, 2012; Snoek et al., 2012). While software packages implementing these techniques do exist,10 they are rarely employed in practice for tuning NLP models. This is likely because their interoperability with popular deep learning frameworks such as PyTorch and TensorFlow is there are not simple exam- not optimized, i.e. ples of how to tune TensorFlow Estimators using Bayesian search. Integrating these tools into the workflows with which NLP researchers and practi- tioners are already familiar could have notable im- pact on the cost of developing and tuning in NLP. # Acknowledgements We are grateful to Sherief Farouk and the anony- mous reviewers for helpful feedback on earlier drafts. This work was supported in part by the Centers for Data Science and Intelligent Infor- mation Retrieval, the Chan Zuckerberg Initiative under the Scientific Knowledge Base Construc- tion project, the IBM Cognitive Horizons Network agreement no. W1668553, and National Science Foundation grant no. IIS-1514053. 
Any opinions, findings and conclusions or recommendations ex- pressed in this material are those of the authors and do not necessarily reflect those of the sponsor. 10For example, the Hyperopt Python library. # References Rhonda Ascierto. 2018. Uptime Institute Global Data Center Survey. Technical report, Uptime Institute. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly In 3rd Inter- Learning to Align and Translate. national Conference for Learning Representations (ICLR), San Diego, California, USA. James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305. James S Bergstra, R´emi Bardenet, Yoshua Bengio, and Bal´azs K´egl. 2011. Algorithms for hyper-parameter In Advances in neural information optimization. processing systems, pages 2546–2554. Bruno Burger. 2019. Net Public Electricity Generation in Germany in 2018. Technical report, Fraunhofer Institute for Solar Energy Systems ISE. Alfredo Canziani, Adam Paszke, and Eugenio Culur- ciello. 2016. An analysis of deep neural network models for practical applications. Gary Cook, Jude Lee, Tamina Tsai, Ada Kongn, John Deans, Brian Johnson, Elizabeth Jardim, and Brian Johnson. 2017. Clicking Clean: Who is winning the race to build a green internet? Technical report, Greenpeace. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In NAACL. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In ICLR. EPA. 2018. Emissions & Generation Resource Inte- grated Database (eGRID). Technical report, U.S. Environmental Protection Agency. Christopher Forster, Thor Johnsen, Swetha Man- dava, Sharath Turuvekere Sreenivas, Deyu Fu, Julie Bernauer, Allison Gray, Sharan Chetlur, and Raul Puri. 2019. BERT Meets GPUs. Technical report, NVIDIA AI. Da Li, Xinbo Chen, Michela Becchi, and Ziliang Zong. 2016. Evaluating the energy efficiency of deep con- volutional neural networks on cpus and gpus. 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Comput- ing and Communications (SustainCom) (BDCloud- SocialCom-SustainCom), pages 477–484. Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based In Proceedings of the neural machine translation. 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Associa- tion for Computational Linguistics. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In NAACL. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Jasper Snoek, Hugo Larochelle, and Ryan P Adams. 2012. Practical bayesian optimization of machine learning algorithms. In Advances in neural informa- tion processing systems, pages 2951–2959. David R. So, Chen Liang, and Quoc V. Le. 2019. In Proceedings of the The evolved transformer. 36th International Conference on Machine Learning (ICML). Emma Strubell, Patrick Verga, Daniel Andor, and Andrew McCallum. 2018. David Weiss, Linguistically-Informed Self-Attention for Se- In Conference on Empir- mantic Role Labeling. 
ical Methods in Natural Language Processing (EMNLP), Brussels, Belgium. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In 31st Conference on Neural Information Processing Systems (NIPS).
{ "id": "1906.02243" }
1906.01618
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
Unsupervised learning with generative models has the potential of discovering rich representations of 3D scenes. While geometric deep learning has explored 3D-structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained only with posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
http://arxiv.org/pdf/1906.01618
Vincent Sitzmann, Michael Zollhöfer, Gordon Wetzstein
cs.CV, cs.AI, I.2.10; I.4.5; I.4.8; I.4.10
Video: https://youtu.be/6vMEBWD8O20 Project page: https://vsitzmann.github.io/srns/
null
cs.CV
20190604
20200128
0 2 0 2 n a J 8 2 ] V C . s c [ 2 v 8 1 6 1 0 . 6 0 9 1 : v i X r a # Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations # Vincent Sitzmann Michael Zollhöfer Gordon Wetzstein {sitzmann, zollhoefer}@cs.stanford.edu, [email protected] Stanford University vsitzmann.github.io/srns/ # Abstract Unsupervised learning with generative models has the potential of discovering rich representations of 3D scenes. While geometric deep learning has explored 3D- structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained only with posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D- structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to- end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.1 # 1 Introduction A major driver behind recent work on generative models has been the promise of unsupervised discovery of powerful neural scene representations, enabling downstream tasks ranging from robotic manipulation and few-shot 3D reconstruction to navigation. A key aspect of solving these tasks is understanding the three-dimensional structure of an environment. However, prior work on neural scene representations either does not or only weakly enforces 3D structure [1–4]. Multi-view geometry and projection operations are performed by a black-box neural renderer, which is expected to learn these operations from data. As a result, such approaches fail to discover 3D structure under limited training data (see Sec. 4), lack guarantees on multi-view consistency of the rendered images, and learned representations are generally not interpretable. Furthermore, these approaches lack an intuitive interface to multi-view and projective geometry important in computer graphics, and cannot easily generalize to camera intrinsic matrices and transformations that were completely unseen at training time. In geometric deep learning, many classic 3D scene representations, such as voxel grids [5–10], point clouds [11–14], or meshes [15] have been integrated with end-to-end deep learning models and have led to significant progress in 3D scene understanding. However, these scene representations are discrete, limiting achievable spatial resolution, only sparsely sampling the underlying smooth surfaces of a scene, and often require explicit 3D supervision. # 1Please see supplemental video for additional results. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. 
We introduce Scene Representation Networks (SRNs), a continuous neural scene representation, along with a differentiable rendering algorithm, that model both 3D scene geometry and appearance, enforce 3D structure in a multi-view consistent manner, and naturally allow generalization of shape and appearance priors across scenes. The key idea of SRNs is to represent a scene implicitly as a continuous, differentiable function that maps a 3D world coordinate to a feature-based representation of the scene properties at that coordinate. This allows SRNs to naturally interface with established techniques of multi-view and projective geometry while operating at high spatial resolution in a memory-efficient manner. SRNs can be trained end-to-end, supervised only by a set of posed 2D images of a scene. SRNs generate high-quality images without any 2D convolutions, exclusively operating on individual pixels, which enables image generation at arbitrary resolutions. They generalize naturally to camera transformations and intrinsic parameters that were completely unseen at training time. For instance, SRNs that have only ever seen objects from a constant distance are capable of rendering close-ups of said objects flawlessly. We evaluate SRNs on a variety of challenging 3D computer vision problems, including novel view synthesis, few-shot scene reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model. To summarize, our approach makes the following key contributions: # e # e # e A continuous, 3D-structure-aware neural scene representation and renderer, SRNs, that efficiently encapsulate both scene geometry and appearance. End-to-end training of SRNs without explicit supervision in 3D space, purely from a set of posed 2D images. We demonstrate novel view synthesis, shape and appearance interpolation, and few-shot reconstruction, as well as unsupervised discovery of a non-rigid face model, and significantly outperform baselines from recent literature. Scope The current formulation of SRNs does not model view- and lighting-dependent effects or translucency, reconstructs shape and appearance in an entangled manner, and is non-probabilistic. Please see Sec. 5 for a discussion of future work in these directions. # 2 Related Work Our approach lies at the intersection of multiple fields. In the following, we review related work. Geometric Deep Learning. Geometric deep learning has explored various representations to reason about scene geometry. Discretization-based techniques use voxel grids [7, 16–22], octree hierarchies [23–25], point clouds [11, 26, 27], multiplane images [28], patches [29], or meshes [15, 21, 30, 31]. Methods based on function spaces continuously represent space as the decision boundary of a learned binary classifier [32] or a continuous signed distance field [33–35]. While these techniques are successful at modeling geometry, they often require 3D supervision, and it is unclear how to efficiently infer and represent appearance. Our proposed method encapsulates both scene geometry and appearance, and can be trained end-to-end via learned differentiable rendering, supervised only with posed 2D images. Neural Scene Representations. Latent codes of autoencoders may be interpreted as a feature representation of the encoded scene. Novel views may be rendered by concatenating target pose and latent code [1] or performing view transformations directly in the latent space [4]. 
Generative Query Networks [2, 3] introduce a probabilistic reasoning framework that models uncertainty due to incomplete observations, but both the scene representation and the renderer are oblivious to the scene’s 3D structure. Some prior work infers voxel grid representations of 3D scenes from images [6, 8, 9] or uses them for 3D-structure-aware generative models [10, 36]. Graph neural networks may similarly capture 3D structure [37]. Compositional structure may be modeled by representing scenes as programs [38]. We demonstrate that models with scene representations that ignore 3D structure fail to perform viewpoint transformations in a regime of limited (but significant) data, such as the Shapenet v2 dataset [39]. Instead of a discrete representation, which limits achievable spatial resolution and does not smoothly parameterize scene surfaces, we propose a continuous scene representation. Neural Image Synthesis. Deep models for 2D image and video synthesis have recently shown promising results in generating photorealistic images. Some of these approaches are based on 2 PI Depth Update (digs = i+ Sint hho.eo 4 Scene representation ©:R2> R” “| Ray Marching LSTM H nendering [Rt], K ped dV) EE ‘ee‘e Keil he! Pixel Generator db world coordinates Figure 1: Overview: at the heart of SRNs lies a continuous, 3D-aware neural scene representation, Φ, which represents a scene as a function that maps (x, y, z) world coordinates to a feature representation of the scene at those coordinates (see Sec. 3.1). A neural renderer Θ, consisting of a learned ray marcher and a pixel generator, can render the scene from arbitrary novel view points (see Sec. 3.2). (variational) auto-encoders [40, 41], generative flows [42, 43], or autoregressive per-pixel models [44, 45]. In particular, generative adversarial networks [46–50] and their conditional variants [51–53] have recently achieved photo-realistic single-image generation. Compositional Pattern Producing Networks [54, 55] learn functions that map 2D image coordinates to color. Some approaches build on explicit spatial or perspective transformations in the networks [56–58, 14]. Recently, following the spirit of “vision as inverse graphics” [59, 60], deep neural networks have been applied to the task of inverting graphics engines [61–65]. However, these 2D generative models only learn to parameterize the manifold of 2D natural images, and struggle to generate images that are multi-view consistent, since the underlying 3D scene structure cannot be exploited. # 3 Formulation 3 along with their N i=1 of N tuples of images = Given a training set Ii ∈ } { 3 camera matrices [66], our goal 4 and intrinsic Ki ∈ respective extrinsic Ei = × × is to distill this dataset of observations into a neural scene representation Φ that strictly enforces 3D structure and allows to generalize shape and appearance priors across scenes. In addition, we are interested in a rendering function Θ that allows us to render the scene represented by Φ from arbitrary viewpoints. In the following, we first formalize Φ and Θ and then discuss a framework for optimizing Φ, Θ for a single scene given only posed 2D images. Note that this approach does not require information about scene geometry. Additionally, we show how to learn a family of scene representations for an entire class of scenes, discovering powerful shape and appearance priors. 
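As a concrete illustration of the scene representation Φ, which is formalized in the next subsection, the following PyTorch sketch shows a coordinate-to-feature MLP. It is an illustration rather than the authors' implementation: layer widths, depth, and the feature dimension are assumed values, not the configuration reported in the paper.

```python
# Minimal sketch (not the authors' code): a scene representation Phi as an MLP
# that maps a 3D world coordinate to an n-dimensional feature vector (Sec. 3.1).
# Layer widths, depth, and feature_dim are illustrative choices.
import torch
import torch.nn as nn

class SceneRepresentation(nn.Module):
    def __init__(self, feature_dim=256, hidden_dim=256, num_layers=4):
        super().__init__()
        layers = [nn.Linear(3, hidden_dim), nn.ReLU()]
        for _ in range(num_layers - 2):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
        layers += [nn.Linear(hidden_dim, feature_dim)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, xyz):
        # xyz: (..., 3) world coordinates -> (..., feature_dim) feature vectors v
        return self.mlp(xyz)

phi = SceneRepresentation()
v = phi(torch.randn(1024, 3))  # query features at 1024 world coordinates
```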
# 3.1 Representing Scenes as Functions

Our key idea is to represent a scene as a function Φ that maps a spatial location x to a feature representation v of learned scene properties at that spatial location:

Φ : R3 → Rn,  x ↦ Φ(x) = v.    (1)

The feature vector v may encode visual information such as surface color or reflectance, but it may also encode higher-order information, such as the signed distance of x to the closest scene surface. This continuous formulation can be interpreted as a generalization of discrete neural scene representations. Voxel grids, for instance, discretize R3 and store features in the resulting 3D grid [5–10]. Point clouds [12–14] may contain points at any position in R3, but only sparsely sample surface properties of a scene. In contrast, Φ densely models scene properties and can in theory model arbitrary spatial resolutions, as it is continuous over R3 and can be sampled with arbitrary resolution. In practice, we represent Φ as a multi-layer perceptron (MLP), and spatial resolution is thus limited by the capacity of the MLP.

In contrast to recent work on representing scenes as unstructured or weakly structured feature embeddings [1, 4, 2], Φ is explicitly aware of the 3D structure of scenes, as the inputs to Φ are world coordinates (x, y, z) ∈ R3. This allows interacting with Φ via the toolbox of multi-view and perspective geometry that the physical world obeys, only using learning to approximate the unknown properties of the scene itself. In Sec. 4, we show that this formulation leads to multi-view consistent novel view synthesis, data-efficient training, and a significant gain in model interpretability.

# 3.2 Neural Rendering

Given a scene representation Φ, we introduce a neural rendering algorithm Θ that maps a scene representation Φ as well as the intrinsic K and extrinsic E camera parameters to an image I:

Θ : X × R3×4 × R3×3 → RH×W×3,  (Φ, E, K) ↦ Θ(Φ, E, K) = I,    (2)

where X is the space of all functions Φ. The key complication in rendering a scene represented by Φ is that geometry is represented implicitly. The surface of a wooden table top, for instance, is defined by the subspace of R3 where Φ undergoes a change from a feature vector representing free space to one representing wood. To render a single pixel in the image observed by a virtual camera, we thus have to solve two sub-problems: (i) finding the world coordinates of the intersections of the respective camera rays with scene geometry, and (ii) mapping the feature vector v at that spatial coordinate to a color. We will first propose a neural ray marching algorithm with learned, adaptive step size to find ray intersections with scene geometry, and subsequently discuss the architecture of the pixel generator network that learns the feature-to-color mapping.

# 3.2.1 Differentiable Ray Marching Algorithm

Algorithm 1 Differentiable Ray-Marching
1: function FINDINTERSECTION(Φ, K, E, (u, v))
2:   d0 ← 0.05                              ▷ Near plane
3:   (h0, c0) ← (0, 0)                      ▷ Initial state of LSTM
4:   for i ← 0 to max_iter do
5:     xi ← ru,v(di)                        ▷ Calculate world coordinates
6:     vi ← Φ(xi)                           ▷ Extract feature vector
7:     (δ, hi+1, ci+1) ← LSTM(vi, hi, ci)   ▷ Predict step length using ray marching LSTM
8:     di+1 ← di + δ                        ▷ Update d
9:   return ru,v(dmax_iter)

Intersection testing intuitively amounts to solving an optimization problem, where the point along each camera ray is sought that minimizes the distance to the surface of the scene.
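Algorithm 1 above can be written compactly in code. The sketch below is an illustration rather than the released implementation: ray_points(uv, d) is a hypothetical helper standing in for the ray parameterization ru,v(d) derived in the following paragraphs, and the hidden size and iteration count are assumed values.

```python
# Minimal sketch (not the authors' code) of the learned ray marcher in Algorithm 1:
# an LSTM predicts the step length along each camera ray from the feature Phi(x)
# at the current intersection estimate. `ray_points(uv, d)` is a hypothetical
# helper implementing r_{u,v}(d); sizes and step count are illustrative.
import torch
import torch.nn as nn

class RayMarcher(nn.Module):
    def __init__(self, phi, feature_dim=256, hidden_dim=16, num_steps=10):
        super().__init__()
        self.phi = phi                          # scene representation Phi
        self.lstm = nn.LSTMCell(feature_dim, hidden_dim)
        self.to_step = nn.Linear(hidden_dim, 1)
        self.num_steps = num_steps

    def forward(self, uv, ray_points):
        num_rays = uv.shape[0]
        d = torch.full((num_rays, 1), 0.05)     # near plane
        state = None                            # (h_0, c_0) defaults to zeros
        for _ in range(self.num_steps):         # fixed number of steps
            x = ray_points(uv, d)               # world coordinates along rays
            v = self.phi(x)                     # features Phi(x)
            state = self.lstm(v, state)
            d = d + self.to_step(state[0])      # d_{i+1} = d_i + delta
        return ray_points(uv, d), d             # intersection estimate and depth
```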
To model this problem, we parameterize the points along each ray, identified with the coordinates (u, v) of the respective pixel, with their distance d to the camera (d > 0 represents points in front of the camera): Tu.»(d) = RT (K7 ( 7 ) —t), d>0, (3) with world coordinates ru,v(d) of a point along the ray with distance d to the camera, camera intrinsics K, and camera rotation matrix R and translation vector t. For each ray, we aim to solve # arg min d s.t. ru,v(d) Ω, d > 0 (4) ∈ where we define the set of all points that lie on the surface of the scene as Ω. Here, we take inspiration from the classic sphere tracing algorithm [67]. Sphere tracing belongs to the class of ray marching algorithms, which solve Eq. 4 by starting at a distance dinit close to the camera and stepping along the ray until scene geometry is intersected. Sphere tracing is defined by a special choice of the step length: each step has a length equal to the signed distance to the closest surface point of the scene. Since this distance is only 0 on the surface of the scene, the algorithm takes non-zero steps until it has arrived at the surface, at which point no further steps are taken. Extensions of this algorithm propose heuristics to modifying the step length to speed up convergence [68]. We instead propose to learn the length of each step. Specifically, we introduce a ray marching long short-term memory (RM-LSTM) [69], that maps the feature vector Φ(xi) = vi at the current estimate of the ray intersection xi to the length of the next ray marching step. The algorithm is formalized in Alg. 1. 4 Given our current estimate di, we compute world coordinates xi = ru,v(di) via Eq. 3. We then compute Φ(xi) to obtain a feature vector vi, which we expect to encode information about nearby scene surfaces. We then compute the step length δ via the RM-LSTM as (δ, hi+1, ci+1) = LST M (vi, hi, ci), where h and c are the output and cell states, and increment di accordingly. We iterate this process for a constant number of steps. This is critical, because a dynamic termination criterion would have no guarantee for convergence in the beginning of the training, where both Φ and the ray marching LSTM are initialized at random. The final step yields our estimate of the world coordinates of the intersection of the ray with scene geometry. The z-coordinates of running and final estimates of intersections in camera coordinates yield depth maps, which we denote as di, which visualize every step of the ray marcher. This makes the ray marcher interpretable, as failures in geometry estimation show as inconsistencies in the depth map. Note that depth maps are differentiable with respect to all model parameters, but are not required for training Φ. Please see the supplement for a contextualization of the proposed rendering approach with classical rendering algorithms. # 3.2.2 Pixel Generator Architecture The pixel generator takes as input the 2D feature map sampled from Φ at world coordinates of ray- surface intersections and maps it to an estimate of the observed image. As a generator architecture, we choose a per-pixel MLP that maps a single feature vector v to a single RGB vector. This is equivalent to a convolutional neural network (CNN) with only 1 1 convolutions. Formulating the generator without 2D convolutions has several benefits. First, the generator will always map the same (x, y, z) coordinate to the same color value. 
Assuming that the ray-marching algorithm finds the correct intersection, the rendering is thus trivially multi-view consistent. This is in contrast to 2D convolutions, where the value of a single pixel depends on a neighborhood of features in the input feature map. When transforming the camera in 3D, e.g. by moving it closer to a surface, the 2D neighborhood of a feature may change. As a result, 2D convolutions come with no guarantee on multi- view consistency. With our per-pixel formulation, the rendering function Θ operates independently on all pixels, allowing images to be generated with arbitrary resolutions and poses. On the flip side, we cannot exploit recent architectural progress in CNNs, and a per-pixel formulation requires the ray marching, the SRNs and the pixel generator to operate on the same (potentially high) resolution, requiring a significant memory budget. Please see the supplement for a discussion of this trade-off. # 3.3 Generalizing Across Scenes We now generalize SRNs from learning to represent a single scene to learning shape and appearance priors over several instances of a single class. Formally, we assume that we are given a set of M N i=1 as discussed in instance datasets } Sec. 3.1. M We reason about the set of functions j=1 that represent instances of objects belonging to the same class. By parameterizing a specific Φj as an MLP, we can represent it with its vector of Rl. We assume scenes of the same class have common shape and appearance parameters φj ∈ Rk, k < l. Equivalently, this properties that can be fully characterized by a set of latent variables z assumes that all parameters φj live in a k-dimensional subspace of Rl. Finally, we define a mapping Ψ : R k R l, Ψ(zj) = φj (5) 2; → that maps a latent vector zj to the parameters φj of the corresponding Φj. We propose to parameterize Ψ as an MLP, with parameters ψ. This architecture was previously introduced as a Hypernetwork [70], a neural network that regresses the parameters of another neural network. We share the parameters of the rendering function Θ across scenes. We note that assuming a low-dimensional embedding manifold has so far mainly been empirically demonstrated for classes of single objects. Here, we similarly only demonstrate generalization over classes of single objects. Finding latent codes zj. To find the latent code vectors zj, we follow an auto-decoder frame- Cj is represented by its own latent code zj. The zj work [33]. For this purpose, each object instance are free variables and are optimized jointly with the parameters of the hypernetwork Ψ and the neural renderer Θ. We assume that the prior distribution over the zj is a zero-mean multivariate Gaussian with a diagonal covariance matrix. Please refer to [33] for additional details. 5 Figure 2: Shepard-Metzler object from 1k-object training set, 15 observations each. SRNs (right) outperform dGQN (left) on this small dataset. Figure 3: Non-rigid animation of a face. Note that mouth movement is directly reflected in the normal maps. Shapenet v2 objects DeepVoxels objects Single-Shot 50-shot Figure 4: Normal maps for a selection of objects. We note that geometry is learned fully unsupervised and arises purely out of the perspective and multi-view geometry constraints on the image formation. 
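The hypernetwork Ψ of Sec. 3.3 can be sketched as a small MLP that regresses the full parameter vector φj of an instance-specific Φj from a latent code zj. The code below is a simplified stand-in for the authors' implementation: the latent size, hidden width, target parameter count, and number of instances are assumed values chosen only for illustration.

```python
# Minimal sketch (an illustration, not the released implementation) of the
# hypernetwork Psi: an MLP mapping a latent code z_j in R^k to the parameter
# vector phi_j in R^l of an instance-specific scene MLP Phi_j (Eq. 5).
import torch
import torch.nn as nn

class HyperNetwork(nn.Module):
    def __init__(self, latent_dim=256, hidden_dim=256, num_phi_params=100_000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_phi_params),
        )

    def forward(self, z):
        # z: (latent_dim,) latent code -> flat parameter vector phi_j in R^l
        return self.net(z)

# Auto-decoder setup: one freely optimized latent code per training instance,
# optimized jointly with the hypernetwork and renderer parameters (see Eq. 6).
num_instances = 4500                 # e.g. one code per object instance
latent_codes = nn.Parameter(torch.zeros(num_instances, 256))
psi = HyperNetwork()
phi_params = psi(latent_codes[0])    # parameters of Phi for instance 0
```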
# 3.4 Joint Optimization

To summarize, given a dataset of instance datasets Cj = {(I_i^j, E_i^j, K_i^j)}_{i=1}^{N}, j = 1, ..., M, we aim to find the parameters ψ of Ψ that maps latent vectors zj to the parameters of the respective scene representation φj, the parameters θ of the neural rendering function Θ, as well as the latent codes zj themselves. We formulate this as an optimization problem with the following objective:

argmin_{θ, ψ, {zj}}  Σ_{j=1}^{M} Σ_{i=1}^{N}  ||Θθ(Φψ(zj), E_i^j, K_i^j) − I_i^j||_2^2  +  λdep ||min(d_{i,final}, 0)||_2^2  +  λlat ||zj||_2^2,    (6)

where the three terms are denoted Limg, Ldepth and Llatent, respectively. Limg is an ℓ2-loss enforcing closeness of the rendered image to ground-truth, Ldepth is a regularization term that accounts for the positivity constraint in Eq. 4, and Llatent enforces a Gaussian prior on the zj. In the case of a single scene, this objective simplifies to solving for the parameters of the MLP parameterization of Φ instead of the parameters ψ and latent codes zj. We solve Eq. 6 with stochastic gradient descent. Note that the whole pipeline can be trained end-to-end, without requiring any (pre-)training of individual parts. In Sec. 4 we demonstrate that SRNs discover both geometry and appearance, initialized at random, without requiring prior knowledge of either scene geometry or scene scale, enabling multi-view consistent novel view synthesis.

Few-shot reconstruction. After finding model parameters by solving Eq. 6, we may use the trained model for few-shot reconstruction of a new object instance, represented by a dataset of observations {(Ii, Ei, Ki)}_{i=1}^{N}, by estimating its latent code:

ẑ = argmin_z  Σ_{i=1}^{N}  ||Θθ(Φψ(z), Ei, Ki) − Ii||_2^2  +  λdep ||min(d_{i,final}, 0)||_2^2  +  λlat ||z||_2^2.    (7)

# 4 Experiments

We train SRNs on several object classes and evaluate them for novel view synthesis and few-shot reconstruction. We further demonstrate the discovery of a non-rigid face model. Please see the supplement for a comparison on single-scene novel view synthesis performance with DeepVoxels [6].

Figure 5: Interpolating latent code vectors of cars and chairs in the Shapenet dataset while rotating the camera around the model. Features smoothly transition from one model to another.

Figure 6: Qualitative comparison with Tatarchenko et al. [1] and the deterministic variant of the GQN [2], for novel view synthesis on the Shapenet v2 "cars" and "chairs" classes. We compare novel views for objects reconstructed from 50 observations in the training set (top row), two observations and a single observation (second and third row) from a test set. SRNs consistently outperform these baselines with multi-view consistent novel views, while also reconstructing geometry. Please see the supplemental video for more comparisons, smooth camera trajectories, and reconstructed geometry.

Implementation Details. Hyperparameters, computational complexity, and full network architectures for SRNs and all baselines are in the supplement. Training of the presented models takes on the order of 6 days. A single forward pass takes around 120 ms and 3 GB of GPU memory per batch item. Code and datasets are available.

Shepard-Metzler objects. We evaluate our approach on 7-element Shepard-Metzler objects in a limited-data setting. We render 15 observations of 1k objects at a resolution of 64 × 64.
We train both SRNs and a deterministic variant of the Generative Query Network [2] (dGQN, please see supplement for an extended discussion). Note that the dGQN is solving a harder problem, as it is inferring the scene representation in each forward pass, while our formulation requires solving an optimization problem to find latent codes for unseen objects. We benchmark novel view reconstruction accuracy on (1) the training set and (2) few-shot reconstruction of 100 objects from a held-out test set. On the training objects, SRNs achieve almost pixel-perfect results with a PSNR of 30.41 dB. The dGQN fails to learn object shape and multi-view geometry on this limited dataset, achieving 20.85 dB. See Fig. 2 for a qualitative comparison. In a two-shot setting (see Fig. 7 for reference views), we succeed in reconstructing any part of the object that has been observed, achieving 24.36 dB, while the dGQN achieves 18.56 dB. In a one-shot setting, SRNs reconstruct an object consistent with the observed view. As expected, due to the current non-probabilistic implementation, both the dGQN and SRNs reconstruct an object resembling the mean of the hundreds of feasible objects that may have generated the observation, achieving 17.51 dB and 18.11 dB respectively. Shapenet v2. We consider the “chair” and “car” classes of Shapenet v.2 [39] with 4.5k and 2.5k model instances respectively. We disable transparencies and specularities, and train on 50 observations of each instance at a resolution of 128 128 pixels. Camera poses are randomly generated on a sphere with the object at the origin. We evaluate perfor- mance on (1) novel-view synthesis of objects in the training set and (2) novel-view synthesis on objects in the held-out, official Shapenet v2 test sets, reconstructed from one or two observations, as discussed in Sec. 3.4. Fig. 7 shows the sampled poses for the few-shot case. In all settings, we assemble ground-truth novel views by sampling 250 views in an Archimedean spiral around each object instance. We compare v Sp << af y XS 7 Table 1: PSNR (in dB) and SSIM of images reconstructed with our method, the deterministic variant of the GQN [2] (dGQN), the model proposed by Tatarchenko et al. [1] (TCO), and the method proposed by Worrall et al. [4] (WRL). We compare novel-view synthesis performance on objects in the training set (containing 50 images of each object), as well as reconstruction from 1 or 2 images on the held-out test set. 50 images (training set) 2 images Single image Chairs Cars Chairs Cars Chairs Cars 20.38 / 0.83 19.16 / 0.82 19.61 / 0.81 26.23 / 0.95 26.32 / 0.94 18.41 / 0.80 21.33 / 0.88 17.20 / 0.78 22.28 / 0.90 22.36 / 0.89 18.79 / 0.79 24.48 / 0.92 22.94 / 0.88 18.15 / 0.79 21.27 / 0.88 16.89 / 0.77 22.11 / 0.90 18.19 / 0.78 21.59 / 0.87 22.89 / 0.91 20.72 / 0.85 SRNs to three baselines from recent literature. Table 1 and Fig. 6 report quantitative and qualitative results respectively. In all settings, we outperform all baselines by a wide margin. On the training set, we achieve very high visual fidelity. Generally, views are perfectly multi-view consistent, the only exception being objects with distinct, usually fine geometric detail, such as the windscreen of convertibles. None of the baselines succeed in generating multi-view consistent views. Several views per object are usually entirely degenerate. In the two-shot case, where most of the object has been seen, SRNs still reconstruct both object appearance and geometry robustly. 
In the single-shot case, SRNs complete unseen parts of the object in a plausible manner, demonstrating that the learned priors have truthfully captured the underlying distributions. If latent parameters of the scene are known, Supervising parameters for non-rigid deformation. we can condition on these parameters instead of jointly solving for latent variables zj. We generate 50 renderings each from 1000 faces sampled at random from the Basel face model [71]. Camera poses are sampled from a hemisphere in front of the face. Each face is fully defined by a 224-dimensional parameter vector, where the first 160 parameterize identity, and the last 64 dimensions control facial expression. We use a constant ambient illumination to render all faces. Conditioned on this disentangled latent space, SRNs succeed in reconstructing face geometry and appearance. After training, we animate facial expression by varying the 64 expression parameters while keeping the identity fixed, even though this specific combination of identity and expression has not been observed before. Fig. 3 shows qualitative results of this non-rigid deformation. Expressions smoothly transition from one to the other, and the reconstructed normal maps, which are directly computed from the depth maps (not shown), demonstrate that the model has learned the underlying geometry. Geometry reconstruction. SRNs reconstruct geometry in a fully unsupervised manner, purely out of necessity to explain observations in 3D. Fig. 4 visualizes geometry for 50-shot, single-shot, and single-scene reconstructions. Latent space interpolation. Our learned latent space allows meaningful interpolation of object instances. Fig. 5 shows latent space interpolation. Pose extrapolation. Due to the explicit 3D-aware and per-pixel formulation, SRNs naturally generalize to 3D transformations that have never been seen during training, such as camera close-ups or camera roll, even when trained only on up-right camera poses distributed on a sphere around the objects. Please see the supplemental video for examples of pose extrapolation. Failure cases. The ray marcher may “get stuck” in holes of sur- faces or on rays that closely pass by occluders, such as commonly occur in chairs. SRNs generates a continuous surface in these cases, or will sometimes step through the surface. If objects are far away from the training distribution, SRNs may fail to reconstruct geom- etry and instead only match texture. In both cases, the reconstructed geometry allows us to analyze the failure, which is impossible with black-box alternatives. See Fig. 8 and the supplemental video. | BS a, ; : Figure 8: Failure cases. 8 Towards representing room-scale scenes. We demonstrate reconstruction of a room-scale scene with SRNs. We train a single SRN on 500 observations of a minecraft room. The room contains multiple objects as well as four columns, such that parts of the scene are occluded in most observations. After training, the SRN enables novel view synthesis of the room. Though generated images are blurry, they are largely multi-view consistent, with artifacts due to ray marching failures only at object boundaries and thin structures. The SRN succeeds in inferring geometry and appearance of the room, reconstructing occluding columns and objects correctly, failing only on low-texture areas (where geometry is only weakly constrained) and thin tubes placed between columns. Please see the supplemental video for qualitative results. 
# 5 Discussion We introduce SRNs, a 3D-structured neural scene representation that implicitly represents a scene as a continuous, differentiable function. This function maps 3D coordinates to a feature-based representation of the scene and can be trained end-to-end with a differentiable ray marcher to render the feature-based representation into a set of 2D images. SRNs do not require shape supervision and can be trained only with a set of posed 2D images. We demonstrate results for novel view synthesis, shape and appearance interpolation, and few-shot reconstruction. There are several exciting avenues for future work. SRNs could be explored in a probabilistic framework [2, 3], enabling sampling of feasible scenes given a set of observations. SRNs could be extended to model view- and lighting-dependent effects, translucency, and participating media. They could also be extended to other image formation models, such as computed tomography or magnetic resonance imaging. Currently, SRNs require camera intrinsic and extrinsic parameters, which can be obtained robustly via bundle-adjustment. However, as SRNs are differentiable with respect to camera parameters; future work may alternatively integrate them with learned algorithms for camera pose estimation [72]. SRNs also have exciting applications outside of vision and graphics, and future work may explore SRNs in robotic manipulation or as the world model of an independent agent. While SRNs can represent room-scale scenes (see the supplemental video), generalization across complex, cluttered 3D environments is an open problem. Recent work in meta-learning could enable generalization across scenes with weaker assumptions on the dimensionality of the underlying manifold [73]. Please see the supplemental material for further details on directions for future work. # 6 Acknowledgements We thank Ludwig Schubert and Oliver Groth for fruitful discussions. Vincent Sitzmann was supported by a Stanford Graduate Fellowship. Michael Zollhöfer was supported by the Max Planck Center for Visual Computing and Communication (MPC-VCC). Gordon Wetzstein was supported by NSF awards (IIS 1553333, CMMI 1839974), by a Sloan Fellowship, by an Okawa Research Grant, and a PECASE. References [1] M. Tatarchenko, A. Dosovitskiy, and T. Brox, “Single-view to multi-view: Reconstructing unseen views with a convolutional network,” CoRR abs/1511.06702, vol. 1, no. 2, p. 2, 2015. [2] S. A. Eslami, D. J. Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor et al., “Neural scene representation and rendering,” Science, vol. 360, no. 6394, pp. 1204–1210, 2018. [3] A. Kumar, S. A. Eslami, D. Rezende, M. Garnelo, F. Viola, E. Lockhart, and M. Shanahan, “Consistent jumpy predictions for videos and scenes,” 2018. [4] D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow, “Interpretable transformations with encoder-decoder networks,” in Proc. ICCV, vol. 4, 2017. [5] D. Maturana and S. Scherer, “Voxnet: A 3d convolutional neural network for real-time object recognition,” in Proc. IROS, September 2015, p. 922 – 928. [6] V. Sitzmann, J. Thies, F. Heide, M. Nießner, G. Wetzstein, and M. Zollhöfer, “Deepvoxels: Learning persistent 3d feature embeddings,” in Proc. CVPR, 2019. 9 [7] A. Kar, C. Häne, and J. Malik, “Learning a multi-view stereo machine,” in Proc. NIPS, 2017, pp. 365–376. [8] H.-Y. F. Tung, R. Cheng, and K. Fragkiadaki, “Learning spatial common sense with geometry-aware recurrent networks,” Proc. CVPR, 2019. [9] T. H. 
Nguyen-Phuoc, C. Li, S. Balaban, and Y. Yang, “Rendernet: A deep convolutional network for differentiable rendering from 3d shapes,” in Proc. NIPS, 2018. [10] J.-Y. Zhu, Z. Zhang, C. Zhang, J. Wu, A. Torralba, J. Tenenbaum, and B. Freeman, “Visual object networks: image generation with disentangled 3d representations,” in Proc. NIPS, 2018, pp. 118–129. [11] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” Proc. CVPR, 2017. [12] E. Insafutdinov and A. Dosovitskiy, “Unsupervised learning of shape and pose with differentiable point clouds,” in Proc. NIPS, 2018, pp. 2802–2812. [13] M. Meshry, D. B. Goldman, S. Khamis, H. Hoppe, R. Pandey, N. Snavely, and R. Martin-Brualla, “Neural rerendering in the wild,” Proc. CVPR, 2019. [14] C.-H. Lin, C. Kong, and S. Lucey, “Learning efficient point cloud generation for dense 3d object recon- struction,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018. [15] D. Jack, J. K. Pontes, S. Sridharan, C. Fookes, S. Shirazi, F. Maire, and A. Eriksson, “Learning free-form deformations for 3d object reconstruction,” CoRR, 2018. [16] S. Tulsiani, T. Zhou, A. A. Efros, and J. Malik, “Multi-view supervision for single-view reconstruction via differentiable ray consistency,” in Proc. CVPR. [17] J. Wu, C. Zhang, T. Xue, W. T. Freeman, and J. B. Tenenbaum, “Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling,” in Proc. NIPS, 2016, pp. 82–90. [18] M. Gadelha, S. Maji, and R. Wang, “3d shape induction from 2d views of multiple objects,” in 3DV. Computer Society, 2017, pp. 402–411. IEEE [19] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. Guibas, “Volumetric and multi-view cnns for object classification on 3d data,” in Proc. CVPR, 2016. [20] X. Sun, J. Wu, X. Zhang, Z. Zhang, C. Zhang, T. Xue, J. B. Tenenbaum, and W. T. Freeman, “Pix3d: Dataset and methods for single-image 3d shape modeling,” in Proc. CVPR, 2018. [21] D. Jimenez Rezende, S. M. A. Eslami, S. Mohamed, P. Battaglia, M. Jaderberg, and N. Heess, “Unsuper- vised learning of 3d structure from images,” in Proc. NIPS, 2016. [22] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese, “3d-r2n2: A unified approach for single and multi-view 3d object reconstruction,” in Proc. ECCV, 2016. [23] G. Riegler, A. O. Ulusoy, and A. Geiger, “Octnet: Learning deep 3d representations at high resolutions,” in Proc. CVPR, 2017. [24] M. Tatarchenko, A. Dosovitskiy, and T. Brox, “Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs,” in Proc. ICCV, 2017, pp. 2107–2115. [25] C. Haene, S. Tulsiani, and J. Malik, “Hierarchical surface prediction,” Proc. PAMI, pp. 1–1, 2019. [26] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas, “Learning representations and generative models for 3D point clouds,” in Proc. ICML, 2018, pp. 40–49. [27] M. Tatarchenko, A. Dosovitskiy, and T. Brox, “Multi-view 3d models from single images with a convolu- tional network,” in Proc. ECCV, 2016. [28] T. Zhou, R. Tucker, J. Flynn, G. Fyffe, and N. Snavely, “Stereo magnification: learning view synthesis using multiplane images,” ACM Trans. Graph., vol. 37, no. 4, pp. 65:1–65:12, 2018. [29] T. Groueix, M. Fisher, V. G. Kim, B. C. Russell, and M. Aubry, “Atlasnet: A papier-mâché approach to learning 3d surface generation,” in Proc. CVPR, 2018. [30] H. Kato, Y. Ushiku, and T. Harada, “Neural 3d mesh renderer,” in Proc. CVPR, 2018, pp. 3907–3916. [31] A. Kanazawa, S. Tulsiani, A. A. 
Efros, and J. Malik, “Learning category-specific mesh reconstruction from image collections,” in ECCV, 2018. 10 [32] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger, “Occupancy networks: Learning 3d reconstruction in function space,” in Proc. CVPR, 2019. [33] J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove, “Deepsdf: Learning continuous signed distance functions for shape representation,” arXiv preprint arXiv:1901.05103, 2019. [34] K. Genova, F. Cole, D. Vlasic, A. Sarna, W. T. Freeman, and T. Funkhouser, “Learning shape templates with structured implicit functions,” Proc. ICCV, 2019. [35] B. Deng, K. Genova, S. Yazdani, S. Bouaziz, G. Hinton, and A. Tagliasacchi, “Cvxnets: Learnable convex decomposition,” arXiv preprint arXiv:1909.05736, 2019. [36] T. Nguyen-Phuoc, C. Li, L. Theis, C. Richardt, and Y. Yang, “Hologan: Unsupervised learning of 3d representations from natural images,” in Proc. ICCV, 2019. [37] F. Alet, A. K. Jeewajee, M. Bauza, A. Rodriguez, T. Lozano-Perez, and L. P. Kaelbling, “Graph element networks: adaptive, structured computation and memory,” in Proc. ICML, 2019. [38] Y. Liu, Z. Wu, D. Ritchie, W. T. Freeman, J. B. Tenenbaum, and J. Wu, “Learning to describe scenes with programs,” in Proc. ICLR, 2019. [39] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su et al., “Shapenet: An information-rich 3d model repository,” arXiv preprint arXiv:1512.03012, 2015. [40] G. E. Hinton and R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, Jul. 2006. [41] D. P. Kingma and M. Welling, “Auto-encoding variational bayes.” in Proc. ICLR, 2013. [42] L. Dinh, D. Krueger, and Y. Bengio, “NICE: non-linear independent components estimation,” in Proc. ICLR Workshops, 2015. [43] D. P. Kingma and P. Dhariwal, “Glow: Generative flow with invertible 1x1 convolutions,” in NeurIPS, 2018, pp. 10 236–10 245. [44] A. v. d. Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu, “Conditional image generation with pixelcnn decoders,” in Proc. NIPS, 2016, pp. 4797–4805. [45] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu, “Pixel recurrent neural networks,” in Proc. ICML, 2016. [46] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Proc. NIPS, 2014. [47] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in Proc. ICML, 2017. [48] T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of gans for improved quality, stability, and variation,” in Proc. ICLR, 2018. [49] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros, “Generative visual manipulation on the natural image manifold,” in Proc. ECCV, 2016. [50] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” in Proc. ICLR, 2016. [51] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” 2014, arXiv:1411.1784. [52] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. CVPR, 2017, pp. 5967–5976. [53] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. ICCV, 2017. [54] K. O. 
Stanley, “Compositional pattern producing networks: A novel abstraction of development,” Genetic programming and evolvable machines, vol. 8, no. 2, pp. 131–162, 2007. [55] A. Mordvintsev, N. Pezzotti, L. Schubert, and C. Olah, “Differentiable image parameterizations,” Distill, vol. 3, no. 7, p. e12, 2018. 11 [56] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee, “Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision,” in Proc. NIPS, 2016. [57] M. Jaderberg, K. Simonyan, A. Zisserman, and k. kavukcuoglu, “Spatial transformer networks,” in Proc. NIPS, 2015. [58] G. E. Hinton, A. Krizhevsky, and S. D. Wang, “Transforming auto-encoders,” in Proc. ICANN, 2011. [59] A. Yuille and D. Kersten, “Vision as Bayesian inference: analysis by synthesis?” Trends in Cognitive Sciences, vol. 10, pp. 301–308, 2006. [60] T. Bever and D. Poeppel, “Analysis by synthesis: A (re-)emerging program of research for language and vision,” Biolinguistics, vol. 4, no. 2, pp. 174–200, 2010. [61] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, “Deep convolutional inverse graphics network,” in Proc. NIPS, 2015. [62] J. Yang, S. Reed, M.-H. Yang, and H. Lee, “Weakly-supervised disentangling with recurrent transformations for 3d view synthesis,” in Proc. NIPS, 2015. [63] T. D. Kulkarni, P. Kohli, J. B. Tenenbaum, and V. K. Mansinghka, “Picture: A probabilistic programming language for scene perception,” in Proc. CVPR, 2015. [64] H. F. Tung, A. W. Harley, W. Seto, and K. Fragkiadaki, “Adversarial inverse graphics networks: Learning 2d-to-3d lifting and image-to-image translation from unpaired supervision,” in Proc. ICCV. [65] Z. Shu, E. Yumer, S. Hadap, K. Sunkavalli, E. Shechtman, and D. Samaras, “Neural face editing with intrinsic image disentangling,” in Proc. CVPR, 2017. [66] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press, 2003. [67] J. C. Hart, “Sphere tracing: A geometric method for the antialiased ray tracing of implicit surfaces,” The Visual Computer, vol. 12, no. 10, pp. 527–545, 1996. [68] A. Van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves et al., “Conditional image generation with pixelcnn decoders,” in Proc. NIPS, 2016. [69] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997. [70] D. Ha, A. Dai, and Q. V. Le, “Hypernetworks,” in Proc. ICLR, 2017. [71] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, “A 3d face model for pose and illumination invariant face recognition,” in 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance. [72] C. Tang and P. Tan, “Ba-net: Dense bundle adjustment network,” in Proc. ICLR, 2019. [73] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proc. ICML. JMLR. org, 2017, pp. 1126–1135. 12 # Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations —Supplementary Material- # Vincent Sitzmann Michael Zollhéfer Gordon Wetzstein {sitzmann, zollhoefer}@cs.stanford.edu, [email protected] Stanford University # Contents 1 Additional Results on Neural Ray Marching 2 Comparison to DeepVoxels Reproducibility 3.1 Architecture Details... 2... ee 3.2 Time & Memory Complexity... 2... ....... 000.0002. 00 0000. 3.3. Dataset Details .. 2.2... 0.0.0... ee 3.4 SRNs Training Details... 2... ee ee 3.4.1 Generaldetails 22... 2... 00... ee ee 3.4.2 Per-experimentdetails .... 2... 
00.0... 00.00... .0.000. 4 Relationship to per-pixel autoregressive methods Baseline Discussions 5.1 Deterministic VariantofGQN ...... 0... 000000 ee eee 5.2 Tatarchenkoetal. 2... ee 5.3 Worralletal 2... ee 6 Differentiable Ray-Marching in the context of classical renderers Trade-offs of the Pixel Generator vs. CNN-based renderers 8 Future work NDA uN un f& Nn oe | 3 5 7 Preprint. Under review. Raycast_ progress - from top left _to bottom right progress - Final # Normal Map Raycast progress - from top left to bottom right Final # Normal Map Figure 1: Visualizations of ray marching progress and the final normal map. Note that the uniformly colored background does not constrain the depth - as a result, the depth is unconstrained around the silhouette of the object. Since the final normal map visualizes surface detail much better, we only report the final normal map in the main document. # 1 Additional Results on Neural Ray Marching Computation of Normal Maps _ We found that normal maps visualize fine surface detail signifi- cantly better than depth maps (see Fig. 1), and thus only report normal maps in the main submission. We compute surface normals as the cross product of the numerical horizontal and vertical derivatives of the depth map. Ray Marching Progress Visualization The z-coordinates of running and final estimates of inter- sections in each iteration of the ray marcher in camera coordinates yield depth maps, which visualize every step of the ray marcher. Fig. 1 shows two example ray marches, along with their final normal maps. # 2 Comparison to DeepVoxels We compare performance in single-scene novel-view synthesis with the recently proposed Deep Voxels architecture [1] on their four synthetic objects. DeepVoxels proposes a 3D-structured neural scene representation in the form of a voxel grid of features. Multi-view and projective geometry are hard-coded into the model architecture. We further report accuracy of the same baselines as in [1]: a Pix2Pix architecture [2] that receives as input the per-pixel view direction, as well as the methods proposed by Tatarchenko et al. [3] as well as by Worrall et al. [4] and Cohen and Welling [5]. Table 1 compares PSNR and SSIM of the proposed architecture and the baselines, averaged over all 4 scenes. We outperform the best baseline, DeepVoxels [1], by more than 3 dB. Qualitatively, DeepVoxels displays significant multi-view inconsistencies in the form of flickering artifacts, while the proposed method is almost perfectly multi-view consistent. We achieve this result with 550k parameters per model, as opposed to the DeepVoxels architecture with more than 160M free variables. However, we found that SRNs produce blurry output for some of the very high-frequency textural Figure 2: Qualitative results on DeepVoxels objects. For each object: Left: Normal map of recon- structed geometry. Center: SRNs output. Right: Ground Truth. Figure 3: Undersampled letters on the side of the cube (ground truth images). Lines of letters are less than two pixels wide, leading to significant aliasing. Additionally, the 2D downsampling as described in [1] introduced blur that is not multi-view consistent. Figure 4: By using a U-Net renderer similar to [1], we can reconstruct the undersampled letters. In exchange, we lose the guarantee of multi-view consistency. Left: Reconstructed normal map. Center: SRNs output. Right: ground truth. PSNR SSIM Tatarchenko et al. [3] 21.22 0.90 Worrall et al. 
[4] 21.22 0.90 Pix2Pix [2] 23.63 0.92 DeepVoxels [1] 30.55 0.97 SRNs 33.03 0.97 Table 1: Quantitative comparison to DeepVoxels [1]. With 3 orders of magnitude fewer parameters, we achieve a 3dB boost, with reduced multi-view inconsistencies. Differentiable Ray Marching (for n iteration steps) features at world coordinates x; Ray Marching LSTM (Sisv bisa Ci41) = LSTM(v;, hi, ¢;) Camera distance update 7 ot diga =i + Oi41 Scene representation Pixel B Generator t],K camera parameters 1x1 conv Kann gn W Ki Ki Ki features at final KN a) sa a A world coordinates x, OO O5) Abdi i, initial distance to camera [R, Figure 5: Architecture overview: at the heart of SRNs lies a continuous, 3D-aware neural scene representation, ®, which represents a scene as a function that maps (x, y, z) world coordinates to a feature representation of the scene at those coordinates. To render ®, a neural ray-marcher interacts with ® via world coordinates along camera rays, parameterized via their distance d to the camera projective center. Ray Marching begins at a distance do close to the camera. In each step, the scene representation network ® is queried at the current world coordinates x;. The resulting feature vector vu; is fed to the Ray Marching LSTM that predicts a step length 6;41. The world coordinates are updated according to the new distance to the camera, d;1 = d; + 6;41. This is repeated for a fixed number of iterations, n. The features at the final world coordinates v, = ®(«,,) are then translated to an RGB color by the pixel generator. detail - this is most notable with the letters on the sides of the cube. Fig. 3 demonstrates why this is the case. Several of the high-frequency textural detail of the DeepVoxels objects are heavily undersampled. For instance, lines of letters on the sides of the cube often only occupy a single pixel. As a result, the letters alias across viewing angles. This violates one of our key assumptions, namely that the same (a, y, 2) € R° world coordinate always maps to the same color, independent of the viewing angle. As a result, it is impossible for our model to generate these details. We note that detail that is not undersampled, such as the CVPR logo on the top of the cube, is reproduced with perfect accuracy. However, we can easily accommodate for this undersampling by using a 2D CNN renderer. This amounts to a trade-off of our guarantee of multi-view consistency discussed in Sec. 3 of the main paper with robustness to faulty training data. Fig. 2 shows the cube rendered with a U-Net based renderer — all detail is replicated truthfully. # 3 Reproducibility In this section, we discuss steps we take to allow the community to reproduce our results. All code and datasets will be made publicly available. All models were evaluated on the test sets exactly once. # 3.1 Architecture Details Scene representation network ® In all experiments, & is parameterized as a multi-layer perceptron (MLP) with ReLU activations, layer normalization before each nonlinearity [6], and four layers with 256 units each. In all generalization experiments in the main paper, its weights ¢ are the output of the hypernetwork W. In the DeepVoxels comparison (see Sec.2), where a separate ® is trained per scene, parameters of ¢ are directly initialized using the Kaiming Normal method [7]. Hypernetwork Y In generalization experiments, a hypernetwork Y maps a latent vector z; to the weights of the respective scene representation ¢;. Each layer of ® is the output of a separate hypernetwork. 
Each hypernetwork is parameterized as a multi-layer perceptron with ReLU activations, layer normalization before each nonlinearity [6], and three layers (where the last layer has as many units as the respective layer of ® has weights). In the Shapenet and Shepard-Metzler experiments, where the latent codes z; have length 256, hypernetworks have 256 units per layer. In the Basel face experiment, where the latent codes z; have length 224, hypernetworks have 224 units per layer. Weights are initialized by the Kaiming Normal method, scaled by a factor 0.1. We empirically found this initialization to stabilize early training. Ray marching LSTM In all experiments, the ray marching LSTM is implemented as a vanilla LSTM with a hidden state size of 16. The initial state is set to zero. Pixel Generator _ In all experiments, the pixel generator is parameterized as a multi-layer perceptron with ReLU activations, layer normalization before each nonlinearity [6], and five layers with 256 units each. Weights are initialized with the Kaiming Normal method [7]. # 3.2. Time & Memory Complexity Scene representation network ® © scales as a standard MLP. Memory and runtime scale linearly in the number of queries, therefore quadratic in image resolution. Memory and runtime further scale linearly with the number of layers and quadratically with the number of units in each layer. Hypernetwork YW scales as a standard MLP. Notably, the last layer of W predicts all parameters of the scene representation ®. As a result, the number of weights scales linearly in the number of weights of ®, which is significant. For instance, with 256 units per layer and 4 layers, ® has approximately 2 x 10° parameters. In our experiments, W is parameterized with 256 units in all hidden layers. The last layer of Y then has approximately 5 x 10” parameters, which is the bulk of learnable parameters in our model. Please note that Y only has to be queried once to obtain ®, at which point it could be discarded, as both the pixel generation and the ray marching only need access to the predicted ®. Differentiable Ray Marching Memory and runtime of the differentiable ray marcher scale linearly in the number of ray marching steps and quadratically in image resolution. As it queries ® repeatedly, it also scales linearly in the same parameters as ®. Pixel Generator The pixel generator scales as a standard MLP. Memory and runtime scale linearly in the number of queries, therefore quadratic in image resolution. Memory and runtime further scale linearly with the number of layers and quadratically with the number of units in each layer. # 3.3 Dataset Details Shepard-Metzler objects We modified an open-source implementation of a Shepard-Metzler ren- derer (https://github.com/musyoku/gqn-dataset-renderer.git) to generate meshes of Shepard-Metzler objects, which we rendered using Blender to have full control over camera intrinsic and extrinsic parameters consistent with other presented datasets. Shapenet v2 cars We render each object from random camera perspectives distributed on a sphere with radius 1.3 using Blender. We disabled specularities, shadows and transparencies and used environment lighting with energy 1.0. We noticed that a few cars in the dataset were not scaled optimally, and scaled their bounding box to unit length. A few meshes had faulty vertices, resulting ina faulty bounding box and subsequent scaling to a very small size. We discarded those 40 out of 2473 cars. 
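As a concrete illustration of the camera setup just described (poses drawn at random on a sphere of radius 1.3 for the cars, and radius 2.0 for the chairs below), here is a minimal sketch of one way such poses might be sampled. The authors render with Blender, so this helper and its look-at convention are assumptions for illustration, not their actual pipeline.

```python
import numpy as np

def sample_camera_pose(radius, rng):
    """Sample a camera position uniformly on a sphere of the given radius and
    return a 4x4 camera-to-world matrix looking at the origin (illustrative only)."""
    # Uniform direction on the unit sphere via a normalized Gaussian sample.
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    eye = radius * direction

    # Orthonormal look-at frame: forward points from the camera toward the origin.
    forward = -eye / np.linalg.norm(eye)
    up = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(forward, up)) > 0.99:   # avoid a degenerate frame near the poles
        up = np.array([0.0, 1.0, 0.0])
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)

    cam_to_world = np.eye(4)
    cam_to_world[:3, 0] = right
    cam_to_world[:3, 1] = true_up
    cam_to_world[:3, 2] = forward
    cam_to_world[:3, 3] = eye
    return cam_to_world

rng = np.random.default_rng(0)
pose_car = sample_camera_pose(radius=1.3, rng=rng)   # cars: sphere of radius 1.3
```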
Shapenet v2 chairs We render each object from random camera perspectives distributed on a sphere with radius 2.0 using Blender. We disabled specularities, shadows and transparencies and used environment lighting with energy 1.0. Faces dataset We use the Basel Face dataset to generate meshes with different identities at random, where each parameter is sampled from a normal distribution with mean 0 and standard deviation of 0.7. For expressions, we use the blendshape model of Thies et al. [8], and sample expression parameters uniformly in (—0.4, 1.6). DeepVoxels dataset We use the dataset as presented in [1]. # 3.4 SRNs Training Details # 3.4.1 General details Multi-Scale training Our per-pixel formulation naturally allows us to train in a coarse-to-fine setting, where we first train the model on downsampled images in a first stage, and then increase the resolution of images in stages. This allows larger batch sizes at the beginning of the training, which affords more independent views for each object, and is reminiscent of other coarse-to-fine approaches [9]. Solver For all experiments, we use the ADAM solver with 6; = 0.9, 82 = 0.999. Implementation & Compute We implement all models in PyTorch. All models were trained on single GPUs of the type RTX6000 or RTX8000. Hyperparameter search Training hyperparameters for SRNs were found by informal search — we did not perform a systematic grid search due to the high computational cost. # 3.4.2 Per-experiment details For a resolution of 64 x 64, we train with a batch size of 72. Due to the memory complexity being quadratic in the image sidelength, we decrease the batch size by a factor of 4 when we double the image resolution. Agepn is always set to 1 x 10-3 and Anatent is set to 1. The ADAM learning rate is set to 4 x 10~* if not reported otherwise. Shepard-Metzler experiment We directly train our model on images of resolution 64 x 64 for 352 epochs. Shapenet cars We train our model in 2 stages. We first train on a resolution of 64 x 64 for 5k iterations. We then increase the resolution to 128 x 128. We train on the high resolution for 70 epochs. The ADAM learning rate is set to 5 x 1075. Shapenet chairs We train our model in 2 stages. We first train on a resolution of 64 x 64 for 20k iterations. We then increase the resolution to 128 x 128. We train our model for 12 epochs. Basel face experiments We train our model in 2 stages. We first train on a resolution of 64 x 64 for 15k iterations. We then increase the resolution to 128 x 128 and train for another 5k iterations. DeepVoxels experiments We train our model in 3 stages. We first train on a resolution of 12 x 128 with a learning rate of 4 x 10~4 for 20k iterations. We then increase the resolution to 256 x 256, and lower the learning rate to 1 x 1074 and train for another 30k iterations. We then increase the resolution to 512 x 512, and lower the learning rate to 4 x 10~° and train for another 30k iterations. # 4 Relationship to per-pixel autoregressive methods With the proposed per-pixel generator, SRNs are also reminiscent of autoregressive per-pixel archi- tectures, such as PixelCNN and PixelRNN [10, 11]. The key difference to autoregressive per-pixel architectures lies in the modeling of the probability p(Z) of an image J €¢ RÂ¥*™3_ PixelCNN and PixelRNN model an image as a one-dimensional sequence of pixel values 7), ..., Zu. w, and estimate their joint distribution as Axw pZ)= T[ vGilhi,...,Zi-1). 
(1) i=l Instead, conditioned on a scene representation ®, pixel values are conditionally independent, as our approach independentaly and deterministically assigns a value to each pixel. The probability of observing an image Z thus simplifies to the probability of observing a scene ® under extrinsic E and intrinsic K camera parameters p(Z) = p(®)p(E)p(K). (2) This conditional independence of single pixels conditioned on the scene representation further motivates the per-pixel design of the rendering function O. # 5 Baseline Discussions # 5.1 Deterministic Variant of GQN Deterministic vs. Non-Deterministic Eslami et al. [12] propose a powerful probabilistic frame- work for modeling uncertainty in the reconstruction due to incomplete observations. However, here, we are exclusively interested in investigating the properties of the scene representation itself, and this submission discusses SRNs in a purely deterministic framework. To enable a fair comparison, we thus implement a deterministic baseline inspired by the Generative Query Network [12]. We note that the results obtained in this comparison are not necessarily representative of the performance of the unaltered Generative Query Network. We leave a formulation of SRNs in a probabilistic framework and a comparison to the unaltered GQN to future work. Architecture As representation network architecture, we choose the "Tower" representation, and leave its architecture unaltered. However, instead of feeding the resulting scene representation r to a convolutional LSTM architecture to parameterize a density over latent variables z, we instead directly feed the scene representation r to a generator network. We use as generator a deterministic, autoregressive, skip-convolutional LSTM C, the deterministic equivalent of the generator architecture proposed in [12]. Specifically, the generator can be described by the following equations: Initial state (co, ho, uo) = (0, 0,0) (3) Pre-process current canvas Pi = K(uz) (4) State update (Cr41, bi41) = C(E, r, e7, hy, pi) (5) Canvas update wai1 = uy t+ A(hj+1) (6) Final output x =n(uz), (7) with timestep / and final timestep L, LSTM output c; and cell hj states, the canvas u;, a downsampling network «, the camera extrinsic parameters E, an upsampling network A, and a1 x 1 convolutional layer 7. Consistent with [12], all up- and downsampling layers are convolutions of size 4 x 4 with stride 4. To account for the higher resolution of the Shapenet v2 car and chair images, we added a further convolutional layer / transposed convolution where necessary. Training On both the cars and chairs datasets, we trained for 180, 000 iterations with a batch size of 140, taking approximately 6.5 days. For the lower-resolution Shepard-Metzler objects, we trained for 160, 000 iterations at a batch size of 192, or approximately 5 days. Testing For novel view synthesis on the training set, the model receives as input the 15 nearest neighbors of the novel view in terms of cosine similarity. For two-shot reconstruction, the model receives as input whichever of the two reference views is closer to the novel view in terms of cosine similarity. For one-shot reconstruction, the model receives as input the single reference view. # Encoder # Decoder Image To feature transform fe 1900*3 From feature transform Novel View Fully Connected + LeakyReLU Figure 6: Architecture of the baseline method proposed in Worrall et al. [4]. # 5.2. Tatarchenko et al. 
Architecture We implement the exact same architecture as described in [3], with approximately 70 - 10° parameters. Training For training, we choose the same hyperparameters as proposed in Tatarchenko et al. [3]. As we assume no knowledge of scene geometry, we do not supervise the model with a depth map. As we observed the model to overfit, we stopped training early based on model performance on the held-out, official Shapenet v2 validation set. Testing For novel view synthesis on the training set, the model receives as input the nearest neighbor of the novel view in terms of cosine similarity. For two-shot reconstruction, the model receives as input whichever of the two reference views is closer to the novel view. Finally, for one-shot reconstruction, the model receives as input the single reference view. # 5.3. Worrall et al. Architecture Please see Fig. 6 for a visualization of the full architecture. The design choices in this architecture (nearest-neighbor upsampling, leaky ReLU activations, batch normalization) were made in accordance with Worrall et al. [4]. Training For training, we choose the same hyperparameters as proposed in Worrall et al. [4]. Testing For novel view synthesis on the training set, the model receives as input the nearest neighbor of the novel view in terms of cosine similarity. For two-shot reconstruction, the model receives as input whichever of the two reference views is closer to the novel view. Finally, for one-shot reconstruction, the model receives as input the single reference view. # 6 Differentiable Ray-Marching in the context of classical renderers The proposed neural ray-marcher is inspired by the classic sphere tracing algorithm [13]. Sphere tracing was originally developed to render scenes represented via analytical signed distance functions. It is defined by a special choice of the step length: each step has a length equal to the signed distance to the closest surface point of the scene. Since this distance is only zero on the surface of the scene, the algorithm takes non-zero steps until it has arrived at the surface, at which point no further steps are taken. A major downside of sphere-tracing is its weak convergence guarantee: Sphere tracing is only guaranteed to converge for an infinite number of steps. This is easy to see: For any fixed number of steps, we can construct a scene where a ray is parallel to a close surface (or falls through a slim tunnel) and eventually intersects a scene surface. For any constant number of steps, there exists a surface parallel to the ray that is so close that the ray will not reach the target surface. In classical sphere-tracing, this is circumvented by taking a large number of steps that generally take the intersection estimate within a small neighborhood of the scene surface — the color at this point is then simply defined as the color of the closest surface. However, this heuristic can still fail in constructed examples such as the one above. Extensions of sphere tracing propose heuristics to modifying the step length to speed up convergence [11]. The Ray-Marching LSTM instead has the ability to learn the step length. The key driver of computational and memory cost of the proposed rendering algorithm is the ray-marching itself: In every step of the ray-marcher, for every pixel, the scene representation @ is evaluated. Each evaluation of ¢ is a full forward pass through a multi-layer perceptron. See 3.2 for an exact analysis of memory and computational complexity of the different components. 
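To make the marching procedure concrete, the following is a minimal sketch of a single ray-marching pass with a learned step length, using placeholder callables for the scene representation Φ, the step predictor (an LSTM in the paper), and the pixel generator. It illustrates the control flow only and is not the authors' implementation; all names are placeholders.

```python
import numpy as np

def march_rays(phi, step_predictor, pixel_generator,
               ray_origins, ray_dirs, d0=0.05, num_steps=10):
    """Sketch of the differentiable ray-marching loop.

    phi:             maps (N, 3) world coordinates -> (N, F) feature vectors
    step_predictor:  maps (N, F) features (+ recurrent state) -> ((N,) step lengths, state)
    pixel_generator: maps (N, F) final features -> (N, 3) RGB values
    """
    d = np.full(ray_origins.shape[0], d0)          # current distance along each ray
    state = None                                   # recurrent state of the step predictor
    for _ in range(num_steps):
        x = ray_origins + d[:, None] * ray_dirs    # current world coordinates
        v = phi(x)                                 # query the scene representation
        delta, state = step_predictor(v, state)    # learned step length
        d = d + delta                              # advance along the ray
    v_final = phi(ray_origins + d[:, None] * ray_dirs)
    return pixel_generator(v_final), d

# Toy stand-ins so the sketch runs end-to-end.
phi = lambda x: np.tanh(x @ np.ones((3, 8)))
step_predictor = lambda v, s: (0.1 * np.abs(v).mean(axis=1), s)
pixel_generator = lambda v: np.clip(v[:, :3], 0.0, 1.0)
origins = np.zeros((4, 3))
dirs = np.tile([0.0, 0.0, 1.0], (4, 1))
rgb, depth = march_rays(phi, step_predictor, pixel_generator, origins, dirs)
```

Note how each of the `num_steps` iterations issues a full query of `phi` for every pixel, which is exactly the cost driver discussed above.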
Other classical rendering algorithms usually follow a different approach. In modern computer graphics, scenes are often represented via explicit, discretized surface primitives - such as is the case in meshes. This allows rendering via rasterization, where scene geometry is projected onto the image plane of a virtual camera in a single step. As a result, rasterization is computationally cheap, and has allowed for real-time rendering that has approached photo-realism in computer graphics. However, the image formation model of rasterization is not appropriate to simulate physically accurate image formations that involve proper light transport, view-dependent effects, participating media, refraction, translucency etc. As a result, physics-based rendering usually uses ray-tracing algorithms, where for each pixel, a number of rays are traced from the camera via all possible paths to light sources through the scene. If the underlying scene representations are explicit, discrete representations — such as meshes — the intersection testing required is again cheap. Main drivers of computational complexity in such systems are then the number of rays that need to be traced to appropriately sample all paths to lights sources that contribute to the value of a single pixel. In this context, the proposed ray-marcher can be thought of as a sphere-tracing-inspired ray-tracer for implicitly defined scene geometry. It does not currently model multi-bounce ray-tracing, but could potentially be extended in the future (see 8). # 7 Trade-offs of the Pixel Generator vs. CNN-based renderers As described in the main paper, the pixel generator comes with a guarantee of multi-view consistency compared to a 2D-CNN based rendering network. On the flip side, we cannot make use of progress in the design of novel CNN architectures that save memory by introducing resolution bottlenecks and skip connections, such as the U-Net [14]. This means that the pixel generator is comparably memory- hungry, as each layer operates on the full resolution of the image to be generated. Furthermore, CNNs have empirically been demonstrated to be able to generate high-frequency image detail easily. It is unclear what the limitations of the proposed pipeline are with respect to generating high-frequency textural detail. We note that the pixel generator is not a necessary component of SRNs, and can be replaced by a classic 2D-CNN based renderer, as we demonstrate in 2. # 8 Future work Applications outside of vision. SRNs have promising applications outside of vision. Neural scene representations are a core aspect of artificial intelligence, as they allow an agent to model its environment, navigate, and plan interactions. Thus, natural applications of SRNs lie in robotic manipulation or as the world model of an independent agent. Extending SRNs to other image formation models. SRNs could be extended to other image formation models, such as computer tomography or magnetic resonance imaging. All that is required is a differentiable forward model of the image formation. The ray-marcher could be adapted accordingly to integrate features along a ray or to sample at pre-defined locations. For image formation models that observe scenes directly in 3D, the ray-marcher may be left out completely, and ¢ may be sampled directly. Probabilistic formulation. An interesting avenue of future work is to extend SRNs to a probabilis- tic model that can infer a probability distribution over feasible scenes consistent with a given set of observations. 
In the following, we formulate one such approach, very similar to the formulation of Kumar et al. [15], which is in turn based on the work of Eslami et al. [12]. Please note that this formulation is not experimentally verified in the context of SRNs and is described here purely to facilitate further research in this direction. Formally, the model can be summarized as: r, = M(Z, Ei, Ki) ®) rn (9) z~ Poe(z|r) th b= V(z) 2 T = 0(64,E,K) “© We assume that we are given a set of instance datasets D = {Cy}, where each C; consists of tuples {(Z;,E;,K;)}/_,. For a single scene C with n observations, we first replicate and concatenate the camera pose E; and intrinsic parameters K; of each observations to the image channels of the corresponding 2D image Z;. Using a learned convolutional encoder M, we encode each of the n observations to a code vector r;. These code vectors r; are then summed to form a permutation- invariant representation of the scene r. Via an autoregressive DRAW model [16], we form a probability distribution P, that is conditioned on the code vector r and sample latent variables z. z is decoded into the parameters of a scene representation network, ¢, via a hypernetwork U(z) = ¢. Lastly, via our differentiable rendering function ©, we can render images Z from ®, as described in the main paper. This allows to train the full model end-to-end given only 2D images and their camera parameters. We note that the resulting optimization problem is intractable and requires the optimization of an evidence lower bound via an approximate posterior, which we do not derive here — please refer to [15]. Similarly to [15], this formulation will lead to multi-view consistent renderings of each scene, as the scene representation ® stays constant across queries of O. View- and lighting-dependent effects, translucency, and participating media. Another exciting direction for future work is to model further aspects of realistic scenes. One such aspect is view- and lighting dependent effects, such as specularities. For fixed lighting, the pixel generator could receive as input the direction of the camera ray in world coordinates, and could thus reason about the view-dependent color of a surface. To model simple lighting-dependent effects, the pixel generator could further receive the light ray direction as an input (assuming no occlusions). Lastly, the proposed formulation could also be extended to model multiple ray bounces in a ray-casting framework. To model translucency and participating media, the ray-marcher could be extended to sum features along a ray instead of only sampling a feature at the final intersection estimate. Complex 3D scenes and compositionality. While SRNs can represent room-scale scenes (see supplementary video), generalization across such complex, cluttered 3D environments is an open problem. To the best of our knowledge, is has not yet been demonstrated that low-dimensional embeddings are a feasible representation for photo-realistic, general 3D environments. Recent work in meta-learning could enable generalization across scenes without the limitation to a highly low-dimensional manifold [17]. # References [1] V. Sitzmann, J. Thies, F Heide, M. NieBner, G. Wetzstein, and M. Zollhéfer, “Deepvoxels: Learning persistent 3d feature embeddings,” in Proc. CVPR, 2019. 10 P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. CVPR, 2017, pp. 5967-5976. M. Tatarchenko, A. Dosovitskiy, and T. 
Brox, “Single-view to multi-view: Reconstructing unseen views with a convolutional network,” CoRR abs/1511.06702, vol. 1, no. 2, p. 2, 2015. D. E. Worrall, S. J. Garbin, D. Turmukhambetov, and G. J. Brostow, “Interpretable transformations with encoder-decoder networks,” in Proc. ICCV, vol. 4, 2017. T. S. Cohen and M. Welling, “Transformation properties of learned visual representations,” arXiv preprint arXiv: 1412.7659, 2014. J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv: 1607.06450, 2016. K. He, X. Zhang. on imagenet cl. S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance fication,” in Proc. CVPR, 2015, pp. 1026-1034. J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and M. Niefiner, “Face2face: Real-time face capture and reenactment of rgb videos,” in Proc. CVPR, 2016, pp. 2387-2395. T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of gans for improved quality, stability, and variation,” arXiv preprint arXiv:1710.10196, 2017. 10 A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu, “Pixel recurrent neural networks,” arXiv preprint arXiv: 1601.06759, 2016. 11 A. v. d. Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu, “Conditional image generation with pixelcnn decoders,” in Proc. NIPS, 2016. 12 S. A. Eslami, D. J. Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor er al., “Neural scene representation and rendering,” Science, vol. 360, no. 6394, pp. 1204-1210, 2018. 13 J.C. Hart, “Sphere tracing: A geometric method for the antialiased ray tracing of implicit surfaces,” The Visual Computer, vol. 12, no. 10, pp. 527-545, 1996. 14 O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmen- tation,” in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234-241. 15 A. Kumar, S. A. Eslami, D. Rezende, M. Garnelo, F. Viola, E. Lockhart, and M. Shanahan, “Consistent jumpy predictions for videos and scenes,” 2018. 16 K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra, “Draw: A recurrent neural network for image generation,” arXiv preprint arXiv: 1502.04623, 2015. 17 C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017, pp. 1126-1135. 11
{ "id": "1512.03012" }
1906.01604
KERMIT: Generative Insertion-Based Modeling for Sequences
We present KERMIT, a simple insertion-based approach to generative modeling for sequences and sequence pairs. KERMIT models the joint distribution and its decompositions (i.e., marginals and conditionals) using a single neural network and, unlike much prior work, does not rely on a prespecified factorization of the data distribution. During training, one can feed KERMIT paired data $(x, y)$ to learn the joint distribution $p(x, y)$, and optionally mix in unpaired data $x$ or $y$ to refine the marginals $p(x)$ or $p(y)$. During inference, we have access to the conditionals $p(x \mid y)$ and $p(y \mid x)$ in both directions. We can also sample from the joint distribution or the marginals. The model supports both serial fully autoregressive decoding and parallel partially autoregressive decoding, with the latter exhibiting an empirically logarithmic runtime. We demonstrate through experiments in machine translation, representation learning, and zero-shot cloze question answering that our unified approach is capable of matching or exceeding the performance of dedicated state-of-the-art systems across a wide range of tasks without the need for problem-specific architectural adaptation.
http://arxiv.org/pdf/1906.01604
William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, Jakob Uszkoreit
cs.CL, cs.LG, stat.ML
William Chan, Nikita Kitaev, Kelvin Guu, and Mitchell Stern contributed equally
null
cs.CL
20190604
20190604
9 1 0 2 n u J 4 ] L C . s c [ arXiv:1906.01604v1 1 v 4 0 6 1 0 . 6 0 9 1 : v i X r a # KERMIT: Generative Insertion-Based Modeling for Sequences William Chan∗ 1, Nikita Kitaev∗1,3, Kelvin Guu∗ 2, Mitchell Stern∗ 1,3, Jakob Uszkoreit1 1Google Research, Brain Team 2Google Research, AI Language Team 3University of California, Berkeley {williamchan,kguu,usz}@google.com {kitaev,mitchell}@berkeley.edu # Abstract We present KERMIT, a simple insertion-based approach to generative modeling for sequences and sequence pairs. KERMIT models the joint distribution and its decompositions (i.e., marginals and conditionals) using a single neural network and, unlike much prior work, does not rely on a prespecified factorization of the data distribution. During training, one can feed KERMIT paired data (x, y) to learn the joint distribution p(x, y), and optionally mix in unpaired data x or y to refine the marginals p(x) or p(y). During inference, we have access to the condi- tionals p(x | y) and p(y | x) in both directions. We can also sample from the joint distribution or the marginals. The model supports both serial fully autoregressive decoding and parallel partially autoregressive decoding, with the latter exhibiting an empirically logarithmic runtime. We demonstrate through experiments in ma- chine translation, representation learning, and zero-shot cloze question answering that our unified approach is capable of matching or exceeding the performance of dedicated state-of-the-art systems across a wide range of tasks without the need for problem-specific architectural adaptation. # 1 Introduction Neural sequence models (Sutskever et al., 2014; Cho et al., 2014) have been successfully applied to many conditional generation applications, including machine translation (Bahdanau et al., 2015; Luong et al., 2015), speech recognition (Chan et al., 2016; Bahdanau et al., 2016), speech synthesis (Oord et al., 2016; Wang et al., 2017) and image captioning (Vinyals et al., 2015; Xu et al., 2015). Much of the prior work in this area follows the seq2seq encoder-decoder paradigm, where an en- coder builds a representation of an observed sequence x, and a decoder gives the conditional output distribution p(y | x) according to a predetermined factorization, usually left-to-right. While effective for straightforward conditional generation, such an approach is inflexible and cannot readily be applied to other inference tasks such as non-left-to-right generation or infilling. In this work, we present a more general approach called Kontextuell Encoder Representations Made by Insertion Transformations, or KERMIT for short. KERMIT is a simple architecture that directly models the joint distribution p(x, y) and its decompositions (such as the marginals p(x) and p(y) ∗Equal contribution. WC initiated the KERMIT project for machine translation, implemented the corre- sponding code and experiments (Section 4.1) and advised the project. NK proposed using the same model for text generation, implemented and evaluated different monolingual pre-training approaches, and conducted all representation learning experiments (Section 4.2). KG proposed using KERMIT as a zero-shot QA model and conducted all associated experiments (Section 4.3); he also co-developed KERMIT’s training and infer- ence infrastructure. MS developed the mathematical formalism for the model (Section 3) and assisted in the implementation of KERMIT for translation. JU helped conceive the initial idea and advised the project. Preprint. Under review. 
[Figure 1 diagram: a partially complete canvas for the sentence pair, with the sets of tokens to be inserted (e.g. “quite”, “{Kurse, waren}”) shown above their insertion slots.]

Figure 1: An example of the KERMIT insertion objective for the English ↔ German translation pair “The courses proved quite popular” ↔ “Die Kurse waren sehr beliebt”. The model is trained to predict the set of words that need to be inserted at each location. By incurring a loss on both sides, our system learns a fully generative model of the joint distribution over (x, y) pairs, and can accommodate arbitrary generation orders.

                                             Machine Translation   Representation Learning   Cloze Question Answering
                                             En → De (BLEU)        GLUE                      Zero-shot SQuAD (F1)
Autoregressive (Transformer, GPT, GPT-2)     27.3 (a)              72.8 (b)                  16.6
Masking (BERT)                               N/A                   80.5 (c)                  18.9
Insertion (KERMIT – Our Work)                27.8                  79.8                      30.3

Table 1: The KERMIT architecture works well for three categories of tasks: machine translation, representation learning, and zero-shot cloze question answering. (a) Vaswani et al. (2017) (b) Radford et al. (2018) (c) Devlin et al. (2019)

and the conditionals p(y | x) and p(x | y)) in a unified manner. In contrast with traditional seq2seq models, KERMIT does not rely on a prespecified factorization, but is instead able to condition on whatever information is available and infer what remains.

During training, we present KERMIT with paired data (x, y) to learn the joint, and can optionally mix in unpaired data x or y to refine the marginals in a semi-supervised setting. At test time, a single KERMIT model can be used for conditional inference in either direction by restricting the output distribution to p(x | y) or p(y | x) as required. We can also generate paired samples from the joint distribution (x, y) ∼ p(x, y), or unpaired samples from the marginals x ∼ p(x) or y ∼ p(y).

KERMIT uses a simple architecture and is easy to implement. It does not have a separate encoder and decoder, nor does it require causality masks. In our implementation, KERMIT consists of a single Transformer decoder stack (Vaswani et al., 2017). The model is trained to insert the missing tokens into any partially-complete sequence, as shown in Figure 1. We describe the implementation in more detail in Section 3.

We apply KERMIT to a diverse set of tasks, finding that our unified approach is capable of matching or exceeding the performance of dedicated state-of-the-art systems without the need for problem-specific components. We first apply KERMIT to machine translation, where the inputs and outputs are parallel sentence pairs. Then, like its friends ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and ERNIE (Sun et al., 2019), we can also use KERMIT for self-supervised representation learning for use in downstream NLP tasks. Finally, we apply KERMIT to a zero-shot cloze question-answering task demonstrating the infilling capabilities of the model. Table 1 summarizes our results on all three tasks compared to other highly tuned models: Transformer, BERT, GPT and GPT-2.

# 2 Background

In this section, we define some notation and give a brief review of existing sequence models, including autoregressive left-to-right models (Sutskever et al., 2014; Cho et al., 2014) and masked language models (Devlin et al., 2019).

# 2.1 Autoregressive Left-to-Right Models

Let X and Y be the set of all input and output sequences, respectively. In a standard sequence-to-sequence task, we are presented with training data consisting of sequence pairs (x, y) ∈ X × Y, e.g. parallel translations, and we aim to learn the conditional distribution p(y | x).
Traditional autoregressive models (Sutskever et al., 2014; Cho et al., 2014) use a left-to-right factorization, decomposing the distribution as a chain of predictions conditioning on the input x and prefixes y_{<t}:

p(y | x) = ∏_t p(y_t | x, y_{<t}).   (1)

This structure is also used for unconditional sequence tasks such as language modeling where the goal is to learn an unconditional output distribution on its own. A left-to-right factorization is convenient because it allows for exact log-likelihood computation, thereby permitting efficient maximum likelihood estimation. It also leads to simple approximate inference algorithms such as greedy decoding

ŷ_t = argmax_y p(y | x, ŷ_{<t})   (2)

or beam search over sets of multiple hypotheses.

However, there are some drawbacks to the autoregressive approach. First, in the case of conditional generation, it cannot handle situations where the input x is only partially observed. Second, since it utilizes a fixed left-to-right factorization, it cannot be used for other inference tasks like infilling where generation is not monotonic. Moreover, standard inference algorithms require n generation steps to generate n tokens, which could be a bottleneck in end-use applications.

# 2.2 Masked Language Models

Masked Language Models (MLMs) (Devlin et al., 2019) comprise another class of models targeting the unconditional setting. For MLMs, a partial canvas x_s ⊆ x is observed where some of the tokens in x have been masked out, and the objective is to recover x from x_s. For example, for a ground truth canvas x* = (A, B, C, D, E) and a partial canvas x*_s = (A, _, C, D, _), the model should learn to replace the second blank with B and the last blank with E. The model outputs an independent prediction at each position, and its objective is to maximize p(x | x_s). Because the exact locations of the slots are known in x_s, the model does not need to predict where the missing items are located, but only what they should be. Consequently, the model is not immediately suitable for generation, as the canvas size needs to be fixed during inference and cannot change over time (i.e., |x_s| = |x|). MLMs have been successfully applied in self-supervised representation learning settings, leading to strong results on downstream language tasks (Devlin et al., 2019).

# 3 KERMIT

In this section we propose KERMIT, a novel insertion-based generative model. Unlike the prior work mentioned in Section 2, KERMIT does not have the rigid construction of modeling the target sequence given some fully observed source sequence, nor does it assume a left-to-right factorization (and generation order) of the output sequence. To motivate and arrive at our model, we formalize then extend a recent insertion-based conditional modeling framework proposed by Stern et al. (2019).

We begin with the unconditional setting. In order to model sequences without requiring a fixed factorization or imposing constraints on the order of generation, we make use of a framework in which sequences are constructed via insertion operations. Given a sequence x = (x_1, . . . , x_n) and a generation order z represented as a permutation of the indices {1, . . . , n}, we define the corresponding sequence ((c^z_1, l^z_1), . . . , (c^z_n, l^z_n)) of insertion operations which produces x according to order z. Here, c^z_i is the token inserted at step i, and l^z_i with 1 ≤ l^z_i ≤ i is an insertion location relative to the current hypothesis.
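To make the mapping from a generation order to insertion operations concrete, the following small sketch (not code from the paper) converts a sequence and an order z into (content, location) pairs; it reproduces the worked example discussed next.

```python
def insertion_operations(x, z):
    """Given a sequence x and a generation order z (a permutation of 1..n, 1-indexed),
    return the (content, location) insertion operations that produce x in order z.
    Locations are 1-indexed slots in the growing hypothesis, as defined above.
    (Illustrative sketch only.)"""
    ops, inserted = [], []            # `inserted` holds the original indices present so far
    for zi in z:
        # The new token's slot is its rank among the indices already inserted, plus one.
        location = sum(1 for j in inserted if j < zi) + 1
        ops.append((x[zi - 1], location))
        inserted.append(zi)
    return ops

# Reproduces the example below: () -> (C) -> (A, C) -> (A, B, C).
assert insertion_operations(("A", "B", "C"), (3, 1, 2)) == [("C", 1), ("A", 1), ("B", 2)]
```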
For example, if constructing the sequence (A, B, C) as () → (C) → (A, C) → (A, B, C), we would have z = (3, 1, 2) with (c^z_1, l^z_1) = (C, 1), (c^z_2, l^z_2) = (A, 1), and (c^z_3, l^z_3) = (B, 2).

Next let (x^{z,i}_1, . . . , x^{z,i}_i) denote the subsequence of x corresponding to the (ordered) extraction of the elements at indices {z_1, . . . , z_i}. This is the partial output at iteration i. Note that this will be the same for all permutations z with the same unordered set of indices in the first i positions. For the example above for instance, we have (x^{z,2}_1, x^{z,2}_2) = (A, C).

Armed with these definitions, we can now write out p(x) as a marginalization over all possible orders z ∈ S_n for sequence length n, where S_n denotes the set of all permutations on n elements:

p(x) = ∑_{z∈S_n} p(x, z)   (3)
     = ∑_{z∈S_n} p(z) p(x | z)   (4)
     = ∑_{z∈S_n} p(z) ∏_{i=1}^{n} p((c^z_i, l^z_i) | (c^z_1, l^z_1), . . . , (c^z_{i−1}, l^z_{i−1}))   (5)
     = ∑_{z∈S_n} p(z) ∏_{i=1}^{n} p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1}),   (6)

where the last line encodes the Markov assumption that the order of insertions leading to a given canvas is not important, just the result. Typically we will use a uniform prior over permutations for p(z), though other options are available, such as the balanced binary tree prior described by Stern et al. (2019).

# 3.1 Learning

Although exact computation of the log-likelihood is intractable due to the marginalization over the generation order z, we can lower bound the log-likelihood using Jensen’s inequality via

log p(x) = log ∑_{z∈S_n} p(z) p(x | z)   (7)
         ≥ ∑_{z∈S_n} p(z) log p(x | z) =: L(x).   (8)

Substituting in our expression for p(x | z) from above, we have

L(x) = ∑_{z∈S_n} p(z) log ∏_{i=1}^{n} p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1})   (9)
     = ∑_{z∈S_n} p(z) ∑_{i=1}^{n} log p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1}).   (10)

Next we interchange the summations and break the permutation z down into (z_1, . . . , z_{i−1}) corresponding to previous insertions, z_i corresponding to the next insertion, and (z_{i+1}, . . . , z_n) corresponding to future insertions, giving

L(x) = ∑_{z∈S_n} p(z) ∑_{i=1}^{n} log p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1})   (11)
     = ∑_{i=1}^{n} ∑_{z_{1:i−1}} ∑_{z_i} ∑_{z_{i+1:n}} p(z) log p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1})   (12)
     = ∑_{i=1}^{n} ∑_{z_{1:i−1}} p(z_{1:i−1}) ∑_{z_i} p(z_i | z_{1:i−1}) log p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1}) ∑_{z_{i+1:n}} p(z_{i+1:n} | z_{1:i})   (13)
     = ∑_{i=1}^{n} ∑_{z_{1:i−1}} p(z_{1:i−1}) ∑_{z_i} p(z_i | z_{1:i−1}) log p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1}),   (14)

where the simplification in the last line follows from the fact that ∑_{z_{i+1:n}} p(z_{i+1:n} | z_{1:i}) = 1.

From here, we can multiply and divide the outer sum by n to turn it into a mean, then arrive at the following simple sampling procedure to compute an unbiased estimate of our lower bound L(x) on the log-likelihood for a single example (see the sketch below):

1. Sample a generation step i ∼ Uniform([1, n]).
2. Sample a partial permutation z_{1:i−1} ∼ p(z_{1:i−1}) for the first i − 1 insertions.
3. Compute a weighted sum over the next-step losses log p((c^z_i, l^z_i) | x^{z,i−1}_{1:i−1}), scaled by the weighting distribution p(z_i | z_{1:i−1}) and the sequence length n.
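To make the three-step procedure concrete, here is a minimal Python sketch assuming a uniform prior over generation orders; `neg_log_prob` is a placeholder for a call into the trained network and is not part of the paper's code.

```python
import random

def kermit_loss_single_example(x, neg_log_prob, rng=random):
    """Monte-Carlo estimate of the (negative) lower bound for one sequence x,
    following the three-step procedure above with a uniform prior over orders.

    neg_log_prob(partial, content, location) should return
    -log p((content, location) | partial canvas) under the model (placeholder)."""
    n = len(x)
    i = rng.randint(1, n)                          # 1. sample a generation step
    prefix = rng.sample(range(n), i - 1)           # 2. sample z_{1:i-1} uniformly
    remaining = [j for j in range(n) if j not in prefix]

    # Partial canvas after the first i-1 insertions (original order preserved).
    present = sorted(prefix)
    partial = [x[j] for j in present]

    # 3. weighted sum over next-step losses, scaled by p(z_i | z_{1:i-1}) = 1/|remaining| and by n.
    loss = 0.0
    for j in remaining:
        location = sum(1 for k in present if k < j) + 1   # slot where x[j] would be inserted
        loss += (1.0 / len(remaining)) * neg_log_prob(partial, x[j], location)
    return n * loss
```

With a uniform prior the inner weights are simply 1/(n − i + 1), so in expectation over the sampled step and prefix this recovers −L(x) exactly.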
Figure 2: Diagram of various models. The Transformer (a) model predicts the next right token given the left context. The BERT (b) model predicts what is missing in the blank slots given the context. The Insertion Transformer (c) model predicts where and what is missing given the context. The KERMIT (d) model is a generalization of (c) where the context is over multiple sequences.

                                             ↔    En → De   De → En   Iterations
Transformer (Vaswani et al., 2017)           ✗    27.3                n
Transformer (Our Implementation)             ✗    27.8      31.2      n
NAT (Gu et al., 2018)                        ✗    17.7      21.5
Iterative Refinement (Lee et al., 2018)      ✗    21.6      25.5
Blockwise Parallel (Stern et al., 2018)      ✗    27.4
Insertion Transformer (Stern et al., 2019)   ✗    27.4
Unidirectional (p(y | x) or p(x | y))        ✗    27.8      30.7
Bidirectional (p(y | x) and p(x | y))        ✓    27.2      27.6
Joint (p(x, y))                              ✓    25.6      27.4
 + Marginal Refining (p(x) and p(y))         ✓    25.8      28.6
 ֒→ Unidirectional Finetuning                ✗    28.7      31.4
 ֒→ Bidirectional Finetuning                 ✓    28.1      28.6

Table 2: WMT English ↔ German newstest2014 BLEU. Models capable of translating in both directions are marked with ↔.

# 3.2 Inference

Using this model, inference can be autoregressive via greedy decoding

(ĉ, l̂) = argmax_{c,l} p(c, l | x̂_t)   (15)

or partially autoregressive via parallel decoding

ĉ_l = argmax_c p(c | l, x̂_t).   (16)

In the case of parallel decoding, we perform simultaneous insertions at all non-finished slots. If we use a balanced binary tree prior for p(z) (Stern et al., 2019), we can even achieve an empirical runtime of ≈ log2 n iterations to generate n tokens. One key advantage of insertion-based models over MLMs is that the output canvas can dynamically grow in size, meaning the length does not need to be chosen before the start of generation.

# 3.3 Pairs of Sequences

Thus far, we have discussed KERMIT for single sequences. We can easily extend KERMIT to pairs of sequences by directly modeling (x, y) as a concatenation of two sequences, (x, y) = (x_1, . . . , x_n, y_1, . . . , y_m). For example, let our first sequence be x = (A, B, C, ⟨EOS⟩) and our second sequence be y = (A′, B′, C′, D′, E′, ⟨EOS⟩). The concatenated sequence would then be (x, y) = (A, B, C, ⟨EOS⟩, A′, B′, C′, D′, E′, ⟨EOS⟩). With this approach, we can model pairs of sequences as if they were single sequences. Moreover, unlike seq2seq, our model is symmetric with regards to its treatment of the source and target, making it a strong candidate for extensions to multimodal data settings in future work.

By keeping our architecture order-agnostic and marginalizing over all possible orders in our training objective, KERMIT is able to learn the joint distribution and all its decompositions, including the marginals p(x) and p(y) and conditionals p(y | x) and p(x | y). We can also perform targeted training. More explicitly, if the model is provided with a canvas that fully contains x or y, then it will learn a conditional distribution. If the model is provided with an example where x or y is empty, then it will learn the opposing marginal distribution.

# 3.4 Model

We implement KERMIT as a single Transformer decoder stack (Vaswani et al., 2017), without any form of causal masking. The full self-attention mechanism allows the model to capture any relationships between the input canvas and the predicted insertion operations with a constant number of operations. We follow Stern et al. (2019) and model the (content, location) distribution p(c, l) as a factorized distribution p(c, l) = p(c | l) p(l), where p(c | l) is the standard Transformer softmax over the vocabulary and p(l) is a softmax over the locations.
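The factorized prediction p(c, l) = p(c | l) p(l) and the parallel decoding rule of Eq. (16) suggest a very simple decoding loop. The sketch below is illustrative only: `predict` stands in for the trained model, and a sentinel token stands in for the end-of-slot mechanism used to mark finished slots.

```python
def parallel_decode(predict, max_iters=20, end_of_slot="<eos_slot>"):
    """Sketch of the partially autoregressive decoding loop: at every iteration
    we insert the argmax token into each unfinished slot of the canvas.

    predict(canvas) is a placeholder for the model; it should return, for each of
    the len(canvas)+1 slots, the argmax content token (or `end_of_slot` to skip)."""
    canvas = []
    for _ in range(max_iters):
        choices = predict(canvas)                  # one prediction per slot
        new_canvas, inserted = [], False
        for slot in range(len(canvas) + 1):
            token = choices[slot]
            if token != end_of_slot:               # insert into this slot
                new_canvas.append(token)
                inserted = True
            if slot < len(canvas):                 # copy the existing token after the slot
                new_canvas.append(canvas[slot])
        canvas = new_canvas
        if not inserted:                           # every slot is finished
            break
    return canvas
```

Because every unfinished slot can receive a token at each iteration, the canvas can roughly double in length per step, which is consistent with the empirically logarithmic number of iterations reported for the balanced binary tree prior.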
Figure 2 visualizes the differences between a standard Transformer (Vaswani et al., 2017), BERT (Devlin et al., 2019), Insertion Trans- former (Stern et al., 2019) and KERMIT. # 4 Experiments We perform experiments with KERMIT on the tasks of machine translation, self-supervised repre- sentation learning, and zero-shot cloze question answering. # 4.1 Machine Translation We first apply KERMIT on the competitive WMT 2014 English ↔ German translation task. We follow the hyperparameter settings of the base Transformer (Vaswani et al., 2018). However, since KERMIT does not have an encoder, we simply double the decoder width. We perform no addi- tional hyperparameter tuning. We also follow prior work (Gu et al., 2018; Stern et al., 2018, 2019; Lee et al., 2018) in using distillation (Hinton et al., 2015; Kim and Rush, 2016) to train our models. We follow Stern et al. (2019) in using a balanced binary tree loss, and we similarly observe an em- pirically logarithmic number of generation steps in sequence length when using parallel decoding. However, unlike Stern et al. (2019) we did not need to tune an EOS penalty, but simply set it to zero for all experiments. We train several different KERMIT models for translation. First we train two unidirectional models, where the model observes a full source sentence (i.e., English or German) and is asked to generate the corresponding target sentence (i.e., German or English). These separately learn the conditional distributions p(y | x) and p(x | y), mimicking the traditional conditional generation setup. On the WMT 2014 test set, we achieve 27.8/30.7 BLEU with this approach, roughly matching our base Transformer baseline of 27.8/31.2 BLEU. We also train a bidirectional model on the union of the two unidirectional training sets, yielding a single model that captures both conditional distributions p(y | x) and p(x | y). We do not change any hyperparameters when training this model (i.e., we do not increase model capacity). The combined approach obtains 27.2/27.6 BLEU, nearly matching the baseline for English → German but falling slightly behind in the reverse direction. We also train a full joint model that captures the full joint distribution p(x, y) and factorizations thereof. Like the bidirectional model, the joint model can translate in either direction, but it can 6 Input: In order to develop such a concept, the town is reliant on the cooperation of its citizens. Predicted: Um ein solches Konzept zu entwickeln, ist die Stadt auf die Zusammenarbeit ihrer Bürger angewiesen. # Parallel decode: Um_ ein_ solches_ Konzept_ zu_ entwickeln_ , _ ist_ die_ Stadt_ auf_ die_ Zusammenarbeit_ ihrer_ Bürger_ angewiesen_ ._ Um_ ein_ solches_ Konzept_ zu_ entwickeln_ , _ ist_ die_ Stadt_ auf_ die_ Zusammenarbeit_ ihrer_ Bürger_ angewiesen_ ._ Um_ ein_ solches_ Konzept_ zu_ entwickeln_ , _ ist_ die_ Stadt_ auf_ die_ Zusammenarbeit_ ihrer_ Bürger_ angewiesen_ ._ Um_ ein_ solches_ Konzept_ zu_ entwickeln_ , _ ist_ die_ Stadt_ auf_ die_ Zusammenarbeit_ ihrer_ Bürger_ angewiesen_ ._ Um_ ein_ solches_ Konzept_ zu_ entwickeln_ , _ ist_ die_ Stadt_ auf_ die_ Zusammenarbeit_ ihrer_ Bürger_ angewiesen_ ._ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Input: Frühere Gespräche zwischen den Parteien haben nur wenig zur Beilegung der Spannungen beigetragen, die durch eine Reihe von Zusammenstößen in diesem Jahr befeuert wurden. 
Predicted: Previous talks between the parties have done little to resolve the tensions fueled by a series of clashes this year. # Parallel decode: Prev ious_ talks_ between_ the_ parties_ have_ done_ little_ to_ resolve_ the_ tensions_ fueled_ by_ a_ series_ of_ cla she s_ this_ year_ ._ Prev ious_ talks_ between_ the_ parties_ have_ done_ little_ to_ resolve_ the_ tensions_ fueled_ by_ a_ series_ of_ cla she s_ this_ year_ ._ Prev ious_ talks_ between_ the_ parties_ have_ done_ little_ to_ resolve_ the_ tensions_ fueled_ by_ a_ series_ of_ cla she s_ this_ year_ ._ Prev ious_ talks_ between_ the_ parties_ have_ done_ little_ to_ resolve_ the_ tensions_ fueled_ by_ a_ series_ of_ cla she s_ this_ year_ ._ Prev ious_ talks_ between_ the_ parties_ have_ done_ little_ to_ resolve_ the_ tensions_ fueled_ by_ a_ series_ of_ cla she s_ this_ year_ ._ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 3: Example parallel decodes using KERMIT for English → German and German → English translation. In each row, the blue underlined tokens are those being inserted, and the gray tokens are those from the final output that have not yet been generated. Empirically, KERMIT requires only ≈ log2 n steps to generate a sequence of length n when trained with a balanced binary tree prior. additionally be used for sampling or completing partial inputs. We use the same hyperparameter set as before. Since the model is now faced with a much more challenging task, it does slightly worse when limited to the same model size, but still reaches a respectable 25.6/27.4 BLEU. Unlike the previous models, however, we can incorporate monolingual data into the joint model’s training setup to supplement its knowledge of the marginals p(x) and p(y). We accordingly train a joint model with all our paired data and 1M additional samples of English and German monolingual data randomly selected from the WMT 2014 monolingual corpus. Without altering model capacity, we find that refining the marginals gives us a 1.2 BLEU improvement on German → English. Finally, we take the model which was trained on the full joint distribution with marginal refinement, and further finetune it on both the unidirectional and bidirectional settings. We find a small improvement in BLEU over the original models in both settings. Table 2 summarizes our results. We emphasize that virtually all of our models outperform prior non-fully-autoregressive approaches in terms of BLEU. We also note that the observed number of iterations required to generate n tokens is roughly log2 n due to the use of a balanced binary tree loss and parallel decoding, which is substantially lower than autoregressive models which require n steps. Some examples of parallel decodes are shown in Figure 3. Our models require an average of 5.5-6.5 decoding iterations for the sentences in the test set, outperforming the constant-time models of Lee et al. (2018) which require 10 iterations in both BLEU and empirical decoding complexity. We also draw samples from the model to highlight its infilling and generation capabilities. Figure 4 captures some examples. We first show unconditional sampling of an (English, German) sentence pair. We also take a translation example from the newstest2013 dev set and split it in half, sampling completions after seeding the English side with the first half and the German side with the second half. 
We find the model is capable of generating a very diverse set of coherent samples. # 4.2 Representation Learning Like its close friend BERT (Devlin et al., 2019), KERMIT can also be used for self-supervised representation learning and applied to various language understanding tasks. We follow the same training procedure and hyperparameter setup as BERTLARGE . However, instead of masking 15% of the tokens and replacing them with blank tokens like in BERT (Devlin et al., 2019), KERMIT simply drops them out completely from the sequence. 7 # No seeding (unconditional): English: Nonetheless, we feel, with fury, at the fact that the 500 million have no contradiction on a common approach to productivity. German: Dennoch sind wir mit Wut der Ansicht, dass die 500 Millionen keinen Widerspruch in einem gemeinsamen Produktivitätsansatz aufweisen. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . English Groundtruth: - Please tell us, in simple terms, about the work your research group does. German Groundtruth: - Beschreiben Sie bitte kurz, welche Forschungen Ihre Gruppe betreibt. English Seed: - Please tell us, in simple terms German Seed: welche Forschungen Ihre Gruppe betreibt. English: - Please tell us what sort of research your group is conducting, in simple terms. German: - Bitte teilen Sie uns einfach mit, welche Forschungen Ihre Gruppe betreibt. English: - Please tell us, in quite simple terms, what kind of research your group operates. German: - Bitte teilen Sie uns in ganz einfach mit, welche Forschungen Ihre Gruppe betreibt. English: - Please tell us what research, in simple terms, what your group actually runs. German: - Bitte sagen Sie uns ganz einfach, welche Forschungen Ihre Gruppe eigentlich betreibt. English: - Please, tell us what research your group is doing, in more simple terms. German: - Bitte sagen Sie uns, welche Forschungen Ihre Gruppe einfacher betreibt. English: - Please tell us what your group will be conducting public research on, in simple terms. German: - Bitte teilen Sie uns einfach mit, welche Forschungen Ihre Gruppe betreibt. English: - Please tell us, what sort of research your group is undertaking in simple terms. German: - Bitte sagen Sie uns, welche Forschungen Ihre Gruppe betreibt. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Figure 4: Paired translation samples drawn from KERMIT, with and without seeding the initial canvas with text. In the bottom portion of the figure, the seed text is shown in gray, and different continuations sampled from the model are shown in black. We emphasize the diversity of generation. Model Generative? CoLA SST-2 MRPC STS-B QQP MNLI-(m/mm) QNLI RTE WNLI AX Score GPT (Radford et al., 2018) BERT (Devlin et al., 2019) ✓ ✗ 45.4 60.5 91.3 94.9 82.3/75.7 89.3/85.4 82.0/80.0 87.6/86.5 70.3/88.5 72.1/89.3 82.1/81.4 86.7/85.9 87.4 92.7 56.0 70.1 53.4 65.1 29.8 39.6 KERMIT ✓ 60.0 94.2 88.6/84.3 86.6/85.6 71.7/89.0 85.6/85.2 92.0 68.4 65.1 37.6 72.8 80.5 79.8 Table 3: GLUE benchmark scores (as computed by the GLUE evaluation server). Of these models, only GPT and KERMIT admit a straightforward generation process. Prior to BERT, the best representation learning approach was to use a language model such as GPT (Radford et al., 2018). 
BERT outperforms GPT in large part because of its deeply bi-directional architecture, but in the process BERT sacrifices the ability to perform straightforward generation. While we find KERMIT to perform slightly behind BERT, KERMIT maintains the ability to generate text while obtaining results that are much closer to BERT than to GPT. The GLUE benchmark (Wang et al., 2019) results are summarized in Table 3.

# 4.3 Zero-Shot Cloze Question Answering

Finally, we also investigate the infilling abilities of KERMIT and related approaches by evaluating their performance on zero-shot cloze question answering. In particular, we aim to understand how effective these models are for fill-in-the-blank-style question answering after being trained only on language modeling data without any task-specific fine-tuning.

For this experiment, we use the human-annotated QA2D dataset assembled by Demszky et al. (2018), which consists of examples from the SQuAD dataset (Rajpurkar et al., 2016) in which the answer has been extended from a single phrase into a full declarative sentence. These can be transformed into cloze instances by removing the answer phrase from the declarative output. For example, given the question “When was Madonna born?” and the answer “August 16, 1958”, the full declarative answer would be “Madonna was born on August 16, 1958.”, and the associated cloze instance would be “Madonna was born on ___.”

Passage: Plymouth has a post-war shopping area in the city centre with substantial pedestrianisation. At the west end of the zone inside a grade II listed building is the Pannier Market that was completed in 1959 – pannier meaning "basket" from French, so it translates as "basket market". In terms of retail floorspace, Plymouth is ranked in the top five in the South West, and 29th nationally. Plymouth was one of the first ten British cities to trial the new Business Improvement District initiative. The Tinside Pool is situated at the foot of the Hoe and became a grade II listed building in 1998 before being restored to its 1930s look for £3.4 million.
Question: What notable location was named a grade II listed building in 1998?
Cloze: ___ was named a grade II listed building in 1998
GPT-2 → “A listed building”; + Oracle Length → “Plymouth”
BERT → “plymouth”; + Oracle Length → “: the pool”
KERMIT → “the tinside pool”
Correct → “Tinside Pool”

Figure 5: Example of KERMIT, BERT and GPT-2 performing zero-shot cloze question answering on SQuAD. The question and cloze question are bolded. Note that BERT and GPT-2 prefer a shorter, incorrect answer, unless given the oracle answer length.

We take the KERMIT model trained from Section 4.2 and two powerful language models (BERT (Devlin et al., 2019) and the largest public version of GPT-2² (Radford et al., 2019)), and evaluate their ability to fill in the blank of each cloze instance, each without specifically being trained on data of this form. We employ different decoding strategies as required for each model, detailed below.

KERMIT We split the passage in half and present KERMIT with examples of the form

[CLS] passage(1/2) [SEP] passage(2/2) question cloze(left) cloze(right) [SEP]

where cloze(left) and cloze(right) are the portions of the declarative answer before and after the gap.
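For concreteness, the construction of such an example can be sketched as follows; the helper names below are illustrative only and are not part of any released code.

```python
# Build a cloze instance from a declarative QA2D answer and pack it into the
# KERMIT input format above, leaving the gap between cloze(left) and cloze(right).

def make_cloze(declarative_answer: str, answer_phrase: str):
    """Remove the answer phrase, keeping the left/right context of the gap."""
    left, _, right = declarative_answer.partition(answer_phrase)
    return left.strip(), right.strip()

def kermit_cloze_example(passage: str, question: str, cloze_left: str, cloze_right: str):
    words = passage.split()
    first_half = " ".join(words[: len(words) // 2])
    second_half = " ".join(words[len(words) // 2:])
    prefix = f"[CLS] {first_half} [SEP] {second_half} {question} {cloze_left}"
    suffix = f"{cloze_right} [SEP]"
    # At decode time, insertions are restricted to the single slot between
    # `prefix` and `suffix`; the inserted tokens are returned as the answer.
    return prefix, suffix

left, right = make_cloze("Madonna was born on August 16, 1958.", "August 16, 1958")
# left == "Madonna was born on", right == "."
```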
Since KERMIT can natively perform insertions, we simply perform a parallel decode constrained to take place within the gap and extract the output as our answer.

BERT We split the passage in half and present BERT with examples of the form

[CLS] passage(1/2) [SEP] passage(2/2) question cloze(left) [MASK]*n cloze(right) [SEP]

Here we include explicit [MASK] tokens, running separate decodes with n = 1, 2, . . . up to 4 or the oracle answer length, whichever is greater. We then choose the one with the highest score under the model and extract the outputs at the masked positions as the answer. Each decode consists of a beam search in which one [MASK] is filled at a time. For each element on the beam, we choose the remaining [MASK] position with the highest confidence (lowest entropy) as the next position to fill. We found that this beam search did substantially better than left-to-right decoding or parallel decoding.

GPT-2 For GPT-2, a left-to-right language model, we cannot directly condition on both the left and right context. Instead, we first present the model with the prefix

passage question cloze(left)

and sample continuations of varying lengths. For each continuation, we then append cloze(right) and compute the score of the full sequence under the model. We select the best-scoring sequence and extract the portion in the gap as the answer. To efficiently obtain continuations of varying lengths, we generate 20 extended continuations from the model, then treat all prefixes of those continuations as candidate values to go in the gap.

We evaluate on 50,000 cloze-formulated questions from SQuAD, using the standard SQuAD evaluation script to compute accuracy in terms of exact match and token-level F1. Results are presented in Table 4. KERMIT performs significantly better on this zero-shot cloze task than the other two approaches thanks to its infilling capabilities learned through its insertion-oriented objective, achieving 30.3 F1 and 20.9% exact match. BERT’s performance falls short of KERMIT, as it often prefers shorter completions since it is not required to handle length modeling during training. GPT-2 lags further behind the others due to its inability to condition on the context on both sides of the gap during inference. Even when the oracle length (i.e., the ground-truth length of the answer) is provided to BERT and GPT-2, KERMIT still substantially outperforms all other models.

| Model            | Exact Match | F1   |
| GPT-2            | 10.9        | 16.6 |
|  + Oracle Length | 12.2        | 18.2 |
| BERT             | 12.3        | 18.9 |
|  + Oracle Length | 16.2        | 23.1 |
| KERMIT           | 20.9        | 30.3 |

Table 4: SQuAD zero-shot cloze question answering.

² The 345M parameter “medium size” model.

# 5 Conclusion

In this paper, we present KERMIT, an insertion-based framework for sequences that can model the joint data distribution and its decompositions (i.e., marginals and conditionals). KERMIT can generate text in an arbitrary order – including bidirectional machine translation and cloze-style infilling – and empirically can generate sequences in logarithmic time. It uses a simple neural architecture that can additionally produce contextualized vector representations of words and sentences. We find KERMIT is capable of matching or exceeding state-of-the-art performance on three diverse tasks: machine translation, representation learning, and zero-shot cloze question answering.
# Acknowledgments We give thanks to Samy Bengio, Zhifeng Chen, Jamie Kiros, Luheng He, Geoffrey Hinton, Quoc Le, Lala Li, Mohammad Norouzi, Yu Zhang, and the Google Brain team for useful discussions and technical assistance. Special thanks to Jamie Kiros for brainstorming the name KERMIT. # References Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR. Bahdanau, D., Chorowski, J., Serdyuk, D., Brakel, P., and Bengio, Y. (2016). End-to-End Attention- based Large Vocabulary Speech Recognition. In ICASSP. Chan, W., Jaitly, N., Le, Q., and Vinyals, O. (2016). Listen, Attend and Spell: A Neural Network for Large Vocabulary Conversational Speech Recognition. In ICASSP. Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In EMNLP. Demszky, D., Guu, K., and Liang, P. (2018). Transforming Question Answering Datasets Into Natural Language Inference Datasets. In arXiv. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of Deep Bidirec- tional Transformers for Language Understanding. In NAACL. Gu, J., Bradbury, J., Xiong, C., Li, V. O., and Socher, R. (2018). Non-Autoregressive Neural Ma- chine Translation. In ICLR. Hinton, G., Vinyals, O., and Dean, J. (2015). Distilling the Knowledge in a Neural Network. In NIPS Deep Learning and Representation Learning Workshop. Kim, Y. and Rush, A. M. (2016). Sequence-Level Knowledge Distillation. In EMNLP. Lee, J., Mansimov, E., and Cho, K. (2018). Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement. In EMNLP. Luong, M.-T., Pham, H., and Manning, C. D. (2015). Effective Approaches to Attention-based Neural Machine Translation. In EMNLP. Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. (2016). WaveNet: A Generative Model for Raw Audio. In arXiv. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. In NAACL. 10 Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2018). Improving Language Under- standing by Generative Pre-Training. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners. Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). SQuAD: 100,000+ Questions for Ma- chine Comprehension of Text. In EMNLP. Stern, M., Chan, W., Kiros, J., and Uszkoreit, J. (2019). Insertion Transformer: Flexible Sequence Generation via Insertion Operations. In ICML. Stern, M., Shazeer, N., and Uszkoreit, J. (2018). Blockwise Parallel Decoding for Deep Autoregres- sive Models. In NeurIPS. Sun, Y., Wang, S., Li, Y., Feng, S., Chen, X., Zhang, H., Tian, X., Zhu, D., Tian, H., and Wu, H. (2019). ERNIE: Enhanced Representation through Knowledge Integration. In arXiv. Sutskever, I., Vinyals, O., and Le, Q. (2014). Sequence to Sequence Learning with Neural Networks. In NIPS. Vaswani, A., Bengio, S., Brevdo, E., Chollet, F., Gomez, A. N., Gouws, S., Jones, L., Kaiser, L., Kalchbrenner, N., Parmar, N., Sepassi, R., Shazeer, N., and Uszkoreit, J. (2018). Tensor2Tensor for Neural Machine Translation. In AMTA. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polo- sukhin, I. (2017). Attention Is All You Need. 
In NIPS. Vinyals, O., Toshev, A., Bengio, S., and Erhan, D. (2015). Show and Tell: A Neural Image Caption Generator. In CVPR. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2019). GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR. Wang, Y., Skerry-Ryan, R., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., Yang, Z., Xiao, Y., Chen, Z., Bengio, S., Le, Q., Agiomyrgiannakis, Y., Clark, R., and Saurous, R. A. (2017). Tacotron: Towards End-to-End Speech Synthesis. In INTERSPEECH. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., and Bengio, Y. (2015). Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. In ICML. 11
{ "id": "1906.01604" }
1906.01502
How multilingual is Multilingual BERT?
In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2018) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.
http://arxiv.org/pdf/1906.01502
Telmo Pires, Eva Schlinger, Dan Garrette
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20190604
20190604
9 1 0 2 n u J 4 ] L C . s c [ 1 v 2 0 5 1 0 . 6 0 9 1 : v i X r a # How multilingual is Multilingual BERT? # Eva Schlinger Google Research {telmop,eschling,dhgarrette}@google.com # Abstract In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et al. (2019) as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annota- tions in one language are used to fine-tune the model for evaluation in another language. To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs. From these results, we can conclude that M-BERT does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs. # Introduction Deep, contextualized language models provide powerful, general-purpose linguistic represen- tations that have enabled significant advances among a wide range of natural language process- ing tasks (Peters et al., 2018b; Devlin et al., 2019). These models can be pre-trained on large corpora of readily available unannotated text, and then fine-tuned for specific tasks on smaller amounts of supervised data, relying on the induced language model structure to facilitate generalization beyond the annotations. Previous work on model prob- ing has shown that these representations are able to encode, among other things, syntactic and named entity information, but they have heretofore fo- cused on what models trained on English capture about English (Peters et al., 2018a; Tenney et al., 2019b,a). In this paper, we empirically investigate the degree to which these representations generalize across languages. We explore this question us- ing Multilingual BERT (henceforth, M-BERT), re- leased by Devlin et al. (2019) as a single language model pre-trained on the concatenation of mono- lingual Wikipedia corpora from 104 languages.1 M-BERT is particularly well suited to this probing study because it enables a very straightforward ap- proach to zero-shot cross-lingual model transfer: we fine-tune the model using task-specific super- vised training data from one language, and evalu- ate that task in a different language, thus allowing us to observe the ways in which the model gener- alizes information across languages. Our results show that M-BERT is able to perform cross-lingual generalization surprisingly well. More importantly, we present the results of a number of probing experiments designed to test various hypotheses about how the model is able to perform this transfer. Our experiments show that while high lexical overlap between languages im- proves transfer, M-BERT is also able to transfer between languages written in different scripts— thus having zero lexical overlap—indicating that it captures multilingual representations. We fur- ther show that transfer works best for typolog- ically similar languages, suggesting that while M-BERT’s multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target lan- guage with different word order. 
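The probing setup used throughout the paper boils down to a loop over fine-tuning and evaluation languages. The following schematic sketch produces the kind of transfer matrices reported in the next section; `transfer_matrix`, `finetune`, `evaluate` and the per-language datasets are placeholders for exposition, not the authors' code.

```python
# Schematic sketch of zero-shot cross-lingual evaluation with M-BERT.

def transfer_matrix(pretrained_model, datasets, finetune, evaluate):
    """datasets maps language -> (train_split, test_split).

    Returns scores[ft_lang][eval_lang]; off-diagonal entries are zero-shot,
    since the model only ever sees task annotations in ft_lang."""
    scores = {}
    for ft_lang, (train_split, _) in datasets.items():
        model = finetune(pretrained_model, train_split)      # e.g., NER or POS supervision
        scores[ft_lang] = {
            eval_lang: evaluate(model, test_split)           # e.g., F1 for NER, accuracy for POS
            for eval_lang, (_, test_split) in datasets.items()
        }
    return scores
```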
# 2 Models and Data Like the original English BERT model (hence- forth, EN-BERT), M-BERT is a 12 layer trans- former (Devlin et al., 2019), but instead of be- ∗Google AI Resident. 1https://github.com/google-research/bert Fine-tuning \ Eval EN DE NL ES EN 90.70 73.83 65.46 65.38 DE 69.74 82.00 65.68 59.40 NL 77.36 76.25 89.86 64.39 ES 73.59 70.03 72.10 87.18 Table 1: NER F1 results on the CoNLL data. ing trained only on monolingual English data with an English-derived vocabulary, it is trained on the Wikipedia pages of 104 languages with a shared word piece vocabulary. It does not use any marker denoting the input language, and does not have any explicit mechanism to encourage translation- equivalent pairs to have similar representations. For NER and POS, we use the same sequence tagging architecture as Devlin et al. (2019). We to- kenize the input sentence, feed it to BERT, get the last layer’s activations, and pass them through a fi- nal layer to make the tag predictions. The whole model is then fine-tuned to minimize the cross en- tropy loss for the task. When tokenization splits words into multiple pieces, we take the prediction for the first piece as the prediction for the word. # 2.1 Named entity recognition experiments We perform NER experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and Ger- man (Tjong Kim Sang, 2002; Sang and Meulder, 2003); and an in-house dataset with 16 languages,2 using the same CoNLL categories. Table 1 shows M-BERT zero-shot performance on all language pairs in the CoNLL data. # 2.2 Part of speech tagging experiments We perform POS experiments using Universal De- pendencies (UD) (Nivre et al., 2016) data for 41 languages.3 We use the evaluation sets from Ze- man et al. (2017). Table 2 shows M-BERT zero- shot results for four European languages. We see that M-BERT generalizes well across languages, achieving over 80% accuracy for all pairs. 2Arabic, Bengali, Czech, German, English, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Por- tuguese, Russian, Turkish, and Chinese. 3Arabic, Bulgarian, Catalan, Czech, Danish, German, Greek, English, Spanish, Estonian, Basque, Persian, Finnish, French, Galician, Hebrew, Hindi, Croatian, Hungarian, In- donesian, Italian, Japanese, Korean, Latvian, Marathi, Dutch, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (Eu- ropean and Brazilian), Romanian, Russian, Slovak, Slove- nian, Swedish, Tamil, Telugu, Turkish, Urdu, and Chinese. Fine-tuning \ Eval EN DE ES IT EN 96.82 83.99 81.64 86.79 DE 89.40 93.99 88.87 87.82 ES 85.91 86.32 96.71 91.28 IT 91.60 88.39 93.71 98.11 Table 2: POS accuracy on a subset of UD languages. eee Multilingual BERT x English BERT Zero-shot F1 Score 25 30 35 40 Average overlap [%] Figure 1: Zero-shot NER F1 score versus entity word piece overlap among 16 languages. While performance using EN-BERT depends directly on word piece over- lap, M-BERT’s performance is largely independent of overlap, indicating that it learns multilingual represen- tations deeper than simple vocabulary memorization. # 3 Vocabulary Memorization Because M-BERT uses a single, multilingual vo- cabulary, one form of cross-lingual transfer occurs when word pieces present during fine-tuning also appear in the evaluation languages. In this sec- tion, we present experiments probing M-BERT’s dependence on this superficial form of generaliza- tion: How much does transferability depend on lexical overlap? 
And is transfer possible to lan- guages written in different scripts (no overlap)? # 3.1 Effect of vocabulary overlap If M-BERT’s ability to generalize were mostly due to vocabulary memorization, we would expect zero-shot performance on NER to be highly depen- dent on word piece overlap, since entities are of- ten similar across languages. To measure this ef- fect, we compute Etrain and Eeval, the sets of word pieces used in entities in the training and evalu- ation datasets, respectively, and define overlap as the fraction of common word pieces used in the entities: overlap = |Etrain∩Eeval| / |Etrain∪Eeval|. Figure 1 plots NER F1 score versus entity over- lap for zero-shot transfer between every language pair in an in-house dataset of 16 languages, for both M-BERT and EN-BERT.4 We can see that 4Results on CoNLL data follow the same trends, but those trends are more apparent with 16 languages than with 4. Model Lample et al. (2016) EN-BERT EN 90.94 91.07 DE 78.76 73.32 NL 81.74 84.23 ES 85.75 81.84 Table 3: NER F1 results fine-tuning and evaluating on the same language (not zero-shot transfer). performance using EN-BERT depends directly on word piece overlap: the ability to transfer dete- riorates as word piece overlap diminishes, and F1 scores are near zero for languages written in differ- ent scripts. M-BERT’s performance, on the other hand, is flat for a wide range of overlaps, and even for language pairs with almost no lexical overlap, scores vary between 40% and 70%, showing that M-BERT’s pretraining on multiple languages has enabled a representational capacity deeper than simple vocabulary memorization.5 To further verify that EN-BERT’s inability to generalize is due to its lack of a multilingual rep- resentation and not an inability of its English- specific word piece vocabulary to represent data in other languages, we evaluate on non-cross-lingual NER and see that it performs comparably to a pre- vious state of the art model (see Table 3). # 3.2 Generalization across scripts M-BERT’s ability to transfer between languages that are written in different scripts, and thus have effectively zero lexical overlap, is surprising given that it was trained on separate monolingual cor- pora and not with a multilingual objective. To probe deeper into how the model is able to per- form this generalization, Table 4 shows a sample of POS results for transfer across scripts. Among the most surprising results, an M-BERT model that has been fine-tuned using only POS- labeled Urdu (written in Arabic script), achieves 91% accuracy on Hindi (written in Devanagari script), even though it has never seen a single POS- tagged Devanagari word. This provides clear ev- idence of M-BERT’s multilingual representation ability, mapping structures onto new vocabularies based on a shared representation induced solely from monolingual language model training data. However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indi- cating that M-BERT’s multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section 4.2, is typological similarity. English and Japanese have a different order of subject, verb 5Individual language trends are similar to aggregate plots. HI UR HI 97.1 91.1 UR 85.9 93.8 EN BG JA EN 96.8 82.2 57.4 BG 87.1 98.9 67.2 JA 49.4 51.6 96.5 Table 4: POS accuracy on the UD test set for languages with different scripts. Row=fine-tuning, column=eval. 
and object, while English and Bulgarian have the same, and M-BERT may be having trouble gener- alizing across different orderings. # 4 Encoding Linguistic Structure In the previous section, we showed that M-BERT’s ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation. In this section, we present probing experiments that investigate the nature of that representation: How does typological similarity affect M-BERT’s abil- ity to generalize? Can M-BERT generalize from monolingual inputs to code-switching text? Can the model generalize to transliterated text without transliterated language model pretraining? # 4.1 Effect of language similarity Following Naseem et al. (2012), we compare lan- guages on a subset of the WALS features (Dryer and Haspelmath, 2013) relevant to grammatical ordering.6 Figure 2 plots POS zero-shot accuracy against the number of common WALS features. As expected, performance improves with similar- ity, showing that it is easier for M-BERT to map linguistic structures when they are more similar, although it still does a decent job for low similar- ity languages when compared to EN-BERT. # 4.2 Generalizing across typological features Table 5 shows macro-averaged POS accuracies for transfer between languages grouped according to two typological features: subject/object/verb or- der, and adjective/noun order7 (Dryer and Haspel- math, 2013). The results reported include only they do not include cases zero-shot transfer, i.e. 681A (Order of Subject, Object and Verb), 85A (Order of Adposition and Noun), 86A (Order of Genitive and Noun), 87A (Order of Adjective and Noun), 88A (Order of Demon- strative and Noun), and 89A (Order of Numeral and Noun). 7SVO languages: Bulgarian, Catalan, Czech, Danish, English, Spanish, Estonian, Finnish, French, Galician, He- brew, Croatian, Indonesian, Italian, Latvian, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, SOV Languages: Basque, Farsi, Hindi, and Chinese. Japanese, Korean, Marathi, Tamil, Telugu, Turkish, and Urdu. 100- + 80- 60- HET 20 Zero-shot Accuracy [%] =— Average score for Multilingual BERT =-» Average score for English BERT 1 2 3 4 5 6 Number of common WALS features Figure 2: Zero-shot POS accuracy versus number of common WALS features. Due to their scarcity, we ex- clude pairs with no common features. SVO SVO 81.55 SOV 63.98 SOV 66.52 64.22 AN AN 73.29 NA 75.10 NA 70.94 79.64 Table 5: Macro-average POS accuracies when trans- ferring between SVO/SOV languages or AN/NA lan- guages. Row = fine-tuning, column = evaluation. training and testing on the same language. We can see that performance is best when transferring between languages that share word order features, suggesting that while M-BERT’s multilingual rep- resentation is able to map learned structures onto new vocabularies, it does not seem to learn sys- tematic transformations of those structures to ac- commodate a target language with different word order. # 4.3 Code switching and transliteration Code-switching (CS)—the mixing of multi- ple languages within a single utterance—and transliteration—writing that in the lan- guage’s standard script—present unique test cases for M-BERT, which is pre-trained on monolingual, standard-script corpora. 
Generalizing to code-switching is similar to other cross-lingual transfer scenarios, but would benefit to an even larger degree from a shared multilingual representation. Likewise, generalizing to transliterated text is similar to other cross-script transfer experiments, but has the additional caveat that M-BERT was not pre-trained on text that looks like the target.

We test M-BERT on the CS Hindi/English UD corpus from Bhat et al. (2018), which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script. Table 6 shows the results for models fine-tuned using a combination of monolingual Hindi and English, and using the CS training set (both fine-tuning on the script-corrected version of the corpus as well as the transliterated version).

|                              | Corrected | Transliterated |
| Train on monolingual HI+EN   |           |                |
|   M-BERT                     | 86.59     | 50.41          |
|   Ball and Garrette (2018)   | —         | 77.40          |
| Train on code-switched HI/EN |           |                |
|   M-BERT                     | 90.56     | 85.64          |
|   Bhat et al. (2018)         | —         | 90.53          |

Table 6: M-BERT’s POS accuracy on the code-switched Hindi/English dataset from Bhat et al. (2018), on script-corrected and original (transliterated) tokens, and comparisons to existing work on code-switch POS.

On the script-corrected version of the corpus, i.e., when Hindi is written in Devanagari, M-BERT’s performance when trained only on monolingual corpora is comparable to its performance when training on code-switched data, and it is likely that some of the remaining difference is due to domain mismatch. This provides further evidence that M-BERT uses a representation that is able to incorporate information from multiple languages.

However, M-BERT is not able to effectively transfer to a transliterated target, suggesting that it is the language model pre-training on a particular language that allows transfer to that language. M-BERT is outperformed by previous work in both the monolingual-only and code-switched supervision scenarios. Neither Ball and Garrette (2018) nor Bhat et al. (2018) use contextualized word embeddings, but both incorporate explicit transliteration signals into their approaches.

# 5 Multilingual characterization of the feature space

In this section, we study the structure of M-BERT’s feature space. If it is multilingual, then the transformation mapping between the same sentence in 2 languages should not depend on the sentence itself, just on the language pair.

# 5.1 Experimental Setup

We sample 5000 pairs of sentences from WMT16 (Bojar et al., 2016) and feed each sentence (separately) to M-BERT with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except [CLS] and [SEP], to get a vector for each sentence at each layer l, v(l)_LANG. For each pair of sentences, e.g. (v(l)_EN,i, v(l)_DE,i), we compute the vector pointing from one to the other and average it over all pairs:

v̄(l)_EN→DE = (1/M) Σ_i ( v(l)_DE,i − v(l)_EN,i ),

where M is the number of pairs. Finally, we translate each English sentence vector v(l)_EN,i by adding v̄(l)_EN→DE, find the closest German sentence vector, and measure the fraction of times the nearest neighbour is the correct pair, which we call the “nearest neighbor accuracy”.

Figure 3: Accuracy of nearest neighbor translation for EN-DE, EN-RU, and HI-UR, shown as a function of the layer (1–12).

# 5.2 Results

In Figure 3, we plot the nearest neighbor accuracy for EN-DE (solid line).
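The measure just defined can be computed in a few lines of NumPy. In the sketch below, `v_en` and `v_de` are assumed to already hold the layer-l sentence vectors of the M translation pairs (mean-pooled hidden states without [CLS]/[SEP]); extracting them from M-BERT is not shown.

```python
import numpy as np

def nearest_neighbor_accuracy(v_en: np.ndarray, v_de: np.ndarray) -> float:
    """v_en, v_de: (M, d) arrays of paired sentence vectors at one layer."""
    offset = (v_de - v_en).mean(axis=0)            # average EN->DE vector
    translated = v_en + offset                     # "translate" each English vector
    # Pairwise squared Euclidean distances to all German sentence vectors.
    d2 = ((translated ** 2).sum(1)[:, None]
          - 2.0 * translated @ v_de.T
          + (v_de ** 2).sum(1)[None, :])
    nearest = d2.argmin(axis=1)
    return float((nearest == np.arange(len(v_en))).mean())

# Example with random vectors standing in for real activations:
rng = np.random.default_rng(0)
v_en = rng.normal(size=(100, 768))
print(nearest_neighbor_accuracy(v_en, v_en + rng.normal(scale=0.01, size=v_en.shape)))
```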
It achieves over 50% accuracy for all but the bottom layers,9 which seems to imply that the hidden representations, al- though separated in space, share a common sub- space that represents useful linguistic information, in a language-agnostic way. Similar curves are ob- tained for EN-RU, and UR-HI (in-house dataset), showing this works for multiple languages. As to the reason why the accuracy goes down in the last few layers, one possible explanation is that since the model was pre-trained for language mod- eling, it might need more language-specific infor- mation to correctly predict the missing word. # 6 Conclusion In this work, we showed that M-BERT’s ro- bust, often surprising, ability to generalize cross- lingually is underpinned by a multilingual repre- sentation, without being explicitly trained for it. The model handles transfer across scripts and to code-switching fairly well, but effective transfer to typologically divergent and transliterated targets 5Tn terms of ¢5 distance. °Our intuition is that the lower layers have more “token level” information, which is more language dependent, par- ticularly for languages that share few word pieces. will likely require the model to incorporate an ex- plicit multilingual training objective, such as that used by Lample and Conneau (2019) or Artetxe and Schwenk (2018). As to why M-BERT generalizes across lan- guages, we hypothesize that having word pieces used in all languages (numbers, URLs, etc) which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space. It is our hope that these kinds of probing exper- iments will help steer researchers toward the most promising lines of inquiry by encouraging them to focus on the places where current contextualized word representation approaches fall short. # 7 Acknowledgements We would like to thank Mark Omernick, Livio Baldini Soares, Emily Pitler, Jason Riesa, and Slav Petrov for the valuable discussions and feedback. # References Mikel Artetxe and Holger Schwenk. 2018. Mas- sively multilingual sentence embeddings for zero- arXiv shot cross-lingual preprint arXiv:1812.10464. Kelsey Ball and Dan Garrette. 2018. Part-of-speech tagging for code-switched, transliterated texts with- out explicit language identification. In Proceedings of EMNLP. Irshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2018. Universal dependency parsing In Proceedings for Hindi-English code-switching. of NAACL. Ondˇrej Bojar, Yvette Graham, Amir Kamran, and Miloˇs Stanojevi´c. 2016. Results of the WMT16 metrics shared task. In Proceedings of the First Con- ference on Machine Translation: Volume 2, Shared Task Papers. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL. Matthew S. Dryer and Martin Haspelmath, edi- tors. 2013. WALS Online. Max Planck In- stitute for Evolutionary Anthropology, Leipzig. https://wals.info/. Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL. Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291. Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. 
In Proceedings of ACL. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A mul- In Proceedings of tilingual treebank collection. LREC. Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018a. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word rep- resentations. In Proceedings of NAACL. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP pipeline. In Proceedings of ACL. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? Probing for sentence structure in contex- In Proceedings of tualized word representations. ICLR. Introduction to the CoNLL-2002 shared task: Language-independent In Proceedings of named entity recognition. CoNLL. Daniel Zeman, Martin Popel, Milan Straka, Jan Ha- jic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Fran- cis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, V´aclava Kettnerov´a, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Mis- sil¨a, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Le- ung, Marie-Catherine de Marneffe, Manuela San- guinetti, Maria Simi, Hiroshi Kanayama, Valeria de- Paiva, Kira Droganova, H´ector Mart´ınez Alonso, C¸ a˘grı C¸ ¨oltekin, Umut Sulubacak, Hans Uszkor- eit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Str- nadov´a, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multi- lingual parsing from raw text to universal dependen- cies. In Proceedings of CoNLL. # A Model Parameters All models were fine-tuned with a batch size of 32, and a maximum sequence length of 128 for 3 epochs. We used a learning rate of 3e−5 with learning rate warmup during the first 10% of steps, and linear decay after- wards. We also applied 10% dropout on the last layer. No parameter tuning was performed. We used the BERT-Base, Multilingual Cased checkpoint from https://github. com/google-research/bert. # B CoNLL Results for EN-BERT Fine-tuning \Eval EN DE NL ES EN 91.07 55.36 59.36 55.09 DE 24.38 73.32 27.57 26.13 NL 40.62 54.84 84.23 48.75 ES 49.99 50.80 53.15 81.84 Table 7: NER results on the CoNLL test sets for EN-BERT. The row is the fine-tuning language, the column the evaluation language. There is a big gap between this model’s zero-shot performance and M-BERT’s, showing that the pre-training is helping in cross-lingual transfer. 
# C Some POS Results for EN-BERT

| Fine-tuning \ Eval | EN    | DE    | ES    | IT    |
| EN                 | 96.94 | 28.62 | 28.78 | 52.48 |
| DE                 | 38.31 | 92.63 | 46.15 | 48.08 |
| ES                 | 50.38 | 30.23 | 94.36 | 76.51 |
| IT                 | 46.07 | 25.59 | 71.50 | 96.41 |

Table 8: POS accuracy on the UD test sets for a subset of European languages using EN-BERT. The row specifies a fine-tuning language, the column the evaluation language. There is a big gap between this model's zero-shot performance and M-BERT's, showing the pre-training is helping learn a useful cross-lingual representation for grammar.
{ "id": "1812.10464" }
1906.00695
Continual learning with hypernetworks
Artificial neural networks suffer from catastrophic forgetting when they are sequentially trained on multiple tasks. To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity. Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks only require rehearsing task-specific weight realizations, which can be maintained in memory using a simple regularizer. Besides achieving state-of-the-art performance on standard CL benchmarks, additional experiments on long task sequences reveal that task-conditioned hypernetworks display a very large capacity to retain previous memories. Notably, such long memory lifetimes are achieved in a compressive regime, when the number of trainable hypernetwork weights is comparable or smaller than target network size. We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. Finally, forward information transfer is further supported by empirical results on a challenging CL benchmark based on the CIFAR-10/100 image datasets.
http://arxiv.org/pdf/1906.00695
Johannes von Oswald, Christian Henning, Benjamin F. Grewe, João Sacramento
cs.LG, cs.AI, stat.ML, 68T99
Published at ICLR 2020
null
cs.LG
20190603
20220411
2 2 0 2 r p A 1 1 ] G L . s c [ 4 v 5 9 6 0 0 . 6 0 9 1 : v i X r a Published as a conference paper at ICLR 2020 # CONTINUAL LEARNING WITH HYPERNETWORKS # Johannes von Oswald*, Christian Henning*, Benjamin F. Grewe, João Sacramento *Equal contribution Institute of Neuroinformatics University of Zürich and ETH Zürich Zürich, Switzerland {voswaldj,henningc,bgrewe,rjoao}@ethz.ch # ABSTRACT Artificial neural networks suffer from catastrophic forgetting when they are se- quentially trained on multiple tasks. To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity. Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks only require rehearsing task-specific weight realizations, which can be maintained in memory using a simple regularizer. Besides achieving state-of- the-art performance on standard CL benchmarks, additional experiments on long task sequences reveal that task-conditioned hypernetworks display a very large capacity to retain previous memories. Notably, such long memory lifetimes are achieved in a compressive regime, when the number of trainable hypernetwork weights is comparable or smaller than target network size. We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. Finally, forward information transfer is further supported by empirical results on a challenging CL benchmark based on the CIFAR-10/100 image datasets. # INTRODUCTION We assume that a neural network f (x, Θ) with trainable weights Θ is given data from a set of tasks {(X(1), Y(1)), . . . , (X(T ), Y(T ))}, with input samples X(t) = {x(t,i)}nt i=1 and output samples Y(t) = {y(t,i)}nt i=1, where nt ≡ |X(t)|. A standard training approach learns the model using data from all tasks at once. However, this is not always possible in real-world problems, nor desirable in an online learning setting. Continual learning (CL) refers to an online learning setup in which tasks are presented sequentially (see van de Ven & Tolias, 2019, for a recent review on CL). In CL, when learning a new task t, starting with weights Θ(t−1) and observing only (X(t), Y(t)), the goal is to find a new set of parameters Θ(t) that (1) retains (no catastrophic forgetting) or (2) improves (positive backward transfer) performance on previous tasks compared to Θ(t−1) and (3) solves the new task t potentially utilizing previously acquired knowledge (positive forward transfer). Achieving these goals is non-trivial, and a longstanding issue in neural networks research. Here, we propose addressing catastrophic forgetting at the meta level: instead of directly attempting to retain f (x, Θ) for previous tasks, we fix the outputs of a metamodel fh(e, Θh) termed task-conditioned hypernetwork which maps a task embedding e to weights Θ. Now, a single point has to be memorized per task. To motivate such approach, we perform a thought experiment: we assume that we are allowed to store all inputs {X(1), . . . , X(T )} seen so far, and to use these data to compute model outputs corresponding to Θ(T −1). In this idealized setting, one can avoid forgetting by simply mixing data from the current task with data from the past, {(X(1), ˆY(1)), . . . 
, (X(T −1), ˆY(T −1)), (X(T ), Y(T ))}, where ˆY(t) refers to a set of synthetic targets generated using the model itself f ( · , Θ(t−1)). Hence, by training to retain previously acquired input-output mappings, one can obtain a sequential algorithm in principle as powerful as multi-task learning. Multi-task learning, where all tasks are learned 1 Published as a conference paper at ICLR 2020 simultaneously, can be seen as a CL upper-bound. The strategy described above has been termed rehearsal (Robins, 1995). However, storing previous task data violates our CL desiderata. Therefore, we introduce a change in perspective and move from the challenge of maintaining individual input-output data points to the problem of maintaining sets of parameters {Θ(t)}, without explicitly storing them. To achieve this, we train the metamodel parameters Θh analogous to the above outlined learning scheme, where synthetic targets now correspond to weight configurations that are suitable for previous tasks. This exchanges the storage of an entire dataset by a single low-dimensional task descriptor, yielding a massive memory saving in all but the simplest of tasks. Despite relying on regularization, our approach is a conceptual departure from previous algorithms based on regularization in weight (e.g., Kirkpatrick et al., 2017; Zenke et al., 2017) or activation space (e.g., He & Jaeger, 2018). Our experimental results show that task-conditioned hypernetworks do not suffer from catastrophic forgetting on a set of standard CL benchmarks. Remarkably, they are capable of retaining memories with practically no decrease in performance, when presented with very long sequences of tasks. Thanks to the expressive power of neural networks, task-conditioned hypernetworks exploit task-to- task similarities and transfer information forward in time to future tasks. Finally, the task-conditional metamodelling perspective that we put forth is generic, as it does not depend on the specifics of the target network architecture. We exploit this key principle and show that the very same metamodelling framework extends to, and can improve, an important class of CL methods known as generative replay methods, which are current state-of-the-art performers in many practical problems (Shin et al., 2017; Wu et al., 2018; van de Ven & Tolias, 2018). 2 MODEL # 2.1 TASK-CONDITIONED HYPERNETWORKS Hypernetworks parameterize target models. The centerpiece of our approach to continual learn- ing is the hypernetwork, Fig. 1a. Instead of learning the parameters Θtrgt of a particular function ftrgt directly (the target model), we learn the parameters Θh of a metamodel. The output of such meta- model, the hypernetwork, is Θtrgt. Hypernetworks can therefore be thought of as weight generators, which were originally introduced to dynamically parameterize models in a compressed form (Ha et al., 2017; Schmidhuber, 1992; Bertinetto et al., 2016; Jia et al., 2016). # a b Figure 1: Task-conditioned hypernetworks for continual learning. (a) Commonly, the parame- ters of a neural network are directly adjusted from data to solve a task. Here, a weight generator termed hypernetwork is learned instead. Hypernetworks map embedding vectors to weights, which parameterize a target neural network. In a continual learning scenario, a set of task-specific em- beddings is learned via backpropagation. Embedding vectors provide task-dependent context and bias the hypernetwork to particular solutions. 
(b) A smaller, chunked hypernetwork can be used iteratively, producing a chunk of target network weights at a time (e.g., one layer at a time). Chunked hypernetworks can achieve model compression: the effective number of trainable parameters can be smaller than the number of target network weights.

Continual learning with hypernetwork output regularization. One approach to avoid catastrophic forgetting is to store data from previous tasks and corresponding model outputs, and then fix such outputs. This can be achieved using an output regularizer of the following form, where past outputs play the role of pseudo-targets (Robins, 1995; Li & Hoiem, 2018; Benjamin et al., 2018):

L_output = Σ_{t=1}^{T−1} Σ_{i=1}^{|X(t)|} ‖ f(x(t,i), Θ*) − f(x(t,i), Θ) ‖² ,    (1)

In the equation above, Θ* is the set of parameters before attempting to learn task T, and f is the learner. This approach, however, requires storing and iterating over previous data, a process that is known as rehearsing. This is potentially expensive memory-wise and not strictly online learning. A possible workaround is to generate the pseudo-targets by evaluating f on random patterns (Robins, 1995) or on the current task dataset (Li & Hoiem, 2018). However, this does not necessarily fix the behavior of the function f in the regions of interest. Hypernetworks sidestep this problem naturally. In target network weight space, a single point (i.e., one set of weights) has to be fixed per task. This can be efficiently achieved with task-conditioned hypernetworks, by fixing the hypernetwork output on the appropriate task embedding.

Similar to Benjamin et al. (2018), we use a two-step optimization procedure to introduce memory-preserving hypernetwork output constraints. First, we compute a candidate change ∆Θh which minimizes the current task loss L(T)_task = L_task(Θh, e(T), X(T), Y(T)) with respect to Θh. The candidate ∆Θh is obtained with an optimizer of choice (we use Adam throughout; Kingma & Ba, 2015). The actual parameter change is then computed by minimizing the following total loss:

L_total = L_task(Θh, e(T), X(T), Y(T)) + L_output(Θh*, Θh, ∆Θh, {e(t)})
        = L_task(Θh, e(T), X(T), Y(T)) + (β_output / (T−1)) Σ_{t=1}^{T−1} ‖ f_h(e(t), Θh*) − f_h(e(t), Θh + ∆Θh) ‖² ,    (2)

where Θh* is the set of hypernetwork parameters before attempting to learn task T, ∆Θh is considered fixed and β_output is a hyperparameter that controls the strength of the regularizer. In Appendix D, we run a sensitivity analysis on β_output and experiment with a more efficient stochastic regularizer where the averaging is performed over a random subset of past tasks.

More computationally-intensive algorithms that involve a full inner-loop refinement, or use second-order gradient information by backpropagating through ∆Θh could be applied. However, we found empirically that our one-step correction worked well. Exploratory hyperparameter scans revealed that the inclusion of the lookahead ∆Θh in (2) brought a minor increase in performance, even when computed with a cheap one-step procedure. Note that unlike in Eq. 1, the memory-preserving term L_output does not depend on past data. Memory of previous tasks enters only through the collection of task embeddings {e(t)}_{t=1}^{T−1}.

Learned task embeddings. Task embeddings are differentiable deterministic parameters that can be learned, just like Θh. At every learning step of our algorithm, we also update the current task embedding e(T) to minimize the task loss L(T)_task.
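Putting the pieces together, one training iteration can be sketched as follows (a condensed PyTorch sketch, not the authors' implementation): `hnet(e)` is assumed to map a task embedding to target-network weights and `task_loss(weights, x, y)` to run the target network, and the one-step lookahead ∆Θh is omitted for brevity, so the regularizer is evaluated at the current Θh.

```python
import copy
import torch

def learn_task(hnet, task_loss, loader, task_emb, prev_embs, beta=0.01, lr=1e-3):
    hnet_before = copy.deepcopy(hnet)                      # frozen Θh* from before task T
    hnet_before.requires_grad_(False)
    with torch.no_grad():
        stored_outputs = [hnet_before(e) for e in prev_embs]   # fixed targets fh(e(t), Θh*)
    optim = torch.optim.Adam(list(hnet.parameters()) + [task_emb], lr=lr)
    for x, y in loader:
        optim.zero_grad()
        loss = task_loss(hnet(task_emb), x, y)             # current-task loss
        if prev_embs:                                      # output regularizer of Eq. (2)
            reg = sum(((hnet(e) - w) ** 2).sum()
                      for e, w in zip(prev_embs, stored_outputs))
            loss = loss + beta / len(prev_embs) * reg
        loss.backward()
        optim.step()                                       # updates Θh and e(T) jointly
    return task_emb.detach()
```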
After learning the task, the final embedding is saved and added to the collection {e(t)}. 2.2 MODEL COMPRESSION WITH CHUNKED HYPERNETWORKS Chunking. In a straightforward implementation, a hypernetwork produces the entire set of weights of a target neural network. For modern deep neural networks, this is a very high-dimensional output. However, hypernetworks can be invoked iteratively, filling in only part of the target model at each step, in chunks (Ha et al., 2017; Pawlowski et al., 2017). This strategy allows applying smaller hypernetworks that are reusable. Interestingly, with chunked hypernetworks it is possible to solve tasks in a compressive regime, where the number of learned parameters (those of the hypernetwork) is effectively smaller than the number of target network parameters. Chunk embeddings and network partitioning. Reapplying the same hypernetwork multiple times introduces weight sharing across partitions of the target network, which is usually not desirable. 3 Published as a conference paper at ICLR 2020 To allow for a flexible parameterization of the target network, we introduce a set C = {ci}NC i=1 of chunk embeddings, which are used as an additional input to the hypernetwork, Fig. 1b. Thus, the full set of target network parameters Θtrgt = [fh(e, c1), . . . , fh(e, cNC )] is produced by iteration over C, keeping the task embedding e fixed. This way, the hypernetwork can produce distinct weights for each chunk. Furthermore, chunk embeddings, just like task embeddings, are ordinary deterministic parameters that we learn via backpropagation. For simplicity, we use a shared set of chunk embeddings for all tasks and we do not explore special target network partitioning strategies. How flexible is our approach? Chunked neural networks can in principle approximate any target weight configuration arbitrarily well. For completeness, we state this formally in Appendix E. # 2.3 CONTEXT-FREE INFERENCE: UNKNOWN TASK IDENTITY Determining which task to solve from input data. Our hypernetwork requires a task embedding input to generate target model weights. In certain CL applications, an appropriate embedding can be immediately selected as task identity is unambiguous, or can be readily inferred from contextual clues. In other cases, knowledge of the task at hand is not explicitly available during inference. In the following, we show that our metamodelling framework generalizes to such situations. In particular, we consider the problem of inferring which task to solve from a given input pattern, a noted benchmark challenge (Farquhar & Gal, 2018; van de Ven & Tolias, 2019). Below, we explore two different strategies that leverage task-conditioned hypernetworks in this CL setting. Task-dependent predictive uncertainty. Neural network models are increasingly reliable in sig- nalling novelty and appropriately handling out-of-distribution data. For categorical target distributions, the network ideally produces a flat, high entropy output for unseen data and, conversely, a peaked, low-entropy response for in-distribution data (Hendrycks & Gimpel, 2016; Liang et al., 2017). This suggests a first, simple method for task inference (HNET+ENT). Given an input pattern for which task identity is unknown, we pick the task embedding which yields lowest predictive uncertainty, as quantified by output distribution entropy. 
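This rule can be sketched in a few lines; `hnet` and `target_net` below are placeholders for the trained hypernetwork and target model, not the actual implementation.

```python
import torch

def infer_task(x, hnet, target_net, task_embs):
    """Evaluate x under every stored task embedding; keep the lowest-entropy one."""
    best_task, best_entropy = None, float("inf")
    for t, e in enumerate(task_embs):
        with torch.no_grad():
            weights = hnet(e)                              # task-conditioned target weights
            probs = torch.softmax(target_net(x, weights), dim=-1)
            entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean().item()
        if entropy < best_entropy:
            best_task, best_entropy = t, entropy
    return best_task                                       # index of the embedding used for prediction
```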
While this method relies on accurate novelty detection, which is in itself a far from solved research problem, it is otherwise straightforward to implement and no additional learning or model is required to infer task identity. Hypernetwork-protected synthetic replay. When a generative model is available, catastrophic forgetting can be circumvented by mixing current task data with replayed past synthetic data (for recent work see Shin et al., 2017; Wu et al., 2018). Besides protecting the generative model itself, synthetic data can protect another model of interest, for example, another discriminative model. This conceptually simple strategy is in practice often the state-of-the-art solution to CL (van de Ven & Tolias, 2019). Inspired by these successes, we explore augmenting our system with a replay network, here a standard variational autoencoder (VAE; Kingma & Welling, 2014) (but see Appendix F for experiments with a generative adversarial network, Goodfellow et al., 2014). Synthetic replay is a strong, but not perfect, CL mechanism as the generative model is subject to drift, and errors tend to accumulate and amplify with time. Here, we build upon the following key observation: just like the target network, the generator of the replay model can be specified by a hypernetwork. This allows protecting it with the output regularizer, Eq. 2, rather than with the model’s own replay data, as done in related work. Thus, in this combined approach, both synthetic replay and task-conditional metamodelling act in tandem to reduce forgetting. We explore hypernetwork-protected replay in two distinct setups. First, we consider a minimalist architecture (HNET+R), where only the replay model, and not the target classifier, is parameterized by a hypernetwork. Here, forgetting in the target network is obviated by mixing current data with synthetic data. Synthetic target output values for previous tasks are generated using a soft targets method, i.e., by simply evaluating the target function before learning the new task on synthetic input data. Second (HNET+TIR), we introduce an auxiliary task inference classifier, protected using synthetic replay data and trained to predict task identity from input patterns. This architecture requires additional modelling, but it is likely to work well when tasks are strongly dissimilar. Furthermore, the task inference subsystem can be readily applied to process more general forms of contextual information, beyond the current input pattern. We provide additional details, including network architectures and the loss functions that are optimized, in Appendices B and C. 4 Published as a conference paper at ICLR 2020 # 3 RESULTS We evaluate our method on a set of standard image classification benchmarks on the MNIST, CIFAR- 10 and CIFAR-100 public datasets1. Our main aims are to (1) study the memory retention capabilities of task-conditioned hypernetworks across three continual learning settings, and (2) investigate information transfer across tasks that are learned sequentially. Continual learning scenarios. In our experiments we consider three different CL scenarios (van de Ven & Tolias, 2019). In CL1, the task identity is given to the system. This is arguably the standard sequential learning scenario, and the one we consider unless noted otherwise. In CL2, task identity is unknown to the system, but it does not need to be explicitly determined. A target network with a fixed head is required to solve multiple tasks. In CL3, task identity has to be explicitly inferred. 
It has been argued that CL3 is the most natural scenario, and the one that tends to be harder for neural networks (Farquhar & Gal, 2018; van de Ven & Tolias, 2019).

Experimental details. Aiming at comparability, for the experiments on the MNIST dataset we model the target network as a fully-connected network and set all hyperparameters after van de Ven & Tolias (2019), who recently reviewed and compared a large set of CL algorithms. For our CIFAR experiments, we opt for a ResNet-32 target neural network (He et al., 2016) to assess the scalability of our method. A summary description of the architectures and particular hyperparameter choices, as well as additional experimental details, is provided in Appendix C. We emphasize that, in all our experiments, the number of hypernetwork parameters is always smaller than or equal to the number of parameters of the models we compare with.

1 Source code is available under https://github.com/chrhenning/hypercl.

Figure 2: 1D nonlinear regression. (a) Task-conditioned hypernetworks with output regularization can easily model a sequence of polynomials of increasing degree, while learning in a continual fashion. (b) The solution found by a target network which is trained directly on all tasks simultaneously is similar. (c) Fine-tuning, i.e., learning sequentially, leads to forgetting of past tasks. Dashed lines depict ground truth, markers show model predictions.

Nonlinear regression toy problem. To illustrate our approach, we first consider a simple nonlinear regression problem, where the function to be approximated is scalar-valued, Fig. 2. Here, a sequence of polynomial functions of increasing degree has to be inferred from noisy data. This motivates the continual learning problem: when learning each task in succession by modifying Θh with the memory-preserving regularizer turned off (βoutput = 0, see Eq. 2), the network learns the last task but forgets previous ones, Fig. 2c. The regularizer protects old solutions, Fig. 2a, and performance is comparable to an offline non-continual learner, Fig. 2b.

Permuted MNIST benchmark. Next, we study the permuted MNIST benchmark. This problem is set up as follows. First, the learner is presented with the full MNIST dataset. Subsequently, novel tasks are obtained by applying a random permutation to the input image pixels. This process can be repeated to yield a long task sequence, with a typical length of T = 10 tasks. Given the low similarity of the generated tasks, permuted MNIST is well suited to study the memory capacity of a continual learner. For T = 10, we find that task-conditioned hypernetworks are state-of-the-art on CL1, Table 1. Interestingly, inferring tasks through the predictive distribution entropy (HNET+ENT) works well on the permuted MNIST benchmark. Despite the simplicity of the method, both synaptic intelligence (SI; Zenke et al., 2017) and online elastic weight consolidation (EWC; Schwarz et al., 2018) are outperformed on CL3 by a large margin.
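For concreteness, one possible construction of the permuted MNIST task sequence described above is sketched below; the PermutedMNIST wrapper and the seeding scheme are illustrative choices that assume torchvision is available, and do not reproduce the exact pipeline used for the reported numbers.

```python
# A minimal sketch of the PermutedMNIST task construction: one fixed random
# pixel permutation per task, applied to flattened 28x28 images.
import torch
from torch.utils.data import Dataset
from torchvision import datasets, transforms

class PermutedMNIST(Dataset):
    def __init__(self, root, permutation, train=True):
        self.base = datasets.MNIST(root, train=train, download=True,
                                   transform=transforms.ToTensor())
        self.permutation = permutation  # fixed permutation of the 784 pixels

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, label = self.base[idx]
        flat = img.view(-1)             # 784-dimensional pixel vector
        return flat[self.permutation], label

def make_task_sequence(root, num_tasks=10, seed=42):
    g = torch.Generator().manual_seed(seed)
    tasks = []
    for t in range(num_tasks):
        # Task 0 can be the original MNIST; later tasks permute the pixels.
        perm = torch.arange(784) if t == 0 else torch.randperm(784, generator=g)
        tasks.append(PermutedMNIST(root, perm, train=True))
    return tasks
```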
Figure 3: Experiments on the permuted MNIST benchmark. (a) Final test set classification accuracy on the t-th task after learning one hundred permutations (PermutedMNIST-100). Task-conditioned hypernetworks (hnet, in red) achieve very large memory lifetimes on the permuted MNIST benchmark. Synaptic intelligence (SI, in blue; Zenke et al., 2017), online EWC (in orange; Schwarz et al., 2018) and deep generative replay (DGR+distill, in green; Shin et al., 2017) methods are shown for comparison. Memory retention in SI and DGR+distill degrades gracefully, whereas EWC suffers from rigidity and can never reach very high accuracy, even though memories persist for the entire experiment duration. (b) Compression ratio |Θh ∪ {e(t)}|/|Θtrgt| versus task-averaged test set accuracy after learning all tasks (labelled ‘final’, in red) and immediately after learning a task (labelled ‘during’, in purple) for the PermutedMNIST-10 benchmark. Hypernetworks allow for model compression and perform well even when the number of target model parameters exceeds their own. Performance decays nonlinearly: accuracies stay approximately constant for a wide range of compression ratios below unity. Hyperparameters were tuned once for compression ratio ≈ 1 and were then used for all compression ratios. Shaded areas denote STD in (a) and SEM in (b) across 5 random seeds.

When complemented with generative replay methods, task-conditioned hypernetworks (HNET+TIR and HNET+R) are the best performers on all three CL scenarios. Performance differences become larger in the long-sequence limit, Fig. 3a. For longer task sequences (T = 100), SI and DGR+distill (Shin et al., 2017; van de Ven & Tolias, 2018) degrade gracefully, while the regularization strength of online EWC prevents the method from achieving high accuracy (see Fig. A6 for a hyperparameter search on related work). Notably, task-conditioned hypernetworks show minimal memory decay and find high-performance solutions. Because the hypernetwork operates in a compressive regime (see Fig. 3b and Fig. A7 for an exploration of compression ratios), our results do not naively rely on an increase in the number of parameters. Rather, they suggest that previous methods are not yet capable of making full use of target model capacity in a CL setting. We report a set of extended results on this benchmark in Appendix D, including a study of CL2/3 (T = 100), where HNET+TIR strongly outperforms the related work.

Split MNIST benchmark. Split MNIST is another popular CL benchmark, designed to introduce task overlap. In this problem, the various digits are sequentially paired and used to form five binary classification tasks. Here, we find that task-conditioned hypernetworks are the best overall performers. In particular, HNET+R improves on the previous state-of-the-art method DGR+distill on both CL2 and CL3, almost saturating the CL2 upper bound for replay models (Appendix D). Since HNET+R is essentially hypernetwork-protected DGR, these results demonstrate the generality of task-conditioned hypernetworks as effective memory protectors. To further support this, in Appendix F we show that our replay models (we experiment with both a VAE and a GAN) can learn the full MNIST dataset in a class-incremental manner. Finally, HNET+ENT again outperforms both EWC and SI, without any generative modelling. On the split MNIST problem, tasks overlap and therefore continual learners can transfer information across tasks. To analyze such effects, we study task-conditioned hypernetworks with two-dimensional task embedding spaces, which can be easily visualized.
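The kind of two-dimensional scan underlying such visualizations can be written down in a few lines. The sketch below uses a toy stand-in hypernetwork and random data, so the TinyHnet class and the accuracy_grid helper are hypothetical rather than taken from the released code.

```python
# Sketch of an embedding-space scan: evaluate test accuracy on a grid of
# candidate 2D task embeddings (illustrative stand-ins throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHnet(nn.Module):
    """Maps a 2D embedding to the weights of a linear target classifier."""
    def __init__(self, in_dim=784, n_classes=2, e_dim=2):
        super().__init__()
        self.in_dim, self.n_classes = in_dim, n_classes
        self.n_out = n_classes * in_dim + n_classes
        self.net = nn.Sequential(nn.Linear(e_dim, 32), nn.ReLU(),
                                 nn.Linear(32, self.n_out))

    def forward(self, e):
        theta = self.net(e)
        W = theta[: self.n_classes * self.in_dim].view(self.n_classes, self.in_dim)
        b = theta[self.n_classes * self.in_dim:]
        return W, b

@torch.no_grad()
def accuracy_grid(hnet, x, y, e_range=(-5.0, 5.0), steps=21):
    """Test accuracy for every embedding on a (steps x steps) grid."""
    grid = torch.linspace(*e_range, steps)
    acc = torch.zeros(steps, steps)
    for i, e1 in enumerate(grid):
        for j, e2 in enumerate(grid):
            W, b = hnet(torch.stack([e1, e2]))
            pred = F.linear(x, W, b).argmax(dim=1)
            acc[i, j] = (pred == y).float().mean()
    return acc

# Dummy usage with random data standing in for a split MNIST test set.
x = torch.randn(256, 784)
y = torch.randint(0, 2, (256,))
print(accuracy_grid(TinyHnet(), x, y).shape)  # torch.Size([21, 21])
```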
Despite learning happening continually, we find that the algorithm converges to a hypernetwork configuration that can produce target model parameters that simultaneously solve old and new tasks, Fig. 4, given the appropriate task embedding.

Table 1: Task-averaged test accuracy (± SEM, n = 20) on the permuted (‘P10’) and split (‘S’) MNIST experiments. In the table, EWC refers to online EWC and DGR refers to DGR+distill (results reproduced from van de Ven & Tolias, 2019). We tested three hypernetwork-based models: for HNET+ENT (HNET alone for CL1), we inferred task identity based on the entropy of the predictive distribution; for HNET+TIR, we trained a hypernetwork-protected recognition-replay network (based on a VAE, cf. Fig. A1) to infer the task from input patterns; for HNET+R, the main classifier was trained by mixing current task data with synthetic data generated from a hypernetwork-protected VAE.

|         | EWC          | SI           | DGR          | HNET+ENT     | HNET+TIR     | HNET+R       |
|---------|--------------|--------------|--------------|--------------|--------------|--------------|
| P10-CL1 | 95.96 ± 0.06 | 94.75 ± 0.14 | 97.51 ± 0.01 | 97.57 ± 0.02 | 97.57 ± 0.02 | 97.87 ± 0.01 |
| P10-CL2 | 94.42 ± 0.13 | 95.33 ± 0.11 | 97.35 ± 0.02 | 92.80 ± 0.15 | 97.58 ± 0.02 | 97.60 ± 0.01 |
| P10-CL3 | 33.88 ± 0.49 | 29.31 ± 0.62 | 96.38 ± 0.03 | 91.75 ± 0.21 | 97.59 ± 0.01 | 97.76 ± 0.01 |
| S-CL1   | 99.12 ± 0.11 | 99.09 ± 0.15 | 99.61 ± 0.02 | 99.79 ± 0.01 | 99.79 ± 0.01 | 99.83 ± 0.01 |
| S-CL2   | 64.32 ± 1.90 | 65.36 ± 1.57 | 96.83 ± 0.20 | 87.01 ± 0.47 | 94.43 ± 0.28 | 98.00 ± 0.03 |
| S-CL3   | 19.96 ± 0.07 | 19.99 ± 0.06 | 91.79 ± 0.32 | 69.48 ± 0.80 | 89.59 ± 0.59 | 95.30 ± 0.13 |

Figure 4: Two-dimensional task embedding space for the split MNIST benchmark. Color-coded test set classification accuracies after learning the five splits, shown as the embedding vector components are varied. Markers denote the position of final task embeddings. (a) High classification performance with virtually no forgetting is achieved even when e-space is low-dimensional. The model shows information transfer in embedding space: the first task is solved in a large volume that includes embeddings for subsequently learned tasks. (b) Competition in embedding space: the last task occupies a finite high-performance region, with graceful degradation away from the embedding vector. Previously learned task embeddings still lead to moderate, above-chance performance.

Split CIFAR-10/100 benchmark. Finally, we study a more challenging benchmark, where the learner is first asked to solve the full CIFAR-10 classification task and is then presented with sets of ten classes from the CIFAR-100 dataset. We perform experiments both with a high-performance ResNet-32 target network architecture (Fig. 5) and with a shallower model (Fig. A3) that we exactly reproduced from previous work (Zenke et al., 2017). Remarkably, on the ResNet-32 model, we find that task-conditioned hypernetworks essentially eliminate forgetting altogether. Furthermore, forward information transfer takes place; knowledge from previous tasks allows the network to find better solutions than when learning each task individually from initial conditions. Interestingly, forward transfer is stronger on the shallow model experiments (Fig. A3), where we otherwise find that our method performs comparably to SI.
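A possible construction of this task sequence is sketched below; the RemappedLabels wrapper and the use of torchvision are illustrative assumptions rather than the paper's exact data pipeline.

```python
# Sketch of the split CIFAR-10/100 task sequence: task 0 is full CIFAR-10,
# each subsequent task is ten consecutive CIFAR-100 classes, labels remapped to 0-9.
import torch
from torch.utils.data import Dataset, Subset
from torchvision import datasets, transforms

class RemappedLabels(Dataset):
    def __init__(self, base, class_offset):
        self.base, self.class_offset = base, class_offset
    def __len__(self):
        return len(self.base)
    def __getitem__(self, idx):
        x, y = self.base[idx]
        return x, y - self.class_offset

def make_split_cifar(root, n_cifar100_tasks=5):
    tf = transforms.ToTensor()
    tasks = [datasets.CIFAR10(root, train=True, download=True, transform=tf)]
    c100 = datasets.CIFAR100(root, train=True, download=True, transform=tf)
    labels = torch.tensor(c100.targets)
    for t in range(n_cifar100_tasks):
        lo, hi = 10 * t, 10 * (t + 1)
        idx = torch.nonzero((labels >= lo) & (labels < hi)).flatten().tolist()
        tasks.append(RemappedLabels(Subset(c100, idx), class_offset=lo))
    return tasks
```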
Figure 5: Split CIFAR-10/100 CL benchmark. Test set accuracies (mean ± STD, n = 5) on the entire CIFAR-10 dataset and subsequent CIFAR-100 splits of ten classes. Our hypernetwork-protected ResNet-32 displays virtually no forgetting; final averaged performance (hnet, in red) matches the immediate one (hnet-during, in blue). Furthermore, information is transferred across tasks, as performance is higher than when training each task from scratch (purple). Disabling our regularizer leads to strong forgetting (in yellow).

# 4 DISCUSSION

Bayesian accounts of continual learning. According to the standard Bayesian CL perspective, a posterior parameter distribution is recursively updated using Bayes' rule as tasks arrive (Kirkpatrick et al., 2017; Huszár, 2018; Nguyen et al., 2018). While this approach is theoretically sound, in practice, the approximate inference methods that are typically preferred can lead to stiff models, as a compromise solution that suits all tasks has to be found within the mode determined by the first task. Such a restriction does not apply to hypernetworks, which can in principle model complex multimodal distributions (Louizos & Welling, 2017; Pawlowski et al., 2017; Henning et al., 2018). Thus, rich, hypernetwork-modelled priors are one avenue of improvement for Bayesian CL methods. Interestingly, task-conditioning offers an alternative possibility: instead of consolidating every task onto a single distribution, a shared task-conditioned hypernetwork could be leveraged to model a set of parameter posterior distributions. This conditional metamodel naturally extends our framework to the Bayesian learning setting. Such an approach will likely benefit from additional flexibility, compared to conventional recursive Bayesian updating.

Related approaches that rely on task-conditioning. Our model fits within, and in certain ways generalizes, previous CL methods that condition network computation on task descriptors. Task-conditioning is commonly implemented using multiplicative masks at the level of modules (Rusu et al., 2016; Fernando et al., 2017), neurons (Serra et al., 2018; Masse et al., 2018) or weights (Mallya & Lazebnik, 2018). Such methods work best with large networks and come with a significant storage overhead, which typically scales with the number of tasks. Our approach differs by explicitly modelling the full parameter space using a metamodel, the hypernetwork. Thanks to this metamodel, generalization in parameter and task space is possible, and task-to-task dependencies can be exploited to efficiently represent solutions and transfer present knowledge to future problems. Interestingly, similar arguments have been made in work developed concurrently to ours (Lampinen & McClelland, 2019), where task embedding spaces are further explored in the context of few-shot learning. In the same vein, and like the approach developed here, recent work in CL generates last-layer network parameters as part of a pipeline to avoid catastrophic forgetting (Hu et al., 2019) or distills parameters onto a contractive auto-encoding model (Camp et al., 2018).

Positive backwards transfer. In its current form, the hypernetwork output regularizer protects previously learned solutions from changing, such that only weak backwards transfer of information can occur. Given the role of selective forgetting and refinement of past memories in achieving intelligent behavior (Brea et al., 2014; Richards & Frankland, 2017), investigating and improving backwards transfer stands as an important direction for future research.
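Because the output regularizer is central to the trade-off discussed above, we include a minimal sketch of a regularizer of this form, written under our own assumptions about the exact shape of Eq. 2: the hypernetwork outputs for all stored task embeddings are snapshotted before learning a new task, and deviations from these snapshots are penalized.

```python
# Minimal sketch (assumed form, not necessarily Eq. 2 verbatim) of a
# hypernetwork output regularizer that pins the generated weights for
# previously learned tasks to a pre-update snapshot.
import copy
import torch

def output_regularizer(hnet, hnet_snapshot, old_task_embs):
    """Penalize changes of the generated weights for previously learned tasks."""
    if len(old_task_embs) == 0:
        return torch.zeros(())
    reg = 0.0
    for e in old_task_embs:
        with torch.no_grad():
            target = hnet_snapshot(e)      # weights produced before the new task
        reg = reg + ((hnet(e) - target) ** 2).sum()
    return reg / len(old_task_embs)

# Usage inside a task-T training loop (schematic):
#   hnet_snapshot = copy.deepcopy(hnet)    # frozen copy, kept in memory
#   loss = task_loss + beta_output * output_regularizer(hnet, hnet_snapshot,
#                                                        task_embs[:-1])

# Tiny self-contained check with a linear stand-in hypernetwork.
lin = torch.nn.Linear(8, 100)
frozen = copy.deepcopy(lin)
embs = [torch.randn(8) for _ in range(3)]
print(output_regularizer(lin, frozen, embs))   # exactly zero before any update
```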
Relevance to systems neuroscience. Uncovering the mechanisms that support continual learning in both brains and artificial neural networks is a long-standing question (McCloskey & Cohen, 1989; French, 1999; Parisi et al., 2019). We close with a speculative systems interpretation (Kumaran et al., 2016; Hassabis et al., 2017) of our work as a model for modulatory top-down signals in cortex. Task embeddings can be seen as low-dimensional context switches, which determine the behavior of a modulatory system, the hypernetwork in our case. According to our model, the hypernetwork would in turn regulate the activity of a target cortical network. As it stands, implementing a hypernetwork would entail dynamically changing the entire connectivity of a target network, or cortical area. Such a process seems difficult to conceive in the brain. However, this strict literal interpretation can be relaxed. For example, a hypernetwork can output lower- dimensional modulatory signals (Marder, 2012), instead of a full set of weights. This interpretation 8 Published as a conference paper at ICLR 2020 is consistent with a growing body of work which suggests the involvement of modulatory inputs in implementing context- or task-dependent network mode-switching (Mante et al., 2013; Jaeger, 2014; Stroud et al., 2018; Masse et al., 2018). # 5 CONCLUSION We introduced a novel neural network model, the task-conditioned hypernetwork, that is well-suited for CL problems. A task-conditioned hypernetwork is a metamodel that learns to parameterize target functions, that are specified and identified in a compressed form using a task embedding vector. Past tasks are kept in memory using a hypernetwork output regularizer, which penalizes changes in previously found target weight configurations. This approach is scalable and generic, being applicable as a standalone CL method or in combination with generative replay. Our results are state-of-the-art on standard benchmarks and suggest that task-conditioned hypernetworks can achieve long memory lifetimes, as well as transfer information to future tasks, two essential properties of a continual learner. # ACKNOWLEDGMENTS This work was supported by the Swiss National Science Foundation (B.F.G. CRSII5-173721), ETH project funding (B.F.G. ETH-20 19-01) and funding from the Swiss Data Science Center (B.F.G, C17-18, J. v. O. P18-03). Special thanks to Simone Carlo Surace, Adrian Huber, Xu He, Markus Marks, Maria R. Cervera and Jannes Jegminat for discussions, helpful pointers to the CL literature and for feedback on our paper draft. # REFERENCES Ari S. Benjamin, David Rolnick, and Konrad Kording. Measuring and regularizing networks in function space. arXiv preprint arXiv:1805.08289, 2018. Luca Bertinetto, João F. Henriques, Jack Valmadre, Philip Torr, and Andrea Vedaldi. Learning feed- forward one-shot learners. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 523–531. Curran Associates, Inc., 2016. Johanni Brea, Robert Urbanczik, and Walter Senn. A Normative Theory of Forgetting: Lessons from the Fruit Fly. PLOS Computational Biology, 10(6):e1003640, 2014. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019. Blake Camp, Jaya Krishna Mandivarapu, and Rolando Estrada. Self-net: Lifelong learning via continual self-modeling. arXiv preprint arXiv:1805.10354, 2018. 
Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. arXiv preprint arXiv:1907.02544, 2019. Sebastian Farquhar and Yarin Gal. Towards robust evaluations of continual learning. arXiv preprint arXiv:1805.09733, 2018. Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017. Robert M. French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3 (4):128–135, April 1999. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Process- ing Systems 27, pp. 2672–2680. Curran Associates, Inc., 2014. 9 Published as a conference paper at ICLR 2020 David Ha, Andrew M. Dai, and Quoc V. Le. HyperNetworks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. Boris Hanin. Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations. arXiv preprint: arXiv:1708.02691, 2017. Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Neuroscience-Inspired Artificial Intelligence. Neuron, 95(2):245–258, July 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Xu He and Herbert Jaeger. Overcoming catastrophic interference using conceptor-aided backpropa- gation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018. Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A Rusu, Yee Whye Teh, and Razvan Pascanu. Task agnostic continual learning via meta learning. arXiv preprint arXiv:1906.05201, 2019. Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016. Christian Henning, Johannes von Oswald, João Sacramento, Simone Carlo Surace, Jean-Pascal Pfister, and Benjamin F Grewe. Approximating the predictive distribution via adversarially-trained hypernetworks. In NeurIPS Bayesian Deep Learning Workshop, 2018. Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, and Rui Yan. Overcoming catastrophic forgetting for continual learning via model adaptation. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019. Ferenc Huszár. Note on the quadratic penalties in elastic weight consolidation. Proceedings of the National Academy of Sciences, 115(11):E2496–E2497, March 2018. Herbert Jaeger. Controlling Recurrent Neural Networks by Conceptors. arXiv preprint: arXiv:1403.3369, 2014. Xu Jia, Bert De Brabandere, Tinne Tuytelaars, and Luc V Gool. Dynamic Filter Networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 667–675. Curran Associates, Inc., 2016. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410, 2019. Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In 3rd Interna- tional Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, March 2017. Dharshan Kumaran, Demis Hassabis, and James L. McClelland. What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated. Trends in Cognitive Sciences, 20(7):512–534, July 2016. 10 Published as a conference paper at ICLR 2020 Andrew K Lampinen and James L McClelland. Embedded meta-learning: Toward more flexible deep-learning models. arXiv preprint arXiv:1905.09950, 2019. Moshe Leshno and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6:861–867, 1993. Z. Li and D. Hoiem. Learning without Forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935–2947, 2018. Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690, 2017. Christos Louizos and Max Welling. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pp. 2218–2227. JMLR.org, 2017. Mario Luˇci´c, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, and Sylvain Gelly. High-fidelity image generation with fewer labels. In International Conference on Machine Learning, pp. 4183–4192, 2019. Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7765–7773, 2018. Valerio Mante, David Sussillo, Krishna V. Shenoy, and William T. Newsome. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474):78–84, 2013. Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 2813–2821. IEEE, 2017. Eve Marder. Neuromodulation of Neuronal Circuits: Back to the Future. Neuron, 76(1):1–11, October 2012. Nicolas Y Masse, Gregory D Grant, and David J Freedman. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. Proceedings of the National Academy of Sciences, 115(44):E10467–E10475, 2018. Michael McCloskey and Neal J. Cohen. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem. volume 24, pp. 109–165. Academic Press, 1989. Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. 
Variational continual learning. 2018. German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. Neural Networks, 113:54–71, May 2019. Nick Pawlowski, Andrew Brock, Matthew C. H. Lee, Martin Rajchl, and Ben Glocker. Implicit Weight Uncertainty in Neural Networks. arXiv preprint arXiv:1711.01297, 2017. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML’14, pp. II–1278– II–1286. JMLR.org, 2014. Blake A. Richards and Paul W. Frankland. The Persistence and Transience of Memory. Neuron, 94 (6):1071–1084, June 2017. Hippolyt Ritter, Aleksandar Botev, and David Barber. Online structured laplace approximations for overcoming catastrophic forgetting. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 3738–3748. Curran Associates, Inc., 2018. 11 Published as a conference paper at ICLR 2020 Anthony Robins. Catastrophic Forgetting, Rehearsal and Pseudorehearsal. Connection Science, 7(2): 123–146, June 1995. David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P Lillicrap, and Greg Wayne. Experience replay for continual learning. arXiv preprint arXiv:1811.11682, 2018. Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive Neural Networks. arXiv preprint arXiv:1606.04671, 2016. Jürgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992. Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International learning. Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4528–4537, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. Joan Serra, Didac Suris, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4548–4557, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR. Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual Learning with Deep Generative Replay. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 2990–2999. Curran Associates, Inc., 2017. Jake P. Stroud, Mason A. Porter, Guillaume Hennequin, and Tim P. Vogels. Motor primitives in space and time via targeted gain modulation in cortical networks. Nature Neuroscience, 21(12):1774, December 2018. Siddharth Swaroop, Cuong V Nguyen, Thang D Bui, and Richard E Turner. Improving and under- standing variational continual learning. Continual Learning Workshop at NeurIPS, 2018. Gido M. van de Ven and Andreas S. Tolias. Generative replay with feedback connections as a general strategy for continual learning. arXiv preprint arXiv:1809.10635, 2018. Gido M. van de Ven and Andreas S. Tolias. 
Three scenarios for continual learning. arXiv preprint arXiv:1904.07734, 2019. Chenshen Wu, Luis Herranz, Xialei Liu, yaxing wang, Joost van de Weijer, and Bogdan Raducanu. Memory Replay GANs: Learning to Generate New Categories without Forgetting. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 5962–5972. Curran Associates, Inc., 2018. Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual Learning Through Synaptic Intelligence. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pp. 3987–3995. JMLR.org, 2017. # A TASK-CONDITIONED HYPERNETWORKS: MODEL SUMMARY In our model, a task-conditioned hypernetwork produces the parameters Θtrgt = fh(e, Θh) of a target neural network. Given one such parameterization, the target model then computes predictions ˆy = ftrgt(x, Θtrgt) based on input data. Learning amounts to adapting the parameters Θh of the hypernetwork, including a set of task embeddings {e(t)}T t=1, as well as a set of chunk embeddings {ci}NC i=1 in case compression is sought or if the full hypernetwork is too large to be handled directly. To avoid castastrophic forgetting, we introduce an output regularizer which fixes the behavior of the hypernetwork by penalizing changes in target model parameters that are produced for previously learned tasks. 12 Published as a conference paper at ICLR 2020 Variables that need to be stored while learning new tasks. What are the storage requirements of our model, when learning continually? 1. Memory retention relies on saving one embedding per task. This collection {e(t)}T t=1 therefore grows linearly with T . Such linear scaling is undesirable asymptotically, but it turns out to be essentially negligible in practice, as each embedding is a single low- dimensional vector (e.g., see Fig. 4 for a run with 2D embeddings). 2. A frozen snapshot of the hypernetwork parameters Θ∗ 2. A frozen snapshot of the hypernetwork parameters Θ∗ needs to be kept, to evaluate the output regularizer in Eq. 2. h , taken before learning a new task, # B ADDITIONAL DETAILS ON HYPERNETWORK-PROTECTED REPLAY MODELS Variational autoencoders. For all HNET+TIR and HNET+R experiments reported on the main text we use VAEs as our replay models (Fig. A1a, Kingma & Welling, 2014). Briefly, a VAE consists of an encoder-decoder network pair, where the encoder network processes some input pattern x and its outputs fenc(x) = (µ, σ2) comprise the parameters µ and σ2 (encoded in log domain, to enforce nonnegativity) of a diagonal multivariate Gaussian pZ(z; µ, σ2), which governs the distribution of latent samples z. On the other side of the circuit, the decoder network processes a latent sample z and a one-hot-encoded task identity vector and returns an input pattern reconstruction, fdec(z, 1t) = ˆx. VAEs can preserve memories using a technique called generative replay: when training task T , input samples are generated from the current replay network for old tasks t < T , by varying 1t and drawing latent space samples z. Generated data can be mixed with the current dataset, yielding an augmented dataset ˜X used to relearn model parameters. When protecting a discriminative model, synthetic ‘soft’ targets can be generated by evaluating the network on ˜X . We use this strategy to protect an auxiliary task inference classifier in HNET+TIR, and to protect the main target model in HNET+R. Hypernetwork-protected replay. 
In our HNET+TIR and HNET+R experiments, we parameterize the decoder network through a task-conditioned hypernetwork, fh,dec(e, Θh,dec). In combination with our output regularizer, this allows us to take advantage of the memory retention capacity of hypernetworks, now on a generative model. The replay model (encoder, decoder and decoder hypernetwork) is a separate subsystem that is optimized independently from the target network. Its parameters Θenc and Θh,dec are learned by minimizing our regularized loss function, Eq. 2, here with the task-specific term set to the standard VAE objective function,

LVAE(X, Θenc, Θdec) = Lrec(X, Θenc, Θdec) + Lprior(X, Θenc, Θdec),   (3)

with Θdec = fh,dec(e, Θh,dec) introducing the dependence on Θh,dec. LVAE balances a reconstruction penalty Lrec and a prior-matching penalty Lprior. For our MNIST experiments, we choose binary cross-entropy (in pixel space) as the reconstruction loss, which we write below for a single example x̃,

Lrec(x̃; Θenc, Θdec) = Lxent(x̃, fdec(z, 1t(x̃); Θdec)),   (4)

where Lxent(t, y) = −Σk tk log yk is the cross entropy. For a diagonal Gaussian pZ, the prior-matching term can be evaluated analytically,

Lprior = −(1/2) Σi=1..|z| (1 + log σi2 − σi2 − µi2).   (5)

Above, z is a sample from pZ(z; µ(x̃), σ2(x̃)) obtained via the reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). This introduces the dependency of Lrec on Θenc.

Task inference network (HNET+TIR). In the HNET+TIR setup, we extend our system to include a task inference neural network classifier α(x) parameterized by ΘTI, where tasks are encoded with a T-dimensional softmax output layer. In both the CL2 and CL3 scenarios we use a growing single-head setup for α, and increase the dimensionality of the softmax layer as tasks arrive.

This network is prone to catastrophic forgetting when tasks are learned continually. To prevent this from happening, we resort to replay data generated from a hypernetwork-protected VAE, described above. More specifically, we introduce a task inference loss,

LTI(x̃, ΘTI) = Lxent(1t(x̃), α(x̃, ΘTI)),   (6)

where t(x̃) denotes the correct task identity for a sample x̃ from the augmented dataset X̃ = {X̃(1), . . . , X̃(T−1), X̃(T)}, with X̃(t) being synthetic data fdec(z, 1t, Θdec) for t = 1 . . . T − 1, and X̃(T) = X(T) being the current task data. Importantly, synthetic data is essential to obtain a well-defined objective function for task inference; the cross-entropy loss LTI requires at least two ground-truth classes to be optimized. Note that replayed data can be generated online by drawing samples z from the prior.

Figure A1: Hypernetwork-protected replay model setups. (a) A hypernetwork-protected VAE, that we used for HNET+R and HNET+TIR main text experiments. (b) A hypernetwork-protected GAN, that we used for our class-incremental learning Appendix F experiments. (c) A task inference classifier protected with synthetic replay data, used in HNET+TIR experiments.

Hypernetwork-protected GANs. Generative adversarial networks (Goodfellow et al., 2014) have become an established method for generative modelling and tend to produce higher quality images compared to VAEs, even at the scale of datasets as complex as ImageNet (Brock et al., 2019; Lučić et al., 2019; Donahue & Simonyan, 2019). This makes GANs perfect candidates for powerful replay models. A suitable GAN instantiation for CL is the conditional GAN (Mirza & Osindero, 2014) as studied by Wu et al. (2018).
Recent developments in the GAN literature already allude towards the potential of using hypernetwork-like structures, e.g., when injecting the latent noise (Karras et al., 2019) or when using class-conditional batch-normalization as in (Brock et al., 2019). We propose to go one step further and use a hypernetwork that maps the condition to the full set of generator parameters Θ∗ gen. Our framework allows training a conditional GAN one condition at the time. This is potentially of general interest, and goes beyond the scope of replay models, since conditional GANs trained in a mutli-task fashion as in Brock et al. (2019) require very large computational resources. For our showcase experiment on class-incremental MNIST learning, Fig. A8, we did not aim to compare to related work and therefore did not tune to have less weights in the hypernetwork than on the target network (for the VAE experiments, we use the same compressive setup as in the main text, see Appendix C). The GAN hypernetwork is a fully-connected chunked hypernetwork with 2 hidden layers of size 25 and 25 followed by an output size of 75,000. We used learning rates for both discriminator and the generator hypernetwork of 0.0001, as well as dropout of 0.4 in the discriminator and the system is trained for 10000 iterations per task. We use the Pearson Chi2 Least-Squares GAN loss from Mao et al. (2017) in our experiments. # C ADDITIONAL EXPERIMENTAL DETAILS All experiments are conducted using 16 NVIDIA GeForce RTX 2080 TI graphics cards. For simplicity, we decided to always keep the previous task embeddings e(t), t = 1, . . . , T − 1, fixed and only learn the current task embedding e(T ). In general, performance should be improved if the 14 Published as a conference paper at ICLR 2020 regularizer in Eq. 2 has a separate copy of the task embeddings e(t,∗) from before learning the current task, such that e(t) can be adapted. Hence, the targets become fh(e(t,∗), Θ∗ h ) and remain constant while learning task T . This would give the hypernetwork the flexibility to adjust the embeddings i.e. the preimage of the targets and therefore represent any function that includes all desired targets in its image. Nonlinear regression toy problem. The nonlinear toy regression from Fig. lis an illustrative example for a continual learning problem where a set of ground-truth functions {g, ..., g‘)} is given from which we collect 100 noisy training samples per task {(x, y) | y = g(x) +e with e~ N(0, 0?I),x ~U(X)}, where ¥ denotes the input domain of task t. We set 7 = 0.05 in this experiment. We perform 1D regression and choose the following set of tasks: g(1)(x) = x + 3 g(2)(x) = 2x2 − 1 g(3)(x) = (x − 3)3 X (1) = [−4, −2] X (2) = [−1, 1] X (3) = [2, 4] (7) (8) (9) The target network ftrgt consists of two fully-connected hidden layers using 10 neurons each. For illustrative purposes we use a full hypernetwork fh that generates all 141 weights of ftrgt at once, also being a fully-connected network with two hidden-layers of size 10. Hence, this is the only setup where we did not explore the possibility of a chunked hypernetwork. We use sigmoid activation functions in both networks. The task embedding dimension was set to 2. We train each task for 4000 iterations using the Adam optimizer with a learning rate of 0.01 (and otherwise default PyTorch options) and a batch size of 32. To test our regularizer in Fig. 2a we set βoutput to 0.005, while it is set to 0 for the fine-tuning experiment in Fig. 2c. For the multi-task learner in Fig. 
2b we trained only the target network (no hypernetwork) for 12000 iterations with a learning rate of 0.05. Comparable performance could be obtained when training the task-conditioned hypernetwork in this multi-task regime (data not shown). It is worth noting that the multi-task learner from Fig. 2b that uses no hypernetwork is only able to learn the task since we choose the input domains to be non-overlapping. Permuted MNIST benchmark. For our experiments conducted on MNIST we replicated the experimental setup proposed by van de Ven & Tolias (2019) whenever applicable. We therefore use the same number of training iterations, the same or a lower number of weights in the hypernetwork than in the target network, the same learning rates and the same optimizer. For the replay model, i.e., the hypernetwork-empowered VAE, as well as for the standard classifier we used 5000 training iterations per task and learning rate is set to 0.0001 for the Adam optimizer (otherwise PyTorch default values). The batchsize is set to 128 for the VAE whereas the classifier is simultaneously trained on a batch of 128 samples of replayed data (evenly distributed over all past tasks) and a batch of 128 images from the currently available dataset. MNIST images are padded with zeros, which results in network inputs of size 32 × 32, again strictly following the implementation of the compared work. We experienced better performance when we condition our replay model on a specific task input. We therefore construct for every task a specific input namely a sample from a standard multivariate normal of dimension 100. In practice we found the dimension to be not important. This input stays constant throughout the experiment and is not learned. Note that we use the same hyperparameters for all learning scenarios, which is not true for the reported related work since they have tuned special hyperparameters for all scenarios and all methods. • Details of hypernetwork for the VAE. We use one hypernetwork configuration to generate weights for all variational autoencoders used for our PermutedMNIST-10 experiments namely a fully-connected chunked hypernetwork with 2 hidden layers of size 25 and 25 followed by an output size of 85,000. We use ELU nonlinearities in the hidden layers 15 Published as a conference paper at ICLR 2020 # a # b PermutedMNIST-100 100 3 g 7% g < 50 1 50 100 Task t = Boutput = 1 = Boutput = 0.05 = Boutput = 0.001 = Boutput = 0-5 = Boutput = 0.01 — — output = 0.0005 = Boutput = 9.1 — Boutput = 0.005 PermutedMNIST-100 100 = & 90 3 2 96.78% 95.66% 80 T T 1 50 100 Task t = (12 +128) + 200 > 250 —+ 350 > 7500, Bouput = 0.01 = (12 +128) + 200 + 250 —+ 300 > 6000, Bouput = 0.01 # c PermutedMNIST-100 100 & TOT Te = -- © 90 - 5 3 & iq 80 0.0005 0.005 0.05 0.5 0.001 0.01 0.1 1 output = during == final Figure A2: Additional experiments on the PermutedMNIST-100 benchmark. (a) Final test set classification accuracy on the t-th task after learning one hundred permutations (PermutedMNIST- 100). All runs use exactly the same hyperparameter configuration except for varying values of βoutput. The final accuracies are robust for a wide range of regularization strengths. If βoutput is too weak, forgetting will occur. However, there is no severe disadvantage of choosing βoutput too high (cmp. (c)). A too high βoutput simply shifts the attention of the optimizer away from the current task, leading to lower baseline accuracies when the training time is not increased. 
(b) Due to an increased number of output neurons, the target network for PermutedMNIST-100 has more weights than for PermutedMNIST-10 (this is only the case for CL1 and CL3). This plot shows that the performance drop is minor when choosing a hypernetwork with a comparable number of weights as the target network in CL2 (orange) compared to one that has a similar number of weights as the target network for CL1 in PermutedMNIST-100 (red). (c) Task-averaged test set accuracy after learning all tasks (labelled ‘final’, in red) and immediately after learning a task (labelled ‘during’, in purple) for the runs depicted in (a). For low values of βoutput final accuracies are worse than immediate once (forgetting occurs). If βoutput is too high, baseline accuracies decrease since the optimizer puts less emphasis on the current task (note that the training time per task is not increased). Shaded areas in (a) and (b) denote STD, whereas error bars in (c) denote SEM (always across 5 random seeds). 16 Published as a conference paper at ICLR 2020 of the hypernetwork. The size of task embeddings e has been set to 24 and the size of chunk embeddings c to 8. The parameter βoutput is 0.05 . The number of weights in this hypernetwork is 2,211,907 (2,211,691 network weights + 216 task embedding weights). The corresponding target network (and therefore output of the chunked hypernetwork), as taken from related work, has 2,227,024 weights. • Details of the VAE for HNET+TIR. For this variational autoencoder, we use two fully- connected neural networks with layers of size 1000, 1000 for the encoder and 1000, 1000 for the decoder and a latent space of 100. This setup is again copied from work we compare against. • Details of the VAE for HNET+R. For this variational autoencoder, we use two fully- connected neural networks with layers of size 400, 400 for the encoder and 400, 400 for the decoder (both 1000, 1000 in the related work) and a latent space of dimension 100. Here, we departure from related work by choosing a smaller architecture for the autoencoder. Note that we still use a hypernetwork with less trainable parameters than the target network (in this case the decoder) that is used in related work. the hypernetwork for the target classifier in PermutedMNIST-10 (HNET+TIR & HNET+ENT). We use the same setup for the hypernetwork as used for the VAEs above, but since the target network is smaller we reduce the output of the hypernetwork to 78,000. We also adjust the parameter βoutput to 0.01, consistent with our PermutedMNIST-100 experiments. The number of weights in this hypernetwork is therefore 2,029,931 parameters (2,029,691 network weights + 240 task embedding weights). The corresponding target network (from related work) would have 2,126,100 weights for CL1 and CL3 and 2,036,010 for CL2 (only one output head). • Details of the hypernetwork for the target classifier for PermutedMNIST-100. For these experiments we chose an architecture that worked well on the PermutedMNIST-10 benchmark and did not conduct any more search for new architectures. For PermutedMNIST- 100, the reported results were obtained by using a chunked hypernetwork with 3 hidden layers of size 200, 250 and 350 (300 for CL2) and an output size of 7500 (6000 for CL2) (such that we approximately match the corresponding target network size for CL1/CL2/CL3). Interest- ingly, Fig. A2b shows that even if we don’t adjust the number of hypernetwork weights to the increased number of target network weights, the superiority of our method is evident. 
Aside from this, the plots in Fig. 3 have been generated using the PermutedMNIST-10 HNET+TIR setup (note that this includes the conditions set by related work for PermutedMNIST-10, e.g., target network sizes, the number of training iterations, learning rates, etc.). • Details of the VAE and the hypernetwork for the VAE in PermutedMNIST-100 for CL2/CL3. We use a very similar setup for the VAE and it’s hypernetwork used in HNET+TIR for PermutedMNIST-10 as described above. We only applied the follow- ing changes: Fully-connected hypernetwork with one hidden layer of size 100; chunk embedding sizes are set to 12; task embedding sizes are set two 128 and the hidden layer sizes of the VAE its generator are 400, 600. Also we increased the regularisation strength βoutput = 0.1 for the VAE its generator hypernetwork. • Details of the target classifier for HNET+TIR & HNET+ENT. For this classifier, we use the same setup as in the study we compare to (van de Ven & Tolias, 2019), i.e., a fully-connected network with layers of size 1000, 1000. Note that if the classifier is used as a task inference model, it is trained on replay data and the corresponding hard targets, i.e., the argmax of the soft targets. Below, we report the specifications for our automatic hyperparameter search (if not noted otherwise, these specifications apply for the split MNIST and split CIFAR experiments as well): • Hidden layer sizes of the hypernetwork: (no hidden layer), "5,5" "10,10", "25,25", "50,50", "100,100", "10", "50", "100" • Output size of the hypernetwork: fitted such that we obtain less parameters then the target network which we compare against Embedding sizes (for e and c): 8, 12, 24, 36, 62, 96, 128 • βoutput: 0.0005, 0.001, 0.005, 0.01, 0.005, 0.1, 0.5, 1.0 17 Published as a conference paper at ICLR 2020 • Hypernetwork transfer functions: linear, ReLU, ELU, Leaky-ReLU Note that only a random subset of all possible combinations of hyperparameters has been explored. After we found a configuration with promising accuracies and a similar number of weights compared to the original target network, we manually fine-tuned the architecture to increase/decrease the number of hypernetwork weights to approximately match the number of target network weights. The choice of hypernetwork architecture seems to have a strong influence on the performance. It might be worth exploring alternatives, e.g., an architecture inspired by those used in typical generative models. We note that in addition to the above specifications we explored manually some hyperparameter configurations to gain a better understanding of our method. Split MNIST benchmark. Again, whenever applicable we reproduce the setup from van de Ven & Tolias (2019). Differences to the PermutedMNIST-10 experiments are just the learning rate (0.001) and the number of training iterations (set to 2000). • Details of hypernetwork for the VAE. We use one hypernetwork configuration to generate weights for all variational autoencoders used for our split MNIST experiments, namely a fully-connected chunked hypernetwork with 2 hidden layers of size 10, 10 followed by an output size of 50,000. We use ELU nonlinearities in the hidden layers of the hypernetwork. The size of task embeddings e has been set to 96 and the size of chunk embeddings c to 96. The parameter βoutput is 0.01 for HNET+R and 0.05 for HNET+TIR . The number of weights in this hypernetwork is 553,576 (553,192 network weights + 384 task embedding weights). 
The corresponding target network (and therefore output of the chunked hypernetwork), as taken from related work, has 555,184 weights. For a qualitative analyses of the replay data of this VAE (class incrementally learned), see A8. • Details of the VAE for HNET+TIR. For this variational autoencoder, we use two fully- connected neural networks with layers of size 400, 400 for the encoder and 50, 150 for the decoder (both 400, 400 in the related work) and a latent space of dimension 100. • Details of the VAE for HNET+R. For this variational autoencoder, we use two fully- connected neural networks with layers of size 400, 400 for the encoder and 250, 350 for the decoder (both 400, 400 in the related work) and a latent space of dimension 100. • Details of the hypernetwork for the target classifier in split MNIST (HNET+TIR & HNET+ENT). We use the same setup for the hypernetwork as used for the VAE above, but since the target network is smaller we reduce the output of the hypernetwork to 42,000. We also adjust the βoutput to 0.01 although this parameter seems to not have a strong effect on the performance. The number of weights in this hypernetwork is therefore 465,672 parameters (465,192 network weights + 480 task embedding weights). The corresponding target network (from related work) would have 478,410 weights for CL1 and CL3 and 475,202 for CL2 (only one output head). • Details of the target classifier for HNET+TIR & HNET+ENT. For this classifier, we again use the same setup as in the study we compare to (van de Ven & Tolias, 2019), i.e., a fully-connected neural networks with layers of size 400, 400. Note that if the classifier is used as a task inference model, it is trained on replay data and the corresponding hard targets, i.e., the argmax the soft targets. Split CIFAR-10/100 benchmark. For these experiments, we used as a target network a ResNet-32 network (He et al. (2016)) and again produce the weights of this target network by a hypernetwork in a compressive manner. The hypernetwork in this experiment directly maps from the joint task and chunk embedding space (both dimension 32) to the output space of the hypernetwork, which is of dimension 7,000. This hypernetwork has 457,336 parameters (457,144 network weights + 192 task embedding weights). The corresponding target network, the ResNet-32, has 468.540 weights (including batch-norm weights). We train for 200 epochs per task using the Adam optimizer with an initial learning rate of 0.001 (and otherwise default PyTorch values) and a batch size of 32. In addition, we apply the two learning rate schedules suggested in the Keras CIFAR-10 example2. # 2See https://keras.io/examples/cifar10_resnet/. 18 Published as a conference paper at ICLR 2020 Due to the use of batch normalization, we have to find an appropriate way to handle the running statistics which are estimated during training. Note, these are not parameters which are trained through backpropagation. There are different ways how the running statistics could be treated: 1. One could ignore the running statistics altogether and simply compute statistics based on the current batch during evaluation. 2. The statistics could be part of the hypernetwork output. Therefore, one would have to manipulate the target hypernetwork output of the previous task, such that the estimated running statistics of the previous task will be distilled into the hypernetwork. 3. The running statistics can simply be checkpointed and stored after every task. 
Note, this method would lead to a linear memory growth in the number of tasks that scales with the number of units in the target network. For simplicity, we chose the last option and simply checkpointed the running statistics after every task. For the fine-tuning results in Fig. 5 we just continually updated the running statistics (thus, we applied no checkpointing). # D ADDITIONAL EXPERIMENTS AND NOTES Split CIFAR-10/100 benchmark using the model of Zenke et al. (2017). We re-run the split CIFAR-10/100 experiment reported on the main text while reproducing the setup from Zenke et al. (2017). Our overall classification performance is comparable to synaptic intelligence, which achieves 73.85% task-averaged test set accuracy, while our method reaches 71.29% ± 0.32%, with initial baseline performance being slightly worse in our approach, Fig. A3. Split CIFAR-10/100 100 777 76 76 7575 7877 7, 4 74 72 70 a m4 73, 70 - _ 7 . P63 66 . ° 6s iS : <7 se 1 > . 50 On & 50 4 I 3 * 31 : 2 . 25 0 CIFAR-10 1 2 3 4 5 Task t § hnet during @ hnet from scratch fine-tuning sl Figure A3: Replication of the split CIFAR-10/100 experiment of Zenke et al. (2017). Test set accuracies on the entire CIFAR-10 dataset and subsequent CIFAR-100 splits. Both task-conditioned hypernetworks (hnet, in red) and synaptic intelligence (SI, in green) transfer information forward and are protected from catastrophic forgetting. The performance of the two methods is comparable. For completeness, we report our test set accuracies achieved immediately after training (hnet-during, in blue), when training from scratch (purple), and with our regularizer turned off (fine-tuning, yellow). To obtain our results, we use a hypernetwork with 3 hidden-layers of sizes 100, 150, 200 and output size 5500. The size of task embeddings e has been set to 48 and the size of chunk embeddings c to 80. The parameter βoutput is 0.01 and the learning rate is set to 0.0001. The number of weights in this hypernetwork is 1,182,678 (1,182,390 network weights + 288 task embedding weights). The corresponding target network would have 1,276,508 weights. In addition to the above specified hyperparameter search configuration we also included the following learning rates: 0.0001, 0.0005, 0.001 and manually tuned some architectural parameters. 19 Published as a conference paper at ICLR 2020 # a b — Figure A4: Context-free inference using hypernetwork-protected replay (HNET+TIR) on long task sequences. Final test set classification accuracy on the t-th task after learning one hundred permutations of the MNIST dataset (PermutedMNIST-100) for the CL2 (a) and CL3 (b) scenarios, where task identity is not explicitly provided to the system. As before, the number of hypernetwork parameters is not larger than that of the related work we compare to. (a) HNET+TIR displays almost perfect memory retention. We used a stochastic regularizer (cf. Appendix D note below) which evaluates the output regularizer in Eq. 2 only for a random subset of previous tasks (here, twenty). (b) HNET+TIR is the only method that is capable of learning PermutedMNIST-100 in this learning scenario. For this benchmark, the input data domains are easily separable and the task inference system achieves virtually perfect (~100%) task inference accuracy throughout, even for this long experiment. HNET+TIR uses a divide-and-conquer strategy: if task inference is done right, CL3 becomes just CL1. 
Furthermore, once task identity is predicted, the final softmax computation only needs to consider the corresponding task outputs in isolation (here, of size 10). Curiously, for HNET+TIR, CL2 can be harder than CL3 as the single output layer (of size 10, shared by all tasks) introduces a capacity bottleneck. The related methods, on the other hand, have to consider the entire output layer (here, of size 10*100) at once, which is known to be harder to train sequentially. This leads to overwhelming error rates on long problems such as PermutedMNIST-100. Shaded areas in (a) and (b) denote STD (n = 5). 20 Published as a conference paper at ICLR 2020 Upper bound for replay models. We obtain an upper bound for the replay-based experiments (Table 2) by sequentially training a classifier, in the same way as for HNET+R and DGR, now using true input data from past tasks and a synthetic, self-generated target. This corresponds to the rehearsal thought experiment delineated in Sect. 1. Table 2: Task-averaged test accuracy (± SEM, n = 20) on the permuted (‘P10’) and split (‘S’) MNIST experiments. For HNET+R and DGR+distill (van de Ven & Tolias, 2019) the classification network is trained sequentially on data from the current task and replayed data from all previous tasks. Our HNET+R comes close to saturating the corresponding replay upper bound RPL-UB. DGR HNET+R RPL-UB P10-CL1 97.51 ± 0.01 97.85 ± 0.02 97.89 ± 0.02 P10-CL2 97.35 ± 0.02 97.60 ± 0.02 97.72 ± 0.01 P10-CL3 96.38 ± 0.03 97.71 ± 0.06 97.91 ± 0.01 S-CL1 S-CL2 S-CL3 99.61 ± 0.02 99.81 ± 0.01 99.83 ± 0.01 96.83 ± 0.20 97.88 ± 0.05 98.96 ± 0.03 91.79 ± 0.32 94.97 ± 0.18 98.38 ± 0.02 Quantification of forgetting in our continual learning experiments. In order to quantify forget- ting of our approach, we compare test set accuracies of every single task directly after training with it’s test set accuracy after training on all tasks. Only CL1 is shown since other scenarios i.e. CL2 and CL3 depend on task inference which only is measurable after training on all tasks. Table 3: Task-averaged test accuracy (± SEM, n = 20) on the permutedMNIST-10 (‘P10’) and splitMNIST (‘S’) experiments during and after training. HNET+TIR during HNET+TIR after HNET+R during HNET+R after S-CL1 99.79 ± 0.01 99.79 ± 0.01 99.82 ± 0.01 99.83 ± 0.01 P10-CL1 97.58 ± 0.02 97.57 ± 0.02 98.03 ± 0.01 97.87 ± 0.01 Table 4: Task-averaged test accuracy (± SEM, n = 5) on the permutedMNIST-100 (‘P100’) experiments during and after training. HNET+TIR during HNET+TIR after P100-CL1 96.12 ± 0.08 96.18 ± 0.09 P100-CL2 95.97 ± 0.05 P100-CL3 96.00 ± 0.03 - - Table 5: Task-averaged test accuracy (± SEM, n = 5) on split CIFAR-10/100 on CL1 on two different target network architectures. during after 74.75 ± 0.09 71.29 ± 0.32 ZenkeNet ResNet-32 82.36 ± 0.44 82.34 ± 0.44 Robustness of βoutput-choice. In Fig. A2a and Fig. A2c we provide additional experiments for our method on PermutedMNIST-100. We show that our method performs comparable for a wide range of βoutput-values (including the one depicted in Fig. 3a). 21 Published as a conference paper at ICLR 2020 # a # b PermutedMNIST-100 100 = = 75 8 5 50 8 < 25 T T 1 50 100 Task t Smet Ngo 2 000 = A=1 = A=100 = d= 5000 = \=10 = =500 = \=10000 PermutedMNIST-100 100 oe > Te & 90 - ete > . 
Figure A5: Additional experiments with online EWC and fine-tuning on the PermutedMNIST-100 benchmark. (a) Final test set classification accuracy on the t-th task after learning one hundred permutations (PermutedMNIST-100) using the online EWC algorithm (Schwarz et al., 2018) to prevent forgetting. All runs use exactly the same hyperparameter configuration except for varying values of the regularization strength λ. Our method (hnet, in red) and the online EWC run (λ = 100, in orange) from Fig. 3a are shown for comparison. It can be seen that even when tuning the regularization strength one cannot attain similar performance as with our approach (cf. Fig. A2a). Too strong regularization prevents the learning of new tasks, whereas too weak regularization does not prevent forgetting. However, a middle ground (e.g., using λ = 100) does not reach acceptable per-task performances. (b) Task-averaged test set accuracy after learning all tasks (labelled 'final', in red) and immediately after learning a task (labelled 'during', in purple) for a range of regularization strengths λ when using the online EWC algorithm. Results are complementary to those shown in (a). (c) Final test set classification accuracy on the t-th task after learning one hundred permutations (PermutedMNIST-100) when applying fine-tuning to the hypernetwork (labelled 'hnet fine-tuning', in blue) or target network (labelled 'fine-tuning', in green). Our method (hnet, in red) from Fig. 3a is shown for comparison. It can be seen that, without protection, the hypernetwork suffers much more severely from catastrophic forgetting than when training a target network only. (d) This plot is complementary to (c). See description of (b) for an explanation of the labels. Shaded areas in (a) and (c) denote STD, whereas error bars in (b) and (d) denote SEM (always across 5 random seeds).

Figure A6: Hyperparameter search for online EWC and SI on the PermutedMNIST-100 benchmark. We conduct the same hyperparameter search as performed in van de Ven & Tolias (2018). We did not compute different random seeds for this search. (a) Hyperparameter search on the regularisation strength c for the SI algorithm. Accuracies during and after the experiment are shown. (b) Hyperparameter search for parameters λ and γ of the online EWC algorithm. Only accuracies after the experiment are shown.

Varying the regularization strength for online EWC. The performance of online EWC in Fig. 3a is closest to our method (labelled hnet, in red) compared to the other methods. Therefore, we take a closer look at this method and show that further adjustments of the regularization strength λ do not lead to better performance. Results for a wide range of regularization strengths can be seen in Fig. A5a and Fig. A5b. As shown, online EWC cannot attain a performance comparable to our method when tuning the regularization strength only.

The impact of catastrophic forgetting on the hypernetwork and target network.
We have successfully shown that, by shifting the continual learning problem from the target network to the hypernetwork, we can overcome forgetting thanks to the introduction of our regularizer in Eq. 2. We motivated this success by claiming that it is an inherently simpler task to remember a few input-output mappings in the hypernetwork (namely the weight realizations of each task) rather than the massive number of input-output mappings {(x^(t,i), y^(t,i))}_{i=1}^{n_t} associated with the remembering of each task t by the target network. Further evidence of this claim is provided by fine-tuning experiments in Fig. A5c and Fig. A5d. Fine-tuning refers to sequentially learning a neural network on a set of tasks without any mechanism in place to prevent forgetting. It is shown that fine-tuning a target network (no hypernetwork in this setup) has no catastrophic influence on the performance of previous tasks. Instead, there is a graceful decline in performance. On the contrary, catastrophic forgetting has an almost immediate effect when training a hypernetwork without protection (i.e., training our method with βoutput = 0). The performance quickly drops to chance level, suggesting that, if we were not solving a simpler task, then preventing forgetting in the hypernetwork rather than in the target network might not be beneficial.

Chunking and hypernetwork architecture sensitivity. In this note we investigate the performance sensitivity for different (fully-connected) hypernetwork architectures on split MNIST and PermutedMNIST-10, Fig. A7. We trained thousands of randomly drawn architectures from the following grid (the same training hyperparameters as reported for CL1, see Appendix C, were used throughout): possible number of hidden layers 1, 2; possible layer size 5, 10, 20, . . . , 90, 100; possible chunk embedding size 8, 12, 24, 56, 96; and hypernetwork output size in {10, 50, 100, 200, 300, 400, 500, 750, 1k, 2k, . . . , 9k, 10k, 20k, 30k, 40k}. Since we realize compression through chunking, we sort our hypernetwork architectures by compression ratio, and consider only architectures with small compression ratios. Performance on split MNIST stays in the high 90s (in percent) even when reaching compression ratios close to 1%, whereas for PermutedMNIST-10 accuracies decline in a non-linear fashion. For both experiments, the choice of the chunked hypernetwork architecture is robust and high performing even in the compressive regime.

Figure A7: Robustness to hypernetwork architecture choice for a large range of compression ratios. Performance vs. compression for random hypernetwork architecture choices, for split MNIST and PermutedMNIST-10 (mean ± STD, n = 500 architectures per bin). Every model was trained with the same setup (including all hyperparameters) used to obtain results reported in Table 1 (CL1). We considered architectures yielding compression ratios |Θh ∪ {e(t)}|/|Θtrgt| ∈ [0.01, 2.0]. (a) split MNIST performance for CL1 stays high even for compression ratios ≈ 1%. (b) PermutedMNIST-10 accuracies degrade gracefully when compression ratios decline to 1%. Notably, for both benchmarks, performance remained stable across a large pool of hypernetwork configurations.
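To make the chunking scheme concrete, below is a minimal PyTorch sketch of a chunked hypernetwork and of the compression ratio used to sort the architectures above. The class and variable names (ChunkedHypernetwork, chunk_embs, compression_ratio) are illustrative rather than taken from our code, and the default sizes only echo the split CIFAR-10/100 configuration reported earlier; the number of chunks is an assumption derived from the target network size.

```python
import torch
import torch.nn as nn

class ChunkedHypernetwork(nn.Module):
    """Sketch: a small MLP maps [task embedding, chunk embedding] to one chunk of
    target-network weights; iterating over all chunk embeddings yields the full
    flattened weight vector."""

    def __init__(self, task_emb_dim=48, chunk_emb_dim=80, chunk_size=5500,
                 hidden_sizes=(100, 150, 200), num_chunks=233):
        super().__init__()
        layers, in_dim = [], task_emb_dim + chunk_emb_dim
        for h in hidden_sizes:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        layers.append(nn.Linear(in_dim, chunk_size))
        self.mlp = nn.Sequential(*layers)
        # Chunk embeddings are ordinary trainable parameters shared by all tasks.
        # num_chunks is illustrative (approximately target params / chunk_size).
        self.chunk_embs = nn.Parameter(torch.randn(num_chunks, chunk_emb_dim))

    def forward(self, task_emb):
        # task_emb is a 1-D tensor; tile it so every chunk sees the same conditioning.
        task_tiled = task_emb.unsqueeze(0).expand(self.chunk_embs.shape[0], -1)
        chunks = self.mlp(torch.cat([task_tiled, self.chunk_embs], dim=1))
        # Any surplus beyond the target parameter count would be cut off downstream.
        return chunks.reshape(-1)

def compression_ratio(hnet, task_embs, target_param_count):
    # |Theta_h  U  {e^(t)}| / |Theta_trgt|, as used to sort architectures above.
    n_hnet = sum(p.numel() for p in hnet.parameters())
    n_task = sum(e.numel() for e in task_embs)
    return (n_hnet + n_task) / target_param_count
```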
Note that the discussed compression ratio compares the number of trainable parameters in the hypernetwork to its output size, i.e., the number of parameters of the target network.

Small capacity target networks for the permuted MNIST benchmark. Swaroop et al. (2018) argue for using only small capacity target networks for this benchmark. Specifically, they propose to use hidden layer sizes [100, 100]. Again, we replicated the setup of van de Ven & Tolias (2019) wherever applicable, except for the now smaller hidden layer sizes of [100, 100] in the target network. We use a fully-connected chunked hypernetwork with chunk embeddings c of size 12, hidden layers of size 100, 75, 50 and an output size of 2000, resulting in a total number of hypernetwork weights of 122,459 (including 10 × 64 task embedding weights) compared to the 122,700 weights that are generated for the target network. βoutput is set to 0.05. The experiments performed here correspond to CL1. We achieve an average accuracy of 93.91 ± 0.04 for PermutedMNIST-10 after having trained on all tasks. In general, we saw that the hypernetwork training can benefit from noise injection. For instance, when training with soft targets (i.e., we modified the 1-hot target to be 0.95 for the correct class and (1 − 0.95)/(#classes − 1) for the remaining classes), we could improve the average accuracy to 94.24 ± 0.03.

We also checked the challenging PermutedMNIST-50 benchmark with this small target network, as previously investigated by Ritter et al. (2018). To this end, we slightly adapted the above setup by using a hypernetwork with hidden layer sizes [100, 100] and a regularization strength of βoutput = 0.1. This hypernetwork is slightly bigger than the corresponding target network, with |Θh ∪ {e(t)}| / |Θtrgt| = 1.37. With this configuration, we obtain an average accuracy of 90.91 ± 0.07.

Comparison to HAT. Serra et al. (2018) proposed the hard attention to the task (HAT) algorithm, a strong CL1 method which relies on learning a per-task, per-neuron mask. Since the masks are pushed to become binary, HAT can be viewed as an algorithm for allocating subnetworks (or modules) within the target network, which become specialized to solve a given task. Thus, the method is similar to ours in the sense that the computation of the target network is task-dependent, but different in spirit, as it relies on network modularity. In HAT, task identity is assumed to be provided, so that the appropriate mask can be picked during inference (scenario CL1).

HAT requires explicitly storing a neural mask for each task, whose size scales with the number of neurons in the target network. In contrast, our method allows solving tasks in a compressive regime. Thanks to the hypernetwork, whose input dimension can be freely chosen, only a low-dimensional embedding needs to be stored per task (cf. Fig. 4), and through chunking it is possible to learn to parameterize large target models with a small number of plastic weights (cf. Fig. 3b).

Here, we compare our task-conditioned hypernetworks to HAT on the permuted MNIST benchmarks (T = 10 and T = 100), cf. Table 6. For large target networks, both methods perform strongly, reaching comparable final task-averaged accuracies. For small target network sizes, task-conditioned hypernetworks perform better, the difference becoming more apparent on PermutedMNIST-100. We note that the two algorithms use different training setups.
In particular, HAT uses 200 epochs (batch size set to 64) and applies a learning rate scheduler that acts on a held out validation set. Furthermore, HAT uses differently tuned forgetting hyperparameters when target network sizes change. This is important to control for the target network capacity used per task and assumes knowledge of the (number of) tasks at hand. Using the code freely made available by the authors, we were able to rerun HAT for our target network size and longer task sequences. Here, we used the setup provided by the author’s code for HAT-Large for PermutedMNIST-10 and PermutedMNIST-100. To draw a fairer comparison, when changing our usual target network size to match the ones reported in Serra et al. (2018), we trained for 50 epochs per task (no training loss improvements afterwards observed) and also changed the batch size to 64 but did not changed our training scheme otherwise; in particular, we did not use a learning rate scheduler. Table 6: Comparison of HNET and HAT, Serra et al. (2018). Task-averaged test accuracy on the PermutedMNIST experiment with T = 10 and T = 100 tasks (’P10’, ’P100’) with three different target network sizes, i.e., three fully connected neural networks with hidden layer sizes of (100, 100) or (500, 500) or (2000, 2000) are shown. For these architectures, a single accuracy was reported by Serra et al. (2018) without statistics provided. We reran HAT for PermutedMNIST-100 with code provided at https://github.com/joansj/hat, and for PermutedMNIST-10 with hidden layer size (1000, 1000) to match our setup. HAT and HNET perform similarly on large target networks for PermutedMNIST-10, while HNET is able to achieve larger performances with smaller target networks as well as for long task sequences. HAT HNET P10-100,100 P10-500,500 P10-2000,2000 91.6 97.4 98.6 95.92 ± 0.02 97.35 ± 0.02 98.06 ± 0.02 P10-1000,1000 97.67 ± 0.02 97.56 ± 0.02 P100-1000,1000 86.04 ± 0.26 94.98 ± 0.07 Efficient PermutedMNIST-250 experiments with a stochastic regularizer on subsets of previ- ous tasks. An apparent drawback of Eq. 2 is that the runtime complexity of the regularizer grows linearly with the number of tasks. To overcome this obstacle, we show here that it is sufficient to consider a small random subset of previous tasks. In particular, we consider the PermutedMNIST-250 benchmark (250 tasks) on CL1 using the hyper- parameter setup from our PermutedMNIST-100 experiments except for a hypernetwork output size of 12000 (to adjust to the bigger multi-head target network) and a regularization strength βoutput = 0.1. Per training iteration, we choose maximally 32 random previous tasks to estimate the regularizer from Eq. 2. With this setup, we achieve a final average accuracy of 94.19 ± 0.16 (compared to an average during accuracy (i.e., the accuracies achieved right after training on the corresponding task) of 95.54 ± 0.05). All results are across 5 random seeds. These results indicate that a full evaluation of the regularizer at every training iteration is not necessary such that the linear runtime complexity can be cropped to a constant one. Combining hypernetwork output regularizers with weight importance. Our hypernetwork reg- ularizer pulls uniformly in every direction, but it is possible to introduce anisotropy using an EWC-like approach (Kirkpatrick et al., 2017). Instead of weighting parameters, hypernetwork outputs can be weighted. This would allow for a more flexible regularizer, at the expense of additional storage. 
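To illustrate the two preceding notes, the following sketch evaluates the output regularizer of Eq. 2 on a random subset of previous tasks and optionally applies per-output importance weights for the EWC-like anisotropic variant. It is a simplified stand-in (for example, it omits the lookahead term of Eq. 2), and hnet, task_embs, stored_targets, and importance are placeholder names, not our implementation.

```python
import random
import torch

def output_regularizer(hnet, task_embs, stored_targets, beta_output,
                       max_tasks=32, importance=None):
    """Sketch of the Eq. 2 output regularizer with a stochastic task subset.

    stored_targets[t] holds the hypernetwork output for task t, computed with the
    parameters saved before training on the current task started.
    """
    prev_ids = list(stored_targets.keys())
    if len(prev_ids) > max_tasks:                 # e.g. at most 32 for PermutedMNIST-250
        prev_ids = random.sample(prev_ids, max_tasks)
    reg = 0.0
    for t in prev_ids:
        diff2 = (hnet(task_embs[t]) - stored_targets[t].detach()) ** 2
        if importance is not None:                # optional EWC-like anisotropic weights
            diff2 = importance[t] * diff2
        reg = reg + diff2.sum()
    return beta_output * reg / max(len(prev_ids), 1)
```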
Task inference through predictive entropy (HNET+ENT). In this setup, we rely on the capability of neural networks to separate in- from out-of-distribution data. Although this is a difficult research problem on its own, for continual learning we face a potentially simpler problem, namely to detect and distinguish between the tasks our network was trained on. We here take a first minimal step exploiting this insight and compare the predictive uncertainty, as quantified by the output distribution entropy, of the different models given an input. Hence, at test time we iterate over all embeddings, and therefore over all models our metamodel can generate, compare the predictive entropies, and make the prediction with the model of lowest entropy. For future work, we wish to explore the possibility of improving our predictive uncertainty by taking parameter uncertainty into account through the generation of approximate, task-specific weight posterior distributions.

Learning without task boundaries with hypernetworks. An interesting problem we did not address in this paper is that of learning without task boundaries. For most CL methods, it is crucial to know when learning one task ends and training of a new task begins. This is no exception for the methods introduced in this paper. However, this is not necessarily a realistic or desirable assumption; often, one desires to learn in an online fashion without task boundary supervision, which is particularly relevant for reinforcement learning scenarios where incoming data distributions are frequently subject to change (Rolnick et al., 2018). At least for discrete changes, with our hypernetwork setup, this boils down to a detection mechanism that activates the saving of the current model, i.e., the embedding e(T), and its storage in the collection of embeddings {e(t)}. We leave the integration of our model with such a hypernetwork-specific switching detection mechanism for future work. Interestingly, our task-conditioned hypernetworks would fit very well with methods that rely on fast remembering (a recently proposed approach which appeared in parallel to our paper, He et al., 2019).

# E UNIVERSAL FUNCTION APPROXIMATION WITH CHUNKED NEURAL NETWORKS

Proposition 1. Let $K \subset \mathbb{R}^m$ be a compact subset and let $f \in C(K)$ be a continuous function on $K$, more specifically $f : K \to \mathbb{R}^n$ with $n = r \cdot N_C$. Then for every $\epsilon > 0$ there exist a chunked neural network $f_h^c : \mathbb{R}^m \times C \to \mathbb{R}^r$ with parameters $\Theta_h$ and a discrete set $C = \{\mathbf{c}_1, \dots, \mathbf{c}_{N_C}\}$ with $\mathbf{c}_i \in \mathbb{R}^s$ such that
$$|\bar{f}_h^c(\mathbf{x}) - f(\mathbf{x})| < \epsilon, \quad \forall \mathbf{x} \in K,$$
where $\bar{f}_h^c(\mathbf{x}) = [f_h^c(\mathbf{x}, \mathbf{c}_1), \dots, f_h^c(\mathbf{x}, \mathbf{c}_{N_C})]$.

For the following proof, we assume the existence of one form of the universal approximation theorem (UAT) for neural networks (Leshno & Schocken, 1993; Hanin, 2017). Note that we will not restrict ourselves to a specific architecture, nonlinearity, input or output dimension. Any neural network that is proven to be a universal function approximator is sufficient.

Proof. Given any $\epsilon > 0$, we assume the existence of a neural network $f_h : \mathbb{R}^m \to \mathbb{R}^n$ that approximates the function $f$ on $K$:
$$|f_h(\mathbf{x}) - f(\mathbf{x})| < \frac{\epsilon}{2}, \quad \forall \mathbf{x} \in K. \quad (10)$$
We will in the following show that we can always find a chunked neural network $f_h^c$ whose concatenated output $\bar{f}_h^c$ approximates the neural network $f_h$ on $K$, and conclude with the triangle inequality
$$|\bar{f}_h^c(\mathbf{x}) - f(\mathbf{x})| \leq |\bar{f}_h^c(\mathbf{x}) - f_h(\mathbf{x})| + |f_h(\mathbf{x}) - f(\mathbf{x})| < \epsilon, \quad \forall \mathbf{x} \in K. \quad (11)$$
Indeed, given the neural network $f_h$ such that (10) holds true, we construct
$$\hat{f}_h(\mathbf{x}, \mathbf{c}) = f_h^{\mathbf{c}_i}(\mathbf{x}) \quad \text{if } \mathbf{c} = \mathbf{c}_i, \quad (12)$$
by splitting the full neural network $f_h(\mathbf{x}) = [f_h^{\mathbf{c}_1}(\mathbf{x}), f_h^{\mathbf{c}_2}(\mathbf{x}), \dots, f_h^{\mathbf{c}_{N_C}}(\mathbf{x})]$ into $N_C$ chunks of size $r$, with $\hat{f}_h : \mathbb{R}^m \times C \to \mathbb{R}^r$.

Note that $\hat{f}_h$ is continuous on $\mathbb{R}^m \times C$ with the product topology composed of the topology on $\mathbb{R}^m$ induced by the metric $|\cdot - \cdot| : \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$ and the discrete topology on $C$. Now we can make use of the UAT again: given the compact $K \subset \mathbb{R}^m$, the discrete set $C = \{\mathbf{c}_1, \dots, \mathbf{c}_{N_C}\}$ and any $\frac{\epsilon}{2 N_C} > 0$, there exists a neural network function $f_h^c : \mathbb{R}^m \times \mathbb{R}^s \to \mathbb{R}^r$ such that
$$|f_h^c(\mathbf{x}, \mathbf{c}) - \hat{f}_h(\mathbf{x}, \mathbf{c})| < \frac{\epsilon}{2 N_C}, \quad \forall \mathbf{x} \in K, \ \forall \mathbf{c} \in C. \quad (13)$$
It follows that
$$\sum_{i=1}^{N_C} |f_h^c(\mathbf{x}, \mathbf{c}_i) - f_h^{\mathbf{c}_i}(\mathbf{x})| < \sum_{i=1}^{N_C} \frac{\epsilon}{2 N_C} = \frac{\epsilon}{2}, \quad \forall \mathbf{x} \in K, \quad (14)$$
which implies
$$\left| \begin{bmatrix} f_h^c(\mathbf{x}, \mathbf{c}_1) \\ \vdots \\ f_h^c(\mathbf{x}, \mathbf{c}_{N_C}) \end{bmatrix} - \begin{bmatrix} f_h^{\mathbf{c}_1}(\mathbf{x}) \\ \vdots \\ f_h^{\mathbf{c}_{N_C}}(\mathbf{x}) \end{bmatrix} \right| = |\bar{f}_h^c(\mathbf{x}) - f_h(\mathbf{x})| < \frac{\epsilon}{2}, \quad \forall \mathbf{x} \in K. \quad (15)$$
We have shown (11), which concludes the proof.

Note that we did not specify the number of chunks $N_C$, $r$ or the dimension $s$ of the embeddings $\mathbf{c}_i$. Despite this theoretical result, we emphasize that we are not aware of a constructive procedure to define a chunked hypernetwork that comes with a useful bound on the achievable performance and/or compression rate. We evaluate such aspects empirically in our experimental section.

# F QUALITATIVE ANALYSES OF HYPERNETWORK-PROTECTED REPLAY MODELS

Figure A8: Image samples from hypernetwork-protected replay models. The left column of both subfigures displays images directly after training the replay model on the corresponding class, compared to the right column(s), where samples are obtained after training on eights and nines, i.e., all classes. (a) Image samples from a class-incrementally trained VAE. Here the exact same training configuration used to obtain results for split MNIST with the HNET+R setup is used, see Appendix C. (b) Image samples from a class-incrementally trained GAN. For the training configurations, see Appendix B.
In both cases the weights of the generative part, i.e., the decoder or the generator, are produced and protected by a hypernetwork.
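As a closing illustration of the HNET+ENT inference scheme described in Appendix D, here is a minimal sketch; hnet, target_net, and task_embs are placeholder names, and the exact forward interface of the target network is an assumption.

```python
import torch
import torch.nn.functional as F

def predict_with_entropy_task_inference(x, hnet, target_net, task_embs):
    """Sketch of HNET+ENT: generate one model per stored task embedding and keep
    the prediction of the model with the lowest predictive entropy."""
    best_entropy, best_logits = float("inf"), None
    with torch.no_grad():
        for e in task_embs:
            weights = hnet(e)                      # task-specific target weights
            logits = target_net(x, weights)        # forward pass with generated weights
            probs = F.softmax(logits, dim=-1)
            entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
            if entropy < best_entropy:
                best_entropy, best_logits = entropy.item(), logits
    return best_logits
```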
{ "id": "1907.02544" }
1906.00300
Latent Retrieval for Weakly Supervised Open Domain Question Answering
Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.
http://arxiv.org/pdf/1906.00300
Kenton Lee, Ming-Wei Chang, Kristina Toutanova
cs.CL
Accepted to ACL 2019
null
cs.CL
20190601
20190627
9 1 0 2 n u J 7 2 ] L C . s c [ 3 v 0 0 3 0 0 . 6 0 9 1 : v i X r a # Latent Retrieval for Weakly Supervised Open Domain Question Answering # Kenton Lee Ming-Wei Chang Kristina Toutanova Google Research Seattle, WA {kentonl,mingweichang,kristout}@google.com # Abstract Recent work on open domain question answer- ing (QA) assumes strong supervision of the supporting evidence and/or assumes a black- box information retrieval (IR) system to re- trieve evidence candidates. We argue that both are suboptimal, since gold evidence is not al- ways available, and QA is fundamentally dif- ferent from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evi- dence retrieval from all of Wikipedia is treated as a latent variable. Since this is impracti- cal to learn from scratch, we pre-train the re- triever with an Inverse Cloze Task. We evalu- ate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match. # Introduction Due to recent advances in reading comprehension systems, there has been a revival of interest in open domain question answering (QA), where the evidence must be retrieved from an open corpus, rather than being given as input. This presents a more realistic scenario for practical applications. et al., 2017), SearchQA (Dunn et al., 2017), and Quasar (Dhingra et al., 2017), the dependency on strong supervision is removed by assuming that the IR system provides noisy gold evidence. These approaches rely on the IR system to mas- sively reduce the search space and/or reduce spu- rious ambiguity. However, QA is fundamentally different from IR (Singh, 2012). Whereas IR is concerned with lexical and semantic matching, questions are by definition under-specified and re- quire more language understanding, since users are explicitly looking for unknown information. Instead of being subject to the recall ceiling from blackbox IR systems, we should directly learn to retrieve using question-answering data. In this work, we introduce the first Open- Retrieval Question Answering system (ORQA). ORQA learns to retrieve evidence from an open corpus, and is supervised only by question- answer string pairs. While recent work on im- proving evidence retrieval has made significant progress (Wang et al., 2018; Kratzwald and Feuer- riegel, 2018; Lee et al., 2018; Das et al., 2019), they still only rerank a closed evidence set. The main challenge to fully end-to-end learning is that retrieval over the open corpus must be considered a latent variable that would be impractical to train from scratch. IR systems offer a reasonable but potentially suboptimal starting point. Current approaches require a blackbox informa- tion retrieval (IR) system to do much of the heavy lifting, even though it cannot be fine-tuned on the downstream task. In the strongly supervised set- ting popularized by DrQA (Chen et al., 2017), they also assume a reading comprehension model trained on question-answer-evidence triples, such as SQuAD (Rajpurkar et al., 2016). The IR sys- tem is used at test time to generate evidence candi- dates in place of the gold evidence. 
In the weakly supervised setting, proposed by TriviaQA (Joshi The key insight of this work is that end-to- end learning is possible if we pre-train the re- triever with an unsupervised Inverse Cloze Task (ICT). In ICT, a sentence is treated as a pseudo- question, and its context is treated as pseudo- evidence. Given a pseudo-question, ICT requires selecting the corresponding pseudo-evidence out of the candidates in a batch. ICT pre-training provides a sufficiently strong initialization such that ORQA, a joint retriever and reader model, can be fine-tuned end-to-end by simply optimiz- Task Training Evidence Answer Evaluation Evidence Answer Example Reading Comprehension Open-domain QA given span given string SQuAD (Rajpurkar et al., 2016) Unsupervised QA Strongly Supervised QA Weakly Supervised QA Closed Retrieval QA Open Retrieval QA none given heuristic learned none span string string none heuristic heuristic learned string string string string GPT-2 (Radford et al., 2019) DrQA (Chen et al., 2017) TriviaQA (Joshi et al., 2017) ORQA (this work) Table 1: Comparison of assumptions made by related tasks, along with references to examples. Heuristic evidence refers to the typical strategy of considering only a closed set of evidence documents from a traditional IR system, which sets a strict upper-bound on task performance. In this work (ORQA), only question-answer string pairs are observed during training, and evidence retrieval is learned in a completely end-to-end manner. ing the marginal log-likelihood of correct answers that were found. We evaluate ORQA on open versions of five ex- isting QA datasets. On datasets where the question writers already know the answer—SQuAD (Ra- jpurkar et al., 2016) and TriviaQA (Joshi et al., 2017)—the retrieval problem resembles tradi- tional IR, and BM25 (Robertson et al., 2009) provides state-of-the-art retrieval. On datasets where question writers do not know the answer— Natural Questions (Kwiatkowski et al., 2019), WebQuestions (Berant et al., 2013), and Curat- edTrec (Baudis and Sediv´y, 2015)—we show that learned retrieval is crucial, providing improve- ments of 6 to 19 points in exact match over BM25. Models are defined with respect to an unstruc- tured text corpus that is split into B blocks of ev- idence texts. An answer derivation is a pair (b, s), where 1 ≤ b ≤ B indicates the index of an ev- idence block and s denotes a span of text within block b. The start and end token indices of span s are denoted by START(s) and END(s) respectively. Models define a scoring function S(b, s, q) indi- cating the goodness of an answer derivation (b, s) given a question q. Typically, this scoring func- tion is decomposed over a retrieval component Sretr (b, q) and a reader component Sread (b, s, q): S(b, s, q) = Sretr (b, q) + Sread (b, s, q) During inference, the model outputs the answer string of the highest scoring derivation: # 2 Overview In this section, we introduce notation for open do- main QA that is useful for comparing prior work, baselines, and our proposed model. # 2.1 Task In open domain question answering, the input q is a question string, and the output a is an answer string. Unlike reading comprehension, the source of evidence is a modeling choice rather than a part of the task definition. We compare the assump- tions made by variants of reading comprehension and question answering tasks in Table 1. 
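As a toy illustration of this decomposition, the snippet below scores derivations as S_retr(b, q) + S_read(b, s, q) and returns TEXT of the argmax; score_retrieval and score_reader are placeholders for the model components defined in Section 3, and evidence blocks are assumed to be token lists.

```python
def answer(question, blocks, score_retrieval, score_reader, top_k=5):
    """Sketch: S(b, s, q) = S_retr(b, q) + S_read(b, s, q); output TEXT of the argmax."""
    scored = sorted(((score_retrieval(b, question), b) for b in blocks),
                    key=lambda x: x[0], reverse=True)
    best_score, best = float("-inf"), None
    for s_retr, block in scored[:top_k]:
        for (start, end), s_read in score_reader(question, block):  # candidate spans
            if s_retr + s_read > best_score:
                best_score, best = s_retr + s_read, (block, start, end)
    block, start, end = best
    return " ".join(block[start:end + 1])  # TEXT(b, s): the answer string
```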
Evaluation is exact match with any of the ref- erence answer strings after minor normalization such as lowercasing, following evaluation scripts from DrQA (Chen et al., 2017). # 2.2 Formal Definitions a∗ = TEXT(argmax S(b, s, q)) b,s where TEXT(b, s) deterministically maps answer derivation (b, s) to an answer string. A major chal- lenge of any open domain question answering sys- tem is handling the scale. In our experiments on the English Wikipedia corpus, we consider over 13 million evidence blocks b, each with over 2000 possible answer spans s. # 2.3 Existing Pipelined Models In existing retrieval-based open domain question answering systems, a blackbox IR system first chooses a closed set of evidence candidates. For example, the score from the retriever component of DrQA (Chen et al., 2017) is defined as: 0 b € TOP(k, TF-IDF(q, b)) Sretr(D, q) = : —oo otherwise We introduce several general definitions of model components that subsume many retrieval-based open domain question answering systems. Most work following DrQA use the same candi- dates from TF-IDF and focus on reading compre- hension or re-ranking. The reading component Sretr (0, q) BERTB(0) BERTR(q, 0) M L P Sread (0, “The term”, q) M Sretr (1, q) [CLS]...The term ‘ZIP’ is an acronym for Zone Improvement Plan...[SEP] BERTB(1) Top K [CLS] What does the zip in zip code stand for? [SEP]...The term ‘ZIP’ is an acronym for Zone Improvement Plan...[SEP] M L P Sread (0, “Zone Improvement Plan”, q) Sread (0, ..., q) BERTQ(q) [CLS]What does the zip in zip code stand for?[SEP] [CLS]...group of ze- bras are referred to as a herd or dazzle...[SEP] BERTR(q, 2) M L P Sread (2, “ZIPs”, q) Sretr (2, q) BERTB(2) M [CLS]...ZIPs for other operating systems may be preceded by...[SEP] Top K [CLS] What does the zip in zip code stand for? [SEP]...ZIPs for other operating systems may be preceded by...[SEP] M L P P Sread (2, “operating systems”, q) Sread (2, ..., q) Sretr (..., q) BERTB(...) ... Figure 1: Overview of ORQA. A subset of all possible answer derivations given a question q is shown here. Retrieval scores Sretr (q, b) are computed via inner products between BERT-based encoders. Top-scoring evidence blocks are jointly encoded with the question, and span representations are scored with a multi-layer perceptron (MLP) to compute Sread (q, b, s). The final joint model score is Sretr (q, b) + Sread (q, b, s). Unlike previous work using IR systems for candidate proposal, we learn to retrieve from all of Wikipedia directly. Sread (b, s, q) is learned from gold answer deriva- tions, typically from the SQuAD (Rajpurkar et al., 2016) dataset, where the evidence text is given. In work that is more closely related to our ap- proach, the reader is learned entirely from weak supervision (Joshi et al., 2017; Dhingra et al., 2017; Dunn et al., 2017). Spurious ambiguities (see Table 2) are heuristically removed by the re- trieval system, and the cleaned results are treated as gold derivations. The BERT function takes one or two string in- puts (x1 and optionally x2) as arguments. It re- turns vectors corresponding to representations of the CLS pooling token or the input tokens. Retriever component In order for the retriever to be learnable, we define the retrieval score as the inner product of dense vector representations of the question q and the evidence block b. 
# 3 Open-Retrieval Question Answering (ORQA) hag= W,BERT Q(q)[CLS] hy = WpBERT 2p (b)[CLS] Sretr(D, q) = hyho We propose an end-to-end model where the re- triever and reader components are jointly learned, which we refer to as the Open-Retrieval Question Answering (ORQA) model. An important aspect of ORQA is its expressivity—it is capable of re- trieving any text in an open corpus, rather than be- ing limited to the closed set returned by a black- box IR system. An illustration of how ORQA scores answer derivations is presented in Figure 1. Following recent advances in transfer learn- ing, all scoring components are derived from BERT (Devlin et al., 2018), a bidirectional trans- former that has been pre-trained on unsupervised language-modeling data. We refer the reader to the original paper for details of the architecture. In this work, the relevant abstraction can be de- scribed by the following function: where Wq and Wb are matrices that project the BERT output into 128-dimensional vectors. Reader component The reader is a span-based variant of the reading comprehension model pro- posed in Devlin et al. (2018): hstart = BERTR(q, b)[START(s)] hend = BERTR(q, b)[END(s)] Sread (b, s, q) = MLP([hstart ; hend ]) Following Lee et al. (2016), a span is represented by the concatenation of its end points, which is scored by a multi-layer perceptron to enable start/end interaction. BERT(x1, [x2]) = {CLS : hCLS, 1 : h1, 2 : h2, ...} Inference & Learning Challenges The model described above is conceptually simple. However, inference and learning are challenging since (1) an Example Supportive Evidence Spurious Ambiguity Q: Who is credited with developing the XY coordinate plane? A: Ren´e Descartes ...invention of Cartesian coordinates by Ren´e Descartes revolutionized... ...Ren´e Descartes was born in La Haye en Touraine, France... Q: How many districts are in the state of Alabama? A: seven ...Alabama is currently divided into seven congressional districts, each represented by ... ...Alabama is one of seven states that levy a tax on food at the same rate as other goods... Table 2: Examples of spurious ambiguities arising from the use of weak supervision. Good evidence retrieval is needed to generate a meaningful learning signal. open evidence corpus presents an enormous search space (over 13 million evidence blocks), and (2) how to navigate this space is entirely latent, so standard teacher-forcing approaches do not apply. Latent-variable methods are also difficult to ap- ply naively due to the large number of spuriously ambiguous derivations. For example, as shown in Table 2, many irrelevant passages in Wikipedia would contain the answer string “seven.” We address these challenges by carefully initial- izing the retriever with unsupervised pre-training The pre-trained retriever allows (Section 4). us to (1) pre-encode all evidence blocks from Wikipedia, enabling dynamic yet fast top-k re- trieval during fine-tuning (Section 5), and (2) bias the retrieval away from spurious ambiguities and towards supportive evidence (Section 6). # Inverse Cloze Task The goal of our proposed pre-training procedure is for the retriever to solve an unsupervised task that closely resembles evidence retrieval for QA. Intuitively, useful evidence typically discusses entities, events, and relations from the question. It also contains extra information (the answer) that is not present in the question. 
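For concreteness, the scoring functions defined in Section 3 can be sketched as follows. This is an illustrative PyTorch sketch rather than the exact implementation; encode_question, encode_block, and encode_pair stand in for BERT_Q, BERT_B, and BERT_R (how they are instantiated is not shown), and the sizes follow the 768-dimensional BERT-base hidden states and 128-dimensional projections mentioned above.

```python
import torch
import torch.nn as nn

class ORQAScorer(nn.Module):
    """Sketch of S_retr and S_read from Section 3."""

    def __init__(self, encode_question, encode_block, encode_pair, hidden=768, proj=128):
        super().__init__()
        self.encode_question = encode_question  # q -> CLS vector of shape [hidden] (BERT_Q)
        self.encode_block = encode_block        # b -> CLS vector of shape [hidden] (BERT_B)
        self.encode_pair = encode_pair          # (q, b) -> per-token vectors (BERT_R)
        self.w_q = nn.Linear(hidden, proj, bias=False)   # W_q
        self.w_b = nn.Linear(hidden, proj, bias=False)   # W_b
        self.span_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

    def retrieval_score(self, question, block):
        h_q = self.w_q(self.encode_question(question))
        h_b = self.w_b(self.encode_block(block))
        return torch.dot(h_q, h_b)                       # S_retr(b, q)

    def reader_score(self, question, block, start, end):
        tokens = self.encode_pair(question, block)       # [num_tokens, hidden]
        span = torch.cat([tokens[start], tokens[end]])   # [h_start ; h_end]
        return self.span_mlp(span).squeeze()             # S_read(b, s, q)
```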
An unsupervised analog of a question-evidence pair is a sentence- context pair—the context of a sentence is semanti- cally relevant and can be used to infer information missing from the sentence. Following this intuition, we propose to pre-train our retrieval module with an Inverse Cloze Task (ICT). In the standard Cloze task (Taylor, 1953), the goal is to predict masked-out text based on its context. ICT instead requires predicting the inverse—given a sentence, predict its context (see Sretr (0, q) BERTB(0) [CLS]...Zebras have four gaits: walk, trot, canter and gallop. When chased, a zebra will zig-zag from ...[SEP] side to side... BERTQ(q) Sretr (1, q) BERTB(1) [CLS]They are generally slower than horses, but their great stamina helps them outrun predators.[SEP] [CLS]...Gagarin was further selected for an elite training group known as the Sochi Six...[SEP] Sretr (..., q) BERTB(...) ... Figure 2: Example of the Inverse Cloze Task (ICT), used for retrieval pre-training. A random sentence (pseudo-query) and its context (pseudo evidence text) are derived from the text snippet: “...Zebras have four gaits: walk, trot, canter and gallop. They are gener- ally slower than horses, but their great stamina helps them outrun predators. When chased, a zebra will zig- zag from side to side...” The objective is to select the true context among candidates in the batch. Figure 2). We use a discriminative objective that is analogous to downstream retrieval: exp(Sretr (b, q)) S> exp(Sretr(0’,@)) b’€BATCH Pier (b|q) = where q is a random sentence that is treated as a pseudo-question, b is the text surrounding q, and BATCH is the set of evidence blocks in the batch that are used as sampled negatives. An important aspect of ICT is that it requires learning more than word matching features, since the pseudo-question is not present in the evi- dence. For example, the pseudo-question in Fig- ure 2 never explicitly mentions “Zebras”, but the retriever must still be able to select the context that discusses Zebras. Being able to infer the seman- tics from under-specified language is what sets QA apart from traditional IR. to dissuade from learning to perform word the retriever matching—lexical overlap is ultimately a very useful feature for retrieval. Therefore, we only remove the sentence from its context in 90% of the examples, encouraging the model to learn both abstract representations when needed and low-level word matching features when available. ICT pre-training accomplishes two main goals: 1. Despite the mismatch between sentences dur- ing pre-training and questions during fine- tuning, we expect zero-shot evidence re- trieval performance to be sufficient for boot- strapping the latent-variable learning. 2. There is no such mismatch between pre- trained evidence blocks and downstream ev- idence blocks. We can expect the block en- coder BERTB(b) to work well without fur- ther training. Only the question encoder needs to be fine-tuned on downstream data. As we will see in the following section, these two properties are crucial for enabling computationally feasible inference and end-to-end learning. # 5 Inference Since fixed block encoders already provide a useful representation for retrieval, we can pre- compute all block encodings in the evidence cor- pus. As a result, the enormous set of evidence blocks does not need to be re-encoded while fine- tuning, and it can be pre-compiled into an index for fast maximum inner product search using ex- isting tools such as Locality Sensitive Hashing. 
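A brute-force stand-in for this pre-computed index is sketched below (exact inner product search in place of Locality Sensitive Hashing); block_encoder and question_encoder are assumed callables returning the 128-dimensional vectors h_b and h_q, not part of the released system.

```python
import torch

def build_index(blocks, block_encoder):
    """Pre-compute and stack the block encodings h_b once, before fine-tuning."""
    with torch.no_grad():
        return torch.stack([block_encoder(b) for b in blocks])  # [num_blocks, 128]

def retrieve_top_k(question, question_encoder, index, k=5):
    """Exact maximum inner product search; LSH or similar would replace this at scale."""
    h_q = question_encoder(question)              # [128], from the fine-tuned encoder
    scores = index @ h_q                          # S_retr(b, q) for every block
    top = torch.topk(scores, k)
    return top.indices.tolist(), top.values       # block ids and their retrieval scores
```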
With the pre-compiled index, inference follows a standard beam-search procedure. We retrieve the top-k evidence blocks and only compute the ex- pensive reader scores for those k blocks. While we only consider the top-k evidence blocks dur- ing a single inference step, this set dynamically changes during training since the question encoder is fine-tuned according to the weakly supervised QA data, as discussed in the following section. # 6 Learning Learning is relatively straightforward, since ICT should provide non-trivial zero-shot retrieval. We first define a distribution over answer derivations: exp(S(b, s,q)) S- S- exp(S(0', 8’, q)) b/EToP(k) s’Eb! P(b, s|q) = where TOP(k) denotes the top k retrieved blocks based on Sretr . We use k = 5 in our experiments. Given a gold answer string a, we find all (pos- sibly spuriously) correct derivations in the beam, and optimize their marginal log-likelihood: Laun(q,a) = —log Y> > P'(b, sIq) beToP(k) s€b, a=TEXT(s) where a = TEXT(s) indicates whether the answer string a matches exactly the span s. To encourage more aggressive learning, we also include an early update, where we consider a larger set of c evidence blocks but only update the retrieval score, which is cheap to compute: exp(Sreir (d, q) S- exp(Sretr(B', q)) b/E€ToP(c) Learty (4; a) = —log S- Prarty(b|q) bETOP(c), a€TEXT(b) Pearly (bq) where a ∈ TEXT(b) indicates whether answer string a appears in evidence block b. We use c = 5000 in our experiments. The final loss includes both updates: L(q, a) = Learly(q, a) + Lfull(q, a) If no matching answers are found at all, then the example is discarded. While we would expect al- most all examples to be discarded with random ini- tialization, we discard less than 10% of examples in practice due to ICT pre-training. As previously mentioned, we fine-tune all pa- rameters except those in the evidence block en- coder. Since the query encoder is trainable, the model can potentially learn to retrieve any evi- dence block. This expressivity is a crucial differ- ence from blackbox IR systems, where recall can only be improved by retrieving more evidence. # 7 Experimental Setup # 7.1 Open Domain QA Datasets We train and evaluate on data from 5 existing ques- tion answering or reading comprehension datasets. Not all of them are intended as open domain QA datasets in their original form, so we convert them to open formats, following DrQA (Chen et al., 2017). Each example in the open version of the datasets consists of a single question string and a set of reference answer strings. Natural Questions contains question from ag- gregated queries to Google Search (Kwiatkowski et al., 2019). To gather an open version of this dataset, we only keep questions with short answers and discard the given evidence document. An- swers with many tokens often resemble extractive snippets rather than canonical answers, so we dis- card answers with more than 5 tokens. Dataset Train Dev Test Example Question Example Answer Natural Questions WebQuestions CuratedTrec TriviaQA SQuAD 79168 3417 1353 78785 78713 8757 361 133 8837 8886 3610 What does the zip in zip code stand for? 2032 What airport is closer to downtown Houston? 694 What metal has the highest melting point? 11313 What did L. Fran Baum, author of The Wonder- ful Wizard of Oz, call his home in Hollywood? 10570 Other than the Automobile Club of Southern California, what other AAA Auto Club chose to simplify the divide? Zone Improvement Plan William P. 
Hobby Airport Tungsten Ozcot California State Automo- bile Association Table 3: Statistics and examples for the datasets that we evaluate on. There are slightly differences from the original datasets as described in Section 7.1, since not all of them were intended to be used in the open setting. WebQuestions contains questions that were sampled from the Google Suggest API (Berant et al., 2013). The answers are annotated with re- spect to Freebase, but we only keep the string rep- resentation of the entities. CuratedTrec is a corpus of question-answer pairs derived from TREC QA data curated by Baudis and Sediv´y (2015). The questions come from various sources of real queries, such as MSNSearch or AskJeeves logs, where the ques- tion askers do not observe any evidence docu- ments (Voorhees, 2001). TriviaQA is a collection of trivia question- answer pairs that were scraped from the web (Joshi et al., 2017). We use their unfiltered set and discard their distantly supervised evidence. SQuAD was designed to be a reading com- prehension dataset rather than an open domain QA dataset (Rajpurkar et al., 2016). Answer spans were selected from a Wikipedia paragraph, and the questions were written by annota- tors who were instructed to ask questions that are answered by a given answer in a given context. In the Natural Questions, WebQuestions, and CuratedTrec, the question askers do not already know the answer. This accurately reflects a distri- bution of genuine information-seeking questions. However, annotators must separately find correct answers, which requires assistance from automatic tools and can introduce a moderate bias towards results from the tool. In TriviaQA and SQuAD, automatic tools are not needed since the questions are written with known answers in mind. However, this introduces another set of biases that are arguably more prob- lematic. Question writing is not motivated by an information need. This often results in many hints in the question that would not be present in natu- rally occurring questions, as shown in the exam- ples in Table 3. This is particularly problematic for SQuAD, where the question askers are also prompted with a specific piece of evidence for the answer, leading to artificially large lexical overlap between the question and evidence. Note that these are simply properties of the datasets rather than actionable criticisms—such data collection methods are necessary to scale up, and it is unclear how one could collect a truly un- biased dataset without impractical costs. On datasets where a development set does not exist, we randomly hold out 10% of the training data for development. On datasets where the test set is hidden, we also randomly hold out 10% of the training data for development, and use the original development set for testing (following DrQA). A summary of dataset statistics and examples are shown in Table 3. # Implementation Details We mainly evaluate in the setting where only question-answer string pairs are available for su- pervision. See Section 9 for head-to-head com- parisons with the DrQA setting that uses the same evidence corpus and the same type of supervision. # 7.2 Dataset Biases Evaluating on this diverse set of question-answer pairs is crucial, because all existing datasets have inherent biases that are problematic for open do- main QA systems with learned retrieval. These biases are summarized in Table 4. 
Evidence Corpus We English Wikipedia snapshot from December 20, 2018 as the evidence corpus.1 The corpus is greedily 1We deviate from DrQA’s 2016 Wikipedia evidence cor- pus because the original snapshot is no longer publicly avail- able. The 12-20-2018 snapshot is available at https:// archive.org/download/enwiki-20181220. Dataset Question Question Tool- writer writer assisted knows knows answer answer evidence Natural Questions ov WebQuestions ov CuratedTrec v TriviaQA v SQuAD v v Table 4: A breakdown of biases in existing QA datasets. These biases are associated with either the question or the answer. split into chunks of at most 288 wordpieces based on BERT’s tokenizer, while preserving sentence boundaries. This results in just over 13 million evidence blocks. The title of the document is included in the block encoder. Hyperparameters In all uses of BERT (both the retriever and reader), we initialize from the uncased base model, which consists of 12 trans- former layers with a hidden size of 768. As mentioned in Section 3, the retrieval repre- sentations, hq and hb, have 128 dimensions. The small hidden size was chosen so that the final QA model can comfortably run on a single machine. We use the default optimizer from BERT. When pre-training the retriever with ICT, we use a learning rate of 10−4 and a batch size of 4096 on Google Cloud TPUs for 100k steps. When fine- tuning, we use a learning rate of 10−5 and a batch size of 1 on a single machine with a 12GB GPU. Answer spans are limited to 10 tokens. We per- form 2 epochs of fine-tuning for the larger datasets (Natural Questions, TriviaQA, and SQuAD), and 20 epochs for the smaller datasets (WebQuestions and CuratedTrec). # 8 Main Results # 8.1 Baselines We compare against other retrieval methods by us- ing alternate retrieval scores Sretr (b, q), but with the same reader. BM25 A de-facto state-of-the-art unsupervised retrieval method is BM25 (Robertson et al., 2009). It has been shown to be robust for both traditional information retrieval tasks, and evidence retrieval for question answering (Yang et al., 2017).2 Since 2We also include the title, which was slightly beneficial. Model BM25 NNLM ELMO ORQA +BERT +BERT +BERT v e D Natural Questions 24.8 20.8 WebQuestions 27.1 CuratedTrec 3.2 9.1 6.0 3.6 17.7 8.3 31.3 38.5 36.8 TriviaQA SQuAD 47.2 28.1 7.3 2.8 6.0 1.9 45.1 26.5 t s e T Natural Questions 26.5 17.7 WebQuestions 21.3 CuratedTrec 4.0 7.3 4.5 4.7 15.6 6.8 33.3 36.4 30.1 TriviaQA SQuAD 47.1 33.2 7.1 3.2 5.7 2.3 45.0 20.2 Table 5: Main results: End-to-end exact match for open-domain question answering from question- answer pairs only. Datasets where question askers know the answer behave differently from datasets where they do not. BM25 is not trainable, the retrieved evidence con- sidered during fine-tuning is static. Inspired by BERTserini (Yang et al., 2019), the final score is a learned weighted sum of the BM25 and reader score. Our implementation is based on Lucene.3 Language Models While unsupervised neural retrieval is notoriously difficult to improve over traditional IR (Lin, 2019), we include them as baselines for comparison. We experiment with unsupervised pooled representations from neural language models (LM), which has been shown to be state-of-the-art unsupervised representa- tions (Perone et al., 2018). 
We compare with two widely-used 128-dimensional representations: (1) NNLM, context-independent embeddings from a feed-forward LMs (Bengio et al., 2003),4 and (2) ELMO (small), a context-dependent bidirectional LSTM (Peters et al., 2018).5 As with ICT, we use the alternate encoders to pre-compute the encoded evidence blocks hb and to initialize the question encoding hq, which is fine-tuned. Based on existing IR literature and the intuition that LMs do not explicitly optimize for retrieval, we do not expect these to be strong base- lines, but they demonstrate the difficulty of encod- ing blocks of text into 128 dimensions. # 8.2 Results The main results are show in Table 5. The first result to note is that BM25 is a powerful re- trieval system. Word matching is important, and # 3https://lucene.apache.org/ 4https://tfhub.dev/google/nnlm-en-dim128/1 5https://allennlp.org/elmo Model Evidence Retrieved SQuAD DRQA DRQA (DS) DRQA (DS + MTL) 5 documents 5 documents 5 documents 27.1 28.4 29.8 BERTSERINI BERTSERINI BERTSERINI 5 documents 29 paragraphs 100 paragraphs 19.1 36.6 38.6 BM25 + BERT (gold deriv.) 5 blocks 34.7 Table 6: Analysis: Results comparable to previous work in the strongly supervised setting, where models have access to gold derivations from SQuAD. Differ- ent systems segment Wikipedia differently. There are 5.1M documents, 29.5M paragraphs, and 12.1M blocks in the December 12, 2016 Wikipedia snapshot. dense vector representations derived from lan- guage models do not readily capture this. We also show that on questions that were de- rived from real users who are seeking informa- tion (Natural Questions, WebQuestions, and Cu- ratedTrec), our ICT pre-trained retriever outper- forms BM25 by a large marge—6 to 19 points in exact match depending on the dataset. However, in datasets where the question askers already know the answer, i.e. SQuAD and Triv- iaQA, the retrieval problem resembles traditional IR. In this setting, a highly compressed 128- dimensional vector cannot match BM25’s ability to precisely represent every word in the evidence. The notable drop between development and test accuracy for SQuAD is a reflection of an artifact in the dataset—its 100k questions are derived from only 536 documents. Therefore, good retrieval tar- gets are highly correlated between training exam- ples, violating the IID assumption, and making it unsuitable for learned retrieval. We strongly sug- gest that those who are interested in end-to-end open-domain QA models no longer train and eval- uate with SQuAD for this reason. # 9 Analysis # 9.1 Strongly supervised comparison To verify that our BM25 baseline is indeed state of the art, we also provide direct comparisons with DrQA’s setup, where systems have access to gold answer derivations from SQuAD (Rajpurkar et al., 2016). While many systems have been proposed following DrQA’s original setting, we compare only to the original system and the best system that s n o i t s e u Q l a r u t a N h c t a M t c a x E 35 30 25 20 ORQA BM25 + BERT 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 ICT masking rate Figure 3: Analysis: Performance on our open version of the Natural Questions dev set with various mask- ing rates for the ICT pre-training. Too much masking prevents the model from learning to exploit exact n- gram overlap. Too little masking makes language un- derstanding unnecessary. we are aware of—BERTserini (Yang et al., 2019). DrQA’s reader is DocReader (Chen et al., 2017), and they use TF-IDF to retrieve the top k documents. 
They also include distant supervision based on TF-IDF retrieval. BERTserini’s reader is derived from base BERT (much like our reader), and they use BM25 to retrieve the top k paragraphs (much like our BM25 baseline). A major differ- ence is that BERTserini uses true paragraphs from Wikipedia rather than arbitrary blocks, resulting in more evidence blocks due to uneven lengths. For fair comparison with these strongly su- pervised systems, we pre-train the reader on SQuAD data.6 In Table 6, our BM25 baseline, which retrieves 5 evidence blocks, greatly outper- forms 5-document BERTserini and is close to 29- paragraph BERTserini. # 9.2 Masking Rate in the Inverse Cloze Task The pseudo-query is masked from the evidence block 90% of the time, motivated by intuition in Section 4. We empirically verify our intuitions in Figure 3 by varying the masking rate, and com- paring results on our open version of the Natural Questions development set. If we always mask the pseudo-query, the re- triever never learns that n-gram overlap is a pow- erful retrieval signal, losing almost 10 points in end-to-end performance. If we never mask the pseudo-query, the problem is reduced to memo- rization and does not generalize well to question answering. The latter loses 6 points in end-to-end performance, which—perhaps not surprisingly— produces near-identical results to BM25. 6We use DrQA’s December 12, 2016 snapshot of Wikipedia for an apples-to-apples comparison. Example ORQA BM25 + BERT Q: what is the new orleans saints symbol called A: fleur-de-lis ...The team’s primary colors are old gold and black; their logo is a simplified fleur-de-lis. They played their home games in Tulane Stadium through the 1974 NFL season.... ...the SkyDome was owned by Sportsco at the time... the sale of the New Orleans Saints with team owner Tom Benson... the Saints became a symbol for that community... Q: how many senators per state in the us A: two ...powers of the Senate are established in Article One of the U.S. Constitution. Each U.S. state is represented by two senators... ...The Georgia Constitution mandates a maximum of 56 senators, elected from single-member districts... Q: when was germany given a permanent seat on the council of the league of nations A: 1926 ...Under the Weimar Republic, Germany (in fact the “Deutsches Reich” or German Empire) was admitted to the League of Nations through a resolution passed on September 8 1926. An additional 15 countries joined later... ...the accession of the German Democratic Republic to the Federal Republic of Germany, it was effective on 3 October 1990...Germany has been elected as a non-permanent member of the United Nations Security Council... Q: when was diary of a wimpy kid double down published A: November 1, 2016 ...“Diary of a Wimpy Kid” first appeared on FunBrain in 2004, where it was read 20 million times. The abridged hardcover adaptation was released on April 1, 2007... Diary of a Wimpy Kid: Double Down is the eleventh book in the ”Diary of a Wimpy Kid” series by Jeff Kinney... The book was published on November 1, 2016... Table 7: Analysis: Example predictions on our open version of the Natural Questions dev set. We show the highest scoring derivation, consisting of the evidence block and the predicted answer in bold. ORQA is more robust at separating semantically distinct text that have high lexical overlap. However, the limitation of the 128-dimensional vectors is that extremely specific concepts are less precisely represented. 
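To make the ICT masking rate of Section 9.2 concrete, here is a sketch of how pseudo-query/evidence pairs might be generated and scored with the in-batch objective of Section 4; the sentence-handling details and function names are illustrative assumptions, not the actual pre-training pipeline.

```python
import random
import torch
import torch.nn.functional as F

def make_ict_example(sentences, i, mask_rate=0.9):
    """Pseudo-query = sentence i; pseudo-evidence = its surrounding context.
    With probability mask_rate the sentence is removed from the context."""
    query = sentences[i]
    if random.random() < mask_rate:
        context = sentences[:i] + sentences[i + 1:]
    else:
        context = sentences            # keep the sentence: word matching stays useful
    return query, " ".join(context)

def ict_loss(query_vecs, context_vecs):
    """In-batch softmax: each pseudo-query must pick out its own context.
    query_vecs, context_vecs: [batch, dim] encodings from the two encoders."""
    scores = query_vecs @ context_vecs.t()          # S_retr for all pairs in the batch
    labels = torch.arange(scores.shape[0])
    return F.cross_entropy(scores, labels)
```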
# 9.3 Example Predictions For a more intuitive understanding of the improve- ments from ORQA, we compare its predictions with baseline predictions in Table 7. We find that ORQA is more robust at separating semantically distinct text with high lexical overlap, as shown in the first three examples. However, it is ex- pected that there are limits to how much informa- tion can be compressed into 128-dimensional vec- tors. The last example shows that ORQA has trou- ble precisely representing extremely specific con- cepts that sparse representations can cleanly sepa- rate. These errors indicate that a hybrid approach would be promising future work. # 10 Related Work are needed to find positive learning signal while avoiding spurious ambiguities. While we motivate ICT from first principles as an unsupervised proxy for evidence retrieval, it is closely related to existing representation learning literature. ICT can be considered a generalization of the skip-gram objective (Mikolov et al., 2013), with a coarser granularity, deep architecture, and in-batch negative sampling from Logeswaran and Lee (2018). Consulting external evidence sources with la- tent retrieval has also been explored in information extraction (Narasimhan et al., 2016). In compari- son, we are able to learn a much more expressive retriever due to the strong inductive biases from ICT pre-training. Recent progress has been made towards improving evidence retrieval (Wang et al., 2018; Kratzwald and Feuerriegel, 2018; Lee et al., 2018; Das et al., 2019) by learning to aggregate from multiple re- trieval steps. They re-rank evidence candidates from a closed set, and we aim to integrate these complementary approaches in future work. Our approach is also reminiscent of weakly su- pervised semantic parsing (Clarke et al., 2010; Liang et al., 2013; Artzi and Zettlemoyer, 2013; Fader et al., 2014; Berant et al., 2013; Kwiatkowski et al., 2013), with which we share similar challenges—(1) inference and learning are tightly coupled, (2) latent derivations must be discovered, and (3) strong inductive biases # 11 Conclusion We presented ORQA, the first open domain ques- tion answering system where the retriever and reader are jointly learned end-to-end using only question-answer pairs and without any IR system. This is made possible by pre-training the retriever using an Inverse Cloze Task (ICT). Experiments show that learning to retrieve is crucial when the questions reflect an information need, i.e. the question writers do not already know the answer. # Acknowledgements We thank the Google AI Language Team for valu- able suggestions and feedback. # References Yoav Artzi and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping instructions to actions. Transactions of the Associa- tion for Computational Linguistics, 1(1):49–62. Petr Baudis and Jan Sediv´y. 2015. Modeling of the question answering task in the yodaqa system. In CLEF. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3(Feb):1137–1155. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533–1544. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- In Proceedings of the 55th An- domain questions. 
nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1870–1879. James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from In Proceedings of the four- the world’s response. teenth conference on computational natural lan- guage learning, pages 18–27. Association for Com- putational Linguistics. Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. 2019. Multi-step retriever- reader interaction for scalable open-domain question In International Conference on Learn- answering. ing Representations. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question an- arXiv preprint swering by search and reading. arXiv:1707.03904. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and In Proceedings of the extracted knowledge bases. 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1156– 1165. ACM. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), volume 1, pages 1601–1611. Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive document retrieval for deep question an- swering. arXiv preprint arXiv:1808.06528. Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1545–1556. Jennimaria Palomaki, Olivia Rhinehart, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, et al. 2019. Natural ques- tions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics. Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. Ranking paragraphs for improving answer recall in open-domain ques- tion answering. arXiv preprint arXiv:1810.00494. Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for ex- arXiv preprint tractive question answering. arXiv:1611.01436. Percy Liang, Michael I Jordan, and Dan Klein. 2013. Learning dependency-based compositional seman- tics. Computational Linguistics, 39(2):389–446. Jimmy Lin. 2019. The neural hype and comparisons against weak baselines. In ACM SIGIR Forum. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence represen- tations. arXiv preprint arXiv:1803.02893. Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- Efficient estimation of word arXiv preprint frey Dean. 2013. representations in vector space. arXiv:1301.3781. Karthik Narasimhan, Adam Yala, and Regina Barzilay. 2016. Improving information extraction by acquir- ing external evidence with reinforcement learning. arXiv preprint arXiv:1603.07954. 
Christian S Perone, Roberto Silveira, and Thomas S Paula. 2018. Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 2383–2392. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and be- yond. Foundations and Trends in Information Re- trieval, 3(4):333–389. Amit Singh. 2012. Entity based q&a retrieval. In Pro- ceedings of the 2012 Joint conference on empirical methods in natural language processing and com- putational natural language learning, pages 1266– 1277. Association for Computational Linguistics. Wilson L Taylor. 1953. “Cloze procedure”: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. Ellen M Voorhees. 2001. Overview of the trec 2001 question answering track. In In Proceedings of the Tenth Text REtrieval Conference (TREC. Citeseer. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018. R 3: Reinforced ranker-reader for open-domain question In Thirty-Second AAAI Conference on answering. Artificial Intelligence. Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1253–1256. ACM. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718.
{ "id": "1806.06259" }
1906.00091
Deep Learning Recommendation Model for Personalization and Recommendation Systems
With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 frameworks. In addition, we design a specialized parallelization scheme utilizing model parallelism on the embedding tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design.
http://arxiv.org/pdf/1906.00091
Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G. Azzolini, Dmytro Dzhulgakov, Andrey Mallevich, Ilia Cherniavskii, Yinghai Lu, Raghuraman Krishnamoorthi, Ansha Yu, Volodymyr Kondratenko, Stephanie Pereira, Xianjie Chen, Wenlin Chen, Vijay Rao, Bill Jia, Liang Xiong, Misha Smelyanskiy
cs.IR, cs.LG, 68T05, I.2.6; I.5.0; H.3.3; H.3.4
10 pages, 6 figures
null
cs.IR
20190531
20190531
9 1 0 2 y a M 1 3 ] R I . s c [ 1 v 1 9 0 0 0 . 6 0 9 1 : v i X r a # Deep Learning Recommendation Model for Personalization and Recommendation Systems Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi∗, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta†, Carole-Jean Wu, Alisson G. Azzolini, Dmytro Dzhulgakov, Andrey Mallevich, Ilia Cherniavskii, Yinghai Lu, Raghuraman Krishnamoorthi, Ansha Yu, Volodymyr Kondratenko, Stephanie Pereira, Xianjie Chen, Wenlin Chen, Vijay Rao, Bill Jia, Liang Xiong and Misha Smelyanskiy Facebook, 1 Hacker Way, Menlo Park, CA 94065 {mnaumov,dheevatsa}@fb.com # Abstract With the advent of deep learning, neural network-based recommendation models have emerged as an important tool for tackling personalization and recommendation tasks. These networks differ significantly from other deep learning networks due to their need to handle categorical features and are not well studied or understood. In this paper, we develop a state-of-the-art deep learning recommendation model (DLRM) and provide its implementation in both PyTorch and Caffe2 frameworks. In addition, we design a specialized parallelization scheme utilizing model paral- lelism on the embedding tables to mitigate memory constraints while exploiting data parallelism to scale-out compute from the fully-connected layers. We compare DLRM against existing recommendation models and characterize its performance on the Big Basin AI platform, demonstrating its usefulness as a benchmark for future algorithmic experimentation and system co-design. # Introduction Personalization and recommendation systems are currently deployed for a variety of tasks at large internet companies, including ad click-through rate (CTR) prediction and rankings. Although these methods have had long histories, these approaches have only recently embraced neural networks. Two primary perspectives contributed towards the architectural design of deep learning models for personalization and recommendation. The first comes from the view of recommendation systems. These systems initially employed content filtering where a set of experts classified products into categories, while users selected their preferred categories and were matched based on their preferences [22]. The field subsequently evolved to use collaborative filtering, where recommendations are based on past user behaviors, such as prior ratings given to products. Neighborhood methods [21] that provide recommendations by grouping users and products together and latent factor methods that characterize users and products by certain implicit factors via matrix factorization techniques [9, 17] were later deployed with success. The second view comes from predictive analytics, which relies on statistical models to classify or predict the probability of events based on the given data [5]. Predictive models shifted from using simple models such as linear and logistic regression [26] to models that incorporate deep networks. In order to process categorical data, these models adopted the use of embeddings, which transform the one- and multi-hot vectors into dense representations in an abstract space [20]. This abstract space may be interpreted as the space of the latent factors found by recommendation systems. ∗Northwestern University, †Harvard University, work done while at Facebook. Preprint. Under review. In this paper, we introduce a personalization model that was conceived by the union of the two perspectives described above. 
The model uses embeddings to process sparse features that represent categorical data and a multilayer perceptron (MLP) to process dense features, then interacts these features explicitly using the statistical techniques proposed in [24]. Finally, it finds the event probability by post-processing the interactions with another MLP. We refer to this model as a deep learning recommendation model (DLRM); see Fig. 1. A PyTorch and Caffe2 implementation of this model will be released for testing and experimentation with the publication of this manuscript. # 2 Model Design and Architecture In this section, we will describe the design of DLRM. We will begin with the high level components of the network and explain how and why they have been assembled together in a particular way, with implications for future model design, then characterize the low level operators and primitives that make up the model, with implications for future hardware and system design. # 2.1 Components of DLRM The high-level components of the DLRM can be more easily understood by reviewing early mod- els. We will avoid the full scientific literature review and focus instead on the four techniques used in early models that can be interpreted as salient high-level components of the DLRM. # 2.1.1 Embeddings In order to handle categorical data, embeddings map each category to a dense representation in an abstract space. In particular, each embedding lookup may be interpreted as using a one-hot vector ei (with the i-th position being 1 while others are 0, where index i corresponds to i-th category) to obtain the corresponding row vector of the embedding table W ∈ Rm×d as follows i = eT NNs si, L Interactions \ \ NNs > J>o> : x / : : Embedding Lookup dense features sparse features ~ y= # Figure 1: A deep learning recommendation model In more complex scenarios, an embedding can also represent a weighted combination of multiple items, with a multi-hot vector of weights a? = [0,...,ai,,...,@i,,.-.,0], with elements a; 4 0 for i = 1,...,7, and 0 everywhere else, where 71, ..., i, index the corresponding items. Note that a mini-batch of t embedding lookups can hence be written as S = AT W (2) where sparse matrix A = [a1, ..., at] [20]. DLRMs will utilize embedding tables for mapping categorical features to dense representations. However, even after these embeddings are meaningfully devised, how are they to be exploited to produce accurate predictions? To answer this, we return to latent factor methods. # 2.1.2 Matrix Factorization Recall that in the typical formulation of the recommendation problem, we are given a set S of users that have rated some products. We would like to represent the i-th product by a vector wi ∈ Rd for i = 1, ..., n and j-th user by a vector vj ∈ Rd for j = 1, ..., m to find all the ratings, where n and m denote the total number of products and users, respectively. More rigorously, the set S consists of tuples (i, j) indexing when the i-th product has been rated by the j-th user. The matrix factorization approach solves this problem by minimizing min rij − wT i vj (i,j)∈S (3) 2 where rij ∈ R is the rating of the i-th product by the j-th user for i = 1, ..., m and j = 1, ..., n. Then, letting W T = [w1, ..., wm] and V T = [v1, ..., vn], we may approximate the full matrix of ratings R = [rij] as the matrix product R ≈ W V T . Note that W and V may be interpreted as two embedding tables, where each row represents a user/product in a latent factor space2 [17]. 
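The multi-hot lookup described in Section 2.1.1 maps directly onto the pooled embedding operators that the paper lists for each framework (nn.EmbeddingBag in PyTorch, SparseLengthsSum in Caffe2). The sketch below, with arbitrary table sizes, performs a mini-batch of pooled lookups in the indices/offsets format and checks that a one-hot lookup is simply a row of the embedding table.

```python
import torch
import torch.nn as nn

# One table with m = 1000 categories embedded into d = 64 dimensions.
table = nn.EmbeddingBag(num_embeddings=1000, embedding_dim=64, mode="sum")

# Three lookups with indices {0, 2}, {0, 1, 5} and {3}, expressed in the
# CSR-like indices/offsets format used for mini-batches of sparse features.
indices = torch.tensor([0, 2, 0, 1, 5, 3])
offsets = torch.tensor([0, 2, 5])
pooled = table(indices, offsets)   # shape (3, 64): one pooled vector per lookup

# A single one-hot lookup is just a row of the weight matrix, w_i^T = e_i^T W.
e = torch.zeros(1000)
e[7] = 1.0
assert torch.allclose(e @ table.weight, table.weight[7])
```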
The dot product of these embedding vectors yields a meaningful prediction of the subsequent rating, a key observation to the design of factorization machines and DLRM. # 2.1.3 Factorization Machine In classification problems, we want to define a prediction function φ : Rn → T from an input datapoint x ∈ Rn to a target label y ∈ T . As an example, we can predict the click-through rate by defining T = {+1, −1} with +1 denoting the presence of a click and −1 as the absence of a click. Factorization machines (FM) incorporate second-order interactions into a linear model with categori- cal data by defining a model of the form §=b+ wa +a! upper(VV" )x (4) where V € R"*4, w € R”, and b € R are the parameters with d < n, and upper selects the strictly upper triangular part of the matrix [24]. FMs are notably distinct from support vector machines (SVMs) with polynomial kernels [4] because they factorize the second-order interaction matrix into its latent factors (or embedding vectors) as in matrix factorization, which more effectively handles sparse data. This significantly reduces the complexity of the second-order interactions by only capturing interactions between pairs of distinct embedding vectors, yielding linear computational complexity. # 2.1.4 Multilayer Perceptrons Simultaneously, much recent success in machine learning has been due to the rise of deep learning. The most fundamental model of these is the multilayer perceptron (MLP), a prediction function composed of an interleaving sequence of fully connected (FC) layers and an activation function σ : R → R applied componentwise as shown below ˆy = Wkσ(Wk−1σ(...σ(W1x + b1)...) + bk−1) + bk (5) where weight matrix Wl ∈ Rnl×nl−1, bias bl ∈ Rnl for layer l = 1, ..., k. These methods have been used to capture more complex interactions. It has been shown, for example, that given enough parameters, MLPs with sufficient depth and width can fit data to arbitrary precision [1]. Variations of these methods have been widely used in various applications including computer vision and natural language processing. One specific case, Neural Collaborative Filtering (NCF) [15, 25] used as part of the MLPerf benchmark [19], uses an MLP rather than dot product to compute interactions between embeddings in matrix factorization. # 2.2 DLRM Architecture So far, we have described different models used in recommendation systems and predictive analytics. Let us now combine their intuitions to build a state-of-the-art personalization model. Let the users and products be described by many continuous and categorical features. To process the categorical features, each categorical feature will be represented by an embedding vector of the same dimension, generalizing the concept of latent factors used in matrix factorization (3). To handle the continuous features, the continuous features will be transformed by an MLP (which we call the bottom or dense MLP) which will yield a dense representation of the same length as the embedding vectors (5). We will compute second-order interaction of different features explicitly, following the intuition for handling sparse data provided in FMs (4), optionally passing them through MLPs. This is done by taking the dot product between all pairs of embedding vectors and processed dense features. These dot products are concatenated with the original processed dense features and post-processed with another MLP (the top or output MLP) (5), and fed into a sigmoid function to give a probability. 
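A minimal, self-contained sketch of this forward pass is shown below: a bottom MLP for the dense features, one embedding per categorical feature, pairwise dot-product interactions, and a top MLP ending in a sigmoid. The layer sizes, the class name, and the use of single-index nn.Embedding lookups (instead of pooled multi-hot lookups) are illustrative; this is not the released implementation.

```python
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    """Minimal sketch of the DLRM forward pass described above (sizes are illustrative)."""
    def __init__(self, num_dense=4, table_sizes=(100, 100, 100), d=8):
        super().__init__()
        self.tables = nn.ModuleList([nn.Embedding(n, d) for n in table_sizes])
        self.bottom = nn.Sequential(nn.Linear(num_dense, 16), nn.ReLU(),
                                    nn.Linear(16, d), nn.ReLU())
        k = len(table_sizes) + 1                      # dense vector plus one vector per table
        num_pairs = k * (k - 1) // 2                  # dot products between distinct pairs
        self.top = nn.Sequential(nn.Linear(d + num_pairs, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, dense, sparse):
        x = self.bottom(dense)                                        # (B, d)
        embs = [t(sparse[:, j]) for j, t in enumerate(self.tables)]   # each (B, d)
        feats = torch.stack([x] + embs, dim=1)                        # (B, k, d)
        inter = torch.bmm(feats, feats.transpose(1, 2))               # (B, k, k) pairwise dots
        i, j = torch.triu_indices(feats.size(1), feats.size(1), offset=1)
        z = torch.cat([x, inter[:, i, j]], dim=1)                     # dense + upper-triangular dots
        return torch.sigmoid(self.top(z)).squeeze(1)                  # click probability

model = TinyDLRM()
p = model(torch.randn(32, 4), torch.randint(0, 100, (32, 3)))          # (32,) probabilities
```

Note that only the upper-triangular part of the interaction matrix is kept, so the number of interaction features grows with the number of distinct feature pairs rather than with the embedding dimension.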
2 This problem is different from low-rank approximation, which can be solved by SVD [11], because not all entries of matrix R are known. 3 We refer to the resulting model as DLRM, shown in Fig. 1. We show some of the operators used in DLRM in PyTorch [23] and Caffe2 [8] frameworks in Table 1. PyTorch Caffe2 Embedding nn.EmbeddingBag SparseLengthSum Interactions matmul/bmm BatchMatMul Table 1: DLRM operators by framework MLP nn.Linear/addmm FC Loss nn.CrossEntropyLoss CrossEntropy # 2.3 Comparison with Prior Models Many deep learning-based recommendation models [3, 13, 27, 18, 28, 29] use similar underlying ideas to generate higher-order terms to handle sparse features. Wide and Deep, Deep and Cross, DeepFM, and xDeepFM networks, for example, design specialized networks to systematically construct higher-order interactions. These networks then sum the results from both their specialized model and an MLP, passing this through a linear layer and sigmoid activation to yield a final probability. DLRM specifically interacts embeddings in a structured way that mimics factorization machines to significantly reduce the dimensionality of the model by only considering cross-terms produced by the dot-product between pairs of embeddings in the final MLP. We argue that higher- order interactions beyond second-order found in other networks may not necessarily be worth the additional computational/memory cost. A key difference between DLRM and other networks is in how these networks treat embedded feature vectors and their cross-terms. In particular, DLRM (and xDeepFM [18]) interpret each feature vector as a single unit representing a single category, whereas networks like Deep and Cross treat each element in the feature vector as a new unit that should yield different cross-terms. Hence, Deep and Cross networks will produce cross-terms not only between elements from different feature vectors as in DLRM via the dot product, but also produce cross-terms between elements within the same feature vector, resulting in higher dimensionality. # 3 Parallelism Modern personalization and recommendation systems require large and complex models to capitalize on vast amounts of data. DLRMs particularly contain a very large number of parameters, up to multiple orders of magnitude more than other common deep learning models like convolutional neural networks (CNN), transformer and recurrent networks (RNN), and generative networks (GAN). This results in training times up to several weeks or more. Hence, it is important to parallelize these models efficiently in order to solve these problems at practical scales. As described in the previous section, DLRMs process both categorical features (with embeddings) and continuous features (with the bottom MLP) in a coupled manner. Embeddings contribute the majority of the parameters, with several tables each requiring in excess of multiple GBs of memory, making DLRM memory-capacity and bandwidth intensive. The size of the embeddings makes it prohibitive to use data parallelism since it requires replicating large embeddings on every device. In many cases, this memory constraint necessitates the distribution of the model across multiple devices to be able satisfy memory capacity requirements. On the other hand, the MLP parameters are smaller in memory but translate into sizeable amounts of compute. Hence, data-parallelism is preferred for MLPs since this enables concurrent processing of the samples on different devices and only requires communication when accumulating updates. 
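A rough sketch of this split is shown below: each large embedding table is pinned to its own device (model parallelism), while the MLPs would be replicated on every device and synchronized with an allreduce, for example through nn.parallel.DistributedDataParallel (data parallelism). The table sizes, device assignment, and helper function are hypothetical; the actual system described in the paper relies on a custom implementation of the combined scheme.

```python
import torch
import torch.nn as nn

num_gpus = torch.cuda.device_count()
table_sizes = [1_000_000, 1_000_000, 1_000_000, 1_000_000]  # illustrative only

# Model parallelism: assign each embedding table to one device, round-robin.
tables = []
for k, n in enumerate(table_sizes):
    device = torch.device(f"cuda:{k % num_gpus}") if num_gpus else torch.device("cpu")
    tables.append(nn.EmbeddingBag(n, 64, mode="sum").to(device))

def sharded_lookup(indices_list, offsets_list, target_device):
    """Run each table's lookup on its home device, then move the result to the device
    that owns this slice of the mini-batch (the role played by the all-to-all exchange)."""
    out = []
    for table, idx, off in zip(tables, indices_list, offsets_list):
        device = table.weight.device
        out.append(table(idx.to(device), off.to(device)).to(target_device))
    return out
```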
Our parallelized DLRM will use a combination of model parallelism for the embeddings and data parallelism for the MLPs to mitigate the memory bottleneck produced by the embeddings while parallelizing the forward and backward propagations over the MLPs. Combined model and data parallelism is a unique requirement of DLRM as a result of its architecture and large model sizes. Such combined parallelism is not supported in either Caffe2 or PyTorch (as well as other popular deep learning frameworks), therefore we design a custom implementation. We plan to provide its detailed performance study in forthcoming work. In our setup, the top MLP and the interaction operator require access to part of the mini-batch from the bottom MLP and all of the embeddings. Since model parallelism has been used to distribute the embeddings across devices, this requires a personalized all-to-all communication [12]. At the end of the embedding lookup, each device has a vector for the embedding tables resident on those devices for all the samples in the mini-batch, which needs to be split along the mini-batch dimension and 4 Device 1 Device 2 Device 3 2 2 jimension nay £ ; dimension : ( Zz zg 7 na Data-parallel a Device 1 Device 2 Device 3 Figure 2: Butterfly shuffle for the all-to-all (personalized) communication communicated to the appropriate devices, as shown in Fig. 2. Neither PyTorch nor Caffe2 provide native support for model parallelism; therefore, we have implemented it by explicitly mapping the embedding operators (nn.EmbeddingBag for PyTorch, SparseLengthSum for Caffe2) to different devices. Then personalized all-to-all communication is implemented using the butterfly shuffle operator, which appropriately slices the resulting embedding vectors and transfers them to the target devices. In the current version, these transfers are explicit copies, but we intend to further optimize this using the available communication primitives (such as all-gather and send-recv). We note that for the data parallel MLPs, the parameter updates in the backward pass are accu- mulated with an allreduce3 and applied to the replicated parameters on each device [12] in a synchronous fashion, ensuring the updated parameters on each device are consistent before every iteration. In PyTorch, data parallelism is enabled through the nn.DistributedDataParallel and nn.DataParallel modules that replicate the model on each device and insert allreduce with the necessary dependencies. In Caffe2, we manually insert allreduce before the gradient update. # 4 Data In order to measure the accuracy of the model, test its overall performance, and characterize the individual operators, we need to create or obtain a data set for our implementation. Our current implementation of the model supplies three types of data sets: random, synthetic and public data sets. The former two data sets are useful in experimenting with the model from the systems perspective. In particular, it permits us to exercise different hardware properties and bottlenecks by generating data on the fly while removing dependencies on data storage systems. The latter allows us to perform experiments on real data and measure the accuracy of the model. # 4.1 Random Recall that DLRM accepts continuous and categorical features as inputs. The former can be modeled by generating a vector of random numbers using either a uniform or normal (Gaussian) distributions with the numpy.random package rand or randn calls with default parameters. 
Then a mini-batch of inputs can be obtained by generating a matrix where each row corresponds to an element in the mini-batch. To generate categorical features, we need to determine how many non-zero elements we would like have in a given multi-hot vector. The benchmark allows this number to be either fixed or random within a range4 [1, k]. Then, we generate the corresponding number of integer indices, within a range [1, m], where m is the number of rows in the embedding W in (2). Finally, in order to create a mini-batch of lookups, we concatenate the above indices and delineate each individual lookup with lengths (SparseLengthsSum) or offsets (nn.EmbeddingBag)5. 3 4 5 Optimized implementations for the allreduce op. include Nvidia’s NCCL [16] and Facebook’s gloo [7]. see options --num-indices-per-lookup=k and --num-indices-per-lookup-fixed For instance, in order to represent three embedding lookups, with indices {0, 2}, {0, 1, 5} and {3} we use lengths/offsets = {2, 3, 1}/{0, 2, 5} indices = {0, 2, 0, 1, 5, 3} Note that this format resembles Compressed-Sparse Row (CSR) often used for sparse matrices in linear algebra. 5 # 4.2 Synthetic There are many reasons to support custom generation of indices corresponding to categorical features. For instance, if our application uses a particular data set, but we would not like to share it for privacy purposes, then we may choose to express the categorical features through distributions. This could potentially serve as an alternative to the privacy preserving techniques used in applications such as federated learning [2, 10]. Also, if we would like to exercise system components, such as studying memory behavior, we may want to capture fundamental locality of accesses of original trace within synthetic trace. Let us now illustrate how we can use a synthetic data set. Assume that we have a trace of indices that correspond to embedding lookups for a single categorical feature (and repeat the process for all features). We can record the unique accesses and frequency of distances between repeated accesses in this trace (Alg. 1) and then generate a synthetic trace (Alg. 2) as proposed in [14]. # Algorithm 1 Profile (Original) Trace 1: Let tr be input sequence, s stack of distances, u list of unique accesses and p probability distribution 2: Let s.position_from_the_top return d = 0 if the index is not found, and d > 0 otherwise. 3: for i=0; i<length(tr); i++ do 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: end for a = tr[i] d = s.position_from_the_top(a) if d == 0 then u.append(a) # else # s.remove_from_the_top_at_position(d) end if p[d] += 1.0/length(tr) s.push_to_the_top(a) # Algorithm 2 Generate (Synthetic) Trace 1: Let u be input list of unique accesses and p probability distribution of distances, while tr output trace. 2: for s=0, i=0; i<length; i++ do 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: end for d = p.sample_from_distribution_with_support(0,s) if d == 0 then # a = u.remove_from_front() s++ # else # a = u.remove_from_the_back_at_position(d) end if u.append(a) tr[i] = a Note that we can only generate a stack distance up to s number of unique accesses we have seen so far, therefore s is used to control the support of the distribution p in Alg. 2. Given a fixed number of unique accesses, the longer input trace will result in lower probability being assigned to them in Alg. 1, which will lead to longer time to achieve full distribution support in Alg. 2. 
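Algorithms 1 and 2 are given as pseudocode; one possible Python rendering of the same stack-distance profiling and generation steps is sketched below. The function names are ours, edge cases (such as exhausting the pool of unique accesses) are only lightly handled, and the minimum-threshold adjustment is discussed next.

```python
import random
from collections import defaultdict

def profile_trace(tr):
    """Alg. 1: collect unique accesses and the stack-distance distribution of a trace."""
    stack, unique, p = [], [], defaultdict(float)
    for a in tr:
        if a in stack:
            d = len(stack) - stack.index(a)   # position from the top (top = end of list)
            stack.remove(a)
        else:
            d = 0                             # first time this index is seen
            unique.append(a)
        p[d] += 1.0 / len(tr)
        stack.append(a)                       # push to the top
    return unique, p

def generate_trace(unique, p, length):
    """Alg. 2: emit a synthetic trace whose stack-distance distribution approximates p."""
    stack, out, seen = [], [], 0
    pending = list(unique)                    # unique accesses not yet introduced
    for _ in range(length):
        # Sample a distance d with support limited to [0, seen].
        support = [d for d in p if d <= seen and (d > 0 or pending)]
        d = random.choices(support, weights=[p[d] for d in support])[0]
        if d == 0:
            a = pending.pop(0)                # introduce a new unique access
            seen += 1
        else:
            a = stack.pop(len(stack) - d)     # reuse the access at stack distance d
        stack.append(a)
        out.append(a)
    return out

# Example mirroring the sample trace of Figure 3: tr = random.uniform(1, 100, 100K).
tr = [random.randint(1, 100) for _ in range(100_000)]
unique, p = profile_trace(tr)
synthetic = generate_trace(unique, p, len(tr))
```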
In order to address this problem, we increase the probability of the unique accesses up to a minimum threshold and adjust the support to remove unique accesses from it once all have been seen. A visual comparison of the probability distribution p based on the original and synthetic traces is shown in Fig. 3. In our experiments the original and adjusted synthetic traces produce similar cache hit/miss rates. Alg. 1 and 2 were designed for more accurate cache simulations, but they illustrate a general idea of how probability distributions can be used to generate synthetic traces with desired properties.

Figure 3: Probability distribution p based on a sample trace tr = random.uniform(1,100,100K); (a) original, (b) synthetic trace, (c) adjusted synthetic trace

# 4.3 Public

Few public data sets are available for recommendation and personalization systems. The Criteo AI Labs Ad Kaggle and Terabyte data sets are open-sourced click logs for ad CTR prediction. Each data set contains 13 continuous and 26 categorical features. Typically the continuous features are pre-processed with a simple log transform log(1 + x). The categorical features are mapped to their corresponding embedding indices, with unlabeled categorical features or labels mapped to 0 or NULL. The Criteo Ad Kaggle data set contains approximately 45 million samples over 7 days. In experiments, typically the 7th day is split into a validation and test set while the first 6 days are used as the training set. The Criteo Ad Terabyte data set is sampled over 24 days, where the 24th day is split into a validation and test set and the first 23 days are used as the training set. Note that there are approximately equal numbers of samples from each day.

# 5 Experiments

Let us now illustrate the performance and accuracy of DLRM. The model is implemented in the PyTorch and Caffe2 frameworks and is available on GitHub. It uses fp32 floating point and int32 (Caffe2)/int64 (PyTorch) types for model parameters and indices, respectively. The experiments are performed on the Big Basin platform with a dual-socket Intel Xeon 6138 CPU @ 2.00GHz and eight Nvidia Tesla V100 16GB GPUs, publicly available through the Open Compute Project, shown in Fig. 4.

Figure 4: Big Basin AI platform

# 5.1 Model Accuracy on Public Data Sets

We evaluate the accuracy of the model on the Criteo Ad Kaggle data set and compare the performance of DLRM against a Deep and Cross network (DCN) as-is, without extensive tuning [27]. We compare with DCN because it is one of the few models that has comprehensive results on the same data set. Notice that in this case the models are sized to accommodate the number of features present in the data set. In particular, DLRM consists of both a bottom MLP for processing dense features, with three hidden layers of 512, 256 and 64 nodes, respectively, and a top MLP consisting of two hidden layers with 512 and 256 nodes. On the other hand, DCN consists of six cross layers and a deep network with 512 and 256 nodes. An embedding dimension of 16 is used. Note that this yields a DLRM and a DCN with approximately 540M parameters each. We plot both the training (solid) and validation (dashed) accuracies over a full single epoch of training for both models with the SGD and Adagrad optimizers [6]. No regularization is used.
In this experiment, DLRM obtains slightly higher training and validation accuracy, as shown in Fig. 5. We emphasize that this is without extensive tuning of model hyperparameters. 6 https://www.kaggle.com/c/criteo-display-ad-challenge https://labs.criteo.com/2013/12/download-terabyte-click-logs/ https://github.com/facebookresearch/dlrm https://www.opencompute.org 7 (a) SGD (b) Adagrad Figure 5: Comparison of training (solid) and validation (dashed) accuracies of DLRM and DCN # 5.2 Model Performance on a Single Socket/Device To profile the performance of our model on a single socket device, we consider a sample model with 8 categorical features and 512 continuous features. Each categorical feature is processed through an embedding table with 1M vectors, with vector dimension 64, while the continuous features are assembled into a vector of dimension 512. Let the bottom MLP have two layers, while the top MLP has four layers. We profile this model on a data set with 2048K randomly generated samples organized into 1K mini-batches10. 100% ew ™ WeightedSum = = BatchMatMul 70% m™ ReluGradient 60% 50% mRelu 40% "= FCGradient 30% FC 20% ™ SparseLengthsSumGradient* 10% m SparseLengthssum on + cpu GPU 100% — 90% = BmmBackward % 8° = bmm 70% m ReluBackward 60% mrel 50% relu ame "= AddmmBackward 30% addmm 20% m= EmbeddingBagBackward ad ™ embedding_bag* on 4 cPU GPU (a) Caffe2 (b) PyTorch Figure 6: Profiling of a sample DLRM on a single socket/device This model implementation in Caffe2 runs in around 256 seconds on the CPU and 62 seconds on the GPU, with profiling of individual operators shown in Fig. 6. As expected, the majority of time is spent performing embedding lookups and fully connected layers. On the CPU, fully connected layers take a significant portion of the computation, while on the GPU they are almost negligible. # 6 Conclusion In this paper, we have proposed and open-sourced a novel deep learning-based recommendation model that exploits categorical data. Although recommendation and personalization systems still drive much practical success of deep learning within industry today, these networks continue to receive little attention in the academic community. By providing a detailed description of a state-of- the-art recommendation system and its open-source implementation, we hope to draw attention to the unique challenges that this class of networks present in an accessible way for the purpose of further algorithmic experimentation, modeling, system co-design, and benchmarking. 10 For instance, this configuration can be achieved with the following command line arguments --arch-embedding-size=1000000-1000000-1000000-1000000-1000000-1000000-1000000-1000000 --arch-sparse-feature-size=64 --arch-mlp-bot=512-512-64 --arch-mlp-top=1024-1024-1024-1 --data-generation=random --mini-batch-size=2048 --num-batches=1000 --num-indices-per-lookup=100 [--use-gpu] [--enable-profiling] 8 # Acknowledgments The authors would like to acknowledge AI Systems Co-Design, Caffe2, PyTorch and AML team members for their help in reviewing this document. # References [1] Christopher M. Bishop. Neural Networks for Pattern Recognition. The Oxford University Press, 1st edition, 1995. [2] Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloé Kiddon, Jakub Koneˇcný, Stefano Mazzocchi, Brendan McMahan, Timon Van Overveldt, David Petrou, Daniel Ramage, and Jason Roselander. Towards federated learning at scale: System design. In Proc. 
2nd Conference on Systems and Machine Learning (SysML), 2019. [3] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu, and Hemal Shah. Wide & deep learning for recommender systems. In Proc. 1st Workshop on Deep Learning for Recommender Systems, pages 7–10, 2016. [4] Corinna Cortes and Vladimir N. Vapnik. Support-vector networks. Machine Learning, 2:273–297, 1995. [5] Luc Devroye, Laszlo Gyorfi, and Gabor Lugosi. A Probabilistic Theory of Pattern Recognition. New York, Springer-Verlag, 1996. [6] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011. [7] Facebook. Collective communications library with various primitives for multi-machine training (gloo), https://github.com/facebookincubator/gloo. [8] Facebook. Caffe2, https://caffe2.ai, 2016. [9] Evgeny Frolov and Ivan Oseledets. Tensor methods and recommender systems. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 7(3):e1201, 2017. [10] Craig Gentry. A fully homomorphic encryption scheme. PhD thesis, Stanford University, 2009. [11] Gene H. Golub and Charles F. Van Loan. Matrix Computations. The John Hopkins University Press, 3rd edition, 1996. [12] Ananth Grama, Vipin Kumar, Anshul Gupta, and George Karypis. Introduction to parallel computing. Pearson Education, 2003. [13] Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. DeepFM: a factorization- machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247, 2017. [14] Rahman Hassan, Antony Harris, Nigel Topham, and Aris Efthymiou. Synthetic trace-driven simulation of cache memory. In Proc. 21st International Conference on Advanced Information Networking and Applications Workshops (AINAW’07), 2007. [15] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proc. 26th Int. Conf. World Wide Web, pages 173–182, 2017. [16] Sylvain Jeaugey. Nccl 2.0, 2017. [17] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, (8):30–37, 2009. [18] Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, and Guangzhong Sun. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In Proc. of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1754–1763. ACM, 2018. [19] MLPerf. https://mlperf.org/. [20] Maxim Naumov. On the dimensionality of embeddings for sparse features and data. In arXiv preprint arXiv:1901.02103, 2019. [21] Xia Ning, Christian Desrosiers, and George Karypis. A comprehensive survey of neighborhood-based recommendation methods. In Recommender Systems Handbook, 2015. [22] Pandora. Music genome project https://www.pandora.com/about/mgp. [23] Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. PyTorch: Tensors and dynamic neural networks in python with strong GPU acceleration https://pytorch.org/, 2017. [24] Steffen Rendle. Factorization machines. In Proc. 2010 IEEE International Conference on Data Mining, pages 995–1000, 2010. 9 [25] Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Lexing Xie. Autorec: Autoencoders meet collaborative filtering. In Proc. 24th Int. Conf. World Wide Web, pages 111–112, 2015. [26] Strother H. Walker and David B. 
Duncan. Estimation of the probability of an event as a function of several independent variables. Biometrika, 54:167–178, 1967. [27] Ruoxi Wang, Bin Fu, Gang Fu, and Mingliang Wang. Deep & cross network for ad click predictions. In Proc. ADKDD, page 12, 2017. [28] Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu, and Kun Gai. Deep interest evolution network for click-through rate prediction. arXiv preprint arXiv:1809.03672, 2018. [29] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. Deep interest network for click-through rate prediction. In Proc. of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1059–1068. ACM, 2018. 10
{ "id": "1809.03672" }
1905.12801
Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function
Gender bias exists in natural language datasets which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on the loss function modification. We introduce a new term to the loss function which attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach successfully mitigates gender bias in language models without increasing perplexity. In comparison to existing debiasing strategies, data augmentation, and word embedding debiasing, our method performs better in several aspects, especially in reducing gender bias in occupation words. Finally, we introduce a combination of data augmentation and our approach, and show that it outperforms existing strategies in all bias evaluation metrics.
http://arxiv.org/pdf/1905.12801
Yusu Qian, Urwa Muaz, Ben Zhang, Jae Won Hyun
cs.CL
Accepted at ACL-SRW 2019. To appear in Proceedings of ACL-SRW 2019
null
cs.CL
20190530
20190603
9 1 0 2 n u J 3 ] L C . s c [ arXiv:1905.12801v2 2 v 1 0 8 2 1 . 5 0 9 1 : v i X r a # Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function Yusu Qian* Urwa Muaz* Tandon School Tandon School of Engineering of Engineering New York University | New York University 6 MetroTech Center 6 MetroTech Center Brooklyn, NY, 11201 Brooklyn, NY, 11201 [email protected] [email protected] Ben Zhang Center for Data Science New York University 60 Fifth Avenue New York, NY, 10012 [email protected] Jae Won Hyun Department of Computer Science New York University 251 Mercer St New York, NY, 10012 [email protected] # Abstract Gender bias exists in natural language datasets which neural language models tend to learn, resulting in biased text generation. In this research, we propose a debiasing approach based on the loss function modification. We introduce a new term to the loss function which attempts to equalize the probabilities of male and female words in the output. Using an array of bias evaluation metrics, we provide empirical evidence that our approach success- fully mitigates gender bias in language mod- els without increasing perplexity by much. In comparison to existing debiasing strategies, data augmentation, and word embedding de- biasing, our method performs better in sev- eral aspects, especially in reducing gender bias in occupation words. Finally, we introduce a combination of data augmentation and our ap- proach, and show that it outperforms existing strategies in all bias evaluation metrics. # 1 Introduction Natural Language Processing (NLP) models are shown to capture unwanted biases and stereotypes found in the training data which raise concerns about socioeconomic, ethnic and gender discrimi- nation when these models are deployed for public use (Lu et al., 2018; Zhao et al., 2018). Bordia and Bowman (2019) have shown that this task is vulnerable to gender bias in the training corpus. Two prior works focused on reducing bias in language modelling by data preprocessing (Lu et al., 2018) and word embedding debiasing In this study, we (Bordia and Bowman, 2019). investigate the efficacy of bias reduction during training by introducing a new loss function which encourages the language model to equalize the probabilities of predicting gendered word pairs like he and she. Although we recognize that gender is non-binary, for the purpose of this study, we focus on female and male words. Our main contributions are summarized as fol- lows: i) to our best knowledge, this study is the first one to investigate bias alleviation in text gen- eration by direct modification of the loss func- tion; ii) our new loss function effectively reduces gender bias in the language models during train- ing by equalizing the probabilities of male and iii) we show that female words in the output; end-to-end debiasing of the language model can achieve word embedding debiasing; iv) we pro- vide an interpretation of our results and draw a comparison to other existing debiasing methods. We show that our method, combined with an ex- isting method, counterfactual data augmentation, achieves the best result and outperforms all exist- ing methods. There are numerous studies that identify al- gorithmic bias in NLP applications. Lapowsky (2018) showed ethnic bias in Google autocom- plete suggestions whereas Lambrecht and Tucker (2018) found gender bias in advertisement de- livery systems. Additionally, Zhao et al. (2018) demonstrated that coreference resolution systems exhibit gender bias. 
task in NLP with downstream applica- tions such as text generation (Sutskever et al., 2011). Recent studies by Lu et al. (2018) and ∗Yusu Qian and Urwa Muaz contributed equally to the paper. # 2 Related Work Recently, the study of bias in NLP applications has received increasing attention from researchers. Most relevant work in this domain can be broadly divided into two categories: word embedding de- biasing and data debiasing by preprocessing. Word Embedding Debiasing Bolukbasi et al. (2016) introduced the idea of gender subspace as low dimensional space in an embedding that captures the gender information. Bolukbasi et al. (2016) and Zhao et al. (2017) defined gender bias as a projection of gender-neutral words on a gen- der subspace and removed bias by minimizing this projection. Gonen and Goldberg (2019) proved that bias removal techniques based on minimiz- ing projection onto the gender space are insuffi- cient. They showed that male and female stereo- typed words cluster together even after such debi- asing treatments. Thus, gender bias still remains in the embeddings and is easily recoverable. Bordia and Bowman (2019) introduced a co- occurrence based metric to measure gender bias in texts and showed that the standard datasets used for language model training exhibit strong gender bias. They also showed that the models trained on these datasets amplify bias measured on the model-generated texts. Using the same defini- tion of embedding gender bias as Bolukbasi et al. (2016), Bordia and Bowman (2019) introduced a regularization term that aims to minimize the pro- jection of neutral words onto the gender subspace. Throughout this paper,we refer to this approach as REG. They found that REG reduces bias in the generated texts for some regularization coefficient values. But, this bias definition is shown to be in- complete by Gonen and Goldberg (2019). Instead of explicit geometric debiasing of the word em- bedding, we implement a loss function that mini- mizes bias in the output and thus adjust the whole network accordingly. For each model, we analyze the generated word embedding to understand how it is affected by output debiasing. Data Debiasing Lu et al. (2018) showed that gender bias in coreference resolution and language modelling can be mitigated through a data aug- mentation technique that expands the corpus by swapping the gender pairs like he and she, or fa- ther and mother. They called this Counterfactual Data Augmentation (CDA) and concluded that it outperforms the word embedding debiasing strat- egy proposed by Bolukbasi et al. (2016). CDA doubles the size of the training data and increases time needed to train language models. In this study, we intend to reduce bias during training without requiring an additional data preprocessing step. # 3 Methodology # 3.1 Dataset For the training data, we use Daily Mail news articles released by Hermann et al. (2015). This dataset is composed of 219,506 articles covering a diverse range of topics including business, sports, travel, etc., and is claimed to be biased and sen- sational (Bordia and Bowman, 2019). For man- ageability, we randomly subsample 5% of the text. The subsample has around 8.25 million tokens in total. # 3.2 Language Model We use a pre-trained 300-dimensional word em- bedding, GloVe, by Pennington et al. (2014). We apply random search to the hyperparameter tuning of the LSTM language model. 
The best hyperpa- rameters are as follows: 2 hidden layers each with 300 units, a sequence length of 35, a learning rate of 20 with an annealing schedule of decay start- ing from 0.25 to 0.95, a dropout rate of 0.25 and a gradient clip of 0.25. We train our models for 150 epochs, use a batch size of 48, and set early stopping with a patience of 5. # 3.3 Loss Function Language models are usually trained using cross- entropy loss. Cross-entropy loss at time step t is LCE(t) = − X w∈V yw,t log (ˆyw,t) , where V is the vocabulary, y is the one hot vector of ground truth and ˆy indicates the output softmax probability of the model. We introduce a loss term LB, which aims to equalize the predicted probabilities of gender pairs such as woman and man. P(t) = S log Hit PGES Gt f and m are a set of corresponding gender pairs, G is the size of the gender pairs set, and ˆy indicates the output softmax probability. We use gender pairs provided by Zhao et al. (2017). By consider- ing only gender pairs we ensure that only gender information is neutralized and distribution over se- mantic concepts is not altered. For example, it will try to equalize the probabilities of congress- man with congresswoman and actor with actress but distribution of congressman, congresswoman versus actor, actress will not be affected. Overall loss can be written as F 1 P(w|m) BN == YO flog =——], e N °6 P(wlf) |? weN L = 1 T T X t=1 LCE(t) + λLB(t) , where P (w|g) = c(w, g) c(g) . where λ is a hyperparameter and T is the corpus size. We observe that among the similar minima of the loss function, LB encourages the model to converge towards a minimum that exhibits the lowest gender bias. BN is less affected by the disparity in the general c distribution of male and female words in the text. The disparity between the occurrences of the two genders means that text is more inclined to men- tion one over the other, so it can also be considered a form of bias. We report the ratio of occurrence of male and female words in the model generated text, GR, as # 3.4 Model Evaluation Language models are evaluated using perplexity, which is a standard measure of performance for unseen data. For bias evaluation, we use an array of metrics to provide a holistic diagnosis of the model behavior under debiasing treatment. These metrics are discussed in detail below. In all the evaluation metrics requiring gender pairs, we use gender pairs provided by Zhao et al. (2017). This list contains 223 pairs, all other words are consid- ered gender-neutral. GR = c(m) c(f ) . 3.4.2 Causal Bias Another way of quantifying bias in NLP models is based on the idea of causal testing. The model is exposed to paired samples which differ only in one attribute (e.g. gender) and the disparity in the out- put is interpreted as bias related to that attribute. Zhao et al. (2018) and Lu et al. (2018) applied this method to measure bias in coreference resolution and Lu et al. (2018) also used it for evaluating gen- der bias in language modelling. 3.4.1 Co-occurrence Bias Co-occurrence bias is computed from the model- generated texts by comparing the occurrences of all gender-neutral words with female and male words. A word is considered to be biased to- wards a certain gender if it occurs more frequently with words of that gender. This definition was first used by Zhao et al. (2017) and later adapted by Bordia and Bowman (2019). 
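As a brief aside before the bias metrics are spelled out, the training objective of Section 3.3 can be sketched in code as below. The exact normalization of the bias term is an assumption on our part, based on the description of LB as penalizing, for each gender pair, the gap between the predicted probabilities of the female and male word; treat it as an approximation rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def debiasing_lm_loss(logits, targets, pair_ids, lam=0.5):
    """Cross-entropy plus a gender-equalizing term, an approximate rendering of
    L = L_CE + lambda * L_B.

    logits: (T, V) unnormalized scores over the vocabulary at each time step.
    targets: (T,) ground-truth token ids.
    pair_ids: (G, 2) vocabulary ids of (female, male) word pairs such as (she, he).
    """
    ce = F.cross_entropy(logits, targets)
    log_probs = F.log_softmax(logits, dim=-1)        # (T, V)
    female = log_probs[:, pair_ids[:, 0]]            # (T, G)
    male = log_probs[:, pair_ids[:, 1]]              # (T, G)
    bias = (female - male).abs().mean()              # average absolute log-ratio over pairs and time
    return ce + lam * bias
```

Setting lam to 0 recovers the standard language-model objective; increasing it trades a small amount of perplexity for lower measured bias, which matches the trend reported in the results.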
Using the def- inition of gender bias similar to the one used by Bordia and Bowman (2019), we define gender bias as Following the approach similar to Lu et al. (2018), we limit this bias evaluation to a set of gender-neutral occupations. We create a list of sentences based on a set of templates. There are two sets of templates used for evaluating causal occupation bias (Table 1). The first set of tem- plates is designed to measure how the probabilities of occupation words depend on the gender infor- mation in the seed. Below is an example of the first set of templates: c( UW, m) ~N >» oe -(w, f) [Gendered word] is a | [occupation] . where N is a set of gender-neutral words, and c(w, g) is the occurrences of a word w with words of gender g in the same window. This score is designed to capture unequal co-occurrences of neutral words with male and female words. Co- occurrences are computed using a sliding window of size 10 extending equally in both directions. Furthermore, we only consider words that occur more than 20 times with gendered words to ex- clude random effects. Here, the vertical bar separates the seed sequence that is fed into the language models from the target occupation, for which we observe the output soft- max probability. We measure causal occupation bias conditioned on gender as CBlg= aed _ Plol fi). a plolm;) We also evaluate a normalized version of BN which we denote by conditional co-occurrence bias, BN where O is a set of gender-neutral occupations and G is the size of the gender pairs set. For exam- ple, P (doctor|he) is the softmax probability of s1 He is a | s2 She is a | t doctor log P (t|s1) P (t|s2) s The doctor is a | t1 man t2 woman (a) Occupation bias conditioned on gendered words (b) Occupation bias conditioned on occupations # log P (t1|s) P (t2|s) Table 1: Example templates of two types of occupation bias the word doctor where the seed sequence is He is a. The second set of templates like below, aims to capture how the probabilities of gendered words depend on the occupation words in the seed. T he [occupation] is a | [gendered word] . Causal occupation bias conditioned on occupation is represented as (filo) a G LS i p(mi|o) 0€O i CBlo= — where O is a set of gender-neutral occupations and G is the size of the gender pairs set. For example, P (man|doctor) is the softmax probability of man where the seed sequence is The doctor is a. # 3.5 Existing Approaches We apply CDA where we swap all the gendered words using a bidirectional dictionary of gender pairs described by Lu et al. (2018). This creates a dataset twice the size of the original data, with exactly the same contextual distributions for both genders and we use it to train the language models. the bias regularization method of Bordia and Bowman (2019) which debiases the word embedding during language model training by minimizing the projection of neutral words on the gender axis. We use hyper- parameter tuning to find the best regularization co- efficient and report results from the model trained with this coefficient. We later refer to this strategy as REG. We believe that both CB|g and CB|o contribute to gender bias in the model-generated texts. We also note that CB|o is more easily influenced by the general disparity in male and female word probabilities. # 3.4.3 Word Embedding Bias Our debiasing approach does not explicitly ad- dress the bias in the embedding layer. 
Therefore, we use gender-neutral occupations to measure the embedding bias to observe if debiasing the output layer also decreases the bias in the embedding. We define the embedding bias, EBd, as the difference between the Euclidean distance of an occupation word to male words and the distance of the occu- pation word to the female counterparts. This defi- nition is equivalent to bias by projection described by Bolukbasi et al. (2016). We define EBd as # 4 Experiments Initially, we measure the co-occurrence bias in the training data. After training the baseline model, we implement our loss function and tune for the λ hyperparameter. We test the existing debias- ing approaches, CDA and REG, as well but since Bordia and Bowman (2019) reported that results fluctuate substantially with different REG regu- larization coefficients, we perform hyperparame- ter tuning and report the best results in Table 2. Additionally, we implement a combination of our loss function and CDA and tune for λ. Finally, bias evaluation is performed for all the trained models. Causal occupation bias is measured di- rectly from the models using template datasets dis- cussed above and co-occurrence bias is measured from the model-generated texts, which consist of 10,000 documents of 500 words each. G EBd = X o∈O |kE(o) − E(mi)k2 X i −kE(o) − E(fi)k2| , where O is a set of gender-neutral occupations, G is the size of the gender pairs set and E is the word-to-vector dictionary. # 4.1 Results Results for the experiments are listed in Table 2. It is interesting to observe that the baseline model amplifies the bias in the training data set as mea- sured by BN and BN c . From measurements us- ing the described bias metrics, our method effec- tively mitigates bias in language modelling with- BN 0.340 0.531 0.381 0.208 0.492 0.459 0.312 0.226 0.218 0.221 λ0.5 + CDA 0.205 BN c 0.213 0.282 0.329 0.149 0.245 0.208 0.173 0.151 0.153 0.157 0.145 Model Dataset Baseline REG CDA λ0.01 λ0.1 λ0.5 λ0.8 λ1 λ2 GR P pl. - 117.845 114.438 117.976 118.585 118.713 120.344 119.792 120.973 123.248 117.971 CB|o - 1.447 1.861 0.703 0.111 0.013 0.000 0.001 0.000 0.000 0.000 CB|g - 97.762 108.740 56.82 9.306 2.326 1.159 1.448 0.999 0.471 0.153 1.415 1.028 1.037 1.445 1.463 1.252 1.096 1.049 1.020 1.012 EBd - 0.528 0.373 0.268 0.077 0.018 0.006 0.002 0.002 0.000 0.000 Table 2: Evaluation results for models trained on Daily Mail and their generated texts out a significant increase in perplexity. At λ value of 1, it reduces BN by 58.95%, BN c by 45.74%, CB|o by 100%, CB|g by 98.52% and EBd by 98.98%. Compared to the results of CDA and REG, it achieves the best results in both occupa- tion biases, CB|g and CB|o, and EBd. We notice that all methods result in GR around 1, indicat- ing that there are near equal amounts of female and male words in the generated texts. In our ex- periments we note that with increasing λ, the bias steadily decreases and perplexity tends to slightly increase. This indicates that there is a trade-off between bias and perplexity. REG is not very effective in mitigating bias when compared to other methods, and fails to achieve the best result in any of the bias metrics that we used. But REG results in the best perplex- ity and even does better than the baseline model in this respect. This indicates that REG has a slight regularization effect. Additionally, it is interesting to note that our loss function outperforms REG in EBd even though REG explicitly aims to re- duce gender bias in the embeddings. 
Although our method does not explicitly attempt geometric debiasing of the word embedding, the results show that it yields the most debiased embedding compared to the other methods. Furthermore, Gonen and Goldberg (2019) emphasize that geometric gender bias in word embeddings is not completely understood and that existing word embedding debiasing strategies are insufficient. Our approach provides an appealing end-to-end solution for model debiasing without relying on any measure of bias in the word embedding. We believe this concept is generalizable to other NLP applications.

Our method outperforms CDA in CB|g, CB|o, and EBd. While CDA achieves slightly better results for the co-occurrence biases, BN and BNc, and results in a better perplexity, the differences are marginal: our results are comparable to those of CDA, and both models seem to have similar bias mitigation effects. However, our method does not require a data augmentation step and allows training of an unbiased model directly from biased datasets. For this reason, it also requires less time to train than CDA, since its training data is smaller without data augmentation. Furthermore, CDA fails to effectively mitigate occupation bias when compared to our approach. Although the training data for CDA does not contain gender bias, the model still exhibits some gender bias when measured with our causal occupation bias metrics. This reinforces the concept that some model-level constraints are essential to debiasing a model and that dataset debiasing alone cannot be trusted.

Finally, we note that the combination of CDA and our loss function outperforms all the methods in all measures of bias without compromising perplexity. Therefore, it can be argued that a cascade of these approaches can be used to optimally debias the language models.

# 5 Conclusion and Discussion

In this research, we propose a new approach for mitigating gender bias in neural language models and empirically show its effectiveness in reducing bias as measured with different evaluation metrics. Our research also highlights the fact that debiasing the model with bias penalties in the loss function is an effective method. We emphasize that loss-function-based debiasing is powerful and generalizable to other downstream NLP applications. The research also reinforces the idea that geometric debiasing of the word embedding is not a complete solution for debiasing downstream applications, and it encourages end-to-end approaches to debiasing.

All the debiasing techniques experimented with in this paper rely on a predefined set of gender pairs in some way. CDA uses gender pairs for flipping, REG uses them for gender-space definition, and our technique uses them for computing the loss. This reliance on a predefined set of gender pairs can be considered a limitation of these methods. It also raises another concern: there are gender-associated words which do not have pairs, like pregnant. These words are not treated properly by techniques relying on gender pairs.

Future work includes designing a context-aware version of our loss function which can distinguish between unbiased and biased mentions of gendered words and only penalize the biased ones. Another interesting direction is exploring the application of this method to mitigating racial bias, which brings more challenges.

# 6 Acknowledgment

We are grateful to Sam Bowman for helpful advice, Shikha Bordia, Cuiying Yang, Gang Qian, Xiyu Miao, Qianyi Fan, Tian Liu, and Stanislav Sobolevsky for discussions, and reviewers for detailed feedback.

# References

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In NIPS'16 Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 4356–4364.

Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. ArXiv:1904.03035.

Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. ArXiv:1903.03862.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS'15 Proceedings of the 28th International Conference on Neural Information Processing Systems, pages 1693–1701.

Anja Lambrecht and Catherine E. Tucker. 2018. Algorithmic bias? An empirical study into apparent gender-based discrimination in the display of STEM career ads.

Issie Lapowsky. 2018. Google autocomplete still makes vile suggestions.

Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. 2018. Gender bias in neural natural language processing. ArXiv:1807.11714v1.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543. Association for Computational Linguistics.

Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In ICML'11 Proceedings of the 28th International Conference on Machine Learning, pages 1017–1024.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Conference on Empirical Methods in Natural Language Processing.

Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847–4853. Association for Computational Linguistics.
{ "id": "1807.11714" }
1905.12688
Choosing Transfer Languages for Cross-Lingual Learning
Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving performance of natural language processing (NLP) on low-resource languages. However, given a particular task language, it is not clear which language to transfer from, and the standard strategy is to select languages based on ad hoc criteria, usually the intuition of the experimenter. Since a large number of features contribute to the success of cross-lingual transfer (including phylogenetic similarity, typological properties, lexical overlap, or size of available data), even the most enlightened experimenter rarely considers all these factors for the particular task at hand. In this paper, we consider this task of automatically selecting optimal transfer languages as a ranking problem, and build models that consider the aforementioned features to perform this prediction. In experiments on representative NLP tasks, we demonstrate that our model predicts good transfer languages much better than ad hoc baselines considering single features in isolation, and glean insights on what features are most informative for each different NLP tasks, which may inform future ad hoc selection even without use of our method. Code, data, and pre-trained models are available at https://github.com/neulab/langrank
http://arxiv.org/pdf/1905.12688
Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, Graham Neubig
cs.CL
Proceedings of ACL 2019
null
cs.CL
20190529
20190607
9 1 0 2 n u J 7 ] L C . s c [ 2 v 8 8 6 2 1 . 5 0 9 1 : v i X r a # Choosing Transfer Languages for Cross-Lingual Learning Yu-Hsiang Lin∗, Chian-Yu Chen∗, Jean Lee∗, Zirui Li∗, Yuyan Zhang∗, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell†, Graham Neubig Language Technologies Institute, Carnegie Mellon University †National Research Council, Canada # Abstract Cross-lingual transfer, where a high-resource transfer language is used to improve the accu- racy of a low-resource task language, is now an invaluable tool for improving performance of natural language processing (NLP) on low- resource languages. However, given a particu- lar task language, it is not clear which language to transfer from, and the standard strategy is to select languages based on ad hoc criteria, usu- ally the intuition of the experimenter. Since a large number of features contribute to the suc- cess of cross-lingual transfer (including phylo- genetic similarity, typological properties, lex- ical overlap, or size of available data), even the most enlightened experimenter rarely con- siders all these factors for the particular task at hand. In this paper, we consider this task of automatically selecting optimal transfer lan- guages as a ranking problem, and build mod- els that consider the aforementioned features to perform this prediction. In experiments on representative NLP tasks, we demonstrate that our model predicts good transfer languages much better than ad hoc baselines consider- ing single features in isolation, and glean in- sights on what features are most informative for each different NLP tasks, which may in- form future ad hoc selection even without use of our method.1 1 # 1 Introduction A common challenge in applying natural language processing (NLP) techniques to low-resource lan- guages is the lack of training data in the languages in question. It has been demonstrated that through cross-lingual transfer, it is possible to leverage one or more similar high-resource languages to im- prove the performance on the low-resource lan- guages in several NLP tasks, including machine ∗Equal contribution 1Code, data, and pre-trained models are available at https://github.com/neulab/langrank Generate Training Data { L,1: Transfer Language 1 } Lt,2: Transfer Language 2 {Lye Task Language | Lyc Task Language | = Transfer Transfer Learning Learning NLP Model 1 score(Lt,4, Ltx) Train Transfer Language Ranker score(Li4, Ltk) | score(Li¢.2, Ltx) Learning to Rank Transfer Language Ranker NLP Model 2 score(Li¢2, Ltx) Figure 1: Workflow of learning to select the transfer (1) train a set of NLP languages for an NLP task: models with all available transfer languages and collect evaluation scores, (2) train a ranking model to predict the top transfer languages. translation (Zoph et al., 2016; Johnson et al., 2017; Nguyen and Chiang, 2017; Neubig and Hu, 2018), parsing (T¨ackstr¨om et al., 2012; Ammar et al., 2016; Ahmad et al., 2019; Ponti et al., 2018), part- of-speech or morphological tagging (T¨ackstr¨om et al., 2013; Cotterell and Heigold, 2017; Malaviya et al., 2018; Plank and Agi´c, 2018), named entity recognition (Zhang et al., 2016; Mayhew et al., 2017; Xie et al., 2018), and entity linking (Tsai and Roth, 2016; Rijhwani et al., 2019). 
There are many methods for performing this transfer, includ- ing joint training (Ammar et al., 2016; Tsai and Roth, 2016; Cotterell and Heigold, 2017; John- son et al., 2017; Malaviya et al., 2018), annota- tion projection (T¨ackstr¨om et al., 2012; T¨ackstr¨om et al., 2013; Zhang et al., 2016; Ponti et al., 2018; Plank and Agi´c, 2018), fine-tuning (Zoph et al., 2016; Neubig and Hu, 2018), data augmentation (Mayhew et al., 2017), or zero-shot transfer (Ah- mad et al., 2019; Xie et al., 2018; Neubig and Hu, 2018; Rijhwani et al., 2019). The common thread is that data in a high-resource transfer language is used to improve performance on a low-resource task language. transfer lan- guage for any particular task language remains an open question – the choice of transfer language has traditionally been done in a heuristic manner, often based on the intuition of the experimenter. A common method of choosing transfer languages involves selecting one that belongs to the same language family or has a small phylogenetic dis- tance in the language family tree to the task lan- guage (Dong et al., 2015; Johnson et al., 2017; Cotterell and Heigold, 2017). However, it is not always true that all languages in a single language family share the same linguistic properties (Ah- mad et al., 2019). Therefore, another strategy is to select transfer languages based on the typologi- cal properties that are relevant to the specific NLP task, such as word ordering for parsing tasks (Am- mar et al., 2016; Ahmad et al., 2019). With sev- eral heuristics available for selecting a transfer lan- guage, it is unclear a priori if any single attribute of a language will be the most reliable criterion in determining whether cross-lingual learning is likely to work for a specific NLP task. Other fac- tors, such as lexical overlap between the training datasets or size of available data in the transfer language, could also play a role in selecting an appropriate transfer language. Having an empir- ical principle regarding how to choose the most promising languages or corpora to transfer from has the potential to greatly reduce the time and ef- fort required to find, obtain, and prepare corpora for a particular language pair. In this paper, we propose a framework, which we call LANGRANK, to empirically answer the question posed above: given a particular task low- resource language and NLP task, how can we de- termine which languages we should be performing transfer from? We consider this language predic- tion task as a ranking problem, where each po- tential transfer language is represented by a set of attributes including typological information and corpus statistics, such as word overlap and dataset size. Given a task language and a set of candidate transfer languages, the model is trained to rank the transfer languages according to the performance achieved when they are used in training a model to process the task low-resource language. These models are trained by performing a computation- and resource-intensive exhaustive search through the space of potential transfer languages, but at test time they can rapidly predict optimal transfer lan- guages, based only on a few dataset and linguistic features, which are easily obtained. In experiments, we examine cross-lingual trans- fer in four NLP tasks: machine translation (MT), entity linking (EL), part-of-speech (POS) tagging and dependency parsing (DEP). We train gradient boosted decision trees (GBDT; Ke et al. 
(2017)) to select the best transfer languages based on the aforementioned features. We compare our ranking models with several reasonable baselines inspired by the heuristic approaches used in previous work, and show that our ranking models significantly improve the quality of the selection of the top languages for cross-lingual transfer. In addition, through an ablation study and by examining the learned decision trees, we glean insights about which features were found to be useful when choosing transfer languages for each task. This may inform future attempts at heuristic selection of transfer languages, even in the absence of direct use of LANGRANK.

# 2 Problem Formulation

We define the task language t as the language of interest for a particular NLP task, and the transfer language a as the additional language that is used to aid in training models. Formally, during the training stage of transfer learning, we perform a model training step:

M_{t,a} = \mathrm{train}\big( (x_t^{(trn)}, y_t^{(trn)}),\, (x_a^{(trn)}, y_a^{(trn)}) \big),

where x^{(trn)} and y^{(trn)} indicate input and output training data for each training language, and M_{t,a} indicates the resulting model trained on languages t and a. The actual model and training procedure will vary from task to task, and we give several disparate examples in our experiments in §5.1.

The model can then be evaluated by using it to predict outputs over the test set, and evaluating the results:

\hat{y}_{t,a}^{(tst)} = \mathrm{predict}\big( x_t^{(tst)};\, M_{t,a} \big),
c_{t,a} = \mathrm{evaluate}\big( y_t^{(tst)},\, \hat{y}_{t,a}^{(tst)} \big),

where c_{t,a} is the resulting test-set score achieved by using a as a transfer language. Assuming we want to get the highest possible performance on task language t, one way to do so is to exhaustively enumerate over every single potential transfer language a, train models, and evaluate on the test set. In this case, the optimal transfer language for task language t can be defined as:

a_t^* = \mathrm{argmax}_a\, c_{t,a}.

However, as noted in the introduction, this brute-force method for finding optimal transfer languages is not practical: if resources for many languages are available a priori, it is computationally expensive to train all of the models, and in many cases these resources are not available a priori and need to be gathered from various sources before even starting experimentation.

Thus, we turn to formulating our goal as a ranking task: given an NLP task, a low-resource task language t, and a list of J available high-resource transfer languages a_1, ..., a_J, attempt to predict their ranking according to their expected scores c_{t,a_1}, ..., c_{t,a_J}, without actually calculating the scores themselves. To learn this ranker, we need to first create training data for the ranker, which we create by doing an exhaustive sweep over a set of training task languages t_1, ..., t_L, which results in sets of scores {c_{t_1,a_1}, ..., c_{t_1,a_J}}, ..., {c_{t_L,a_1}, ..., c_{t_L,a_J}}. These scores can then be used to train a ranking system, using standard methods for learning to rank (see, e.g., Liu et al. (2009)). Specifically, these methods work by extracting features from the pair of languages (t_i, a_j):

\phi_{t_i,a_j} = \mathrm{feat\_extract}(t_i, a_j),

and then using these features to predict a relative score for each pair of task and transfer languages:

r_{t_i,a_j} = \mathrm{rank\_score}(\phi_{t_i,a_j}; \theta),

where θ are the parameters of the ranking model. These parameters θ are learned in such a way that the order of the ranking scores r_{t_i,a_1}, ..., r_{t_i,a_J} matches as closely as possible with that of the gold-standard evaluation scores c_{t_i,a_1}, ..., c_{t_i,a_J}.
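The exhaustive sweep and the ranking formulation above translate directly into code. The sketch below is only illustrative: train_and_evaluate and extract_features are hypothetical stand-ins for the task-specific transfer-learning pipelines and the feature extractor described in §3, and the relevance labeling is a simple rank-based scheme (the γmax truncation used in the experiments is described in §5.2).

```python
# Sketch: build learning-to-rank training data from an exhaustive transfer sweep.
# `train_and_evaluate(t, a)` returns the test score c_{t,a}; `extract_features(t, a)`
# returns the feature vector phi_{t,a}. Both are hypothetical placeholders here.

def build_ranking_data(task_langs, transfer_langs, train_and_evaluate, extract_features):
    X, y, groups = [], [], []          # features, relevance labels, query sizes
    for t in task_langs:
        candidates = [a for a in transfer_langs if a != t]
        scores = {a: train_and_evaluate(t, a) for a in candidates}
        # Higher true score -> higher relevance label for the ranker.
        ranked = sorted(candidates, key=lambda a: scores[a], reverse=True)
        relevance = {a: len(ranked) - rank for rank, a in enumerate(ranked)}
        for a in candidates:
            X.append(extract_features(t, a))
            y.append(relevance[a])
        groups.append(len(candidates))  # one ranking "query" per task language
    return X, y, groups
```

The groups list records how many candidate transfer languages belong to each task language; this query structure is what standard learning-to-rank implementations expect at training time.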
Now that we have described the overall formulation of the problem, there are two main questions left: how do we define our features φ_{t_i,a_j}, and how do we learn the parameters θ of the ranking model?

# 3 Ranking Features

We represent each language pair/corpus by a set of features, split into two classes: dataset-dependent and dataset-independent.

# 3.1 Data-dependent Features

Dataset-dependent features are statistical features of the particular corpus used, such as dataset size and the word overlap between two corpora. Importantly, these features require the dataset to already be available for processing, and thus are less conducive to use in situations where resources have not yet been acquired. Specifically, we examine the following categories:

Dataset Size: We denote the number of training examples in the transfer and task languages by s_tf and s_tk, respectively. For MT, POS and DEP, this is the number of sentences in a corpus, and for EL the dataset size is the number of named entities in a bilingual entity gazetteer. In our experiments, we also consider the ratio of the dataset sizes, s_tf / s_tk, as a feature, since we are interested in how much bigger the transfer-language corpus is than the task-language corpus.

Type-Token Ratio (TTR): The TTRs of the transfer- and task-language corpora, t_tf and t_tk, respectively, are the ratios between the number of types (the number of unique words) and the number of tokens (Richards, 1987). TTR is a measure of lexical diversity, as a higher TTR represents higher lexical variation. We also consider the distance between the TTRs of the transfer- and task-language corpora, which may very roughly indicate their morphological similarity:

d_{ttr} = \left( 1 - \frac{t_{tf}}{t_{tk}} \right)^2.

Transfer and task languages that have similar lexical diversity are expected to have d_ttr close to 0. The data for the entity linking task consists only of named entities, so the TTR is typically close to 1 for all languages. Therefore, we do not include TTR-related features for the EL task.

Word Overlap and Subword Overlap: We measure the similarity between the vocabularies of the task- and transfer-language corpora by word overlap o_w and subword overlap o_sw:

o_w = \frac{|T_{tf} \cap T_{tk}|}{|T_{tf}| + |T_{tk}|}, \qquad o_{sw} = \frac{|S_{tf} \cap S_{tk}|}{|S_{tf}| + |S_{tk}|},

where T_tf and T_tk are the sets of types in the transfer- and task-language corpora, and S_tf and S_tk are their sets of subwords. The subwords are obtained by an unsupervised word segmentation algorithm (Sennrich et al., 2016; Kudo, 2018). Note that for EL, we do not consider subword overlap, and the word overlap is simply the count of the named entities that have exactly the same representation in both transfer and task languages. We also omit subword overlap in the POS and DEP tasks, as some low-resource languages do not have enough data for properly extracting subwords in the corpora used for training the POS and DEP models in our experiments.

# 3.2 Dataset-independent Features

Dataset-independent features are measures of the similarity between a pair of languages based on phylogenetic or typological properties established by linguistic study. Specifically, we leverage six different linguistic distances queried from the URIEL Typological Database (Littell et al., 2017):

Geographic distance (d_geo): The orthodromic distance between the languages on the surface of the earth, divided by the antipodal distance, based primarily on language location descriptions in Glottolog (Hammarström et al., 2018).
Genetic distance (dgen): The genealogical dis- tance of the languages, derived from the hypothe- sized tree of language descent in Glottolog. Inventory distance (dinv): The cosine distance between the phonological feature vectors derived from the PHOIBLE database (Moran et al., 2014), a collection of seven phonological databases. Syntactic distance (dsyn): The cosine distance between the feature vectors derived from the syn- tactic structures of the languages (Collins and Kayne, 2011), derived mostly from the WALS database (Dryer and Haspelmath, 2013). Phonological distance (dpho): The cosine dis- tance between the phonological feature vectors de- rived from the WALS and Ethnologue databases (Lewis, 2009). Featural distance (df ea): The cosine distance between feature vectors combining all 5 features mentioned above. # 4 Ranking Model Having defined our features, the next question is what type of ranking model to use and how to learn its parameters θ. As defined in §2, the problem is a standard learning-to-rank problem, so there are a myriad of possibilities for models and learning al- gorithms (Liu et al., 2009), and any of them would be equally applicable to our task. We opt to use the GBDT (Ke et al., 2017) model with LambdaRank as our training method (Burges, 2010). This method works by learning an en- semble of decision-tree-based learners using gra- dient boosting, and specifically in our setting here has two major advantages. First, its empirical performance – it is currently one of the state-of- the-art methods for ranking, especially in settings that have few features and limited data. Second, but perhaps more interesting, is its interpretabil- ity. Decision-tree based algorithms are relatively interpretable, as it is easy to visualize the learned tree structure. One of our research goals is to un- derstand what linguistic or statistical features of a dataset play important roles in transfer learning, so the interpretable nature of the tree-based model can provide valuable insights, which we elaborate further in §6.2. # 5 Experimental Settings # 5.1 Testbed Tasks We investigate the performance of LANGRANK on four common NLP tasks: machine translation, en- tity linking, POS tagging, and dependency pars- ing. We briefly outline the settings for all four NLP tasks, which are designed based on previous work on transferring between languages in these settings (Neubig and Hu, 2018; Rijhwani et al., 2019; Kim et al., 2017; Ahmad et al., 2019). Machine Translation We train a standard sequence-to-sequence model attention-based (Bahdanau et al., 2015), using the XNMT toolkit (Neubig et al., 2018). We perform training on the multilingual TED talk corpus of Qi et al. (2018), using 54 task and 54 transfer languages, always translating into English, which results in 2,862 task/transfer pairs and 54 single-source training settings. Transfer is performed by joint training over the concatenated task and transfer corpora, and subwords are learned over the concatenation of both corpora (Sennrich et al., 2016). Entity Linking The cross-lingual EL task in- volves linking a named entity mention in the task language to an English knowledge base. We train two character-level LSTM encoders, which are trained to maximize the cosine similarity be- linked) entities (Rijhwani tween parallel (i.e., et al., 2019). We use the same dataset as Rijh- wani et al. 
(2019), which contains language-linked Wikipedia article titles from 9 low-resource task languages and 53 potential transfer languages, resulting in 477 task/transfer pairs. We perform training in a zero-shot setting, where we train on corpora only in the transfer language, and test entity linking accuracy on the task language without joint training or fine-tuning.

POS Tagging We train a bi-directional LSTM-CNNs-CRF model (Ma and Hovy, 2016) on word sequences without using pre-trained word embeddings. The implementation is based on the NCRF++ toolkit (Yang and Zhang, 2018). We perform training on the Universal Dependencies v2.2 dataset (Nivre et al., 2018), using 26 languages that have the least training data as task languages, and 60 transfer languages, resulting in 1,545 pairs of transfer-task languages. (For each language, we choose the treebank that has the least number of training instances, which results in 60 languages with training data and 11 without training data.) Transfer is performed by joint training over the concatenated task and transfer corpora if the task language has training data, and training only with transfer corpora otherwise. The performance is measured by POS tagging accuracy on the task language.

Dependency Parsing For the dependency parsing task, we utilize a deep biaffine attentional graph-based model (Dozat and Manning, 2016). We select 30 languages from Universal Dependencies v2.2 (Nivre et al., 2018), resulting in 870 pairs of transfer-task languages. The selection largely follows the settings of Ahmad et al. (2019), but we exclude Japanese (ja) since we observe unstable results on it. For this task, transfer is performed in the zero-shot setting where no task-language annotations are available in training. We rely on multilingual embeddings which are mapped into the same space with the offline method of Smith et al. (2017) and directly apply the model trained on the transfer language to task languages. The performance is measured by LAS (Labeled Attachment Score) excluding punctuation.

# 5.2 Evaluation Protocol

We evaluate all our models on all NLP tasks with leave-one-out cross validation. For each cross-validation fold, we leave one language t^(tst) out of the N languages we have as the test set, and train our ranking model θ_{t^(tst)} using the remaining N − 1 languages, {t_1^(trn), ..., t_{N−1}^(trn)}, as the training set. During training, each t_i^(trn) is treated as the task language in turn, and the other N − 2 languages in the training set as transfer languages. We then test the learned model θ_{t^(tst)} by taking t^(tst) as the task language and {t_1^(trn), ..., t_{N−1}^(trn)} as the set of transfer languages, and predicting the ranking scores {r_{t^(tst), t_1^(trn)}, ..., r_{t^(tst), t_{N−1}^(trn)}}. We repeat this process with each of the N languages as the test language t^(tst), and collect N learned models.

We use Normalized Discounted Cumulative Gain (NDCG) (Järvelin and Kekäläinen, 2002) to evaluate the performance of the ranking model. The NDCG at position p is defined as

\mathrm{NDCG@}p = \frac{\mathrm{DCG@}p}{\mathrm{IDCG@}p},

where the Discounted Cumulative Gain (DCG) at position p is

\mathrm{DCG@}p = \sum_{i=1}^{p} \frac{\gamma_i}{\log_2(i + 1)}.

Here γ_i is the relevance of the language ranked at position i by the model being evaluated.
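The NDCG computation just defined is straightforward to implement; below is a minimal sketch using the γ_i / log2(i + 1) gain above.

```python
import math

def dcg_at_p(relevances, p):
    """DCG@p for relevance values listed in predicted-rank order (best first)."""
    return sum(rel / math.log2(i + 2)          # i is 0-based, hence log2(i + 2)
               for i, rel in enumerate(relevances[:p]))

def ndcg_at_p(relevances_in_predicted_order, p):
    """NDCG@p = DCG@p of the predicted ranking / DCG@p of the ideal ranking."""
    ideal = sorted(relevances_in_predicted_order, reverse=True)
    idcg = dcg_at_p(ideal, p)
    return dcg_at_p(relevances_in_predicted_order, p) / idcg if idcg > 0 else 0.0

# Example: gold relevances of the candidates, in the order the model ranked them.
print(ndcg_at_p([10, 0, 9, 8], p=3))
```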
We keep only the top-γmax transfer languages as our learning signal: the true best transfer language has γ = γmax, the second-best one has γ = γmax − 1, and so on down to γ = 1, with the remaining languages below the top-γmax ones all sharing γ = 0. The Ideal Discounted Cumulative Gain (IDCG) uses the same formula as DCG, except that it is calculated over the gold-standard ranking. When the predicted ranking matches the "true" ranking, NDCG is equal to 1.

# 5.3 Method Parameters and Baselines

We use GBDT to train our LANGRANK models. For each LANGRANK model, we train an ensemble of 100 decision trees, each with 16 leaves. We use the LightGBM implementation (Ke et al., 2017) of the LambdaRank algorithm in our training. In our experiments, we set γmax = 10, and evaluate the models by NDCG@3. The threshold of 3 was somewhat arbitrary, but based on our intuition that we would like to test whether LANGRANK can successfully recommend the best transfer language within a few tries, instead of testing its ability to accurately rank all available transfer languages. The results in Table 1 report the average NDCG@3 across all cross-validation folds. For LANGRANK (all) we include all available features in our models, while for LANGRANK (dataset) and LANGRANK (URIEL) we include only the subsets of dataset-dependent and dataset-independent features, respectively.

Table 1: Our LANGRANK model leads to higher average NDCG@3 over the baselines on all four tasks: machine translation (MT), entity linking (EL), part-of-speech tagging (POS) and dependency parsing (DEP).

Method                  | MT   | EL   | POS  | DEP
word overlap o_w        | 28.6 | 30.7 | 13.4 | 52.3
subword overlap o_sw    | 29.2 | –    | –    | –
size ratio s_tf/s_tk    | 3.7  | 0.3  | 9.5  | 24.8
type-token ratio d_ttr  | 2.5  | –    | 7.4  | 6.4
genetic d_gen           | 24.2 | 50.9 | 14.8 | 32.0
syntactic d_syn         | 14.8 | 46.4 | 4.1  | 22.9
featural d_fea          | 10.1 | 47.5 | 5.7  | 13.9
phonological d_pho      | 3.0  | 4.0  | 9.8  | 43.4
inventory d_inv         | 8.5  | 41.3 | 2.4  | 23.5
geographic d_geo        | 15.1 | 49.5 | 15.7 | 46.4
LANGRANK (all)          | 51.1 | 63.0 | 28.9 | 65.0
LANGRANK (dataset)      | 53.7 | 17.0 | 26.5 | 65.0
LANGRANK (URIEL)        | 32.6 | 58.1 | 16.6 | 59.6

We consider the following baseline methods:

• Using a single dataset-dependent feature: While dataset-dependent features have not typically been used as criteria for selecting transfer languages, they are a common feature in data selection methods for cross-domain transfer (Moore and Lewis, 2010). In view of this, we include selecting the transfer languages by sorting against each single one of o_w, o_sw, and s_tf/s_tk in descending order, and sorting against d_ttr in ascending order, as baseline methods.

• Using a single linguistic distance feature: A more common heuristic criterion for selecting the transfer languages is to choose ones that have a small phylogenetic distance to the task language (Dong et al., 2015; Cotterell and Heigold, 2017). We therefore include selecting the transfer languages by sorting against each single one of d_gen, d_syn, d_fea, d_pho, d_inv, and d_geo in ascending order as our baseline methods.
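For reference, the GBDT ranker described above can be set up with LightGBM's scikit-learn interface; the arrays below are random placeholders for the features, relevance labels, and per-task-language group sizes produced by the sweep of §2.

```python
import numpy as np
import lightgbm as lgb

# Placeholder training data: feature vectors, gamma relevance labels in [0, 10],
# and group sizes (number of candidate transfer languages per task language).
rng = np.random.default_rng(0)
n_tasks, n_candidates, n_features = 3, 53, 10
X_train = rng.normal(size=(n_tasks * n_candidates, n_features))
y_train = rng.integers(0, 11, size=n_tasks * n_candidates)
group_train = [n_candidates] * n_tasks

# Settings mirroring the paper: 100 trees with 16 leaves, LambdaRank objective.
ranker = lgb.LGBMRanker(objective="lambdarank", n_estimators=100, num_leaves=16)
ranker.fit(X_train, y_train, group=group_train)

# At test time, score every candidate transfer language for one task language
# and recommend the top 3 (the setting evaluated with NDCG@3).
X_test = rng.normal(size=(n_candidates, n_features))
scores = ranker.predict(X_test)
print("recommended candidate indices:", np.argsort(-scores)[:3])
```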
# 6 Results and Analysis # 6.1 Main Results The performance of predicting transfer languages for the four NLP tasks using single-feature base- 1.00 el open cc no a ———— - ’ 0.95 f - i} MT LangRank —#— EL LangRank —® POS LangRank + DEP LangRank -™- MT Subword Overlap “k- EL Genetic -@- POS Geographic -%- DEP Word Overlap 0.90 a y - ; / Max evaluation score 0.85 0.80 1 2 3 4 5 6 7 8 9 10 K, number of recommended transfer languages Figure 2: The best evaluation score (BLEU for MT, accuracy for EL and POS, and LAS for DEP) attain- able by trying out the top K transfer languages rec- ommended by the LANGRANK models and the single- feature baselines. lines and LANGRANK is shown in Table 1. First, using LANGRANK with either all features or a subset of the features leads to substantially higher NDCG than using single-feature heuristics. Al- though some single-feature baselines manage to the pre- achieve high NDCG for some tasks, dictions of LANGRANK consistently surpass the baselines on all tasks. In fact, for the MT and POS tagging tasks, the ranking quality of the best LAN- GRANK model is almost double that of the best single-feature baseline. Furthermore, using dataset-dependent features on top of the linguistic distance ones enhances the quality of the LANGRANK predictions. The best results for EL and POS tagging are obtained us- ing all features, while for MT the best model is the one using dataset-only features. The best per- formance on DEP parsing is achieved with both settings. LANGRANK with only dataset features outperforms the linguistics-only LANGRANK on It is, however, the MT and POS tagging tasks. severely lacking in the EL task, likely because EL datasets lack most dataset features as discussed in the previous section; the EL data only consists of pairs of corresponding entities and not complete sentences as in the case of the other tasks’ datasets. In addition, it is important to note that LAN- GRANK with only linguistic database informa- tion still outperforms all heuristic baselines on all tasks. This means that our model is potentially useful even before any resources for the language and task of interest have been collected, and could inform the data creation process. Finally, from a potential user’s point of view, Task Lang LANG RANK Best Dataset Best URIEL True Best MT aze tur (1) fas (3) hun (4) ow tur (1) hrv (5) ron (31) df ea ara (32) fas (3) sqi (22) tur (1) kor (2) fas (3) MT ben hun (1) tur (2) fas (4) dgeo ow mya (30) vie (3) ita (20) hin (27) por (18) mar (41) hun (1) tur (2) vie (3) EL tel amh (6) orm (40) msa (7) ow amh (6) swa (32) jav (9) dinv pan (2) hin (1) ben (5) hin (1) pan (2) mar (3) Table 2: Examples of predicted top-3 transfer lan- guages (and true ranks). The languages are denoted by the ISO 639-2 Language Codes. The first two task languages (aze, ben) are on the MT task, and the last one (tel) is on the EL task. a practical question is: If we train models on the top K transfer languages suggested by the rank- ing model and pick the best one, how good is the best model expected to be? If a user could obtain a good transfer model by trying out only a small number of transfer languages as suggested by our ranking model, the overhead of searching for a good transfer language is immensely reduced. Figure 2 compares the BLEU score (for MT), accuracy (for EL and POS) and LAS (for DEP) of the best transfer model attainable by using one of the top K transfer languages recommended by LANGRANK (all) and by the best single feature baseline. 
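The quantity plotted in Figure 2, namely the best evaluation score attainable from the top K recommendations relative to the score of the true best transfer language, can be computed as follows; the scores and ranking below are illustrative placeholders.

```python
def best_of_top_k_ratio(scores, ranking, k):
    """scores: dict mapping transfer language -> true evaluation score c_{t,a}.
    ranking: transfer languages sorted by predicted ranking score (best first).
    Returns the best score among the top-k recommendations divided by the
    score of the ground-truth best transfer language."""
    best_true = max(scores.values())
    best_top_k = max(scores[a] for a in ranking[:k])
    return best_top_k / best_true

# Illustrative example where the true best language is recommended second.
scores = {"tur": 12.4, "fas": 10.1, "hun": 9.8, "ara": 3.2}
ranking = ["fas", "tur", "hun", "ara"]
print([round(best_of_top_k_ratio(scores, ranking, k), 3) for k in (1, 2, 3)])
```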
We plot the ratio of the best score to that of the ground-truth best transfer model ct,a∗ t , aver- aged over all task languages. On the MT task, the best transfer models obtained by the suggestions of our LANGRANK (all) model constantly outper- forms the models obtained from the best baseline. On the POS tagging task, the best transfer models obtained by our ranking model are generally com- parable to those using baseline suggestions. We note that in the EL task, after looking be- yond the top 3 LANGRANK predictions, the best baseline models on average seem to give more rel- evant transfer language suggestions than our LAN- GRANK models. However, this is a case where av- eraging is possibly misleading. In fact, the LAN- GRANK model manages to select the correct top-1 language for 7 of the 9 task languages. The other two languages (Telugu and Uyghur) do not have any typologically similar languages in the small 0.00 0.05 0.10 0.15 0.20 0.25 Normalized Importance Figure 3: Normalized feature importance for the MT, EL, POS and DEP tasks. training set, and hence the learned model fails to generalize to these languages. In Table 2 we include a few representative ex- amples of the top-3 transfer languages selected by LANGRANK and the baselines.3 In the first case (aze) LANGRANK outperforms the already strong baselines by being able to consider both instead of con- dataset and linguistic features, sidering them in isolation. In the second case (ben) where no baselines provide useful recom- mendations, LANGRANK still displays good per- interestingly Turkish and Hungarian formance; proved good transfer languages for a large num- ber of task languages (perhaps to large data size and difficulty as tasks), and LANGRANK was able to learn to fall back to these when it found no good typological or dataset-driven matches otherwise – behavior that would have be inconceivable with- out empirical discovery of transfer languages. The final failure case (tel), as noted above, can be at- tributed to overfitting the small EL dataset, and may be remedied by either creating larger data or training LANGRANK jointly over multiple tasks. # 6.2 Towards Better Educated Guesses for Choosing Transfer Languages Our transfer language rankers are trained on a few languages for the particular tasks. It is possible that our models will not generalize well on a dif- ferent set of languages or on other NLP tasks. However, generating training data for ranking with exhaustive transfer experiments on a new task or set of languages will not always be feasible. It could, therefore, be valuable to analyze the learned models and extract “rules of thumb” that can be 3Detailed results are in the supplementary material. used as educated guesses in choosing transfer lan- guages. They might still be ad-hoc, but they may prove superior to the intuition-based heuristic ap- proaches used in previous work. To elucidate how LANGRANK determines the best transfer lan- guages for each task, Figure 3 shows the feature importance for each of the NLP tasks. The fea- ture importance is defined as the number of times a feature is chosen to be the splitting feature in a node of the decision trees. For the MT task, we find that dataset statis- tics features are more influential than the linguis- tic features, especially the dataset size ratio and the word overlap. 
This indicates that a good trans- fer language for machine translation depends more on the dataset size of the transfer language cor- pus and its word and subword overlap with the task language corpus. This is confirmed by re- sults of the LANGRANK (dataset) model in Table 1, which achieves the best performance by only using the subset of dataset statistics features. At the same time, we note that the dataset size ra- tio and TTR distance, although of high importance among all features, when used alone result in very poor performance. This phenomenon may be un- derstood by looking at an example of a small de- cision tree in Figure 4: a genetic distance of less than 0.4 would produce a high ranking regardless of dataset size. The dataset feature in this tree pro- vides a smaller gain than two typological features, although it still informs the decision. For POS tagging, the two most important fea- tures are dataset size and the TTR distance. On the other hand, the lack of rich dataset-dependent fea- tures for the EL task leads to the geographic and syntactic distance being most influential. There are several relatively important features for the DEP parsing task, with geographic and genetic distance standing out, as well as word overlap. These are features that also yield good scores on their own (see Table 1) but LANGRANK is able to combine them and achieve even better results. # 7 Related Work Cross-lingual transfer has been extensively used in several NLP tasks. In Section 1, we provided a (non-exhaustive) list of examples that employ cross-lingual transfer across several tasks. Other work has performed large-scale studies on the im- portance of appropriately selecting a transfer lan- guage, such as Paul et al. (2009), which performed yes dgen ≤ 0.43 output: 0 no yes dsyn > 0.56 output: 2 no stf stk > 1.61 yes output: 3 no output: 1 Figure 4: An example of the decision tree learned in the machine translation task for Galician as task language. an extensive search for a “pivot language” in sta- tistical MT, but without attempting to actually learn or predict which pivot language is best. Typologically-informed models are another vein of research that is relevant to our work. The relationship between linguistic typology and sta- tistical modeling has been studied by Gerz et al. (2018) and Cotterell et al. (2018), with a focus on language modeling. Tsvetkov et al. (2016b) used typological information in the target lan- guage as additional input to their model for pho- netic representation learning. Ammar et al. (2016) and Ahmad et al. (2019) used similar ideas for dependency parsing, incorporating linguistically- informed vectors into their models. O’Horan et al. (2016) survey typological resources available and their utility in NLP tasks. Although not for cross-lingual transfer, there has been prior work on data selection for train- ing models. Tsvetkov et al. (2016a) and Ruder and Plank (2017) use Bayesian optimization for data selection. van der Wees et al. (2017) study the ef- fect of data selection of neural machine transla- tion, as well as propose a dynamic method to se- lect relevant training data that improves translation performance. Plank and van Noord (2011) design a method to automatically select domain-relevant training data for parsing in English and Dutch. # 8 Conclusion We formulate the task of selecting the optimal transfer languages for an NLP task as a rank- ing problem. 
For machine translation, entity linking, part-of-speech tagging, and dependency parsing, we train ranking models to predict the most promising transfer languages to use given a task language. We show that by taking multi- ple dataset statistics and language attributes into consideration, the learned ranking models recom- mend much better transfer languages than the ones suggested by considering only single language or dataset features. Through analyzing the learned ranking models, we also gain some insights on the types of features that are most influential in select- ing transfer languages for each of the NLP tasks, which may inform future ad hoc selection even without using our method. # Acknowledgments This project was supported in part by NSF Award No. 1761548 “Discovering and Demon- strating Linguistic Features for Language Docu- mentation,” and the Defense Advanced Research Projects Agency Information Innovation Office (I2O) Low Resource Languages for Emergent In- cidents (LORELEI) program under Contract No. HR0011-15-C0114. The views and conclusions contained in this doc- ument are those of the au- thors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copy- right notation here on. # References Wasi Uddin Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency pars- ing. In Proceedings of NAACL. Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016. Many lan- guages, one parser. Transactions of the Association for Computational Linguistics, 4:431–444. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Neural machine translation by ICLR 2015 Bengio. 2015. jointly learning to align and translate. (arXiv:1409.0473). Chris J.C. Burges. 2010. From RankNet to Lamb- daRank to LambdaMART: An overview. Technical report, Microsoft Research. Chris Collins and Richard Kayne. 2011. Syntactic structures of the world’s languages. Ryan Cotterell and Georg Heigold. 2017. Cross- tag- lingual character-level neural morphological In Proceedings of the 2017 Conference on ging. Empirical Methods in Natural Language Process- ing, pages 748–759, Copenhagen, Denmark. Asso- ciation for Computational Linguistics. Ryan Cotterell, Sebastian J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard In Proceedings of the 2018 to language-model? Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536–541. Association for Computational Lin- guistics. Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for mul- In Proceedings of the tiple language translation. 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 1723–1732, Beijing, China. Association for Computational Linguistics. Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency pars- ing. arXiv preprint arXiv:1611.01734. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evo- lutionary Anthropology, Leipzig. 
Daniela Gerz, Ivan Vuli´c, Edoardo Maria Ponti, Roi Reichart, and Anna Korhonen. 2018. On the rela- tion between linguistic typology and (limitations of) multilingual language modeling. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 316–327. Associ- ation for Computational Linguistics. Harald Hammarstr¨om, Robert Forkel, and Martin Haspelmath. 2018. Glottolog 3.3. Max Planck In- stitute for the Science of Human History. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cu- mulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Melvin Johnson, Mike Schuster, Quoc Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernand a Vigas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google’s multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339–351. Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. 2017. Lightgbm: A highly efficient gradient boosting decision tree. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 30, pages 3146–3154. Curran Associates, Inc. Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual re- sources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2832–2838, Copenhagen, Denmark. As- sociation for Computational Linguistics. Taku Kudo. 2018. Subword regularization: Improv- ing neural network translation models with multiple subword candidates. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75. Association for Computational Linguistics. Paul M Lewis. 2009. Ethnologue: Languages of the World Sixteenth edition. Dallas, Texas: SIL Interna- tional. Patrick Littell, David R Mortensen, Ke Lin, Kather- ine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as ty- pological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 8–14. Tie-Yan Liu et al. 2009. Learning to rank for informa- tion retrieval. Foundations and Trends®) in Infor- mation Retrieval, 3(3):225-331. Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1064–1074. Chaitanya Malaviya, Matthew R. Gormley, and Gra- ham Neubig. 2018. Neural factor graph models for In Proceed- cross-lingual morphological tagging. ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 2653–2663, Melbourne, Australia. As- sociation for Computational Linguistics. Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Pro- cessing, pages 2536–2545. Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. 
In Pro- ceedings of the ACL 2010 Conference Short Papers, pages 220–224, Uppsala, Sweden. Association for Computational Linguistics. Steven Moran, Daniel McCloy, and Richard Wright, editors. 2014. PHOIBLE Online. Max Planck In- stitute for Evolutionary Anthropology, Leipzig. Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), Brussels, Belgium. Graham Neubig, Matthias Sperber, Xinyi Wang, Matthieu Felix, Austin Matthews, Sarguna Pad- manabhan, Ye Qi, Devendra Singh Sachan, Philip Arthur, Pierre Godard, John Hewitt, Rachid Riad, and Liming Wang. 2018. XNMT: The extensible In Conference neural machine translation toolkit. of the Association for Machine Translation in the Americas (AMTA) Open Source Software Showcase, Boston. Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proc. IJCNLP, vol- ume 2, pages 296–301. ˇZeljko Agi´c, and et al. 2018. Universal dependencies 2.2. LIN- DAT/CLARIN digital library at the Institute of For- mal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Helen O’Horan, Yevgeni Berzak, Ivan Vulic, Roi Re- ichart, and Anna Korhonen. 2016. Survey on the use of typological information in natural language In Proceedings of COLING 2016, the processing. 26th International Conference on Computational Linguistics: Technical Papers, pages 1297–1308, Osaka, Japan. The COLING 2016 Organizing Com- mittee. Michael Paul, Hirofumi Yamamoto, Eiichiro Sumita, and Satoshi Nakamura. 2009. On the impor- tance of pivot language selection for statistical ma- In Proceedings of Human Lan- chine translation. guage Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 221–224. Association for Com- putational Linguistics. Barbara Plank and ˇZeljko Agi´c. 2018. Distant super- vision from disparate sources for low-resource part- of-speech tagging. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 614–620. Association for Com- putational Linguistics. Barbara Plank and Gertjan van Noord. 2011. Effective measures of domain similarity for parsing. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 1566–1576, Portland, Oregon, USA. Association for Computational Lin- guistics. Edoardo Maria Ponti, Roi Reichart, Anna Korhonen, Isomorphic transfer of syn- and Ivan Vuli. 2018. In Proceed- tactic structures in cross-lingual nlp. ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1531–1542, Melbourne, Australia. As- sociation for Computational Linguistics. Ye Qi, Devendra Sachan, Matthieu Felix, Sarguna Pad- manabhan, and Graham Neubig. 2018. When and why are pre-trained word embeddings useful for neural machine translation? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 529–535. Association for Computa- tional Linguistics. Brian Richards. 1987. Type/token ratios: What do they really tell us? Journal of child language, 14(2):201– 209. Shruti Rijhwani, Jiateng Xie, Graham Neubig, and Jaime Carbonell. 2019. 
Zero-shot neural transfer for In Thirty-Third AAAI cross-lingual entity linking. Conference on Artificial Intelligence (AAAI), Hon- olulu, Hawaii. Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian Opti- mization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 372–382, Copenhagen, Denmark. Asso- ciation for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Association for Computational Linguistics. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1–12. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszko- reit. 2012. Cross-lingual word clusters for direct In Proceedings of transfer of linguistic structure. the 2012 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 477– 487, Montr´eal, Canada. Association for Computa- tional Linguistics. Chen-Tse Tsai and Dan Roth. 2016. Cross-lingual wik- In Pro- ification using multilingual embeddings. ceedings of the 2016 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 589–598, San Diego, California. Association for Computational Linguistics. Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 2016a. Learning the Curriculum with Bayesian Optimization for Task- In Pro- Specific Word Representation Learning. ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 130–139, Berlin, Germany. Associa- tion for Computational Linguistics. Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, David Guillaume Lample, and Mortensen, Alan W Black, Lori Levin, Chris Dyer. 2016b. language models: A case study in cross-lingual phonetic representation learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1357–1366, San Diego, California. Association for Computational Linguistics. Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2017. Dynamic Data Selection for Neural In Proceedings of the 2017 Machine Translation. Conference on Empirical Methods in Natural Lan- guage Processing, pages 1400–1410, Copenhagen, Denmark. Association for Computational Linguis- tics. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural cross- lingual named entity recognition with minimal re- sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 369–379. Association for Computational Linguistics. Jie Yang and Yue Zhang. 2018. Ncrf++: An open- source neural sequence labeling toolkit. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics. Dongxu Zhang, Boliang Zhang, Xiaoman Pan, Xi- aocheng Feng, Heng Ji, and Weiran XU. 2016. 
Bi- text name tagging for cross-lingual entity annota- In Proceedings of COLING 2016, tion projection. the 26th International Conference on Computational Linguistics: Technical Papers, pages 461–470, Os- aka, Japan. The COLING 2016 Organizing Commit- tee. Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource In Proceedings of the neural machine translation. 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1568–1575, Austin, Texas. Association for Computational Linguistics.
{ "id": "1611.01734" }
1905.12616
Defending Against Neural Fake News
Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology also might enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news. Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary's point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like `Link Found Between Vaccines and Autism,' Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation. Developing robust verification techniques against generators like Grover is critical. We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. We investigate these results further, showing that exposure bias -- and sampling strategies that alleviate its effects -- both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news.
http://arxiv.org/pdf/1905.12616
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, Yejin Choi
cs.CL, cs.CY
NeurIPS 2019 camera ready version. Project page/code/demo at https://rowanzellers.com/grover
null
cs.CL
20190529
20201211
0 2 0 2 c e D 1 1 ] L C . s c [ 3 v 6 1 6 2 1 . 5 0 9 1 : v i X r a # Defending Against Neural Fake News Rowan Zellers♠, Ari Holtzman♠, Hannah Rashkin♠, Yonatan Bisk♠ Ali Farhadi♠♥, Franziska Roesner♠, Yejin Choi♠♥ ♠Paul G. Allen School of Computer Science & Engineering, University of Washington ♥Allen Institute for Artificial Intelligence https://rowanzellers.com/grover # Abstract Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying tech- nology also might enable adversaries to generate neural fake news: targeted propa- ganda that closely mimics the style of real news. Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary’s point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like ‘Link Found Between Vaccines and Autism,’ Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation. Developing robust verification techniques against generators like Grover is critical. We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. We investigate these results further, showing that exposure bias – and sampling strategies that alleviate its effects – both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news. # 1 Introduction Online fake news – news designed to intentionally deceive – has recently emerged as a major societal problem. Malicious actors spread fallacious viral stories in order to gain advertising revenue, influence opinions, and even tip elections (Faris et al., 2017; Wardle and Derakhshan, 2017). As such, countering the spread of disinformation online presents an urgent technical and political issue. To the best of our knowledge, most disinformation online today is manually written (Vargo et al., 2018). However, as progress continues in natural language generation, malicious actors will increasingly be 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. = Q science The Newest Dark Cimes News Verification Link Found Between Vaccines and Autism o™ By Paul Waldman May 29, 2019 Those who have been vaccinated against measles have a more than 5-fold higher chance of developing autism, researchers at the University of California San Diego School of Medicine and the Centers for Disease Control and Prevention report today in the Journal of Epidemiology and Community Health. (continued) Fake News Generation Figure 1: In this paper, we explore Grover, a model which can detect and generate neural fake news. Humans find the articles difficult to distinguish from “real news” without high levels of scrutiny. able to controllably generate realistic-looking propaganda at scale. 
Thus, while we are excited about recent progress in text generation (Józefowicz et al., 2016; Radford et al., 2018; 2019), we are also concerned with the inevitability of AI-generated ‘neural’ fake news.1 With this paper, we seek to understand and respond to neural fake news before it manifests at scale. We draw on the field of computer security, which relies on threat modeling: analyzing the space of potential threats and vulnerabilities in a system to develop robust defenses. To scientifically study the risks of neural disinformation, we present a new generative model called Grover.2 Our model allows for controllable yet efficient generation of an entire news article – not just the body, but also the title, news source, publication date, and author list. This lets us study an adversary with controllable generations (e.g. Figure 1, an example anti-vaccine article written in the style of the New York Times). Humans rate the disinformation generated by Grover as trustworthy, even more so than human- written disinformation. Thus, developing robust verification techniques against generators such as Grover is an important research area. We consider a setting in which a discriminator has access to 5000 Grover generations, but unlimited access to real news. In this setting, the best existing fake news discriminators are, themselves, deep pretrained language models (73% accuracy) (Peters et al., 2018; Radford et al., 2018; 2019; Devlin et al., 2018). However, we find that Grover, when used in a discriminative setting, performs even better at 92% accuracy. This finding represents an exciting opportunity for defense against neural fake news: the best models for generating neural disinformation are also the best models at detecting it. Next, we investigate how deep pretrained language models distinguish between real and machine- generated text. We find that key artifacts are introduced during generation as a result of exposure bias: the generator is not perfect, so randomly sampling from its distribution results in generations that fall increasingly out-of-distribution as length increases. However, sampling strategies that alleviate these effects also introduce artifacts that strong discriminators can pick up on. We conclude with a sketch of the ethical territory that must be mapped out in order to understand our responsibilities as researchers when studying fake news, and the potential negative implications of releasing models (Hecht et al., 2018; Zellers, 2019; Solaiman et al., 2019). Accordingly, we suggest a provisional policy of how such models should be released and why we believe it to be safe – and perhaps even imperative – to do so. We believe our proposed framework and accompanying models provide a concrete initial proposal for an evolving conversation about ML-based disinformation threats and how they can be countered. # 2 Fake News in a Neural and Adversarial Setting We present a framework – motivated by today’s dynamics of manually created fake news – for understanding what adversaries will attempt with deep models, and how verifiers should respond. Scope of fake news. There are many types of false news, ranging from satire to propaganda (Wardle, 2017). In this paper, we focus on text-only documents formatted as news articles: stories and their corresponding metadata that contain purposefully false information. 
Existing fake news is predominantly human-written, for two broad goals: monetization (ad revenue through clicks) and propaganda (communicating targeted information) (Bradshaw and Howard, 2017; Melford and Fagan, 2019). Achieving either goal requires the adversary to be selective about the news that they make, whether by producing only viral content, or content that advances a given agenda. Fact checking and verification: related work. There is considerable interest in fighting online disinformation. Major platforms such as Facebook prioritize trustworthy sources and shut down accounts linked to disinformation (Mosseri, 2018; Dwoskin and Romm, 2018). Some users of these platforms avoid fake news with tools such as NewsGuard and Hoaxy (Shao et al., 2016) and websites like Snopes and PolitiFact. These services rely on manual fact-checking efforts: verifying the accuracy of claims, articles, and entire websites. Efforts to automate fake news detection generally point out stylistic biases that exist in the text (Rashkin et al., 2017; Wang, 2017; Pérez-Rosas et al., 1We thank past work, such as OpenAI’s Staged Release Policy for GPT2 for drawing attention to neural disinformation, alongside other dual-use implications. 2Short for Generating aRticles by Only Viewing mEtadata Records. 2 2018). These efforts can help moderators on social media platforms shut down suspicious accounts. However, fact checking is not a panacea – cognitive biases such as the backfire effect and confirmation bias make humans liable to believe fake news that fits their worldview (Swire et al., 2017). Framework. We cast fake news generation and detection as an adversarial game, with two players: • Adversary. Their goal is to generate fake stories that match specified attributes: generally, being viral or persuasive. The stories must read realistically to both human users as well as the verifier. • Verifier. Their goal is to classify news stories as real or fake. The verifier has access to unlimited real news stories, but few fake news stories from a specific adversary. This setup matches the existing landscape: when a platform blocks an account or website, their disinformative stories provide training for the verifier; but it is difficult to collect fake news from newly-created accounts. The dual objectives of these two players suggest an escalating “arms race” between attackers and defenders. As verification systems get better, so too will adversaries. We must therefore be prepared to deal with ever-stronger adversarial attacks, which is the focus of the next section. # 3 Grover: Modeling Conditional Generation of Neural Fake News Given existing online disinformation, we have reason to believe adversaries will try to generate targeted content (e.g. clickbait and propaganda). Recently introduced large-scale generative models produce realistic-looking text (Radford et al., 2019), but they do not lend themselves to producing controllable generations (Hu et al., 2017).3 Therefore, to probe the feasibility of realistic-looking neural fake news, we introduce Grover, which produces both realistic and controlled generations. The current state-of-the-art in unconditional text generation views it as a language modeling problem (Bengio et al., 2003), in which the probability of a document x is the product of the conditional probability of generating each token xi given previous tokens: # Nź ppxq “ ppxi|x1 . . . xi´1q. 
(1) i“1 The document is typically treated as a single unstructured text field, beginning with a <start> token and ending with an <end> token. The latter, <end>, is particularly important because it indicates the end of the field, and when to should stop generating. However, a news article has necessary structure beyond the running text, or body field. Metadata fields include the domain where the article is published (indirectly marking the style), the date of publication, the names of the authors, and the headline of the article itself. Not only does generating a news article require producing all of these components, these fields also allow significant control over the generations (e.g. specifying a headline helps control the generated body). An article can be modeled by the joint distribution: ppdomain, date, authors, headline, bodyq. (2) However, it is not immediately obvious how to sample from Equation 2. One option is to define a canonical order among the article’s fields F : ( f1ă f2ă. . .ă f|F |), and model the article left-to-right in that order using Equation 1: x f1 . However, this ordering would forbid sampling certain fields without prohibitively expensive marginalization. Alternatively, one could generate fields in any order, but this requires the model to learn to handle |F |! potential orderings during inference time. Our solution is Grover, a new approach for efficient learning and generation of multi-field docu- ments. We adopt the language modeling framework of Equation 1 in a way that allows for flexible decomposition of Equation 2. During inference time, we start with a set of fields F as context, with each field f containing field-specific start and end tokens. We sort the fields using a standard order4 and combine the resulting tokens together. To generate a target field τ, we append the field-specific start token <start´τ> to the context tokens; then, we sample from the model until we hit <end´τ>. 3A common workaround is to have a human seed the text to provide context. However, this a) is a heavy handed technique for biasing which may not capture the desired attributes, and b) leaves in place a human-written beginning (as tokens are only generated left-to-right), which may create distributional artifacts. 4Our ordering is the following field types in order: domain, date, authors, headline, and then the body. 3 Context domain date authors headline Target body z "New Research Shows that few research from the University of California, ) a ee | Vacces Cate Aslam | —*|_ Sree etc mes domain date headline body authors NEE) b . New Research Shows that iniversity of California, Davis, . “ ) ( wired.com inal May 29, 2019 r ‘Vaccines Cause Autism > [University of California, Davis, Justin Furillo domain date authors body headline : a New research from the Vaccines Might Be a Bigger Threat to c) | wired.com + May 29, 2019 laa Justin Furillo posse oF Caria, Davi —>| Jour childs Future Than You Reelized Figure 2: A diagram of three Grover examples for article generation. In row a), the body is generated from partial context (the authors field is missing). In b), the model generates the authors. In c), the model uses the new generations to regenerate the provided headline to one that is more realistic. Figure 2 shows an example of using Grover to generate an anti-vaccine article. Here, the adversary specifies a domain, date, and headline. After Grover generates the body, it can be used to generate a fake author, before finally generating a new and more appropriate headline. 
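In code, the decoding scheme just described is straightforward. The sketch below is illustrative only: `sample_next_token` stands in for any left-to-right language model trained under Equation 1 (in clean form, p(x) = ∏_{i=1}^{N} p(x_i | x_1, ..., x_{i-1})), and the token-string interface is an assumption rather than the authors' released API.

```python
# Minimal sketch of Grover-style field-conditional decoding (not the authors' code).
FIELD_ORDER = ["domain", "date", "authors", "headline", "body"]  # standard order from the paper

def start_tok(field):
    return f"<start-{field}>"

def end_tok(field):
    return f"<end-{field}>"

def generate_field(sample_next_token, context_fields, target_field, max_len=1024):
    """Generate one target field conditioned on whichever metadata fields are provided.

    context_fields: dict mapping field name -> list of tokens (any subset of FIELD_ORDER).
    Returns the sampled tokens for target_field, without its start/end markers.
    """
    # 1) Lay out the provided context fields in the standard order, each wrapped
    #    in its own field-specific start/end tokens.
    prefix = []
    for field in FIELD_ORDER:
        if field in context_fields and field != target_field:
            prefix += [start_tok(field)] + list(context_fields[field]) + [end_tok(field)]

    # 2) Append the target field's start token, then sample until its end token appears.
    prefix.append(start_tok(target_field))
    generated = []
    for _ in range(max_len):
        tok = sample_next_token(prefix + generated)
        if tok == end_tok(target_field):
            break
        generated.append(tok)
    return generated

# Usage following Figure 2: sample the body from partial metadata, then an author
# list, then re-generate a headline that better matches the sampled body.
# article["body"]     = generate_field(lm, article, "body")
# article["authors"]  = generate_field(lm, article, "authors")
# article["headline"] = generate_field(lm, article, "headline")
```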
During training, we simulate inference by randomly partitioning an article’s fields into two disjoint sets F1 and F2. We also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%. This allows the model to learn how to perform unconditional generation. We sort the metadata fields in each set using our standard order, and concatenate the underlying tokens. The model is then trained to minimize the cross-entropy of predicting the tokens in F1 followed by the tokens in F2.5 Architecture. We draw on recent progress in training large Transformers for language modeling (Vaswani et al., 2017), building Grover using the same architecture as for GPT2 (Radford et al., 2019). We consider three model sizes. Our smallest model, Grover-Base, has 12 layers and 124 million parameters, on par with GPT and BERT-Base (Radford et al., 2018; Devlin et al., 2018). Our next model, Grover-Large, has 24 layers and 355 million parameters, on par with BERT-Large. Our largest model, Grover-Mega, has 48 layers and 1.5 billion parameters, on par with GPT2. Dataset. We present RealNews, a large corpus of news articles from Common Crawl. Training Grover requires a large corpus of news articles with metadata, but none currently exists. Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News. We used the Newspaper Python library to extract the body and meta- data from each article. News from Common Crawl dumps from December 2016 through March 2019 were used as training data; articles published in April 2019 from the April 2019 dump were used for evaluation. After deduplication, RealNews is 120 gigabytes without compression. Learning. We trained each Grover model on randomly-sampled sequences from RealNews with length 1024. Other optimization hyperparameters are in Appendix A. We trained Grover-Mega for 800k iterations, using a batch size of 512 and 256 TPU v3 cores. Training time was two weeks. # 3.1 Language Modeling results: measuring the importance of data, context, and size We validate Grover, versus standard unconditional language models, on the April 2019 test set. We consider two evaluation modes: unconditional, where no context is provided and the model must generate the article body; and conditional, in which the full metadata is provided as context. In both cases, we calculate the perplexity only over the article body. Our results, shown in Figure 3, show several conclusions. First, Grover noticeably improves (between .6 to .9 perplexity points) when conditioned on metadata. Second, perplexity decreases with size, with Grover-Mega obtaining 8.7 perplexity in the conditional setting. Third, the data distribution is still important: though the GPT2 models with 124M parameters and 355M parameters respectively match our Grover-Base and Grover-Large architectures, our model is over 5 perplexity points lower in both cases, possibly because the OpenAI WebText corpus also contains non-news articles. 5All tokens use the same vocabulary. By using a standard order, but partitioning the fields into two sets, the model can generate any field conditioned on others while only needing to learn 2|F | orderings, versus |F |!. 
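To make the training-time construction concrete, the following sketch assembles one training sequence from an article's fields: all metadata is dropped 35% of the time, individual fields are dropped 10% of the time, and the surviving fields are partitioned into F1 and F2, each sorted in the standard order. Edge-case choices here (for example, that the body itself is never dropped) are assumptions, not details taken from the released implementation.

```python
import random

FIELD_ORDER = ["domain", "date", "authors", "headline", "body"]

def wrap(field, tokens):
    return [f"<start-{field}>"] + list(tokens) + [f"<end-{field}>"]

def build_training_sequence(article, rng=random):
    """Hypothetical construction of one Grover training sequence (a sketch)."""
    fields = {f: t for f, t in article.items() if f in FIELD_ORDER}

    if rng.random() < 0.35:
        # Keep only the body: teaches unconditional article generation.
        kept = {f: t for f, t in fields.items() if f == "body"}
    else:
        # Drop each metadata field independently with probability 10%
        # (assumed here not to apply to the body).
        kept = {f: t for f, t in fields.items() if f == "body" or rng.random() >= 0.10}

    # Randomly partition the kept fields into two disjoint sets F1 and F2.
    names = list(kept)
    rng.shuffle(names)
    cut = rng.randint(0, len(names))
    group_f1, group_f2 = names[:cut], names[cut:]

    # Sort each set by the standard order and concatenate; training minimizes the
    # cross-entropy of predicting every token of this sequence left to right.
    sequence = []
    for group in (group_f1, group_f2):
        for field in FIELD_ORDER:
            if field in group:
                sequence += wrap(field, kept[field])
    return sequence
```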
4 mmm Grover-Base (124M) mmm Grover-Large (355M) mmm Grover-Mega (1.5B) mms GPT2 (124M) | mms GPT2 (355M) Unconditional Perplexity a 5 Conditional 3.0 - (best) 1.5- Mmm Style lm Content mm Overall 10 - = (worst) Human Machine Human Machine News News # Propaganda # Propaganda Figure 3: Language Modeling results on the body field of April 2019 articles. We evaluate in the Unconditional setting (without provided metadata) as well as in the Conditional setting (with all metadata). Grover sees over a 0.6 point drop in perplexity when given metadata. Figure 4: Human evaluation. For each article, three annotators evaluated style, content, and the overall trustworthiness; 100 articles of each category were used. The results show that propa- ganda generated by Grover is rated more plausi- ble than the original human-written propaganda. # 3.2 Carefully restricting the variance of generations with Nucleus Sampling Sampling from Grover is straightforward as it behaves like a left-to-right language model during decoding. However, the choice of decoding algorithm is important. While likelihood-maximization strategies such as beam search work well for closed-ended generation tasks where the output contains the same information as the context (like machine translation), these approaches have been shown to produce degenerate text during open-ended generation (Hashimoto et al., 2019; Holtzman et al., 2019). However, as we will show in Section 6, restricting the variance of generations is also crucial. In this paper, we primarily use Nucleus Sampling (top-p): for a given threshold p, at each timestep we sample from the most probable words whose cumulative probability comprises the top-p% of the entire vocabulary (Holtzman et al., 2019).6 # 4 Humans are Easily Fooled by Grover-written Propaganda We evaluate the quality of disinformation generated by our largest model, Grover-Mega, using p“.96. We consider four classes of articles: human-written articles from reputable news websites (Human News), Grover-written articles conditioned on the same metadata (Machine News), human-written arti- cles from known propaganda websites (Human Propaganda), and Grover-written articles conditioned on the propaganda metadata (Machine Propaganda).7 The domains used are in Appendix B; examples are in Appendix F. We asked a pool of qualified workers on Amazon Mechanical Turk to rate each article on three dimensions: stylistic consistency, content sensibility, and overall trustworthiness.8 Results (Figure 4) show a striking trend: though the quality of Grover-written news is not as high as human-written news, it is adept at rewriting propaganda. The overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by Grover.9 6In early experiments, we found Nucleus Sampling produced better and less-detectable generations than alternatives like top-k sampling, wherein the most probable k tokens are used at each timestep (Fan et al., 2018). 7We use the technique described in Figure 2 to rewrite the propaganda: given the metadata, generate the article first, and then rewrite the headline. 8With these guidelines, we tried to separate style versus content. Overall trustworthiness asks ‘Does the article read like it comes from a trustworthy source?’ which emphasizes style, while content sensibility asks whether the content is believable on a semantic level. 9This difference is statistically significant at p “ 0.01. One possible hypothesis for this effect is that Grover ignores the provided context. 
To test this hypothesis, we did a human evaluation of the consistency of the article body with the headline, date, and author. We found that human-written propaganda articles are consistent with the headline with an average score of 2.85 of 3 on the same 1-3 scale, while machine-written propaganda is consistent with 2.64 of 3. 5 # 5 Neural Fake News Detection The high quality of neural fake news written by Grover, as judged by humans, makes automatic neural fake news detection an important research area. Using models (below) for the role of the Verifier can mitigate the harm of neural fake news by classifying articles as Human or Machine written. These decisions can assist content moderators and end users in identifying likely (neural) disinformation. a. Grover. We consider a version of our model adapted for discrimination. Similar to GPT (Radford et al., 2018), we place a special [CLS] token at the end of each article, and extract the final hidden state at that point. The hidden state is fed to a linear layer to predict the label Human or Machine. To simulate real conditions, and ensure minimal overlap between the generator and discriminator parameters, we initialize Grover for discrimination using the checkpoint at iteration 700k, whereas the generator uses the checkpoint at iteration 800k. b. GPT2, a 124M or 355M parameter pretrained Transformer language model. Similar to Grover, we follow the GPT approach and extract the hidden state from a newly-added [CLS] token. c. BERT, a 110M parameter (BERT-Base) or 340M parameter (BERT-Large) bidirectional Trans- former encoder commonly used for discriminative tasks. We perform domain adaptation to adapt BERT to the news domain, as well as to account for long articles; details in Appendix C. d. FastText, an off-the-shelf library for bag-of-ngram text classification (Joulin et al., 2017). Though not pretrained, similar models do well at detecting human-written fake news. All models are trained to minimize the cross-entropy loss of predicting the right label. Hyperparame- ters used during discrimination are in Appendix D. # 5.1 A semi-supervised setting for neural fake news detection While there are many human-written articles online, most are from the distant past, whereas articles to be detected will likely be set in the present. Likewise, there might be relatively few neural fake news articles from a given adversary.10 We thus frame neural fake news detection as a semi-supervised problem. A neural verifier (or discriminator) has access to many human-written news articles from March 2019 and before – the entire RealNews training set. However, it has limited access to generations, and more recent news articles. Using 10k news articles from April 2019, we generate article body text; another 10k articles are used as a set of human-written news articles. We split the articles in a balanced way, with 10k for training (5k per label), 2k for validation, and 8k for testing. We consider two evaluation modes. In the unpaired setting, a discriminator is provided single news articles, and must classify each independently as Human or Machine. In the paired setting, a model is given two news articles with the same metadata, one real and one machine-generated. The discriminator must assign the machine-written article a higher Machine probability than the human-written article. We evaluate both modes in terms of accuracy. 
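Both evaluation modes can be written down directly. In the sketch below, `p_machine` is a placeholder for any of the discriminators above (e.g. Grover or BERT with a [CLS]-based classification head) returning the probability that an article is machine-written; the function name and interface are assumptions for illustration.

```python
def unpaired_accuracy(p_machine, articles, labels, threshold=0.5):
    """Unpaired setting: classify each article independently as Machine (1) or Human (0)."""
    correct = 0
    for article, label in zip(articles, labels):
        prediction = 1 if p_machine(article) >= threshold else 0
        correct += int(prediction == label)
    return correct / len(articles)

def paired_accuracy(p_machine, pairs):
    """Paired setting: each pair shares metadata and contains (human_article, machine_article).

    A pair counts as correct when the machine-written article is assigned the
    higher Machine probability.
    """
    correct = sum(int(p_machine(machine) > p_machine(human)) for human, machine in pairs)
    return correct / len(pairs)
```

Note that the paired mode only checks the relative ranking within each pair, whereas the unpaired mode also requires the score to be calibrated against a fixed threshold.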
5.2 Discrimination results: Grover performs best at detecting Grover’s fake news We present experimental results in Table 1 for all generator and discriminator combinations. For each pair, we show the test results using the most adversarial generation hyperparameters (top-p) as judged on the validation set.11 The results show several trends. First, the paired setting appears much easier than the unpaired setting, suggesting that it is difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators. Using Grover to discriminate Grover’s generations results in roughly 90% accuracy across the range of sizes. If a larger generator is used, accuracy slips below 81%; conversely, if the discriminator is larger, accuracy is above 98%. Third, other discriminators perform worse than Grover overall, even when controlling for architecture size and (for both BERT models) the domain. That Grover is the best discriminator is possibly surprising: being unidirectional, it is less expressive than deep bidirectional models such as BERT.12 That the more expressive model here is not the best at 10Moreover, since disinformation can be shared on a heterogeneous mix of platforms, it might be challenging to pin down a single generated model. 11For each discriminator/generator pair, we search over p P t.9, .92, .94, .96, .98, 1.0u. 12Indeed, bidirectional approaches perform best on leaderboards like GLUE (Wang et al., 2018). 6 Table 1: Results of discriminators versus gener- ators, in both the paired and unpaired settings and across architecture sizes. We also vary the generation hyperparameters for each generator- discriminator pair, reporting the discrimination test accuracy for the hyperparameters with the lowest validation accuracy. Compared with other models such as BERT, Grover is the best at de- tecting its own generations as neural fake news. # Unpaired Accuracy Generator size # Paired Accuracy Generator size r=} is} No weak supervision ~~ —@ +Grover-Base Geneations _ 90 +Grover-Large Generations 80 —~ ~ 3 fi Grover-Mega Unpaired Accuracy 8 f a 3 4 64 256 1024 4096 #Grover-Mega examples given Chance 50.0 1.5B Grover-Mega 91.6 98.7 355M Grover-Large 79.5 91.0 BERT-Large 68.0 78.9 70.1 77.2 GPT2 124M Grover-Base 71.3 79.4 67.2 75.0 BERT-Base 67.7 73.2 GPT2 11M FastText 63.8 65.4 99.8 98.7 93.7 88.0 90.0 82.0 81.8 70.0 50.0 98.8 100.0 100.0 88.7 98.4 75.3 90.4 79.1 86.8 99.9 99.5 95.0 80.8 88.5 84.7 90.9 72.9 80.6 97.0 96.6 87.1 73.0 73.0 79.0 Figure 5: Exploring weak supervision for dis- criminating Grover-Mega generations. With no weak supervision, the discriminator sees x machine-written articles (from Grover Mega). For `Grover-Base and `Grover-Mega, the dis- criminator sees 5000´x machine-written articles given by the weaker generator in question. See- ing weaker generations improves performance when few in-domain samples are given. discriminating between real and generated news articles suggests that neural fake news discrimination requires having a similar inductive bias as the generator.13 5.3 Weak supervision: what happens if we don’t have access to Grover-Mega? These results suggest that Grover is an effective discriminator when we have a medium number of fake news examples from the exact adversary that we will encounter at test time. What happens if we relax this assumption? 
Here, we consider the problem of detecting an adversary who is generating news with Grover-Mega and an unknown top-p threshold.14 In this setup, during training, we have access to a weaker model (Grover-Base or Grover-Large). We consider the effect of having only x examples from Grover-Mega, and sampling the missing 5000´x articles from one of the weaker models, where the top-p threshold is uniformly chosen for each article in the range of r0.9, 1.0s. We show the results of this experiment in Figure 5. The results suggest that observing additional generations greatly helps discrimination performance when few examples of Grover-Mega are available: weak supervision with between 16 and 256 examples from Grover-Large yields around 78% accuracy, while accuracy remains around 50% without weak supervision. As the portion of examples that come from Grover-Mega increases, however, accuracy converges to around 92%.15 # 6 How does a model distinguish between human and machine text? In this section, we explore why Grover performs best at detecting fake news generated by other Grover models. We find that there is a double-bind between exposure bias and variance-reduction algorithms that alleviate these biases while at the same time creating other artifacts. Exposure Bias. Models maximizing Equation 1 are trained only conditioned on human-written text, never on its own generations, creating a problem known as exposure bias (Ranzato et al., 2016). We investigate the importance of exposure bias towards creating artifacts. In Figure 6 we plot the perplexities given by Grover-Mega over each position for body text at top-p thresholds of 0.96 and 1, as well as over human text. Generating the first token after <startbody> results in high 13This matches findings on the HellaSwag dataset (Zellers et al., 2019b). Given human text and machine text written by a finetuned GPT model, a GPT discriminator outperforms BERT-Base at picking out human text. 14The top-p threshold used was p“0.96, but we are not supposed to know this! 15In additional experiments we show that accuracy increases even more – up to 98% – when the number of examples is increased (Zellers et al., 2019c). We also find that Grover when trained to discriminate between real and fake Grover-generated news can detect GPT2-Mega generated news as fake with 96% accuracy. 7 2 Fa 7 a g 3 5, 5 é 0 50 100 150 200 250 300 350 400 Position 100 ~ s B 90- Bd = 3° --N g 3 4 g 70 Discriminator 2 —@ Grover-Mega (1.5B) g60- —® Grover-Large (355M) 5 —@ BERT-Large (340M) 0.90 0.92 0.94 0.96 0.98 1.00 Nucleus Sampling p Figure 6: Perplexities of Grover-Mega, averaged over each position in the body (after conditioning on meta- data). We consider human-written with Grover-Mega generated text at p“1 (random sampling) and p“.96. The perplexity of randomly sampled text is higher than human-written text, and the gap increases with position. This suggests that sampling without variance reduction increasingly falls out-of-distribution. __ — _ | —_ Figure 7: Unpaired validation accuracy, telling apart generated news articles (from Grover Mega) from real articles, at differ- ent variance reduction thresholds p (for Nucleus Sampling). Results varying p show a sweet spot (p “ 0.94 – 0.98) wherein discrimination is hardest. perplexity. 
However, the rest of the positions show a curious pattern: the perplexity of human-written text is lower than randomly sampled text, and this gap increases with sequence length, suggesting that random sampling causes Grover to fall increasingly out of the distribution of human language. However, limiting the variance (p“0.96) lowers the resulting perplexity and limits its growth. Limiting the variance of a model also creates artifacts On the other hand, clipping the model’s variance also leaves an artifact, as prior work has observed for top-k sampling (Strobelt and Gehrmann, 2019). A similar phenomenon holds for Nucleus (top-p) sampling. The probability of observing a human-written article where all tokens are drawn from the top-p% of the distribution is pn, where n is the document’s length. This probability goes to zero as n increases. However, for Nucleus Sampled text – in which the final 1´p is cut off – all tokens come from the top-p. The visibility of the artifacts depends on the choice of discriminator. The top-p at each timestep is calculated under the generator’s worldview, meaning that if the discriminator models text in a different way, it might have a harder time pinpointing the empty 1´p tail. This could explain BERT’s lower performance during discrimination. A sweet spot of careful variance reduction Not reducing the variance, as well as significantly reducing the variance, both cause problems. Might there be a sweet spot for how much to truncate the variance, to make discrimination maximally hard? In Figure 7, we show results varying the top-p threshold for the discrimination task applied to Grover-Mega’s generations. The results indeed show a sweet spot, roughly between p“0.94 and p“0.98 depending on the discriminator, wherein discrimination is hardest. Interestingly, we note that the most adversarial top-p threshold for BERT- Large is considerably lower than the corresponding top-p for Grover-Large of the same size. This supports our hypothesis that BERT’s view of language differs markedly from Grover; using a lower top-p threshold does not seem to give it much more information about the missing tail. Overall, our analysis suggests that Grover might be the best at catching Grover because it is the best at knowing where the tail is, and thus whether it was truncated. # 7 Conclusion: a Release Strategy for Grover This paper investigates the threats posed by adversaries seeking to spread disinformation. Our sketch of what these threats might look like – a controllable language model named Grover – suggests that these threats are real and dangerous. Grover can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy. At the same time, there are defenses to these models – notably, in the form of Grover itself. We conclude with a discussion of next steps and ethical considerations. 8 The Era of Neural Disinformation. Though training Grover was challenging, it is easily achiev- able by real-world adversaries today. Obtaining the data required through Common Crawl cost $10k in AWS credits and can be massively parallelized over many CPUs. Training Grover-Mega is relatively inexpensive: at a cost of $0.30 per TPU v3 core-hour and two weeks of training, the total cost is $25k. Spending more money and engineering time could yield even more powerful generators. Release of generators is critical. At first, it would seem like keeping models like Grover private would make us safer. 
However, Grover serves as an effective detector of neural fake news, even when the generator is much larger (Section 5). If generators are kept private, then there will be little recourse against adversarial attacks. We thus released our models to researchers (Zellers, 2019). Future of progress in generation. Models like BERT are strong discriminators for many NLP tasks, but they are not as good at detecting Grover’s generations as left-to-right models like Grover, even after domain adaptation. One hypothesis is that the artifacts shown in Section 6 are most visible to a left-to-right discriminator. This also suggests that recent progress on generating text in any order (Gu et al., 2019; Stern et al., 2019; Ghazvininejad et al., 2019) may lead to models that evade a Grover discriminator. Likewise, models that are trained conditioned on their own predictions might avoid exposure bias, however, these objectives often lead to low performance on language tasks (Caccia et al., 2018). One additional possibility is the use of Adversarial Filtering (Zellers et al., 2018; 2019b) to oversample and then select a subset of generations. However, we found this didn’t work well for very long sequences (up to 1024 BPE tokens), possibly as these are far from the ‘Goldilocks Zone’ wherein discrimination is hard for machines. Additional threat models. In this paper, we studied the threat model whereby an adversary gener- ates an entire news article from scratch, given minimal context. Other threat models are possible: for instance, an adversary might generate comments or have entire dialogue agents, they might start with a human-written news article and modify a few sentences, and they might fabricate images or video. These threat models ought to be studied by researchers also so that we can create better defenses. Machine-generated real news? Our study focused on detecting machine-written fake news, though the same Grover approach can be used for spotting human-written fake news as well (Zellers et al., 2019c). However, machines can also generate truthful news using templated systems. Domains with templated news articles exist in our dataset,16 and are easy for Grover to spoof convincingly. Future of progress in discrimination. Our discriminators are effective, but they primarily leverage distributional features rather than evidence. In contrast, humans assess whether an article is truthful by relying on a model of the world, assessing whether the evidence in the article matches that model. Future work should investigate integrating knowledge into the discriminator (e.g. for claim verification in FEVER; Thorne et al., 2018). An open question is to scale progress in this task towards entire news articles, and without paired evidence (similar to open-domain QA; Chen et al., 2017). What should platforms do? Video-sharing platforms like YouTube use deep neural networks to scan videos while they are uploaded, to filter out content like pornography (Hosseini et al., 2017). We suggest platforms do the same for news articles. An ensemble of deep generative models, such as Grover, can analyze the content of text – together with more shallow models that predict human- written disinformation. However, humans must still be in the loop due to dangers of flagging real news as machine-generated, and possible unwanted social biases of these models. # Acknowledgments We thank the anonymous reviewers, as well as Dan Weld, for their helpful feedback. 
Thanks also to Zak Stone and the Google Cloud TPU team for help with the computing infrastructure. This work was supported by the National Science Foundation through a Graduate Research Fellowship (DGE- 1256082) and NSF grants (IIS-1524371, 1637479, 165205, 1703166), the DARPA CwC program through ARO (W911NF-15-1-0543), the Sloan Research Foundation through a Sloan Fellowship, the Allen Institute for Artificial Intelligence, the NVIDIA Artificial Intelligence Lab, Samsung through a Samsung AI research grant, and gifts by Google and Facebook. Computations on beaker.org were supported in part by credits from Google Cloud. 16An example is https://americanbankingnews.com. 9 # References Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155, 2003. Samantha Bradshaw and Philip Howard. Troops, trolls and troublemakers: A global inventory of organized social media manipulation. Technical report, Oxford Internet Institute, 2017. Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. Language gans falling short. arXiv preprint arXiv:1811.02549, 2018. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading wikipedia to answer open- domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, 2017. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Rachel Dicker. Avoid These Fake News Sites at All Costs. https: //www.usnews.com/news/national-news/articles/2016-11-14/ avoid-these-fake-news-sites-at-all-costs, 2016. [Online; accessed 22-May-2019]. Elizabeth Dwoskin and Tony Romm. Facebook says it has uncovered a coordinated disinformation operation ahead of the 2018 midterm elections. The Washington Post, 2018. Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889–898, 2018. Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler. Partisanship, propaganda, and disinformation: Online media and the 2016 us presidential election. Berkman Klein Center Research Publication 2017-6., 2017. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Constant-time machine translation with conditional masked language models. arXiv preprint arXiv:1904.09324, 2019. Jiatao Gu, Qi Liu, and Kyunghyun Cho. Insertion-based decoding with automatically inferred generation order. arXiv preprint arXiv:1902.01370, 2019. Xiaochuang Han and Jacob Eisenstein. Unsupervised domain adaptation of contextualized embed- dings: A case study in early modern english. arXiv preprint arXiv:1904.02817, 2019. Tatsunori B Hashimoto, Hugh Zhang, and Percy Liang. Unifying human and statistical evaluation for natural language generation. arXiv preprint arXiv:1904.02792, 2019. Brent Hecht, Lauren Wilcox, Jeffrey P. Bigham, Johannes Schöning, Ehsan Hoque, Jason Ernnst, Yonatan Bisk, Luigi De Russis, Lana Yarosh, Bushra Anjum, Danish Contractor, and Cathy Wu. It’s time to do something: Mitigating the negative impacts of computing through a change to the peer review process. ACM Future of Computing Blog, 2018. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 
The curious case of neural text degenera- tion. arXiv preprint arXiv:1904.09751, 2019. Hossein Hosseini, Baicen Xiao, Andrew Clark, and Radha Poovendran. Attacking automatic video analysis algorithms: A case study of google cloud video intelligence api. In Proceedings of the 2017 on Multimedia Privacy and Security, pages 21–32. ACM, 2017. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 1587–1596. JMLR. org, 2017. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 427–431, 2017. 10 Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. CoRR, abs/1602.02410, 2016. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. Clare Melford and Craig Fagan. Cutting the funding of disinformation: The ad-tech solution. Technical report, The Global Disinformation Index, 2019. Adam Mosseri. News feed fyi: Helping ensure news on facebook is from trusted sources. Facebook Newsroom, 19, 2018. Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T Hancock. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pages 309–319. Association for Computational Linguistics, 2011. Verónica Pérez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. Automatic detection of fake news. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3391–3401, 2018. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237, 2018. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018. URL https: //blog.openai.com/language-unsupervised/. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Technical report, OpenAI, 2019. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In ICLR. ICLR, 2016. Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931–2937, 2017. Chengcheng Shao, Giovanni Luca Ciampaglia, Alessandro Flammini, and Filippo Menczer. Hoaxy: A platform for tracking online misinformation. In Proceedings of the 25th international conference companion on world wide web, pages 745–750. International World Wide Web Conferences Steering Committee, 2016. Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4603–4611, 2018. 
Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jasmine Wang. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203, 2019. Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. Insertion transformer: Flexible sequence generation via insertion operations. arXiv preprint arXiv:1902.03249, 2019. Hendrik Strobelt and Sebastian Gehrmann. Catching a unicorn with gltr: A tool to detect automatically generated text. Technical report, Harvard, 2019. Briony Swire, Ullrich KH Ecker, and Stephan Lewandowsky. The role of familiarity in correcting inaccurate information. Journal of experimental psychology: learning, memory, and cognition, 43 (12):1948, 2017. 11 James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. Fever: a large- scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, 2018. Chris J Vargo, Lei Guo, and Michelle A Amazeen. The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New Media & Society, 20(5): 2028–2049, 2018. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000–6010. Curran Associates Inc., 2017. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018. William Yang Wang. “liar, liar pants on fire”: A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 422–426, 2017. Claire Wardle. Fake news. it’s complicated. First Draft News, 16, 2017. Claire Wardle and Hossein Derakhshan. Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe report, DGI (2017), 9, 2017. Rowan Zellers. Why we released grover. Technical report, 2019. URL https://thegradient. pub/why-we-released-grover/. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018. Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019a. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019b. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Counteracting neural disinformation with URL https://medium.com/ai2-blog/ Franziska Roesner, 2019c. grover. counteracting-neural-disinformation-with-grover-6cf6690d463b. and Yejin Choi. Technical report, 12 # Supplemental Material # A Optimization Hyperparameters For our input representation, we use the same BPE vocabulary as (Radford et al., 2019). 
We use Adafactor (Shazeer and Stern, 2018) as our optimizer. Common optimizers such as Adam (Kingma and Ba, 2014) tend to work well, but the memory cost scales linearly with the number of parameters, which renders training Grover-Mega all but impossible. Adafactor alleviates this problem by factoring the second-order momentum parameters into a tensor product of two vectors. We used a maximum learning rate of 1e-4 with linear warm-up over the first 10,000 iterations, and decay over the remaining iterations. We set Adafactor’s β1 “ 0.999 and clipped updates for each parameter to a root-mean-squared of at most 1. Last, we applied weight decay with coefficient 0.01. We used a batch size of 512 on 256 TPU v3 cores. which corresponds to roughly 20 epochs through our news dataset. The total training time required roughly two weeks. # B Real News and Propaganda Websites In our generation experiments (Section 4), we consider a set of mainstream as well as propaganda web- sites. We used the following websites as ‘real news’: theguardian.com, reuters.com, nytimes.com, theatlantic.com, usatoday.com, huffingtonpost.com, and nbcnews.com. For propaganda sites, we chose sites that have notably spread misinformation (Dicker, 2016) or propaganda17. These were breitbart.com, infowars.com, wnd.com, bigleaguepolitics.com, and naturalnews.com. # C Domain Adaptation of BERT BERT (Devlin et al., 2018) is a strong model for most classification tasks. However, care must be taken to format the input in the right way, particularly because BERT is pretrained in a setting where it is given two spans (separated by a special [SEP] token). We thus use the following input format. The first span consists of the metadata, with each field prefixed by its name in brackets (e.g. ‘[title]’). The second span consists of the body. Because the generations are cased (with capital and lowercase letters), we used the ‘cased’ version of BERT. Past work (e.g. Zellers et al. (2019a); Han and Eisenstein (2019)) has found that BERT, like other language models, benefits greatly from domain adaptation. We thus perform domain adaptation on BERT, adapting it to the news domain, by training it on RealNews for 50k iterations at a batch size of 256. Additionally, BERT was trained with a sequence length of at most 512 WordPiece tokens, but generations from Grover are much longer (1024 BPE tokens). Thus, we initialized new position embeddings for positions 513-1024, and performed domain adaptation at a length of 1024 WordPiece tokens. # D Hyperparameters for the Discriminators For our discrimination experiments, we limited the lengths of generations (and human-written articles) to 1024 BPE tokens. This was needed because our discriminators only handle documents up to 1024 words. However, we also found that the longer length empirically discrimination easier for models (see Section 6). For our discrimination experiments, we used different hyperparameters depending on the model, after an initial grid search. For BERT, we used the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 2e ´ 5 and a batch size of 64. We trained BERT models for 5 epochs, with a linear warm-up of the learning rate over the initial 20% iterations. For GPT2 and Grover, we used the Adam actor optimizer (Shazeer and Stern, 2018) optimizer with a learning rate of 2e ´ 5 for all models, and a batch size of 64. We applied an auxiliary language modeling loss for these models with a coefficient of 0.5. 
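As a clarifying sketch, the combined objective implied by that auxiliary term could look as follows; the tensor shapes and the exact way the two losses are combined are assumptions rather than details taken from the released code.

```python
import torch.nn.functional as F

def discriminator_loss(cls_logits, labels, lm_logits, token_ids, aux_coef=0.5):
    """
    cls_logits: (batch, 2)           Human/Machine logits from the [CLS] position
    labels:     (batch,)             0 = Human, 1 = Machine
    lm_logits:  (batch, seq_len, V)  next-token logits over the article tokens
    token_ids:  (batch, seq_len)     the article's token ids
    """
    # Classification term on the Human/Machine label.
    cls_loss = F.cross_entropy(cls_logits, labels)
    # Auxiliary language-modeling term: logits at position t predict token t + 1.
    lm_loss = F.cross_entropy(
        lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
        token_ids[:, 1:].reshape(-1),
    )
    return cls_loss + aux_coef * lm_loss
```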
These models were trained for 10 epochs, with a linear warm-up over the initial 20% iterations. 17For more information, see the Media Bias Chart at adfontesmedia.com/. 13 # E Human Evaluation Prompt # E.1 Evaluating Quality For evaluating the quality of Grover-written versus human-written news articles, we asked workers the following questions (shown exactly). The answer choices are shown next to the rating under our 1-3 Likert scale (3 being the best, 1 being the worst for each attribute). (a) (Style) Is the style of this article consistent? 3. Yes, this sounds like an article I would find at an online news source. 2. Sort of, but there are certain sentences that are awkward or strange. 1. No, it reads like it’s written by a madman. (b) (Content) Does the content of this article make sense? 3. Yes, this article reads coherently. 2. Sort of, but I don’t understand what the author means in certain places. 1. No, I have no (or almost no) idea what the author is trying to say. (c) (Overall) Does the article read like it comes from a trustworthy source? 3. Yes, I feel that this article could come from a news source I would trust. 2. Sort of, but something seems a bit fishy. 1. No, this seems like it comes from an unreliable source. # E.2 Evaluating consistency To measure consistency between the article and the metadata, we asked the following questions: (a) (Headline) How well does the article body match the following headline? [headline] 3. Yes, the article makes sense as something that I would see given the headline. 2. Sort of, the article is somewhat related to the headline, but seems slightly off. 1. No, the article is completely off-topic. (b) (Authors) How well does the article body match the following author(s)? [authors] 3. Yes, the article makes sense as something that could be written by the author(s). 2. Sort of, the article might have been written by the author(s) above, but it sounds unlikely. 1. No, the article body contains information that says it was written by someone else. (c) (Date) How well does the article body match the following date? [date] 3. Yes, the article makes sense as something that could have been written on [date]. 2. Sort of, the article might have been written on [date], but it sounds unlikely. 1. No, there’s information in the article that conflicts the proposed date. # F Examples In Figures 8 and 9, we include examples of articles with the average scores given by human raters, who were asked to evaluate the style, content, and overall trustworthiness. In Figure 8, we show a real article (Human News) posted by the Guardian along with an article from Grover (Machine News) made using the same metadata. Figure 9 shows a real propaganda article from the Natural News (Human Propaganda) and an article made with Grover (Machine Propaganda) with the original headline and the style of Huffington Post (Grover was used to re-write the title to be more stylistically similar to the Huffington Post, as well). We also present several other generated examples, generated from Grover-Mega with a top-p threshold of p“0.95. All of the examples are cut off to 1024 generated BPE tokens, since this is our setup for discrimination. a. Grover can generate controlled propaganda. In Figure 10, we show the continuation from Figure 1, about a link found between autism and vaccines. 
# Original Headline: Timing of May's 'festival of Britain' risks Irish anger

[Figure 8 reproduces, side by side, the human-written Guardian article (April 13, 2019, theguardian.com) and the Grover-written article generated from the same metadata. Human-written article ratings: Style 3.0, Content 3.0, Overall 3.0. Machine-written article ratings: Style 3.0, Content 3.0, Overall 2.3.]

Figure 8: Example of human-written news and machine-written news articles about the same headline from The Guardian with the average ratings from human rating study.

b. Grover can spoof the identity of writers.
In Figure 11 we show a realistic-looking editorial seemingly from New York Times columnist Paul Krugman.

c. Grover can generate fake political news. In Figure 12 we show an article generated about Trump being impeached, written in the style of the Washington Post.

d. Grover can generate fake movie reviews (opinion spam; Ott et al. (2011)). In Figure 13 we show a movie review, generated in the style of LA Times movie critic Kenneth Turan, for Sharknado 6, 'The Last Sharknado: It's About Time'.

e. Grover can generate fake business news. In Figure 14, we show an article generated about an 'Uber for Dogs' startup.

# Original Headline: Don't drink the water: The dark side of water fluoridation

[Figure 9 reproduces, side by side, the human-written Natural News propaganda article (March 13, 2019, naturalnews.com) and the Grover-written article generated with the same headline in the style of huffingtonpost.com, under the re-written title "Fluoride in Your Water Isn't Healthy, Even When It's Recommended". Human-written article ratings: Style 2.3, Content 2.3, Overall 1.7. Machine-written article ratings: Style 2.3, Content 2.7, Overall 2.7.]

Figure 9: Example of human-written and machine-written articles arguing against fluoride with the average ratings from human rating study.
Provided headline: 'Link found between autism and vaccines'

[Figure 10 reproduces the full Grover-generated article body for this headline, written as a science-news report that attributes fabricated findings to UC San Diego and CDC researchers.]

Figure 10: The rest of the example from the teaser figure, about a hypothetical link found between autism and vaccines.

Provided domain: nytimes.com. Provided author: Paul Krugman. Provided date: May 23, 2019. Generated headline: Why Trump could get impeached.

[Figure 11 reproduces the full Grover-generated op-ed body for this metadata.]

Figure 11: An example of spoofing someone's identity using Grover-Mega. We generate a convincing New York Times Op-Ed that is seemingly from columnist Paul Krugman.

Provided domain: washingtonpost.com. Provided date: May 29, 2019. Provided headline: US House Votes to Impeach Donald Trump After New Evidence Released from Mueller Report. Generated headline: Congress voted to impeach President Trump Wednesday; the President vowed to veto the resolution.

[Figure 12 reproduces the full Grover-generated news body for this metadata.]

Figure 12: An example of using Grover-Mega to generate fake political news, written in the style of the Washington Post. Here, it generates a mostly-realistic-looking article about Trump being impeached.

Provided domain: latimes.com. Provided date: August 1, 2018. Provided authors: Kenneth Turan. Provided headline: Review: "The Last Sharknado: It's About Time" is a gem of an action movie. Generated headline: 'The Last Sharknado' returns with the dramatic thriller you expected — and some unexpected laughs.

[Figure 13 reproduces the full Grover-generated review body for this metadata.]

Figure 13: An example of using Grover-Mega to generate a fake movie review. Here, we spoof LA Times Film Critic Kenneth Turan and generate a positive movie review for 'The Last Sharknado: It's About Time,' the sixth installment in the Sharknado series of movies.

Provided domain: techcrunch.com. Provided date: May 29, 2019. Provided headline: Rowan Zellers' new startup raises 17M in Series A round for "Uber for Dogs". Generated headline: DogSpotter raises $17M to improve your search for walking services.

[Figure 14 reproduces the full Grover-generated business-news body for this metadata.]

Figure 14: An example of using Grover-Mega to generate fake business news. This generates an article about a fake startup for 'Uber for Dogs', ostensibly created by the first author of this paper.
{ "id": "1904.02817" }
1905.12334
Mixed Precision Training With 8-bit Floating Point
Reduced precision computation for deep neural networks is one of the key areas addressing the widening compute gap driven by an exponential growth in model size. In recent years, deep learning training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. In addition to reducing compute precision, we also reduced the precision requirements for the master copy of weights from 32-bit to 16-bit. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point for improved error propagation. We also examine the impact of quantization noise on generalization and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline.
http://arxiv.org/pdf/1905.12334
Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, Bharat Kaul
cs.LG, stat.ML
null
null
cs.LG
20190529
20190529
9 1 0 2 y a M 9 2 ] G L . s c [ 1 v 4 3 3 2 1 . 5 0 9 1 : v i X r a # Mixed Precision Training With 8-bit Floating Point # Naveen Mellempudi Parallel Computing Lab, Intel Labs [email protected] # Sudarshan Srinivasan Parallel Computing Lab, Intel Labs [email protected] # Dipankar Das Parallel Computing Lab, Intel Labs [email protected] # Bharat Kaul Parallel Computing Lab, Intel Labs [email protected] # Abstract Reduced precision computation for deep neural networks is one of the key areas addressing the widening ’compute gap’ driven by an exponential growth in model size. In recent years, deep learning training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts to train DNNs at 8-bit precision have met with significant challenges because of the higher precision and dynamic range requirements of back-propagation. In this paper, we propose a method to train deep neural networks using 8-bit floating point representation for weights, activations, errors, and gradients. In addition to reducing compute precision, we also reduced the precision requirements for the master copy of weights from 32-bit to 16-bit. We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, Transformer) than previously reported. We propose an enhanced loss scaling method to augment the reduced subnormal range of 8-bit floating point for improved error propagation. We also examine the impact of quantization noise on generalization and propose a stochastic rounding technique to address gradient noise. As a result of applying all these techniques, we report slightly higher validation accuracy compared to full precision baseline. # Introduction The unprecedented success of Deep Learning models in a variety of tasks including computer vision[12], machine translation[26] and speech recognition[9],[11] has led to the proliferation of deeper and more complex models. Algorithmic innovations such as large batch training[15] and neural architecture search[28] have enabled models to scale on large compute cluster to accelerate training. This enhanced performance has enabled the adoption of larger neural networks. As a consequence, the computational requirements for training Deep Learning models have been growing at an exponential rate[3] over the past few years, outperforming Moore’s Law and hardware capabilities by a wide margin. One of the promising areas of research to address this growing compute gap is to reduce the numeric precision requirements for deep learning. Reduced precision methods exploit the inherent noise resilient properties of deep neural networks to improve compute efficiency, while minimizing the loss of model accuracy. Recent studies[21],[5] have shown that, deep neural networks can be trained using 16-bits of precision without any noticeable impact on validation accuracy across a wide range of networks. Today, state-of-the-art training platforms support 16-bit precision in the form of high-performance systolic array or GEMM engine (General Matrix Multiply) implementations[20], [16]. Preprint. Under review. There have been numerous attempts [13],[27],[6],[25],[4] to train deep neural networks at lower precision (< 16-bits) with varying degrees of success. 
With the abundance of 8-bit integer deep learning ‘ops’ deployed to accelerate inference tasks, much of the research into training methods have also focused on integer based fixed-point numeric formats[27],[6],[25]. Training with 8-bit integers has been significantly more challenging because the dynamic range of such formats is not sufficient to represent error gradients during back-propagation. More recently, Wang et al.[24] have shown that 8-bit floating representation can be used to train convolutional neural networks, with the help of specialized chunk-based accumulation and stochastic rounding hardware. While this method has shown promising results, it requires expensive stochastic rounding hardware built into the critical compute path making it unattractive for systolic array and GEMM accelerator implementations. Our paper extends the state of the art in 8-bit floating point (FP8) training with the following key contributions: • We propose a simple and scalable solution for building FP8 compute primitives, eliminating the need for stochastic rounding hardware in the critical compute path, as proposed in [24], thereby reducing the cost and complexity of the MAC unit. • Demonstrate state-of-the-art accuracy using 8-bit floating point representation for weights, activations, errors and weight gradients, across multiple data sets (Imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50[12], GNMT[26], Transformer[23]) than previously reported[24]. We also reduce the precision requirements for the master copy of weights from 32-bit to 16-bit reducing memory footprint of the model by half. • Propose enhanced loss scaling method to compensate for the reduced subnormal range of 8-bit floating point representation for improved error propagation leading to better model accuracy. • Present a detailed study of the impact of quantization noise on model generalization and propose a stochastic rounding technique to address the gradient noise in the early epochs leading to better generalization. As a result of this technique, we even report slightly higher validation accuracy compared to our full precision baseline. # 2 Related Work The study of reduced precision methods for deep learning training is an active area of research. In the pursuit of improving compute efficiency, researchers have experimented with various numeric formats and hardware implementations. Gupta et al.[10] demonstrated that deep neural networks can be trained with minimal loss in accuracy, using 16-bit fixed point representation. This was followed by studies employing other numeric formats such as, half-precision floating point[21] and dynamic fixed point [17], [5], demonstrating state of the art results across residual[12], recurrent[26] and generative networks. Today most of the neural network training in a production deployment has migrated to 16-bit hardware, resulting in significant improvements[20] in performance. There have been several attempts to further reduce the precision requirements of DNNs to boost train- ing performance. DoReFa-Net[27], a derivative of AlexNet[18] was trained using bit-convolutions with 1-bit and 2-bits to represent weights and activations respectively, while the gradients are quantized to 6-bits of precision. Wu et al.[25] have trained AlexNet[18] using 8-bit precision for acti- vations, errors and weight gradients, while the weights are quantized to 2-bits of precision. However, both these methods have reported significant loss in validation accuracy. 
More recently, Wang et al.[24] have successfully trained Resnet-50[12] using 8-bit floating point numeric format with the help of a specialized hardware to compute chunk-based dot-product compu- tation and stochastic rounding on a 16-bit accumulator. The authors of this study have focused on reducing the accumulator precision and based on studies on smaller networks (AlexNet Resnet-18), attributed training issues related to error propagation and generalization on the choice of accumu- lator size. However, our studies on larger networks (Resnet-34/50) using 32-bit accumulator for dot-product computations indicate that, these issues are not related to the choice of accumulator size and should be addressed independently. We discuss these issues and our proposed solutions in greater detail in Sections3.1and 3.2. Guided by these results, we decided to focus on studying the impact of using FP8 numeric format on training, while maintaining a high precision accumulator(FP32). We further believe that modern GEMM engine designs implementing progressive multiplier reduction[14] 2 techniques can effectively amortize the cost of a larger final accumulator, and do not benefit sig- nificantly from 16-bit solutions[24] with additional overheads of chunk-based accumulation and stochastic rounding in the critical path. # 3 Training Method The choice of bit-level representation of floating point (sign, exponent, mantissa), has a significant impact on the effectiveness of the numerical format – the trade-off between the dynamic range and precision is especially tricky at low bit-width representations. While it is important to maintain higher dynamic range for effective propagation of error gradients[21], it leads to having values that are too few and scattered to maintain fidelity required for gradient computations. After careful consideration of these facts and several failed experiments with other formats (for example with more exponent bits), we decided to use s=1,e=5,m=2 numeric format for representing 8-bit floating point. We also decided to use a 32-bit floating point accumulator; therefore each tensor GEMM/convolution operation takes two input tensors in 8-bit floating point format and produces a 32-bit single precision floating point output. At this stage the 32-bit output must be down-converted to a 8-bit value in order to be used by the next operation. Here, we believe rounding plays an extremely important role and helps recover the numeric accuracy of key compute primitives used by deep learning applications. We present the results from the study of different rounding modes applied to this format and their impact on training in Section.3.2 Figure.1 shows the precision settings of various compute operations used in our mixed precision training setup. The ’GEMM’(matrix multiply) operator shown in Figure.1a represents the key compute kernel used by deep neural networks during forward, backward, and gradient computation passes. Quantization nodes identified with the letter ’Q’ perform down-conversion and rounding operations on the 32-bit floating point output to convert to 8-bit format before passing on to the next layer. For our experiments, we convert the weights, activations, error and weight gradients of all convolution and GEMM kernels to 8-bit floating point format for forward, backward and weight update paths. Figure.1b shows the data flow during optimization and weight update steps. In the optimization path the L2-regularization term is added to the cross entropy. 
Then the loss value is scaled by the loss scaling factor before initiating back-propagation. When back-propagation is complete, the weight gradients are computed and stored in 8-bit floating point format. To perform the weight update, the 8-bit weight gradients first need to be scaled back by dividing them by the 'loss scale' parameter. This step is performed in full precision to prevent underflow. The gradients are then passed to the momentum optimizer, and the final gradients are applied to the master copy of the weights. For our experiments, we use the half-precision floating point format to store the master copy of weights. During the update step, these half-precision values are up-converted to 32-bit as they are loaded into the compute unit. The weight update is performed as a 32-bit operation. After the update, the master weights are converted back to 16-bit format before they are stored back into memory. Since this is a bandwidth-bound operation, performing the update in FP32 does not have any noticeable impact on performance.

Figure 1: Mixed precision data flow for FP8 training. (left) precision settings for key compute kernels in Forward, Backward, and Weight Update passes, (right) flow diagram of the weight update rule.

# 3.1 Enhanced Loss Scaling

Previous studies[21] on half-precision floating point have shown that the loss scaling technique can be used to push smaller error gradients into the representable range and train neural networks successfully. The full range of numeric values represented by a floating point format includes the 'subnormal' values, whose range is determined by the number of mantissa bits. Because of this property of floating point numbers, the proposed 8-bit floating point format has a significantly smaller subnormal range than a half-precision floating point format with the same number of exponent bits. Table.1 shows the dynamic range comparison between full-precision (FP32), half-precision (FP16) and the proposed 8-bit floating point format.

Table 1: Dynamic range comparison between the proposed FP8 and other existing floating point formats.

| Data Type | Bit Format (s, e, m) | Max Normal | Min Normal | Min Subnormal |
|---|---|---|---|---|
| IEEE-754 float | 1, 8, 23 | 3.40e38 | 1.17e−38 | 1.40e−45 |
| IEEE-754 half-float | 1, 5, 10 | 65 535 | 6.10e−5 | 5.96e−8 |
| FP8 (proposed) | 1, 5, 2 | 57 344 | 6.10e−5 | 1.52e−5 |
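To make the data flow above concrete, the following NumPy sketch emulates the two pieces just described: the 'Q' down-conversion of an FP32 output onto the (s=1, e=5, m=2) grid of Table 1 using round-to-nearest-even, and the mixed-precision weight update of Figure 1(b). This is only an emulation on standard floating point hardware, not the authors' TensorFlow framework; the function names, the float16 round-trip (which ignores the small double-rounding effect and Inf/NaN handling), and the plain SGD-with-momentum hyperparameters are our own simplifications.

```python
import numpy as np

def quantize_fp8_e5m2(x):
    """Round FP32 values to the nearest FP8 (1 sign, 5 exponent, 2 mantissa)
    value with round-to-nearest-even, emulated in FP32. float16 shares the
    5-bit exponent, so dropping the low 8 of its 10 mantissa bits lands
    exactly on the e5m2 grid (normals and subnormals alike)."""
    bits = np.asarray(x, dtype=np.float16).view(np.uint16).astype(np.uint32)
    drop = 8
    lsb = (bits >> drop) & 1                 # bit that decides ties-to-even
    bias = (1 << (drop - 1)) - 1 + lsb       # RNE rounding bias
    bits = ((bits + bias) >> drop) << drop   # truncate to 2 mantissa bits
    return bits.astype(np.uint16).view(np.float16).astype(np.float32)

def fp8_gemm(a_fp32, b_fp32):
    """Emulated GEMM from Figure 1(a): both inputs quantized to e5m2, the
    dot product accumulated in FP32; the caller re-quantizes the output
    before the next layer."""
    return quantize_fp8_e5m2(a_fp32) @ quantize_fp8_e5m2(b_fp32)

def momentum_update(master_w_fp16, velocity_fp32, wgrad_fp8, loss_scale,
                    lr, momentum=0.9):
    """One update step following Figure 1(b): unscale the FP8 weight
    gradients in full precision, apply momentum in FP32, and store the
    master copy back in FP16."""
    w = master_w_fp16.astype(np.float32)              # up-convert master copy
    g = wgrad_fp8.astype(np.float32) / loss_scale     # unscale in FP32
    velocity_fp32[:] = momentum * velocity_fp32 + g
    w -= lr * velocity_fp32
    return w.astype(np.float16)                       # FP16 master weights
```

For example, quantize_fp8_e5m2(np.array([1.3], dtype=np.float32)) returns 1.25, the nearest point on the e5m2 grid, whose spacing around 1.0 is 0.25.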
Half-precision training for convolution networks has been shown to converge using a constant loss scaling parameter of 1000[21]. Other networks such as GNMT[26] and Transformer[23] use a more robust dynamic loss scaling method[19]. However, the reduced subnormal range of 8-bit floating point presents a few additional challenges to these methods. For convolution networks, simply increasing the scaling factor addresses the issue of convergence. Figure.2a shows results from our convergence studies on Resnet-50 using different loss scaling values. The model failed to converge with a scaling factor of 1000, progressively performed better with increasing loss scale values, and converged at 10 000.

Recurrent networks like GNMT[26] experience significant variations in gradient distributions through the training cycle and are more sensitive to numerical errors. We trained GNMT using the 'back-off' dynamic loss scaling method[19], which updates the scaling factor every few iterations. While this method is effective in preventing overflows, it proved less effective in handling the underflow that occurs more frequently during FP8 training. Our experiments with more frequent updates to the scaling factor led to unstable loss behaviour resulting in divergence. We addressed this by gradually increasing the 'minimum threshold' value of the scaling factor, guided by the loss function as training progressed. Figure.2b shows the loss scaling schedule that worked for GNMT: we set the minimum threshold to 8K after the first 40K iterations, then increased it to 32K at around 150K iterations.

Figure 2: Convergence behaviour of FP8 training using enhanced loss scaling. (left) Resnet-50[12] failed to converge with loss scale=1000, performed better with 2.3% accuracy loss at loss scale=4000 and showed full convergence at loss scale=10 000, (right) Dynamic loss scaling with gradually increasing minimum threshold for the scaling factor.
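A sketch of this enhanced 'back-off' loss scaler is given below. The initial scale, back-off factor and growth window are illustrative placeholders, while the rising minimum-threshold schedule (8K after 40K iterations, 32K around 150K) follows Figure 2b; in the paper the thresholds were chosen by observing the loss, whereas here they are written as a fixed iteration schedule for simplicity.

```python
class EnhancedDynamicLossScaler:
    """Back-off dynamic loss scaling with a rising floor on the scale."""

    def __init__(self, init_scale=2.0**15, factor=2.0, window=2000,
                 min_scale_schedule=((40_000, 8192.0), (150_000, 32768.0))):
        self.scale = init_scale
        self.factor = factor        # multiply/divide step for the scale
        self.window = window        # overflow-free iters before growing
        self.schedule = min_scale_schedule
        self.min_scale = 1.0
        self.good_steps = 0

    def update(self, iteration, found_overflow):
        # Raise the floor on the scaling factor as training progresses.
        for start_iter, floor in self.schedule:
            if iteration >= start_iter:
                self.min_scale = max(self.min_scale, floor)
        if found_overflow:
            # Back off, but never below the current minimum threshold.
            self.scale = max(self.scale / self.factor, self.min_scale)
            self.good_steps = 0
        else:
            self.good_steps += 1
            if self.good_steps >= self.window:
                self.scale *= self.factor
                self.good_steps = 0
        return self.scale
```

In use, the loss is multiplied by scaler.update(step, found_overflow) before back-propagation and the weight gradients are divided by the same factor, in full precision, before the update.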
The model displayed significant over-fitting behaviour as indicated by the increased validation error, while the training error mostly follows the baseline as shown in as shown in Figure.3b, and 3a. Multiple experiments indicated that this behaviour is caused by the noisy error gradients during early epochs which lead to unconstrained growth in model parameters. This is indicated by steep increase in L2 regularization parameter as shown in Figure.3c. Regularization loss is computed using the formula shown in Equation.1. Increased regularization loss leads to more noisy gradients, which further exacerbates this behaviour. An interesting observation about the L2 regularization loss is that for ResNet-18, the L2-loss is low at the beginning and increases with gradually with iterations. On the other hand for ResNet-50, the L2-loss is high at the beginning due to the initialization of low fan-in 1x1 [8] convolutions, and needs to dip a little before gradually rising again. We suspect that this property of the initialization leads to more noisy behavior of ResNet-50 in the earlier iterations as compared to ResNet-18. Therefore for the ResNet-50 model stochastic rounding is essential. w L2_loss = Xx > we qd) i=0 Where, λ is the weight decay parameter and W is the total number of weights. In order to understand the issue of regularization independent of the choice of rounding method, we conducted additional experiments using RNE with other forms of regularization. Figure.4a compares the ’Dropout’ method with ’no regularization’ method which uses quantization noise as implicit regularizer with no explicit regularization term. In both these cases, the models performed much better than using L2 regularization with RNE, leading us to the conclusion that RNE was ineffective in regulating quantization noise in gradients causing unconstrained growth in model parameters. (a) (b) (c) 20% —FP32 baseline training error cox pe, ene, walning error 5 20s © oon 20% on Sot so0s6i oats zosise—_sont2t Iterations 1x — FP 32, validation enor FPS, RNE, validation error 10 B wx Eo 20% o« Loa 8 Hoe a nom a epochs 30 —FP32, L2_loss 25 — Fe, RNE, L2_loss par) Sas sos 00 Soi 200561 2ooNs1 29sis3soniat Iterations: Figure 3: Impact of quantization (with RNE rounding) noise on model convergence with Resnet-50 (a) comparison of training error, (b) validation error, and (c) L2 regularization loss with FP32 baseline. Unlike deterministic rounding techniques, stochastic rounding computes the probability of rounding using information from several discarded bits of the input making it less prone to introducing large 5 rounding errors. We studied the error behaviour of Resnet-50[12] by applying stochastic rounding on activations and gradients to regulate quantization noise in the gradients, which in-turn can improve the effectiveness of explicit regularization methods. Our stochastic rounding method is defined as follows: [x], +6 with probability P = @-l#J)+" round(«x,k) = . rare with probability 1 — P Where, k is the target precision, € is machine epsilon, and r is random value generated by a pseudo random number generator. Figure.4b shows the results from Resnet-50[12] training experiment using a combination stochastic rounding and explicit L2 regularization. The convergence plots show the good generalization behavior that tracks with the full precision training. As a positive side effect, we have also observed that this method consistently outperformed leading to slightly better validation accuracy across convolution networks. 
Figure 4: (a) Comparison of validation performance with 'dropout' and noise-based implicit regularization using RNE (round to nearest even); (b) model performance with stochastic rounding and L2 regularization.

# 4 Experiments and Results

We built a TensorFlow-based training platform[2] that can accurately emulate the numeric properties of 8-bit floating point on current-generation floating point hardware. Training experiments were conducted using open source model implementations from TensorFlow[1] and OpenSeq2Seq[19]. Our training framework internally updates the training graph by inserting quantization ops in the forward, backward, and weight update paths for all convolution and GEMM kernels, as described in Section.3.
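To make the graph-rewriting step concrete, the sketch below shows what an inserted quantize–dequantize op might compute around a single GEMM. It is a simplified NumPy illustration under assumed settings (a 2-bit-mantissa FP8-like grid, RNE rounding via `np.round`, full-precision accumulation), not the authors' TensorFlow implementation, and it omits exponent-range clamping, the FP16 master copy of weights, and the higher-precision first and last layers described below.

```python
import numpy as np

def quantize_dequantize(x, mantissa_bits=2):
    # Emulate a reduced-mantissa float on full-precision hardware: snap each
    # value to the FP8-like grid of its binade (np.round rounds halves to even,
    # i.e. RNE) and return the result in full precision.
    x = np.asarray(x, dtype=np.float64)
    mag = np.abs(x)
    exponent = np.floor(np.log2(np.where(mag > 0.0, mag, 1.0)))
    eps = np.exp2(exponent - mantissa_bits)
    return np.where(mag > 0.0, np.round(x / eps) * eps, 0.0)

def fp8_gemm_forward(activations, weights):
    # Forward GEMM with quantization ops inserted on both inputs; the matrix
    # product itself accumulates in full precision.
    aq = quantize_dequantize(activations)
    wq = quantize_dequantize(weights)
    return aq @ wq, (aq, wq)

def fp8_gemm_backward(grad_output, cache):
    # Backward GEMMs: the incoming error gradient is quantized before being
    # used to form the activation and weight gradients.
    aq, wq = cache
    gq = quantize_dequantize(grad_output)
    return gq @ wq.T, aq.T @ gq   # (grad wrt activations, grad wrt weights)
```

In the actual framework these ops are inserted automatically into the training graph for every convolution and GEMM kernel; where stochastic rounding is used (Section 3.2), it would take the place of `np.round` for the corresponding tensors.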
Using the proposed training method, we have successfully trained Resnet-18, Resnet-34 and Resnet-50[12] on the Imagenet-1K[7] data set. We used the same set of hyperparameters (except for loss scaling) and converged the networks in the same number of iterations as the baseline FP32 training. For these convolution networks, the first convolution and the last fully-connected (FC) layers are kept at a higher precision (16-bit) to maintain model accuracy. For all convolution networks, in addition to using the FP8 data format for weights, activations, error and weight gradients, we have also reduced the precision of the master copy of weights to FP16. Using the techniques described in Section.3.2, we also managed to achieve slightly better top-1 accuracy than the baseline. Table.2 summarizes the validation accuracy achieved by the convolution networks on the imagenet-1K[7] dataset. Figure.5 shows the convergence plots for Resnet-34 and Resnet-50, comparing the top-1 accuracy of FP8 training with the baseline FP32 training. The validation accuracy of FP8 training closely follows the baseline numbers, indicating the robustness of the training method.

Table 2: Top-1 validation accuracy for convolution networks on the Imagenet-1K[7] data set.

Model     | Dataset     | Batch-size | Epochs | FP32 (top-1 %) | FP8 (top-1 %)
Resnet-18 | imagenet-1K | 256        | 100    | 69.23          | 69.71
Resnet-34 | imagenet-1K | 256        | 100    | 72.96          | 72.95
Resnet-50 | imagenet-1K | 256        | 100    | 75.47          | 75.70

Table 3: Comparison of our method with the only other FP8 training method on the Imagenet-1K[7] data set. W, A, E, G, MasterWts represent the precision settings for weights, activations, error, weight gradients and the master copy of weights respectively.

Method, Format       | W,A,E,G | MasterWts | Resnet-18 (top-1 error %) | Resnet-50 (top-1 error %)
Wang et al.[24], FP8 | 8,8,8,8 | 16        | 33.05                     | 28.28
Ours, FP8            | 8,8,8,8 | 16        | 30.29                     | 24.30

Figure 5: Convergence plots showing top-1 validation accuracy for (a) Resnet-34[12] and (b) Resnet-50[12] on the imagenet-1K[7] dataset.

Figure 6: Convergence plots showing training loss for (a) 8-layer GNMT[26] and (b) 6-layer Transformer[23] trained on the WMT16 English→German dataset.

In addition to convolution networks, we have also trained two state-of-the-art machine translation workloads (GNMT[26] and Transformer[23]) and demonstrated BLEU scores matching the single-precision baselines. We trained an 8-layer GNMT[26] encoder/decoder LSTM model with 1024 recurrent units and 1024 attention units, using the FP8 numeric format for all GEMM operations while activation functions such as tanh and sigmoid use the FP16 data type; we used the loss scaling schedule described in Section.3.1. We also trained a 6-layer Transformer[23] translation network with roughly 200M parameters. For the Transformer network, our internal baseline score is lower than the currently reported highest score. Both the GNMT[26] and Transformer[23] models were trained on the large-scale WMT2016 English→German dataset consisting of 4.5 million sentence pairs, using the ADAM optimizer with the same hyperparameters as the FP32 baseline. On both models, our FP8 mixed precision training achieved BLEU scores comparable to the FP32 baseline. The results are summarized in Table.4.

Table 4: sacreBLEU[22] score measured on the WMT 2014 English→German dataset.

Model       | Dataset/Task            | FP32 baseline | FP8 Mixed Precision
GNMT        | WMT 2016 English→German | 24.3          | 24.6
Transformer | WMT 2016 English→German | 23.6          | 23

# 5 Conclusion

We demonstrate state-of-the-art accuracy across multiple data sets (imagenet-1K, WMT16) and a broader set of workloads (Resnet-18/34/50, GNMT, Transformer) than previously reported. We propose an easy-to-implement and scalable solution for building FP8 compute primitives, eliminating the need for the stochastic rounding hardware in the critical compute path proposed in [24], and thereby reducing the cost and complexity of the MAC unit. We explore the issues of gradient underflow and quantization noise that arise from using the proposed 8-bit numeric format for large-scale neural network training, and we propose solutions to these problems in the form of enhanced loss scaling and stochastic rounding.

# References

[1] Models and examples built with TensorFlow. https://github.com/tensorflow/models. [2] Tensorflow framework for reduced precision training. https://github.com/nkmellem/tensorflow. [3] Dario Amodei and Danny Hernandez. AI and Compute. https://openai.com/blog/ai-and-compute/. [4] Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5918–5926, 2017. [5] Dipankar Das, Naveen Mellempudi, Dheevatsa Mudigere, Dhiraj Kalamkar, Sasikanth Avancha, Kunal Banerjee, Srinivas Sridharan, Karthik Vaidyanathan, Bharat Kaul, Evangelos Georganas, et al. Mixed precision training of convolutional neural networks using integer operations. arXiv preprint arXiv:1802.00930, 2018. [6] Christopher De Sa, Megan Leszczynski, Jian Zhang, Alana Marzoev, Christopher R Aberger, Kunle Olukotun, and Christopher Ré. High-accuracy low-precision training. arXiv preprint arXiv:1803.03383, 2018. [7] J. Deng, W. Dong, R. Socher, L.-J. Li, K.
Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. [8] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS’10). Society for Artificial Intelligence and Statistics, 2010. [9] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE, 2013. [10] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pages 1737–1746, 2015. 8 [11] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014. [12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. [13] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quan- tized neural networks: Training neural networks with low precision weights and activations. The Journal of Machine Learning Research, 18(1):6869–6898, 2017. [14] Atef Ibrahim and Fayez Gebali. Low power semi-systolic architectures for polynomial-basis multiplication over gf (2 m) using progressive multiplier reduction. Journal of Signal Processing Systems, 82(3):331–343, 2016. [15] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016. [16] Urs Köster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K Bansal, William Constable, Oguz Elibol, Scott Gray, Stewart Hall, Luke Hornof, et al. Flexpoint: An adaptive numerical format for efficient training of deep neural networks. In Advances in neural information processing systems, pages 1742–1752, 2017. [17] Urs Köster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K Bansal, William Constable, Oguz Elibol, Scott Gray, Stewart Hall, Luke Hornof, et al. Flexpoint: An adaptive numerical format for efficient training of deep neural networks. In Advances in neural information processing systems, pages 1742–1752, 2017. [18] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [19] Oleksii Kuchaiev, Boris Ginsburg, Igor Gitman, Vitaly Lavrukhin, Jason Li, Huyen Nguyen, Carl Case, and Paulius Micikevicius. Mixed-precision training for nlp and speech recognition with openseq2seq. Computing Research Repository (CoRR), abs/1805.10387 v2, 2018. [20] Stefano Markidis, Steven Wei Der Chien, Erwin Laure, Ivy Bo Peng, and Jeffrey S Vetter. Nvidia tensor core programmability, performance & precision. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pages 522–531. IEEE, 2018. [21] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. 
arXiv preprint arXiv:1710.03740, 2017. [22] Matt Post. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771, 2018. [23] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008, 2017. [24] Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, and Kailash Gopalakrishnan. Training deep neural networks with 8-bit floating point numbers. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7675–7684. Curran Associates, Inc., 2018. [25] Shuang Wu, Guoqi Li, Feng Chen, and Luping Shi. Training and inference with integers in deep neural networks. In International Conference on Learning Representations, 2018. [26] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. 9 [27] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. [28] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. CoRR, abs/1611.01578, 2016. 10
{ "id": "1802.00930" }
1905.11946
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
http://arxiv.org/pdf/1905.11946
Mingxing Tan, Quoc V. Le
cs.LG, cs.CV, stat.ML
ICML 2019
International Conference on Machine Learning, 2019
cs.LG
20190528
20200911
0 2 0 2 p e S 1 1 ] G L . s c [ 5 v 6 4 9 1 1 . 5 0 9 1 : v i X r a # EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks # Mingxing Tan 1 Quoc V. Le 1 # Abstract Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more In this paper, we sys- resources are available. tematically study model scaling and identify that carefully balancing network depth, width, and res- olution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. ResNet-152 (He et al., 2016) EfficientNet-B1 ResNeXt-101 (Xie et al., 2017) EfficientNet-B3 SENet (Hu et al., 2018) NASNet-A (Zoph et al., 2018) EfficientNet-B4 GPipe (Huang et al., 2018) † EfficientNet-B7 †Not plotted Top1 Acc. #Params 60M 7.8M 84M 12M 146M 89M 19M 556M 66M 77.8% 79.1% 80.9% 81.6% 82.7% 82.7% 82.9% 84.3% 84.3% To go even further, we use neural architec- ture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https: //github.com/tensorflow/tpu/tree/ master/models/official/efficientnet. Figure 1. Model Size vs. ImageNet Accuracy. All numbers are for single-crop, single-model. Our EfficientNets significantly out- perform other ConvNets. In particular, EfficientNet-B7 achieves new state-of-the-art 84.3% top-1 accuracy but being 8.4x smaller and 6.1x faster than GPipe. EfficientNet-B1 is 7.6x smaller and 5.7x faster than ResNet-152. Details are in Table 2 and 4. time larger. However, the process of scaling up ConvNets has never been well understood and there are currently many ways to do it. The most common way is to scale up Con- vNets by their depth (He et al., 2016) or width (Zagoruyko & Komodakis, 2016). Another less common, but increasingly popular, method is to scale up models by image resolution (Huang et al., 2018). In previous work, it is common to scale only one of the three dimensions – depth, width, and image size. Though it is possible to scale two or three dimensions arbitrarily, arbitrary scaling requires tedious manual tuning and still often yields sub-optimal accuracy and efficiency. # 1. Introduction Scaling up ConvNets is widely used to achieve better accu- racy. For example, ResNet (He et al., 2016) can be scaled up from ResNet-18 to ResNet-200 by using more layers; Recently, GPipe (Huang et al., 2018) achieved 84.3% Ima- geNet top-1 accuracy by scaling up a baseline model four 1Google Research, Brain Team, Mountain View, CA. Corre- spondence to: Mingxing Tan <[email protected]>. Proceedings of the 36 th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. In this paper, we want to study and rethink the process of scaling up ConvNets. 
In particular, we investigate the central question: is there a principled method to scale up ConvNets that can achieve better accuracy and efficiency? Our empirical study shows that it is critical to balance all dimensions of network width/depth/resolution, and surpris- ingly such balance can be achieved by simply scaling each of them with constant ratio. Based on this observation, we propose a simple yet effective compound scaling method. Unlike conventional practice that arbitrary scales these fac- tors, our method uniformly scales network width, depth, EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks #channels porccteanny |_| ~--layer_i } resolution HxW (a) baseline (b) width scaling (c) depth scaling H ---higher_ te _1_.resolution (d) resolution scaling (e) compound scaling Figure 2. Model Scaling. (a) is a baseline network example; (b)-(d) are conventional scaling that only increases one dimension of network width, depth, or resolution. (e) is our proposed compound scaling method that uniformly scales all three dimensions with a fixed ratio. and resolution with a set of fixed scaling coefficients. For example, if we want to use 2N times more computational resources, then we can simply increase the network depth by αN , width by βN , and image size by γN , where α, β, γ are constant coefficients determined by a small grid search on the original small model. Figure 2 illustrates the difference between our scaling method and conventional methods. Intuitively, the compound scaling method makes sense be- cause if the input image is bigger, then the network needs more layers to increase the receptive field and more channels to capture more fine-grained patterns on the bigger image. In fact, previous theoretical (Raghu et al., 2017; Lu et al., 2018) and empirical results (Zagoruyko & Komodakis, 2016) both show that there exists certain relationship between network width and depth, but to our best knowledge, we are the first to empirically quantify the relationship among all three dimensions of network width, depth, and resolution. We demonstrate that our scaling method work well on exist- ing MobileNets (Howard et al., 2017; Sandler et al., 2018) and ResNet (He et al., 2016). Notably, the effectiveness of model scaling heavily depends on the baseline network; to go even further, we use neural architecture search (Zoph & Le, 2017; Tan et al., 2019) to develop a new baseline network, and scale it up to obtain a family of models, called EfficientNets. Figure 1 summarizes the ImageNet perfor- mance, where our EfficientNets significantly outperform other ConvNets. In particular, our EfficientNet-B7 surpasses the best existing GPipe accuracy (Huang et al., 2018), but using 8.4x fewer parameters and running 6.1x faster on in- ference. Compared to the widely used ResNet-50 (He et al., 2016), our EfficientNet-B4 improves the top-1 accuracy from 76.3% to 83.0% (+6.7%) with similar FLOPS. Besides ImageNet, EfficientNets also transfer well and achieve state- of-the-art accuracy on 5 out of 8 widely used datasets, while reducing parameters by up to 21x than existing ConvNets. # 2. Related Work ConvNet Accuracy: Since AlexNet (Krizhevsky et al., 2012) won the 2012 ImageNet competition, ConvNets have become increasingly more accurate by going bigger: while the 2014 ImageNet winner GoogleNet (Szegedy et al., 2015) achieves 74.8% top-1 accuracy with about 6.8M parameters, the 2017 ImageNet winner SENet (Hu et al., 2018) achieves 82.7% top-1 accuracy with 145M parameters. 
Recently, GPipe (Huang et al., 2018) further pushes the state-of-the-art ImageNet top-1 validation accuracy to 84.3% using 557M parameters: it is so big that it can only be trained with a specialized pipeline parallelism library by partitioning the network and spreading each part to a different accelera- tor. While these models are mainly designed for ImageNet, recent studies have shown better ImageNet models also per- form better across a variety of transfer learning datasets (Kornblith et al., 2019), and other computer vision tasks such as object detection (He et al., 2016; Tan et al., 2019). Although higher accuracy is critical for many applications, we have already hit the hardware memory limit, and thus further accuracy gain needs better efficiency. ConvNet Efficiency: Deep ConvNets are often over- parameterized. Model compression (Han et al., 2016; He et al., 2018; Yang et al., 2018) is a common way to re- duce model size by trading accuracy for efficiency. As mo- bile phones become ubiquitous, it is also common to hand- craft efficient mobile-size ConvNets, such as SqueezeNets (Iandola et al., 2016; Gholami et al., 2018), MobileNets (Howard et al., 2017; Sandler et al., 2018), and ShuffleNets EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (Zhang et al., 2018; Ma et al., 2018). Recently, neural archi- tecture search becomes increasingly popular in designing efficient mobile-size ConvNets (Tan et al., 2019; Cai et al., 2019), and achieves even better efficiency than hand-crafted mobile ConvNets by extensively tuning the network width, depth, convolution kernel types and sizes. However, it is unclear how to apply these techniques for larger models that have much larger design space and much more expensive tuning cost. In this paper, we aim to study model efficiency for super large ConvNets that surpass state-of-the-art accu- racy. To achieve this goal, we resort to model scaling. Model Scaling: There are many ways to scale a Con- vNet for different resource constraints: ResNet (He et al., 2016) can be scaled down (e.g., ResNet-18) or up (e.g., ResNet-200) by adjusting network depth (#layers), while WideResNet (Zagoruyko & Komodakis, 2016) and Mo- bileNets (Howard et al., 2017) can be scaled by network width (#channels). It is also well-recognized that bigger input image size will help accuracy with the overhead of more FLOPS. Although prior studies (Raghu et al., 2017; Lin & Jegelka, 2018; Sharir & Shashua, 2018; Lu et al., 2018) have shown that network depth and width are both important for ConvNets’ expressive power, it still remains an open question of how to effectively scale a ConvNet to achieve better efficiency and accuracy. Our work systemati- cally and empirically studies ConvNet scaling for all three dimensions of network width, depth, and resolutions. i. Figure 2(a) illustrate a representative ConvNet, where the spatial dimension is gradually shrunk but the channel dimension is expanded over layers, for example, from initial input shape (224, 224, 3) to final output shape (7,7, 512). Unlike regular ConvNet designs that mostly focus on find- ing the best layer architecture Fi, model scaling tries to ex- pand the network length (Li), width (Ci), and/or resolution (Hi, Wi) without changing Fi predefined in the baseline network. By fixing Fi, model scaling simplifies the design problem for new resource constraints, but it still remains a large design space to explore different Li, Ci, Hi, Wi for each layer. 
In order to further reduce the design space, we restrict that all layers must be scaled uniformly with con- stant ratio. Our target is to maximize the model accuracy for any given resource constraints, which can be formulated as an optimization problem: max Accuracy (W(d, w, r)) dyw.r st. N(d,w,r) = © FEE (Xo piv) isl. Memory(WV) < target_ memory FLOPS(V) < target_flops (2) where w, d, r are coefficients for scaling network width, depth, and resolution; ˆFi, ˆLi, ˆHi, ˆWi, ˆCi are predefined pa- rameters in baseline network (see Table 1 as an example). # 3. Compound Model Scaling # 3.2. Scaling Dimensions In this section, we will formulate the scaling problem, study different approaches, and propose our new scaling method. # 3.1. Problem Formulation The main difficulty of problem 2 is that the optimal d, w, r depend on each other and the values change under different resource constraints. Due to this difficulty, conventional methods mostly scale ConvNets in one of these dimensions: A ConvNet Layer i can be defined as a function: Y; = F;(X;), where F; is the operator, Y; is output tensor, X; is input tensor, with tensor shape (H;, W;, C;)', where H; and W; are spatial dimension and C; is the channel dimension. A ConvNet WN can be represented by a list of composed lay- ers: NM = Fi, ©... O Fe OFi(X1) = Oyen. Fi(X1). In practice, ConvNet layers are often partitioned into multiple stages and all layers in each stage share the same architec- ture: for example, ResNet (He et al., 2016) has five stages, and all layers in each stage has the same convolutional type except the first layer performs down-sampling. Therefore, we can define a ConvNet as: N= © FP (Xun,w..c)) () i=1...s Depth (ddd): Scaling network depth is the most common way used by many ConvNets (He et al., 2016; Huang et al., 2017; Szegedy et al., 2015; 2016). The intuition is that deeper ConvNet can capture richer and more complex features, and generalize well on new tasks. However, deeper networks are also more difficult to train due to the vanishing gradient problem (Zagoruyko & Komodakis, 2016). Although sev- eral techniques, such as skip connections (He et al., 2016) and batch normalization (Ioffe & Szegedy, 2015), alleviate the training problem, the accuracy gain of very deep network diminishes: for example, ResNet-1000 has similar accuracy as ResNet-101 even though it has much more layers. Figure 3 (middle) shows our empirical study on scaling a baseline model with different depth coefficient d, further suggesting the diminishing accuracy return for very deep ConvNets. where F, i denotes layer F; is repeated L; times in stage i, (H;, W;, C;) denotes the shape of input tensor X of layer 1For the sake of simplicity, we omit batch dimension. Width (www): Scaling network width is commonly used for small size models (Howard et al., 2017; Sandler et al., 2018; EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks _ 8 81 81 & B80 804 804 4 3 79 794 794 < "78 784 784 e od a7 74 v7 4 za o_. * 5 2 76 764 764 £& 5 r r ; r 75 r Y r Y 75 Y r . 0 2 4 6 8 0 1 2 3 4 0 L 2 3 FLOPS (Billions) FLOPS (Billions) FLOPS (Billions) Figure 3. Scaling Up a Baseline Model with Different Network Width (w), Depth (d), and Resolution (r) Coefficients. Bigger networks with larger width, depth, or resolution tend to achieve higher accuracy, but the accuracy gain quickly saturate after reaching 80%, demonstrating the limitation of single dimension scaling. Baseline network is described in Table 1. Tan et al., 2019)2. 
As discussed in (Zagoruyko & Ko- modakis, 2016), wider networks tend to be able to capture more fine-grained features and are easier to train. However, extremely wide but shallow networks tend to have difficul- ties in capturing higher level features. Our empirical results in Figure 3 (left) show that the accuracy quickly saturates when networks become much wider with larger w. Resolution (rrr): With higher resolution input images, Con- vNets can potentially capture more fine-grained patterns. Starting from 224x224 in early ConvNets, modern Con- vNets tend to use 299x299 (Szegedy et al., 2016) or 331x331 (Zoph et al., 2018) for better accuracy. Recently, GPipe (Huang et al., 2018) achieves state-of-the-art ImageNet ac- curacy with 480x480 resolution. Higher resolutions, such as 600x600, are also widely used in object detection ConvNets (He et al., 2017; Lin et al., 2017). Figure 3 (right) shows the results of scaling network resolutions, where indeed higher resolutions improve accuracy, but the accuracy gain dimin- ishes for very high resolutions (r = 1.0 denotes resolution 224x224 and r = 2.5 denotes resolution 560x560). 80, ImageNet Top1 Accuracy (%) ~ 3 0 5 10 15 FLOPS (billions) Figure 4. Scaling Network Width for Different Baseline Net- works. Each dot in a line denotes a model with different width coefficient (w). All baseline networks are from Table 1. The first baseline network (d=1.0, r=1.0) has 18 convolutional layers with resolution 224x224, while the last baseline (d=2.0, r=1.3) has 36 layers with resolution 299x299. The above analyses lead us to the first observation: Observation 1 – Scaling up any dimension of network width, depth, or resolution improves accuracy, but the accu- racy gain diminishes for bigger models. order to capture more fine-grained patterns with more pixels in high resolution images. These intuitions suggest that we need to coordinate and balance different scaling dimensions rather than conventional single-dimension scaling. # 3.3. Compound Scaling We empirically observe that different scaling dimensions are not independent. Intuitively, for higher resolution images, we should increase network depth, such that the larger re- ceptive fields can help capture similar features that include more pixels in bigger images. Correspondingly, we should also increase network width when resolution is higher, in To validate our intuitions, we compare width scaling under different network depths and resolutions, as shown in Figure 4. If we only scale network width w without changing depth (d=1.0) and resolution (r=1.0), the accuracy saturates quickly. With deeper (d=2.0) and higher resolution (r=2.0), width scaling achieves much better accuracy under the same FLOPS cost. These results lead us to the second observation: 2In some literature, scaling number of channels is called “depth multiplier”, which means the same as our width coefficient w. Observation 2 – In order to pursue better accuracy and efficiency, it is critical to balance all dimensions of network width, depth, and resolution during ConvNet scaling. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks In fact, a few prior work (Zoph et al., 2018; Real et al., 2019) have already tried to arbitrarily balance network width and depth, but they all require tedious manual tuning. Table 1. EfficientNet-B0 baseline network — Each row describes a stage 7 with L; layers, with input resolution (4; Wi) and output channels C;,. Notations are adopted from equation 2. 
In this paper, we propose a new compound scaling method, which use a compound coefficient φ to uniformly scales network width, depth, and resolution in a principled way: depth: d = αφ width: w = βφ resolution: r = γφ s.t. α · β2 · γ2 ≈ 2 α ≥ 1, β ≥ 1, γ ≥ 1 (3) Stage i 1 2 3 4 5 6 7 8 9 Operator ˆFi Conv3x3 MBConv1, k3x3 MBConv6, k3x3 MBConv6, k5x5 MBConv6, k3x3 MBConv6, k5x5 MBConv6, k5x5 MBConv6, k3x3 Conv1x1 & Pooling & FC Resolution ˆHi × ˆWi 224 × 224 112 × 112 112 × 112 56 × 56 28 × 28 14 × 14 14 × 14 7 × 7 7 × 7 #Channels ˆCi 32 16 24 40 80 112 192 320 1280 #Layers ˆLi 1 1 2 2 3 3 4 1 1 where a, 3,7 are constants that can be determined by a small grid search. Intuitively, @ is a user-specified coeffi- cient that controls how many more resources are available for model scaling, while a, 3,7 specify how to assign these extra resources to network width, depth, and resolution re- spectively. Notably, the FLOPS of a regular convolution op is proportional to d, w?, r?, i.e., doubling network depth will double FLOPS, but doubling network width or resolu- tion will increase FLOPS by four times. Since convolution ops usually dominate the computation cost in ConvNets, scaling a ConvNet with equation 3 will approximately in- crease total FLOPS by (a - 6? - ~y?. In this paper, we constraint a - 6? - y? = 2 such that for any new 4, the total FLOPS will approximately? increase by 2%. 4. EfficientNet Architecture Since model scaling does not change layer operators ˆFi in baseline network, having a good baseline network is also critical. We will evaluate our scaling method using existing ConvNets, but in order to better demonstrate the effectiveness of our scaling method, we have also developed a new mobile-size baseline, called EfficientNet. Net, except our EfficientNet-B0 is slightly bigger due to the larger FLOPS target (our FLOPS target is 400M). Ta- ble 1 shows the architecture of EfficientNet-B0. Its main building block is mobile inverted bottleneck MBConv (San- dler et al., 2018; Tan et al., 2019), to which we also add squeeze-and-excitation optimization (Hu et al., 2018). Starting from the baseline EfficientNet-B0, we apply our compound scaling method to scale it up with two steps: STEP 1: we first fix φ = 1, assuming twice more re- sources available, and do a small grid search of α, β, γ In particular, we find based on Equation 2 and 3. the best values for EfficientNet-B0 are α = 1.2, β = 1.1, γ = 1.15, under constraint of α · β2 · γ2 ≈ 2. • STEP 2: we then fix α, β, γ as constants and scale up baseline network with different φ using Equation 3, to obtain EfficientNet-B1 to B7 (Details in Table 2). Notably, it is possible to achieve even better performance by searching for α, β, γ directly around a large model, but the search cost becomes prohibitively more expensive on larger models. Our method solves this issue by only doing search once on the small baseline network (step 1), and then use the same scaling coefficients for all other models (step 2). Inspired by (Tan et al., 2019), we develop our baseline net- work by leveraging a multi-objective neural architecture search that optimizes both accuracy and FLOPS. Specifi- cally, we use the same search space as (Tan et al., 2019), and use ACC(m)×[F LOP S(m)/T ]w as the optimization goal, where ACC(m) and F LOP S(m) denote the accu- racy and FLOPS of model m, T is the target FLOPS and w=-0.07 is a hyperparameter for controlling the trade-off between accuracy and FLOPS. 
Unlike (Tan et al., 2019; Cai et al., 2019), here we optimize FLOPS rather than la- tency since we are not targeting any specific hardware de- vice. Our search produces an efficient network, which we name EfficientNet-B0. Since we use the same search space as (Tan et al., 2019), the architecture is similar to Mnas- 3FLOPS may differ from theoretical value due to rounding. # 5. Experiments In this section, we will first evaluate our scaling method on existing ConvNets and the new proposed EfficientNets. # 5.1. Scaling Up MobileNets and ResNets As a proof of concept, we first apply our scaling method to the widely-used MobileNets (Howard et al., 2017; San- dler et al., 2018) and ResNet (He et al., 2016). Table 3 shows the ImageNet results of scaling them in different ways. Compared to other single-dimension scaling methods, our compound scaling method improves the accuracy on all these models, suggesting the effectiveness of our proposed scaling method for general existing ConvNets. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks Table 2. EfficientNet Performance Results on ImageNet (Russakovsky et al., 2015). All EfficientNet models are scaled from our baseline EfficientNet-B0 using different compound coefficient φ in Equation 3. ConvNets with similar top-1/top-5 accuracy are grouped together for efficiency comparison. Our scaled EfficientNet models consistently reduce parameters and FLOPS by an order of magnitude (up to 8.4x parameter reduction and up to 16x FLOPS reduction) than existing ConvNets. Model Top-1 Acc. Top-5 Acc. #Params Ratio-to-EfficientNet EfficientNet-B0 ResNet-50 (He et al., 2016) DenseNet-169 (Huang et al., 2017) 77.1% 76.0% 76.2% 93.3% 93.0% 93.2% 5.3M 26M 14M 1x 4.9x 2.6x 0.39B 4.1B 3.5B 1x 11x 8.9x EfficientNet-B1 ResNet-152 (He et al., 2016) DenseNet-264 (Huang et al., 2017) Inception-v3 (Szegedy et al., 2016) Xception (Chollet, 2017) 79.1% 77.8% 77.9% 78.8% 79.0% 94.4% 93.8% 93.9% 94.4% 94.5% 7.8M 60M 34M 24M 23M 1x 7.6x 4.3x 3.0x 3.0x 0.70B 11B 6.0B 5.7B 8.4B 1x 16x 8.6x 8.1x 12x EfficientNet-B2 Inception-v4 (Szegedy et al., 2017) Inception-resnet-v2 (Szegedy et al., 2017) 80.1% 80.0% 80.1% 94.9% 95.0% 95.1% 9.2M 48M 56M 1x 5.2x 6.1x 1.0B 13B 13B 1x 13x 13x EfficientNet-B3 ResNeXt-101 (Xie et al., 2017) PolyNet (Zhang et al., 2017) 81.6% 80.9% 81.3% 95.7% 95.6% 95.8% 12M 84M 92M 1x 7.0x 7.7x 1.8B 32B 35B 1x 18x 19x EfficientNet-B4 SENet (Hu et al., 2018) NASNet-A (Zoph et al., 2018) AmoebaNet-A (Real et al., 2019) PNASNet (Liu et al., 2018) 82.9% 82.7% 82.7% 82.8% 82.9% 96.4% 96.2% 96.2% 96.1% 96.2% 19M 146M 89M 87M 86M 1x 7.7x 4.7x 4.6x 4.5x 4.2B 42B 24B 23B 23B 1x 10x 5.7x 5.5x 6.0x EfficientNet-B5 AmoebaNet-C (Cubuk et al., 2019) 83.6% 83.5% 96.7% 96.5% 30M 155M 1x 5.2x 9.9B 41B 1x 4.1x EfficientNet-B6 84.0% 96.8% 43M 1x 19B 1x We omit ensemble and multi-crop models (Hu et al., 2018), or models pretrained on 3.5B Instagram images (Mahajan et al., 2018). Table 3. Scaling Up MobileNets and ResNet. Model FLOPS Top-1 Acc. 
Baseline MobileNetV1 (Howard et al., 2017) 0.6B 70.6% Scale MobileNetV1 by width (w=2) Scale MobileNetV1 by resolution (r=2) compound scale (ddd=1.4, www=1.2, rrr=1.3) 2.2B 2.2B 2.3B 74.2% 72.7% 75.6% Baseline MobileNetV2 (Sandler et al., 2018) 0.3B 72.0% Scale MobileNetV2 by depth (d=4) Scale MobileNetV2 by width (w=2) Scale MobileNetV2 by resolution (r=2) MobileNetV2 compound scale Baseline ResNet-50 (He et al., 2016) Scale ResNet-50 by depth (d=4) Scale ResNet-50 by width (w=2) Scale ResNet-50 by resolution (r=2) ResNet-50 compound scale 1.2B 1.1B 1.2B 1.3B 4.1B 16.2B 14.7B 16.4B 16.7B 76.8% 76.4% 74.8% 77.4% 76.0% 78.1% 77.7% 77.5% 78.8% ResNet-152 (Xie et al., 2017) EfficientNet-B1 ResNeXt-101 (Xie et al., 2017) EfficientNet-B3 SENet (Hu et al., 2018) NASNet-A (Zoph et al., 2018) EfficientNet-B4 AmeobaNet-C (Cubuk et al., 2019) EfficientNet-B5 Top1 Acc. FLOPS 11B 0.7B 32B 1.8B 42B 24B 4.2B 41B 9.9B 77.8% 79.1% 80.9% 81.6% 82.7% 80.7% 82.9% 83.5% 83.6% Table 4. Inference Latency Comparison – Latency is measured with batch size 1 on a single core of Intel Xeon CPU E5-2690. Figure 5. FLOPS vs. ImageNet Accuracy – Similar to Figure 1 except it compares FLOPS rather than model size. ResNet-152 EfficientNet-B1 Speedup Acc. @ Latency 77.8% @ 0.554s 78.8% @ 0.098s 5.7x GPipe EfficientNet-B7 Speedup Acc. @ Latency 84.3% @ 19.0s 84.4% @ 3.1s 6.1x # 5.2. ImageNet Results for EfficientNet We train our EfficientNet models on ImageNet using simi- lar settings as (Tan et al., 2019): RMSProp optimizer with decay 0.9 and momentum 0.9; batch norm momentum 0.99; EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks Table 5. EfficientNet Performance Results on Transfer Learning Datasets. Our scaled EfficientNet models achieve new state-of-the- art accuracy for 5 out of 8 datasets, with 9.6x fewer parameters on average. Model Comparison to best public-available results Acc. Acc. #Param Our Model #Param(ratio) Model Acc. Comparison to best reported results Acc. #Param Our Model #Param(ratio) CIFAR-10 CIFAR-100 Birdsnap Stanford Cars Flowers FGVC Aircraft Oxford-IIIT Pets Food-101 NASNet-A NASNet-A Inception-v4 Inception-v4 Inception-v4 Inception-v4 ResNet-152 Inception-v4 98.0% 87.5% 81.8% 93.4% 98.5% 90.9% 94.5% 90.8% 85M EfficientNet-B0 85M EfficientNet-B0 41M EfficientNet-B5 41M EfficientNet-B3 41M EfficientNet-B5 41M EfficientNet-B3 58M EfficientNet-B4 41M EfficientNet-B4 98.1% 88.1% 82.0% 93.6% 98.5% 90.7% 94.8% 91.5% 4M (21x) 4M (21x) 28M (1.5x) 10M (4.1x) 28M (1.5x) 10M (4.1x) 17M (5.6x) 17M (2.4x) †Gpipe Gpipe GPipe ‡DAT DAT DAT GPipe GPipe 99.0% 556M EfficientNet-B7 91.3% 556M EfficientNet-B7 83.6% 556M EfficientNet-B7 94.8% EfficientNet-B7 EfficientNet-B7 97.7% 92.9% EfficientNet-B7 95.9% 556M EfficientNet-B6 93.0% 556M EfficientNet-B7 - - - 64M (8.7x) 98.9% 91.7% 64M (8.7x) 84.3% 64M (8.7x) - 94.7% 98.8% - 92.9% - 95.4% 41M (14x) 93.0% 64M (8.7x) (9.6x) (4.7x) Geo-Mean †GPipe (Huang et al., 2018) trains giant models with specialized pipeline parallelism library. ‡DAT denotes domain adaptive transfer learning (Ngiam et al., 2018). Here we only compare ImageNet-based transfer learning results. Transfer accuracy and #params for NASNet (Zoph et al., 2018), Inception-v4 (Szegedy et al., 2017), ResNet-152 (He et al., 2016) are from (Kornblith et al., 2019). CIFAR10 CIFAR100 Birdsnap Stanford Cars Accuracy(%) 10! 10? 10% 10" 107 10% 10! 107 10% 10! 107 10° Flowers FGVC Aircraft Oxford-IlIT Pets Food-101 96 . ° lo2.5, + | _ 92 z 190.0 > < 94 90 g 87.5 af 8 . 
88 <° 5.0, * y, * 92 2.514 1a 10" 107 10% 10" 107 10% 10! 107 10% 10! 107 10° Number of Parameters (Millions, log-scale) + DenseNet-201 + ResNet-50 a Inception-v1 + ResNet-152 « NASNet-A « GPIPE +» ResNet-101 «— Inception-v3 + DenseNet-121 = EfficientNet + Inception-ResNet-v2 v DenseNet-169 » — Inception-v4 Figure 6. Model Parameters vs. Transfer Learning Accuracy – All models are pretrained on ImageNet and finetuned on new datasets. weight decay 1e-5; initial learning rate 0.256 that decays by 0.97 every 2.4 epochs. We also use SiLU (Swish-1) ac- tivation (Ramachandran et al., 2018; Elfwing et al., 2018; Hendrycks & Gimpel, 2016), AutoAugment (Cubuk et al., 2019), and stochastic depth (Huang et al., 2016) with sur- vival probability 0.8. As commonly known that bigger mod- els need more regularization, we linearly increase dropout (Srivastava et al., 2014) ratio from 0.2 for EfficientNet-B0 to 0.5 for B7. We reserve 25K randomly picked images from the training set as a minival set, and perform early stopping on this minival; we then evaluate the early- stopped checkpoint on the original validation set to report the final validation accuracy. being more accurate but 8.4x smaller than the previous best GPipe (Huang et al., 2018). These gains come from both better architectures, better scaling, and better training settings that are customized for EfficientNet. Figure 1 and Figure 5 illustrates the parameters-accuracy and FLOPS-accuracy curve for representative ConvNets, where our scaled EfficientNet models achieve better accu- racy with much fewer parameters and FLOPS than other ConvNets. Notably, our EfficientNet models are not only small, but also computational cheaper. For example, our EfficientNet-B3 achieves higher accuracy than ResNeXt- 101 (Xie et al., 2017) using 18x fewer FLOPS. Table 2 shows the performance of all EfficientNet models that are scaled from the same baseline EfficientNet-B0. Our EfficientNet models generally use an order of magnitude fewer parameters and FLOPS than other ConvNets with similar accuracy. In particular, our EfficientNet-B7 achieves 84.3% top1 accuracy with 66M parameters and 37B FLOPS, To validate the latency, we have also measured the inference latency for a few representative CovNets on a real CPU as shown in Table 4, where we report average latency over 20 runs. Our EfficientNet-B1 runs 5.7x faster than the widely used ResNet-152, while EfficientNet-B7 runs about 6.1x faster than GPipe (Huang et al., 2018), suggesting our EfficientNets are indeed fast on real hardware. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks baseline model original image deeper (d=4) —_z bakeshop maze r 3 wider (w=2) higher resolution (r=2) compound scaling Figure 7. Class Activation Map (CAM) (Zhou et al., 2016) for Models with different scaling methods- Our compound scaling method allows the scaled model (last column) to focus on more relevant regions with more object details. Model details are in Table 7. Table 6. Transfer Learning Datasets. Dataset Train Size Test Size #Classes CIFAR-10 (Krizhevsky & Hinton, 2009) CIFAR-100 (Krizhevsky & Hinton, 2009) Birdsnap (Berg et al., 2014) Stanford Cars (Krause et al., 2013) Flowers (Nilsback & Zisserman, 2008) FGVC Aircraft (Maji et al., 2013) Oxford-IIIT Pets (Parkhi et al., 2012) Food-101 (Bossard et al., 2014) 50,000 50,000 47,386 8,144 2,040 6,667 3,680 75,750 10,000 10,000 2,443 8,041 6,149 3,333 3,369 25,250 10 100 500 196 102 100 37 101 # 5.3. 
Transfer Learning Results for EfficientNet oo i ~ scale by width sox scale by depth + scale by resolution —— compound scaling ImageNet Top-1 Accuracy(%) 0 1 2 3 4 FLOPS (Billions) We have also evaluated our EfficientNet on a list of com- monly used transfer learning datasets, as shown in Table 6. We borrow the same training settings from (Kornblith et al., 2019) and (Huang et al., 2018), which take ImageNet pretrained checkpoints and finetune on new datasets. Table 5 shows the transfer learning performance: (1) Com- pared to public available models, such as NASNet-A (Zoph et al., 2018) and Inception-v4 (Szegedy et al., 2017), our Ef- ficientNet models achieve better accuracy with 4.7x average (up to 21x) parameter reduction. (2) Compared to state- of-the-art models, including DAT (Ngiam et al., 2018) that dynamically synthesizes training data and GPipe (Huang et al., 2018) that is trained with specialized pipeline paral- lelism, our EfficientNet models still surpass their accuracy in 5 out of 8 datasets, but using 9.6x fewer parameters Figure 6 compares the accuracy-parameters curve for a va- riety of models. In general, our EfficientNets consistently achieve better accuracy with an order of magnitude fewer pa- rameters than existing models, including ResNet (He et al., 2016), DenseNet (Huang et al., 2017), Inception (Szegedy et al., 2017), and NASNet (Zoph et al., 2018). # 6. Discussion To disentangle the contribution of our proposed scaling method from the EfficientNet architecture, Figure 8 com- pares the ImageNet performance of different scaling meth- Figure 8. Scaling Up EfficientNet-B0 with Different Methods. Table 7. Scaled Models Used in Figure 7. Model FLOPS Top-1 Acc. Baseline model (EfficientNet-B0) 0.4B 77.3% Scale model by depth (d=4) Scale model by width (w=2) Scale model by resolution (r=2) Compound Scale (ddd=1.4, www=1.2, rrr=1.3) 1.8B 1.8B 1.9B 1.8B 79.0% 78.9% 79.1% 81.1% ods for the same EfficientNet-B0 baseline network. In gen- eral, all scaling methods improve accuracy with the cost of more FLOPS, but our compound scaling method can further improve accuracy, by up to 2.5%, than other single- dimension scaling methods, suggesting the importance of our proposed compound scaling. In order to further understand why our compound scaling method is better than others, Figure 7 compares the class activation map (Zhou et al., 2016) for a few representative models with different scaling methods. All these models are scaled from the same baseline, and their statistics are shown in Table 7. Images are randomly picked from ImageNet validation set. As shown in the figure, the model with com- pound scaling tends to focus on more relevant regions with more object details, while other models are either lack of object details or unable to capture all objects in the images. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks # 7. Conclusion In this paper, we systematically study ConvNet scaling and identify that carefully balancing network width, depth, and resolution is an important but missing piece, preventing us from better accuracy and efficiency. To address this issue, we propose a simple and highly effective compound scaling method, which enables us to easily scale up a baseline Con- vNet to any target resource constraints in a more principled way, while maintaining model efficiency. 
Powered by this compound scaling method, we demonstrate that a mobile- size EfficientNet model can be scaled up very effectively, surpassing state-of-the-art accuracy with an order of magni- tude fewer parameters and FLOPS, on both ImageNet and five commonly used transfer learning datasets. Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. Autoaugment: Learning augmentation policies from data. CVPR, 2019. Elfwing, S., Uchibe, E., and Doya, K. Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107:3–11, 2018. Gholami, A., Kwon, K., Wu, B., Tai, Z., Yue, X., Jin, P., Zhao, S., and Keutzer, K. Squeezenext: Hardware-aware neural network design. ECV Workshop at CVPR’18, 2018. # Acknowledgements Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. ICLR, 2016. We thank Ruoming Pang, Vijay Vasudevan, Alok Aggarwal, Barret Zoph, Hongkun Yu, Jonathon Shlens, Raphael Gon- tijo Lopes, Yifeng Lu, Daiyi Peng, Xiaodan Song, Samy Bengio, Jeff Dean, and the Google Brain team for their help. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. CVPR, pp. 770–778, 2016. He, K., Gkioxari, G., Doll´ar, P., and Girshick, R. Mask r-cnn. ICCV, pp. 2980–2988, 2017. # Appendix Since 2017, most research papers only report and compare ImageNet validation accuracy; this paper also follows this convention for better comparison. In addition, we have also verified the test accuracy by submitting our predictions on the 100k test set images to http://image-net.org; results are in Table 8. As expected, the test accuracy is very close to the validation accuracy. Table 8. ImageNet Validation vs. Test Top-1/5 Accuracy. B0 B1 B2 B3 B4 B5 B6 B7 Val top1 Test top1 77.11 77.23 79.13 79.17 80.07 80.16 81.59 81.72 82.89 82.94 83.60 83.69 83.95 84.04 84.26 84.33 Val top5 Test top5 93.35 93.45 94.47 94.43 94.90 94.98 95.67 95.70 96.37 96.27 96.71 96.64 96.76 96.86 96.97 96.94 He, Y., Lin, J., Liu, Z., Wang, H., Li, L.-J., and Han, S. Amc: Automl for model compression and acceleration on mobile devices. ECCV, 2018. Hendrycks, D. and Gimpel, K. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. Hu, J., Shen, L., and Sun, G. Squeeze-and-excitation net- works. CVPR, 2018. # References Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K. Q. Deep networks with stochastic depth. ECCV, pp. 646–661, 2016. Berg, T., Liu, J., Woo Lee, S., Alexander, M. L., Jacobs, D. W., and Belhumeur, P. N. Birdsnap: Large-scale fine-grained visual categorization of birds. CVPR, pp. 2011–2018, 2014. Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. CVPR, 2017. Bossard, L., Guillaumin, M., and Van Gool, L. Food-101– mining discriminative components with random forests. ECCV, pp. 446–461, 2014. Cai, H., Zhu, L., and Han, S. Proxylessnas: Direct neural architecture search on target task and hardware. ICLR, 2019. Chollet, F. Xception: Deep learning with depthwise separa- ble convolutions. CVPR, pp. 1610–02357, 2017. Huang, Y., Cheng, Y., Chen, D., Lee, H., Ngiam, J., Le, Q. V., and Chen, Z. 
Gpipe: Efficient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1808.07233, 2018. Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 mb model size. arXiv preprint arXiv:1602.07360, 2016. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, pp. 448–456, 2015. Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., and Sohl- Dickstein, J. On the expressive power of deep neural networks. ICML, 2017. Kornblith, S., Shlens, J., and Le, Q. V. Do better imagenet models transfer better? CVPR, 2019. Ramachandran, P., Zoph, B., and Le, Q. V. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2018. Krause, J., Deng, J., Stark, M., and Fei-Fei, L. Collecting a large-scale dataset of fine-grained cars. Second Workshop on Fine-Grained Visual Categorizatio, 2013. Real, E., Aggarwal, A., Huang, Y., and Le, Q. V. Regu- larized evolution for image classifier architecture search. AAAI, 2019. Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical Report, 2009. Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In NIPS, pp. 1097–1105, 2012. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. Imagenet large scale visual recognition chal- lenge. International Journal of Computer Vision, 115(3): 211–252, 2015. Lin, H. and Jegelka, S. Resnet with one-neuron hidden layers is a universal approximator. NeurIPS, pp. 6172– 6181, 2018. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. Mobilenetv2: Inverted residuals and linear bottlenecks. CVPR, 2018. Lin, T.-Y., Doll´ar, P., Girshick, R., He, K., Hariharan, B., and Belongie, S. Feature pyramid networks for object detection. CVPR, 2017. Sharir, O. and Shashua, A. On the expressive power of overlapping architectures of deep learning. ICLR, 2018. Liu, C., Zoph, B., Shlens, J., Hua, W., Li, L.-J., Fei-Fei, L., Yuille, A., Huang, J., and Murphy, K. Progressive neural architecture search. ECCV, 2018. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014. Lu, Z., Pu, H., Wang, F., Hu, Z., and Wang, L. The expres- sive power of neural networks: A view from the width. NeurIPS, 2018. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. CVPR, pp. 1–9, 2015. Ma, N., Zhang, X., Zheng, H.-T., and Sun, J. Shufflenet v2: Practical guidelines for efficient cnn architecture design. ECCV, 2018. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. CVPR, pp. 2818–2826, 2016. Mahajan, D., Girshick, R., Ramanathan, V., He, K., Paluri, M., Li, Y., Bharambe, A., and van der Maaten, L. Explor- ing the limits of weakly supervised pretraining. arXiv preprint arXiv:1805.00932, 2018. Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. A. Inception-v4, inception-resnet and the impact of residual connections on learning. AAAI, 4:12, 2017. Maji, S., Rahtu, E., Kannala, J., Blaschko, M., and Vedaldi, A. 
Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013. Tan, M., Chen, B., Pang, R., Vasudevan, V., Sandler, M., Howard, A., and Le, Q. V. MnasNet: Platform-aware neural architecture search for mobile. CVPR, 2019. Ngiam, J., Peng, D., Vasudevan, V., Kornblith, S., Le, Q. V., and Pang, R. Domain adaptive transfer learning with spe- cialist models. arXiv preprint arXiv:1811.07056, 2018. Xie, S., Girshick, R., Doll´ar, P., Tu, Z., and He, K. Aggre- gated residual transformations for deep neural networks. CVPR, pp. 5987–5995, 2017. Nilsback, M.-E. and Zisserman, A. Automated flower clas- sification over a large number of classes. ICVGIP, pp. 722–729, 2008. Yang, T.-J., Howard, A., Chen, B., Zhang, X., Go, A., Sze, V., and Adam, H. Netadapt: Platform-aware neural net- work adaptation for mobile applications. ECCV, 2018. Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. Cats and dogs. CVPR, pp. 3498–3505, 2012. Zagoruyko, S. and Komodakis, N. Wide residual networks. BMVC, 2016. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks Zhang, X., Li, Z., Loy, C. C., and Lin, D. Polynet: A pursuit of structural diversity in very deep networks. CVPR, pp. 3900–3908, 2017. Zhang, X., Zhou, X., Lin, M., and Sun, J. Shufflenet: An ex- tremely efficient convolutional neural network for mobile devices. CVPR, 2018. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. Learning deep features for discriminative localization. CVPR, pp. 2921–2929, 2016. Zoph, B. and Le, Q. V. Neural architecture search with reinforcement learning. ICLR, 2017. Zoph, B., Vasudevan, V., Shlens, J., and Le, Q. V. Learning transferable architectures for scalable image recognition. CVPR, 2018.
{ "id": "1606.08415" }
1905.11742
Overlearning Reveals Sensitive Attributes
"Overlearning" means that a model trained for a seemingly simple objective implicitly learns to recognize attributes and concepts that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races\textemdash even races that are not represented in the training data\textemdash and identities. We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be "re-purposed" for a different, privacy-violating task even in the absence of the original training data. We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.
http://arxiv.org/pdf/1905.11742
Congzheng Song, Vitaly Shmatikov
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20190528
20200208
# OVERLEARNING REVEALS SENSITIVE ATTRIBUTES

# Congzheng Song, Cornell University, [email protected]; Vitaly Shmatikov, Cornell Tech, [email protected]

# ABSTRACT

“Overlearning” means that a model trained for a seemingly simple objective implicitly learns to recognize attributes and concepts that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races—even races that are not represented in the training data—and identities. We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be “re-purposed” for a different, privacy-violating task even in the absence of the original training data. We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.

# 1 INTRODUCTION

We demonstrate that representations learned by deep models when training for seemingly simple objectives reveal privacy- and bias-sensitive attributes that are not part of the specified objective. These unintentionally learned concepts are neither finer-, nor coarser-grained versions of the model’s labels, nor statistically correlated with them. We call this phenomenon overlearning. For example, a binary classifier trained to determine the gender of a facial image also learns to recognize races (including races not represented in the training data) and even identities of individuals.

Overlearning has two distinct consequences. First, the model’s inference-time representation of an input reveals the input’s sensitive attributes. For example, a facial recognition model’s representation of an image reveals if two specific individuals appear together in it. Overlearning thus breaks inference-time privacy protections based on model partitioning (Osia et al., 2018; Chi et al., 2018; Wang et al., 2018). Second, we develop a new, transfer learning-based technique to “re-purpose” a model trained for a benign task into a model for a different, privacy-violating task. This shows the inadequacy of privacy regulations that rely on explicit enumeration of learned attributes.

Overlearning is intrinsic for some tasks, i.e., it is not possible to prevent a model from learning sensitive attributes. We show that if these attributes are censored (Xie et al., 2017; Moyer et al., 2018), the censored models either fail to learn their specified tasks, or still leak sensitive information. We develop a new de-censoring technique to extract information from censored representations. We also show that overlearned representations enable recognition of sensitive attributes that are not present in the training data. Such attributes cannot be censored using any known technique. This shows the inadequacy of censoring as a privacy protection technology.

To analyze where and why overlearning happens, we empirically show how general features emerge in the lower layers of models trained for simple objectives and conjecture an explanation based on the complexity of the training data.

# 2 BACKGROUND

We focus on supervised deep learning.
Given an input x, a model M is trained to predict the target y using a discriminative approach. We represent the model M = C ◦ E as a feature extractor 1 Published as a conference paper at ICLR 2020 (encoder) E and classifier C. The representation z = E(x) is passed to C to produce the prediction by modeling p(y|z) = C(z). Since E can have multiple layers of representation, we use El(x) = zl to denote the model’s internal representation at layer l; z is the representation at the last layer. Model partitioning splits the model into a local, on-device part and a remote, cloud-based part to improve scalability of inference (Lane & Georgiev, 2015; Kang et al., 2017) and protect privacy of inputs into the model (Li et al., 2017; Osia et al., 2018; Chi et al., 2018; Wang et al., 2018). For privacy, the local part of the model computes a representation, censors it as described below, and sends it to the cloud part, which computes the model’s output. Censoring representations. The goal is to encode input x into a representation z that does not reveal unwanted properties of x, yet is expressive enough to predict the task label y. Censoring has been used to achieve transform-invariant representations for computer vision, bias-free representations for fair machine learning, and privacy-preserving representations that hide sensitive attributes. A straightforward censoring approach is based on adversarial training (Goodfellow et al., 2014). It involves a mini-max game between a discriminator D trying to infer s from z during training and an encoder and classifier trying to infer the task label y while minimizing the discriminator’s success (Edwards & Storkey, 2016; Iwasawa et al., 2016; Hamm, 2017; Xie et al., 2017; Li et al., 2018; Coavoux et al., 2018; Elazar & Goldberg, 2018). The game is formulated as: min E,C max D Ex,y,s[γ log p(s|z = E(x)) − log p(y|z = E(x))] (1) where γ balances the two log likelihood terms. The inner optimization maximizes log p(s|z = E(x)), i.e., the discriminator’s prediction of the sensitive attribute s given a representation z. The outer optimization, on the other hand, trains the encoder and classifier to minimize the log likelihood of the discriminator predicting s and maximize that of predicting the task label y. Another approach casts censoring as a single information-theoretical objective. The requirement that z not reveal s can be formalized as an independence constraint z ⊥ s, but independence is intractable to measure in practice, thus the requirement is relaxed to a constraint on the mutual information between z and s (Osia et al., 2018; Moyer et al., 2018). The overall training objective of censoring s and predicting y from z is formulated as: max I(z, y) − βI(z, x) − λI(z, s) (2) where I is mutual information and β, λ are the balancing coefficients; β = 0 in (Osia et al., 2018). The first two terms I(z, y) − βI(z, x) is the objective of variational information bottle- neck (Alemi et al., 2017), the third term is the relaxed independence constraint of z and s. Intuitively, this objective aims to maximize the information of y in z as per I(z, y), forget the information of x in z as per −βI(z, x), and remove the information of s in z as per −λI(z, s). This objective has an analytical lower bound (Moyer et al., 2018): Ex,s[Ez,y[log p(y|z)] − (β + λ)KL[q(z|x)||q(z)] − λEz[log p(x|z, s)]] where KL is Kullback-Leibler divergence and log p(x|z, s) is the reconstruction likelihood of x given z and s. 
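Before continuing with how these conditional distributions are modeled below, the adversarial-training censoring objective of Equation (1) can be illustrated with a minimal PyTorch-style sketch of one alternating update. This is not the authors' code: E, C, and D are assumed to be nn.Module instances, opt_model is assumed to optimize the parameters of E and C, and opt_disc those of D.

import torch.nn as nn

def censoring_step(E, C, D, opt_model, opt_disc, x, y, s, gamma=1.0):
    ce = nn.CrossEntropyLoss()
    # Inner maximization: the discriminator D tries to predict the
    # sensitive attribute s from the (detached) representation z = E(x).
    z = E(x).detach()
    disc_loss = ce(D(z), s)
    opt_disc.zero_grad()
    disc_loss.backward()
    opt_disc.step()
    # Outer minimization: E and C predict the task label y while making
    # D's prediction of s as poor as possible, matching
    # gamma * log p(s|z) - log p(y|z) in Equation (1).
    z = E(x)
    model_loss = ce(C(z), y) - gamma * ce(D(z), s)
    opt_model.zero_grad()
    model_loss.backward()
    opt_model.step()
    return model_loss.item(), disc_loss.item()

One such call per mini-batch, with gamma balancing the two terms, corresponds to the mini-max game in Equation (1); the information-theoretic variant is discussed next.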
The conditional distributions p(y|z) = C(z), q(z|x) = E(x) are modeled as in adversarial training, and p(x|z, s) is modeled with a decoder R(z, s) = p(x|z, s). All known censoring techniques require a “blacklist” of attributes to censor, and inputs with these attributes must be represented in the training data.

Censoring for fairness is applied to the model’s final layer to make its output independent of the sensitive attributes or satisfy a specific fairness constraint (Zemel et al., 2013; Louizos et al., 2016; Madras et al., 2018; Song et al., 2019). In this paper, we use censoring not for fairness but to demonstrate that models cannot be prevented from learning to recognize sensitive attributes. To show this, we apply censoring to different layers, not just the output.

# 3 EXPLOITING OVERLEARNING

We demonstrate two different ways to exploit overlearning in a trained model M. The inference-time attack (Section 3.1) applies M to an input and uses M’s representation of that input to predict its sensitive attributes. The model-repurposing attack (Section 3.2) uses M to create another model that, when applied to an input, directly predicts its sensitive attributes.

Inferring s from representation:
1: Input: Adversary’s auxiliary dataset Daux, black-box oracle E, observed z⋆
2: Dattack ← {(E(x), s) | (x, s) ∈ Daux}
3: Train attack model Mattack on Dattack
4: return prediction ŝ = Mattack(z⋆)

Adversarial re-purposing:
1: Input: Model M for the original task, transfer dataset Dtransfer for the new task
2: Build Mtransfer = Ctransfer ◦ El on layer l
3: Fine-tune Mtransfer on Dtransfer
4: return transfer model Mtransfer

Figure 1: Pseudo-code for inference from representation and adversarial re-purposing.

Algorithm 1: De-censoring representations
1: Input: Auxiliary dataset Daux, black-box oracle E, observed representation z⋆
2: Train auxiliary model Maux = Eaux ◦ Caux on Daux
3: Initialize transform model T, inference attack model Mattack
4: for each training iteration do
5:
6:
7:
8: end for
9: return prediction ŝ = Mattack(T(z⋆))

# 3.1 INFERRING SENSITIVE ATTRIBUTES FROM REPRESENTATIONS

We measure the leakage of sensitive properties from the representations of overlearned models via the following attack. Suppose an adversary can observe the representation z⋆ of a trained model M on input x⋆ at inference time but cannot observe x⋆ directly. This scenario arises in practice when model evaluation is partitioned in order to protect privacy of inputs—see Section 2. The adversary wants to infer some property s of x⋆ that is not part of the task label y.

We assume that the adversary has an auxiliary set Daux of labeled (x, s) pairs and a black-box oracle E to compute the corresponding E(x). The purpose of Daux is to help the adversary recognize the property of interest in the model’s representations; it need not be drawn from the same dataset as x⋆. The adversary uses supervised learning on the (E(x), s) pairs to train an attack model Mattack. At inference time, the adversary predicts ŝ from the observed z⋆ as Mattack(z⋆).

De-censoring. If the representation z is “censored” (see Section 2) to reduce the amount of information it reveals about s, the direct inference attack may not succeed. We develop a new, learning-based de-censoring approach (see Algorithm 1) to convert censored representations into a different form that leaks more information about the property of interest.
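To make the two procedures in Figure 1 concrete (the de-censoring transformer itself is detailed next), here is a hedged PyTorch-style sketch. The loader objects, dimensions, and hidden sizes are assumptions; the attack network mirrors the two-layer (256, 128) adversary used later in the experiments.

import torch
import torch.nn as nn

def inference_attack(E, aux_loader, z_star, repr_dim, n_sensitive, epochs=50):
    # Figure 1 (top): train M_attack on (E(x), s) pairs from the auxiliary
    # data, then predict the sensitive attribute of the observed z_star.
    attack = nn.Sequential(nn.Linear(repr_dim, 256), nn.ReLU(),
                           nn.Linear(256, 128), nn.ReLU(),
                           nn.Linear(128, n_sensitive))
    opt = torch.optim.Adam(attack.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, s in aux_loader:
            with torch.no_grad():
                z = E(x)              # E is only queried as a black-box oracle
            loss = ce(attack(z), s)
            opt.zero_grad(); loss.backward(); opt.step()
    return attack(z_star).argmax(dim=-1)

def repurpose(E_l, transfer_loader, feat_dim, n_classes, epochs=50):
    # Figure 1 (bottom): keep the trained feature extractor up to layer l
    # and fine-tune a new classifier head on the small transfer set.
    model = nn.Sequential(E_l, nn.Flatten(), nn.Linear(feat_dim, n_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, s in transfer_loader:
            loss = ce(model(x), s)
            opt.zero_grad(); loss.backward(); opt.step()
    return model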
The adversary trains Maux on Daux to predict s from x, then transforms z into the input features of Maux. We treat de-censoring as an optimization problem with a feature-space L2 loss ||T(z) − zaux||₂², where T is the transformer that the adversary wants to learn and zaux is the uncensored representation from Maux. Training with a feature-space loss has been proposed for synthesizing more natural images by matching them with real images (Dosovitskiy & Brox, 2016; Nguyen et al., 2016). In our case, we match censored and uncensored representations. The adversary can then use T(z) as an uncensored approximation of z to train an inference model Mattack and infer property s as Mattack(T(z⋆)).

# 3.2 RE-PURPOSING MODELS TO PREDICT SENSITIVE ATTRIBUTES

To re-purpose a model—for example, to convert a model trained for a benign task into a model that predicts a sensitive attribute—we can use features zl in any layer of M as the feature extractor and connect a new classifier Ctransfer to El. The transferred model Mtransfer = Ctransfer ◦ El is fine-tuned on another, small dataset Dtransfer, which in itself is not sufficient to train an accurate model for the new task. Utilizing features learned by M on the original D, Mtransfer can achieve better results than models trained from scratch on Dtransfer.

Feasibility of model re-purposing complicates the application of policies and regulations such as GDPR (EU, 2018). GDPR requires data processors to disclose every purpose of data collection and obtain consent from the users whose data was collected. We show that, given a trained model, it is not possible to determine—nor, consequently, disclose or obtain user consent for—what the model has learned. Learning per se thus cannot be a regulated “purpose” of data collection. Regulators must be aware that even if the original training data has been erased, a model can be re-purposed for a different objective, possibly not envisioned at the time of original data collection. We discuss this further in Section 6.

Table 1: Summary of datasets and tasks. Cramer’s V captures statistical correlation between y and s (0 indicates no correlation and 1 indicates perfectly correlated).

Dataset   | Target y     | Attribute s  | Cramer’s V
Health    | CCI          | age          | 0.149
UTKFace   | gender       | race         | 0.035
FaceScrub | gender       | facial IDs   | 0.044
Places365 | in/outdoor   | scene type   | 0.052
Twitter   | age          | author       | 0.134
Yelp      | review score | author       | 0.033
PIPA      | facial IDs   | IDs together | n/a

# 4 EXPERIMENTAL RESULTS

# 4.1 DATASETS, TASKS, AND MODELS

Health is the Heritage Health dataset (Heritage Health Prize) with medical records of over 55,000 patients, binarized into 112 features with age information removed. The task is to predict if the Charlson Index (an estimate of patient mortality) is greater than zero; the sensitive attribute is age (binned into 9 ranges).

UTKFace is a set of over 23,000 face images labeled with age, gender, and race (UTKFace; Zhang et al., 2017). We rescaled them into 50×50 RGB pixels. The task is to predict gender; the sensitive attribute is race.

FaceScrub is a set of face images labeled with gender (FaceScrub). Some URLs are expired, but we were able to download 74,000 images for 500 individuals and rescale them into 50×50 RGB pixels. The task is to predict gender; the sensitive attribute is identity.

Places365 is a set of 1.8 million images labeled with 365 fine-grained scene categories. We use a subset of 73,000 images, 200 per category.
The task is to predict whether the scene is indoor or outdoor; the sensitive attribute is the fine-grained scene label.

Twitter is a set of tweets from the PAN16 dataset (Rangel et al., 2016) labeled with user information. We removed tweets with fewer than 20 tokens and users with fewer than 50 tweets, yielding a dataset of over 46,000 tweets from 151 users with an over 80,000-word vocabulary. The task is to predict the age of the user given a tweet; the sensitive attribute is the author’s identity.

Yelp is a set of Yelp reviews labeled with user identities (Yelp Open Dataset). We removed users with fewer than 1,000 reviews and reviews with more than 200 tokens, yielding a dataset of over 39,000 reviews from 137 users with an over 69,000-word vocabulary. The task is to predict the review score between 1 and 5; the sensitive attribute is the author’s identity.

PIPA is a set of over 60,000 photos of 2,000 individuals gathered from public Flickr photo albums (Piper project page; Zhang et al., 2015). Each image can include one or more individuals. We cropped their head regions using the bounding boxes in the image annotations. The task is to predict the identity given the head region; the sensitive attribute is whether two head regions are from the same photo.

Models. For Health, we use a two-layer fully connected (FC) neural network with 128 and 32 hidden units, respectively, following (Xie et al., 2017; Moyer et al., 2018). For UTKFace and FaceScrub, we use a LeNet (LeCun et al., 1998) variant: three 3×3 convolutional and 2×2 max-pooling layers with 16, 32, and 64 filters, followed by two FC layers with 128 and 64 hidden units. For Twitter and Yelp, we use text CNN (Kim, 2014). For Places365 and PIPA, we use AlexNet (Krizhevsky et al., 2012) with convolutional layers pre-trained on ImageNet (Deng et al., 2009) and further add a 3×3 convolutional layer with 128 filters and 2×2 max-pooling followed by two FC layers with 128 and 64 hidden units, respectively.

Table 2: Accuracy of inference from representations (last FC layer). RAND is random guessing based on majority class labels; BASE is inference from the uncensored representation; ADV from the representation censored with adversarial training; IT from the information-theoretically censored representation.

          | Acc of predicting target y      | Acc of inferring sensitive attribute s
Dataset   | RAND  BASE  ADV   IT            | RAND  BASE  ADV   IT
Health    | 66.31 84.33 80.16 82.63         | 16.00 32.52 32.00 26.60
UTKFace   | 52.27 90.38 90.15 88.15         | 42.52 62.18 53.28 53.30
FaceScrub | 53.53 98.77 97.90 97.66         | 1.42  33.65 30.23 10.61
Places365 | 56.16 91.41 90.84 89.82         | 1.37  31.03 12.56 2.29
Twitter   | 45.17 76.22 57.97 n/a           | 6.93  38.46 34.27 n/a
Yelp      | 42.56 57.81 56.79 n/a           | 15.88 33.09 27.32 n/a
PIPA      | 7.67  77.34 52.02 29.64         | 68.50 87.95 69.96 82.02

# 4.2 INFERRING SENSITIVE ATTRIBUTES FROM REPRESENTATIONS

Setup. We use 80% of the data for training the target models and 20% for evaluation. The size of the adversary’s auxiliary dataset is 50% of the training data. Success of the inference attack is measured on the final FC layer’s representation of test data. The baseline is inference from the uncensored representation. We also measure the success of inference against representations censored with γ = 1.0 for adversarial training and β = 0.01, λ = 0.0001 for information-theoretical censoring, following (Xie et al., 2017; Moyer et al., 2018).
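The LeNet variant described in the Models paragraph above (three 3×3 convolutions with 16, 32, and 64 filters, each followed by 2×2 max-pooling, then FC layers with 128 and 64 units) might be written roughly as follows for the 50×50 RGB inputs; the padding and ReLU choices are assumptions not stated in the text, and the experimental setup continues after this sketch.

import torch.nn as nn

class LeNetVariant(nn.Module):
    # Sketch of the encoder/classifier used for UTKFace and FaceScrub.
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 50 -> 25
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 25 -> 12
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
            nn.Flatten(),
            nn.Linear(64 * 6 * 6, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, n_classes)   # C in M = C o E

    def forward(self, x):
        z = self.encoder(x)          # the representation z = E(x)
        return self.classifier(z)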
For censoring with adversarial training, we simulate the adversary with a two-layer FC neural network with 256 and 128 hidden units. The number of epochs is 50 for censoring with ad- versarial training, 30 for the other models. We use the Adam optimizer with the learning rate of 0.001 and batch size of 128. For information-theoretical censoring, the model is based on VAE (Kingma & Welling, 2013; Moyer et al., 2018). The encoder q(z|x) has the same architec- ture as the CNN models with all convolutional layers. On top of that, the encoder outputs a mean vector and a standard deviation vector to model the random variable z with the re-parameterization trick. The decoder p(x|z) has three de-convolution layers with up-sampling to map z back to the same shape as the input x. For our inference model, we use the same architecture as the censoring adversary. For the PIPA inference model, which takes two representations of faces and outputs a binary prediction of whether these faces appear in the same photo, we use two FC layers followed by a bilinear model: p(s|z1, z2) = σ(h(z1)W h(z2)⊤), where z1, z2 are the two input representations, h is the two FC layers, and σ is the sigmoid function. We train the inference model for 50 epochs with the Adam optimizer, learning rate of 0.001, and batch size of 128. Results. Table 2 reports the results. When representations are not censored, accuracy of inference from the last-layer representations is much higher than random guessing for all tasks, which means models overlearn even in the higher, task-specific layers. When representations are censored with adversarial training, accuracy drops for both the main and inference tasks. Accuracy of infer- ence is much higher than in (Xie et al., 2017). The latter uses logistic regression, which is weaker than the training-time censoring-adversary network, whereas we use the same architecture for both the training-time and post-hoc adversaries. Information-theoretical censoring reduces accuracy of inference, but also damages main-task accuracy more than adversarial training for almost all models. Overlearning can cause a model to recognize even the sensitive attributes that are not repre- sented in the training dataset. Such attributes cannot be censored using any known technique. We trained a UTKFace gender classifier on datasets where all faces are of the same race. We then applied this model to test images with four races (White, Black, Asian, Indian) and attempted to infer the race attribute from the model’s representations. Inference accuracy is 61.95%, 61.99%, 60.85% and 60.81% for models trained only on, respectively, White, Black, Asian, and Indian images—almost as good as the 62.18% baseline and much higher than random guessing (42.52%). Effect of censoring strength. Fig. 2 shows that stronger censoring does not help. 
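As an aside before returning to Figure 2: the bilinear PIPA inference model described in the setup above, p(s|z1, z2) = σ(h(z1) W h(z2)ᵀ), could be sketched as below. The hidden sizes follow the two-layer (256, 128) adversary, and the initialization scale is an assumption.

import torch
import torch.nn as nn

class BilinearPairModel(nn.Module):
    # Predicts whether two head-region representations come from the same photo.
    def __init__(self, repr_dim, hidden=128):
        super().__init__()
        self.h = nn.Sequential(nn.Linear(repr_dim, 256), nn.ReLU(),
                               nn.Linear(256, hidden), nn.ReLU())
        self.W = nn.Parameter(torch.randn(hidden, hidden) * 0.01)

    def forward(self, z1, z2):
        h1, h2 = self.h(z1), self.h(z2)            # (batch, hidden) each
        logits = (h1 @ self.W * h2).sum(dim=-1)    # bilinear form h(z1) W h(z2)^T
        return torch.sigmoid(logits)               # probability of "same photo"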
On FaceScrub and Twitter with adversarial training, increasing γ damages the model’s accuracy on the main task, while accuracy of inference decreases slightly or remains the same. For UTKFace and Yelp, increasing γ improves accuracy of inference. This may indicate that the simulated “adversary” during adversarial training overpowers the optimization process and censoring defeats itself.

Figure 2: Reduction in accuracy due to censoring. Blue lines are the main task, red lines are the inference of sensitive attributes. The first row is adversarial training with different γ values (panels: FaceScrub, UTKFace, Twitter, Yelp); the second and third rows are information-theoretical censoring with different β and λ values, respectively (panels: Health, UTKFace, FaceScrub, Places365).

For all models with information-theoretical censoring, increasing β reduces the accuracy of inference but can lead to the model not converging on its main task. Increasing λ results in the model not converging on the main task, without affecting the accuracy of inference, on Health, UTKFace and FaceScrub. This seems to contradict the censoring objective, but the reconstruction loss in Equation 2 dominates the other loss terms, which leads to poor divergence between conditional q(z|x) and q(z), i.e., information about x is still retained in z.

De-censoring. As described in Section 3.1, we developed a new technique to transform censored representations to make inference easier. We first train an auxiliary model on Daux to predict the sensitive attribute from representations, using the same architecture as in the baseline models. The resulting uncensored representations from the last convolutional layer are the target for the de-censoring transformations. We use a single-layer fully connected neural network as the transformer and set the number of hidden units to the dimension of the uncensored representation. The inference model operates on top of the transformer network, with the same hyper-parameters as before.

Table 3: Improving inference accuracy with de-censoring. δ is the increase from Table 2.

Dataset   | ADV   +δ     | IT    +δ
Health    | 32.55 +0.55  | 27.05 +0.45
UTKFace   | 59.38 +6.10  | 54.31 +1.01
FaceScrub | 40.37 +12.24 | 16.40 +5.79
Places365 | 19.71 +7.15  | 3.10  +0.81
Twitter   | 36.55 +2.22  | n/a
Yelp      | 31.36 +4.04  | n/a

Table 3 shows that de-censoring significantly boosts the accuracy of inference from representations censored with adversarial training. The boost is smaller against information-theoretical censoring because its objective not only censors z with I(z, s), but also forgets x with I(x, z). On the Health task, there is not much difference since the baseline attack is already similar to the attack on censored representations, leaving little room for improvement.

Table 4: Adversarial re-purposing. The values are differences between the accuracy of predicting sensitive attributes using a re-purposed model vs. a model trained from scratch.
|Dtransfer|/|D| | Health | UTKFace | FaceScrub | Places365 | Twitter | Yelp | PIPA
0.02            | -0.57  | 4.72    | 7.01      | 4.42      | 12.99   | 5.57 | 1.33
0.04            | 0.22   | 2.70    | 15.07     | 2.14      | 10.87   | 3.60 | 2.41
0.06            | -1.21  | 2.83    | 7.02      | 2.06      | 10.51   | 8.45 | 6.50
0.08            | -0.99  | 0.25    | 11.80     | 3.39      | 9.57    | 0.33 | 4.93
0.10            | 0.35   | 2.24    | 9.43      | 2.86      | 7.30    | 2.1  | 5.89

Table 5: The effect of censoring on adversarial re-purposing for FaceScrub with γ = 0.5, 0.75, 1.0. δA is the difference in the original-task accuracy (second column) between uncensored and censored models; δB is the difference in the accuracy of inferring the sensitive attribute (columns 3 to 7) between the models re-purposed from different layers and the model trained from scratch. Negative values mean reduced accuracy. Heatmaps on the right are linear CKA similarities between censored and uncensored representations. Numbers 0 through 4 represent layers conv1, conv2, conv3, fc4, and fc5. For each model censored at layer i (x-axis), we measure similarity between the censored and uncensored models at layer j (y-axis).

γ = 0.5
Censored on | δA     | δB from conv1 | conv2 | conv3 | fc4   | fc5
conv1       | -1.66  | -6.42 | -4.09 | -1.65 | 0.46  | -3.87
conv2       | -2.87  | 0.95  | -1.77 | -2.88 | -1.53 | -2.22
conv3       | -0.64  | 1.49  | 1.49  | 0.67  | -0.48 | -1.38
fc4         | -0.16  | 2.03  | 5.16  | 6.73  | 6.12  | 0.54
fc5         | 0.05   | 1.52  | 4.53  | 7.42  | 6.14  | 4.53

γ = 0.75
Censored on | δA     | δB from conv1 | conv2 | conv3 | fc4   | fc5
conv1       | -4.48  | -7.33 | -5.01 | -1.51 | -7.99 | -7.82
conv2       | -6.02  | 0.44  | -7.04 | -5.46 | -5.94 | -5.82
conv3       | -1.90  | 1.32  | 1.37  | 1.88  | 0.74  | -0.67
fc4         | 0.01   | 3.65  | 4.56  | 5.11  | 4.44  | 0.91
fc5         | -0.74  | 1.54  | 3.61  | 6.75  | 7.18  | 4.99

γ = 1.0
Censored on | δA     | δB from conv1 | conv2 | conv3 | fc4   | fc5
conv1       | -45.25 | -7.36 | -3.93 | -2.75 | -4.37 | -2.91
conv2       | -20.30 | -3.28 | -5.27 | -7.03 | -6.38 | -5.54
conv3       | -45.20 | -2.13 | -3.06 | -4.48 | -4.05 | -5.18
fc4         | -0.52  | 1.73  | 5.19  | 4.80  | 5.83  | 1.84
fc5         | -0.86  | 1.56  | 3.55  | 5.59  | 5.14  | 1.97

[Heatmaps: linear CKA similarity between censored and uncensored models; x-axis: censored layer 0–4, y-axis: fine-tuned layer 0–4; color scale 0.0–1.0.]

In summary, these results demonstrate that information about sensitive attributes unintentionally captured by the overlearned representations cannot be suppressed by censoring.

# 4.3 RE-PURPOSING MODELS TO PREDICT SENSITIVE ATTRIBUTES

To demonstrate that overlearned representations can be picked up by a small set of unseen data to create a model for predicting sensitive attributes, we re-purpose uncensored baseline models from Section 4.2 by fine-tuning them on a small (2−10% of D) set Dtransfer and compare with the models trained from scratch on Dtransfer. We fine-tune all models for 50 epochs with batch size of 32; the other hyper-parameters are as in Section 4.2. For all CNN models, we use the trained convolutional layers as the feature extractor and randomly initialize the other layers. Table 4 shows that the re-purposed models always outperform those trained from scratch. FaceScrub and Twitter exhibit the biggest gain.

Effect of censoring. Previous work only censored the highest layer of the models. Model re-purposing can use any layer of the model for transfer learning. Therefore, to prevent re-purposing, inner layers must be censored, too.
We perform the first study of inner-layers censoring and measure its effect on both the original and re-purposed tasks. We use FaceScrub for this experiment and apply adversarial training to every layer with different strengths (γ = 0.5, 0.75, 1.0).

Figure 3: Pairwise similarities of layer representations between models for the original task (A) and for predicting a sensitive attribute (B). Numbers 0 through 4 denote layers conv1, conv2, conv3, fc4 and fc5. (Panels show models that are 0%, 10%, 20%, 40%, and 100% trained; the two rows are UTKFace and FaceScrub; x-axis: layer of A, y-axis: layer of B.)

Table 5 summarizes the results. Censoring lower layers (conv1 to conv3) blocks adversarial re-purposing, at the cost of reducing the model’s accuracy on its original task. Hyper-parameters must be tuned carefully, e.g. when γ = 1, there is a huge drop in the original-task accuracy.

To further investigate how censoring in one layer affects the representations learned across all layers, we measure per-layer similarity between censored and uncensored models using CKA, linear centered kernel alignment (Kornblith et al., 2019)—see Table 5. When censoring is applied to a specific layer, similarity for that layer is the smallest (values on the diagonal). When censoring lower layers with moderate strength (γ = 0.5 or 0.75), similarity between higher layers is still strong; when censoring higher layers, similarity between lower layers is strong. Therefore, censoring can block adversarial re-purposing from a specific layer, but the adversary can still re-purpose representations in the other layer(s) to obtain an accurate model for predicting sensitive attributes.

# 4.4 WHEN, WHERE, AND WHY OVERLEARNING HAPPENS

To investigate when (during training) and where (in which layer) the models overlearn, we use linear CKA similarity (Kornblith et al., 2019) to compare the representations at different epochs of training between models trained for the original task (A) and models trained to predict a sensitive attribute (B). We use UTKFace and FaceScrub for these experiments.

Fig. 3 shows that lower layers of models A and B learn very similar features. This was observed in (Kornblith et al., 2019) for CIFAR-10 and CIFAR-100 models, but those tasks are closely related. In our case, the tasks are entirely different and B reveals the sensitive attribute while A does not. The similar low-level features are learned very early during training. There is little similarity between the low-level features of A and high-level features of B (and vice versa), matching intuition. Interestingly, on FaceScrub even the high-level features are similar between A and B.

We conjecture that one of the reasons for overlearning is structural complexity of the data. Previous work theoretically showed that over-parameterized neural networks favor simple solutions on structured data when optimized with SGD, where structure is quantified as the number of distributions (e.g., images from different identities) within each class in the target task (Li & Liang, 2018), i.e., the fewer distributions, the more structured the data.
For data generated from more complicated distributions, networks learn more complex solutions, leading to the emergence of features that are much more general than the learning objective and, consequently, overlearning. Fig. 4 shows that the representations of a gender classifier trained on the faces from 50 individuals are closer to the random initialization than the representations trained on the faces from 500 individuals (the hyper-parameters and the total number of training examples are the same in both cases). More complex training data thus results in more complex representations for the same objective.

Figure 4: Similarity of layer representations of a partially trained gender classifier to a randomly initialized model before training. Models are trained on FaceScrub using 50 IDs (blue line) and 500 IDs (red line). (Panels: Conv1, Conv2, Conv3; x-axis: epoch, 10–30; y-axis: similarity to random weights.)

# 5 RELATED WORK

Prior work studied transferability of representations only between closely related tasks. Transferability of features between ImageNet models decreases as the distance between the base and target tasks grows (Yosinski et al., 2014), and performance of tasks is correlated to their distance from the source task (Azizpour et al., 2015). CNN models trained to distinguish coarse classes also distinguish their subsets (Huh et al., 2016). By contrast, we show that models trained for simple tasks implicitly learn privacy-sensitive concepts unrelated to the labels of the original task. Other than an anecdotal mention in the acknowledgments paragraph of (Kim et al., 2017) that logit-layer activations leak non-label concepts, this phenomenon has never been described in the research literature. Gradient updates revealed by participants in distributed learning leak information about individual training batches that is uncorrelated with the learning objective (Melis et al., 2019). We show that overlearning is a generic problem in (fully trained) models, helping explain these observations.

There is a large body of research on learning disentangled representations (Bengio et al., 2013; Locatello et al., 2019). The goal is to separate the underlying explanatory factors in the representation so that it contains all information about the input in an interpretable structure. State-of-the-art approaches use variational autoencoders (Kingma & Welling, 2013) and their variants to learn disentangled representations in an unsupervised fashion (Higgins et al., 2017; Kumar et al., 2018; Kim & Mnih, 2018; Chen et al., 2018). By contrast, overlearning means that representations learned during supervised training for one task implicitly and automatically enable another task—without disentangling the representation on purpose during training.

Work on censoring representations aims to suppress sensitive demographic attributes and identities in the model’s output for fairness and privacy. Techniques include adversarial training (Edwards & Storkey, 2016), which has been applied to census and health records (Xie et al., 2017), text (Li et al., 2018; Coavoux et al., 2018; Elazar & Goldberg, 2018), images (Hamm, 2017) and sensor data of wearables (Iwasawa et al., 2016).
An alternative approach is to minimize mutual information between the representation and the sensitive attribute (Moyer et al., 2018; Osia et al., 2018). Neither approach can prevent overlearning, except at the cost of destroying the model’s accuracy. Furthermore, these techniques cannot censor attributes that are not represented in the training data. We show that overlearned models recognize such attributes, too. # 6 CONCLUSIONS We demonstrated that models trained for seemingly simple tasks implicitly learn concepts that are not represented in the objective function. In particular, they learn to recognize sensitive attributes, such as race and identity, that are statistically orthogonal to the objective. The failure of censoring to suppress these attributes and the similarity of learned representations across uncorrelated tasks suggest that overlearning may be intrinsic, i.e., learning for some objectives may not be possible without recognizing generic low-level features that enable other tasks, including inference of sensi- 9 Published as a conference paper at ICLR 2020 tive attributes. For example, there may not exist a set of features that enables a model to accurately determine the gender of a face but not its race or identity. This is a challenge for regulations such as GDPR that aim to control the purposes and uses of machine learning technologies. To protect privacy and ensure certain forms of fairness, users and regulators may desire that models not learn some features and attributes. If overlearning is intrinsic, it may not be technically possible to enumerate, let alone control, what models are learning. There- fore, regulators should focus on ensuring that models are applied in a way that respects privacy and fairness, while acknowledging that they may still recognize and use sensitive attributes. Acknowledgments. This research was supported in part by NSF grants 1611770, 1704296, and 1916717, the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program, and a Google Faculty Research Award. # REFERENCES Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. In ICLR, 2017. Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic to specific deep representations for visual recognition. In CVPR Workshops, 2015. Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. PAMI, 2013. Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentan- glement in variational autoencoders. In NIPS, 2018. Jianfeng Chi, Emmanuel Owusu, Xuwang Yin, Tong Yu, William Chan, Patrick Tague, and Yuan Tian. Privacy partitioning: Protecting user data during the deep learning inference phase. arXiv:1812.02863, 2018. Maximin Coavoux, Shashi Narayan, and Shay B. Cohen. Privacy-preserving neural representations of text. In EMNLP, 2018. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016. Harrison Edwards and Amos J. Storkey. Censoring representations with an adversary. In ICLR, 2016. Yanai Elazar and Yoav Goldberg. Adversarial removal of demographic attributes from text data. In EMNLP, 2018. EU. General Data Protection Regulation. 
https://en.wikipedia.org/wiki/General_Data_Protection_Regulation, 2018. # FaceScrub. http://vintage.winklerbros.net/facescrub.html, 2014. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. Jihun Hamm. Minimax filter: Learning to preserve privacy from inference attacks. JMLR, 18(129): 1–31, 2017. Heritage Health Prize. https://www.kaggle.com/c/hhp, 2012. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. 10 Published as a conference paper at ICLR 2020 Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes ImageNet good for transfer learning? arXiv:1608.08614, 2016. Yusuke Iwasawa, Kotaro Nakayama, Ikuko Yairi, and Yutaka Matsuo. Privacy issues regarding the application of DNNs to activity-recognition using wearables and its countermeasures by use of adversarial training. In IJCAI, 2016. Yiping Kang, Johann Hauswald, Cao Gao, Austin Rovinski, Trevor Mudge, Jason Mars, and Lingjia Tang. Neurosurgeon: Collaborative intelligence between the cloud and mobile edge. In ASPLOS, 2017. Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). arXiv:1711.11279, 2017. Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In ICML, 2018. Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014. Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In ICML, 2019. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convo- lutional neural networks. In NIPS, 2012. Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentan- gled latent concepts from unlabeled observations. In ICLR, 2018. Nicholas D Lane and Petko Georgiev. Can deep learning revolutionize mobile sensing? In HotMo- bile, 2015. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2324, 1998. Meng Li, Liangzhen Lai, Naveen Suda, Vikas Chandra, and David Z Pan. PrivyNet: A flexible framework for privacy-preserving deep neural network training. arXiv:1709.06161, 2017. Yitong Li, Timothy Baldwin, and Trevor Cohn. Towards robust and privacy-preserving text repre- sentations. In ACL, 2018. Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In NIPS, 2018. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Sch¨olkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learn- ing of disentangled representations. In ICML, 2019. Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. In ICLR, 2016. David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In ICML, 2018. Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. 
Exploiting unintended feature leakage in collaborative learning. In S&P, 2019. Daniel Moyer, Shuyang Gao, Rob Brekelmans, Aram Galstyan, and Greg Ver Steeg. representations without adversarial training. In NIPS, 2018. Invariant Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In NIPS, 2016. 11 Published as a conference paper at ICLR 2020 Seyed Ali Osia, Ali Taheri, Ali Shahin Shamsabadi, Minos Katevas, Hamed Haddadi, and Hamid R. R. Rabiee. Deep private-feature extraction. TKDE, 2018. Piper project page. https://people.eecs.berkeley.edu/˜nzhang/piper.html, 2015. Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. Overview of the 4th author profiling task at PAN 2016: Cross-genre evaluations. In CEUR Workshop, 2016. Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, and Stefano Ermon. Learning controllable fair representations. In AISTATS, 2019. # UTKFace. http://aicip.eecs.utk.edu/wiki/UTKFace, 2017. Ji Wang, Jianguo Zhang, Weidong Bao, Xiaomin Zhu, Bokai Cao, and Philip S Yu. Not just privacy: Improving performance of private deep learning in mobile cloud. In KDD, 2018. Qizhe Xie, Zihang Dai, Yulun Du, Eduard H. Hovy, and Graham Neubig. Controllable invariance through adversarial feature learning. In NIPS, 2017. # Yelp Open Dataset. https://www.yelp.com/dataset, 2018. Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, 2014. Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In ICML, 2013. Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, and Lubomir Bourdev. Beyond frontal faces: Improving person recognition using multiple cues. In CVPR, 2015. Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017. 12
{ "id": "1709.06161" }
1905.10044
BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions
In this paper we study yes/no questions that are naturally occurring --- meaning that they are generated in unprompted and unconstrained settings. We build a reading comprehension dataset, BoolQ, of such questions, and show that they are unexpectedly challenging. They often query for complex, non-factoid information, and require difficult entailment-like inference to solve. We also explore the effectiveness of a range of transfer learning baselines. We find that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that it, surprisingly, continues to be very beneficial even when starting from massive pre-trained language models such as BERT. Our best method trains BERT on MultiNLI and then re-trains it on our train set. It achieves 80.4% accuracy compared to 90% accuracy of human annotators (and 62% majority-baseline), leaving a significant gap for future work.
http://arxiv.org/pdf/1905.10044
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, Kristina Toutanova
cs.CL
In NAACL 2019
null
cs.CL
20190524
20190524
# BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

Christopher Clark∗1, Kenton Lee†, Ming-Wei Chang†, Tom Kwiatkowski†, Michael Collins†2, Kristina Toutanova†
∗Paul G. Allen School of CSE, University of Washington, [email protected]
†Google AI Language, {kentonl, mingweichang, tomkwiat, mjcollins, kristout}@google.com
1 Work completed while interning at Google. 2 Also affiliated with Columbia University, work done at Google.

# Abstract

In this paper we study yes/no questions that are naturally occurring — meaning that they are generated in unprompted and unconstrained settings. We build a reading comprehension dataset, BoolQ, of such questions, and show that they are unexpectedly challenging. They often query for complex, non-factoid information, and require difficult entailment-like inference to solve. We also explore the effectiveness of a range of transfer learning baselines. We find that transferring from entailment data is more effective than transferring from paraphrase or extractive QA data, and that it, surprisingly, continues to be very beneficial even when starting from massive pre-trained language models such as BERT. Our best method trains BERT on MultiNLI and then re-trains it on our train set. It achieves 80.4% accuracy compared to 90% accuracy of human annotators (and 62% majority-baseline), leaving a significant gap for future work.

Figure 1: Example yes/no questions from the BoolQ dataset. Each example consists of a question (Q), an excerpt from a passage (P), and an answer (A) with an explanation added for clarity.

Q: Has the UK been hit by a hurricane?
P: The Great Storm of 1987 was a violent extratropical cyclone which caused casualties in England, France and the Channel Islands . . .
A: Yes. [An example event is given.]

Q: Does France have a Prime Minister and a President?
P: . . . The extent to which those decisions lie with the Prime Minister or President depends upon . . .
A: Yes. [Both are mentioned, so it can be inferred both exist.]

Q: Have the San Jose Sharks won a Stanley Cup?
P: . . . The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016 . . .
A: No. [They were in the finals once, and lost.]

# 1 Introduction

Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold medal for Belarus in freestyle skiing.” implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on.

To test a model’s ability to make these kinds of inferences, previous work in natural language inference (NLI) proposed the task of labeling candidate statements as being entailed or contradicted by a given passage. However, in practice, generating candidate statements that test for complex inferential abilities is challenging. For instance, evidence suggests (Gururangan et al., 2018; Jia and Liang, 2017; McCoy et al., 2019) that simply asking human annotators to write candidate statements will result in examples that typically only require surface-level reasoning.

In this paper we propose an alternative: we test models on their ability to answer naturally occurring yes/no questions.
That is, questions that were authored by people who were not prompted to write particular kinds of questions, including even being required to write yes/no questions, and who did not know the answer to the question they were asking. Figure 1 contains some examples from our dataset. We find such questions often query for non-factoid information, and that human annota- tors need to apply a wide range of inferential abili- ties when answering them. As a result, they can be used to construct highly inferential reading com- prehension datasets that have the added benefit of being directly related to the practical end-task of answering user yes/no questions. Yes/No questions do appear as a subset of some existing datasets (Reddy et al., 2018; Choi et al., 2018; Yang et al., 2018). However, these datasets are primarily intended to test other aspects of question answering (QA), such as conversational QA or multi-step reasoning, and do not contain naturally occurring questions. We follow the data collection method used by Natural Questions (NQ) (Kwiatkowski et al., 2019) to gather 16,000 naturally occurring yes/no questions into a dataset we call BoolQ (for Boolean Questions). Each question is paired with a paragraph from Wikipedia that an independent annotator has marked as containing the answer. The task is then to take a question and passage as input, and to return “yes” or “no” as output. Fig- ure 1 contains some examples, and Appendix A.1 contains additional randomly selected examples. Following recent work (Wang et al., 2018), we focus on using transfer learning to establish base- lines for our dataset. Yes/No QA is closely related to many other NLP tasks, including other forms of question answering, entailment, and paraphras- it is not clear what the best ing. Therefore, data sources to transfer from are, or if it will be sufficient to just transfer from powerful pre- trained language models such as BERT (Devlin et al., 2018) or ELMo (Peters et al., 2018). We experiment with state-of-the-art unsupervised ap- proaches, using existing entailment datasets, three methods of leveraging extractive QA data, and us- ing a few other supervised datasets. We found that transferring from MultiNLI, and the unsupervised pre-training in BERT, gave us the best results. Notably, we found these approaches are surprisingly complementary and can be com- bined to achieve a large gain in performance. Overall, our best model reaches 80.43% accuracy, compared to 62.31% for the majority baseline and 90% human accuracy. In light of the fact BERT on its own has achieved human-like performance on several NLP tasks, this demonstrates the high degree of difficulty of our dataset. We present our data and code at https://goo.gl/boolq. # 2 Related Work Yes/No questions make up a subset of the read- ing comprehension datasets CoQA (Reddy et al., 2018), QuAC (Choi et al., 2018), and Hot- PotQA (Yang et al., 2018), and are present in the ShARC (Saeidi et al., 2018) dataset. These datasets were built to challenge models to under- stand conversational QA (for CoQA, ShARC and QuAC) or multi-step reasoning (for HotPotQA), which complicates our goal of using yes/no ques- tions to test inferential abilities. Of the four, QuAC is the only one where the question authors were not allowed to view the text being used to an- swer their questions, making it the best candidate to contain naturally occurring questions. 
How- ever, QuAC still heavily prompts users, including limiting their questions to be about pre-selected Wikipedia articles, and is highly class imbalanced with 80% “yes” answers. The MS Marco dataset (Nguyen et al., 2016), which contains questions with free-form text an- swers, also includes some yes/no questions. We experiment with heuristically identifying them in Section 4, but this process can be noisy and the quality of the resulting annotations is unknown. We also found the resulting dataset is class imbal- anced, with 80% “yes” answers. Yes/No QA has been used in other contexts, such as the templated bAbI stories (Weston et al., 2015) or some Visual QA datasets (Antol et al., 2015; Wu et al., 2017). We focus on answering yes/no questions using natural language text. Question answering for reading comprehension in general has seen a great deal of recent work (Ra- jpurkar et al., 2016; Joshi et al., 2017), and there have been many recent attempts to construct QA datasets that require advanced reasoning abili- ties (Yang et al., 2018; Welbl et al., 2018; Mi- haylov et al., 2018; Zellers et al., 2018; Zhang et al., 2018). However, these attempts typically involve engineering data to be more difficult by, for example, explicitly prompting users to write multi-step questions (Yang et al., 2018; Mihaylov et al., 2018), or filtering out easy questions (Zellers et al., 2018). This risks resulting in models that do not have obvious end-use applications since they are optimized to perform in an artificial setting. In this paper, we show that yes/no questions have the benefit of being very challenging even when they are gathered from natural sources. language inference is also a well studied area of research, particularly on the MultiNLI (Williams et al., 2018) and SNLI (Bow- man et al., 2015) datasets. Other sources of entailment data include the PASCAL RTE chal- lenges (Bentivogli et al., 2009, 2011) or Sci- Tail (Khot et al., 2018). We note that, although Sc- iTail, RTE-6 and RTE-7 did not use crowd work- ers to generate candidate statements, they still use sources (multiple choices questions or document summaries) that were written by humans with knowledge of the premise text. Using naturally occurring yes/no questions ensures even greater independence between the questions and premise text, and ties our dataset to a clear end-task. BoolQ also requires detecting entailment in paragraphs instead of sentence pairs. Transfer learning for entailment has been stud- ied in GLUE (Wang et al., 2018) and SentE- val (Conneau and Kiela, 2018). Unsupervised pre-training in general has recently shown excel- lent results on many datasets, including entailment data (Peters et al., 2018; Devlin et al., 2018; Rad- ford et al., 2018). Converting short-answer or multiple choice questions into entailment examples, as we do when experimenting with transfer learning, has been proposed in several prior works (Demszky et al., 2018; Poliak et al., 2018; Khot et al., 2018). In this paper we found some evidence suggesting that these approaches are less effective than us- ing crowd-sourced entailment examples when it comes to transferring to natural yes/no questions. Contemporaneously with our work, Phang et al. (2018) showed that pre-training on supervised tasks could be beneficial even when using pre- trained language models, especially for a textual entailment task. Our work confirms these results for yes/no question answering. 
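To ground the transfer-learning discussion above, the best recipe reported for BoolQ (fine-tune BERT on MultiNLI, then re-train on BoolQ's question/passage pairs) can be sketched with the Hugging Face transformers library. This is only an illustration of the recipe, not the paper's own implementation; the data variables are placeholders, and the `.bert` attribute name assumes the standard BERT sequence-classification class.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

def finetune(model, pairs, labels, epochs=3, lr=2e-5, batch_size=32):
    # Generic fine-tuning loop over (text_a, text_b) pairs with integer labels.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for i in range(0, len(pairs), batch_size):
            a, b = zip(*pairs[i:i + batch_size])
            enc = tok(list(a), list(b), truncation=True, padding=True,
                      return_tensors="pt")
            y = torch.tensor(labels[i:i + batch_size])
            loss = model(**enc, labels=y).loss
            opt.zero_grad(); loss.backward(); opt.step()
    return model

# Stage 1: entailment pre-training on MultiNLI (3 labels).
nli = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                         num_labels=3)
# nli = finetune(nli, mnli_pairs, mnli_labels)        # (premise, hypothesis)

# Stage 2: re-train on BoolQ as binary classification over (question, passage).
boolq = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)
boolq.bert = nli.bert    # carry over the MultiNLI-tuned encoder weights
# boolq = finetune(boolq, boolq_pairs, boolq_labels)  # yes/no -> 1/0 labels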
This work builds upon the Natural Questions (NQ) (Kwiatkowski et al., 2019), which contains some natural yes/no questions. However, there are too few (about 1% of the corpus) to make yes/no QA a very important aspect of that task. In this paper, we gather a large number of additional yes/no questions in order to construct a dedicated yes/no QA dataset.

# 3 The BoolQ Dataset

An example in our dataset consists of a question, a paragraph from a Wikipedia article, the title of the article, and an answer, which is either "yes" or "no". We include the article title since it can potentially help resolve ambiguities (e.g., coreferent phrases) in the passage, although none of the models presented in this paper make use of them.

# 3.1 Data Collection

We gather questions using the same pipeline as NQ (Kwiatkowski et al., 2019), but with an additional filtering step to focus on yes/no questions. We summarize the complete pipeline here, but refer to their paper for a more detailed description.

Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found that selecting queries whose first word is in a manually constructed set of indicator words ("did", "do", "does", "is", "are", "was", "were", "have", "has", "can", "could", "will", "would") and that are of sufficient length is effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing.

Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as "not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is "yes" or "no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully.

Note that, unlike in NQ, we only use questions that were marked as having a yes/no answer, and pair each question with the selected passage instead of the entire document. This helps reduce ambiguity (e.g., avoiding cases where the document supplies conflicting answers in different paragraphs), and keeps the input small enough so that existing entailment models can easily be applied to our dataset.

We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. We split these questions into a 3.2k dev set, 3.2k test set, and 9.4k train set, ensuring questions from NQ are always in the train set. "Yes" answers are slightly more common (62.31% in the train set). The queries are typically short (average length 8.9 tokens) with longer passages (average length 108 tokens).

| Question Topic | Example | Percent | Yes% |
| --- | --- | --- | --- |
| Entertainment Media | Is You and I by Lady Gaga a cover? | 22.0 | 65.9 |
| Nature/Science | Are there blue whales in the Atlantic Ocean? | 22.0 | 56.8 |
| Sports | Has the US men's team ever won the World Cup? | 11.0 | 54.5 |
| Law/Government | Is there a seat belt law in New Hampshire? | 10.0 | 70.0 |
| History | Were submarines used in the American Civil War? | 5.0 | 70.0 |
| Fictional Events | Is the Incredible Hulk part of the avengers? | 4.0 | 87.5 |
| Other | Is GDP per capita same as per capita income? | 26.0 | 65.4 |

| Question Type | Example | Percent | Yes% |
| --- | --- | --- | --- |
| Definitional | Is thread seal tape the same as Teflon tape? | 14.5 | 55.2 |
| Existence | Is there any dollar bill higher than a 100? | 14.5 | 69.0 |
| Event Occurrence | Did the great fire of London destroy St. Paul's Cathedral? | 11.5 | 73.9 |
| Other General Fact | Is there such thing as a dominant eye? | 29.5 | 62.7 |
| Other Entity Fact | Is the Arch in St. Louis a national park? | 30.0 | 63.3 |

Table 1: Question categorization of BoolQ. Question topics are shown in the top half and question types are shown in the bottom half.

# 3.2 Analysis

In the following section we analyze our dataset to better understand the nature of the questions, the annotation quality, and the kinds of reasoning abilities required to answer them.

# 3.3 Annotation Quality

First, in order to assess annotation quality, three of the authors labelled 110 randomly chosen examples. If there was a disagreement, the authors conferred and selected a single answer by mutual agreement. We call the resulting labels "gold-standard" labels. On the 110 selected examples, the answer annotations reached 90% accuracy compared to the gold-standard labels. Of the cases where the answer annotation differed from the gold-standard, six were ambiguous or debatable cases, and five were errors where the annotator misunderstood the passage. Since the agreement was sufficiently high, we elected to use singly-annotated examples in the training/dev/test sets in order to be able to gather a larger dataset.

# 3.4 Question Types

Part of the value of this dataset is that it contains questions that people genuinely want to answer. To explore this further, we manually define a set of topics that questions can be about. An author categorized 200 questions into these topics. The results can be found in the upper half of Table 1.

Questions were often about entertainment media (including T.V., movies, and music), along with other popular topics like sports. However, there is still a good portion of questions asking for more general factual knowledge, including ones about historical events or the natural world.

We also broke the questions into categories based on what kind of information they were requesting, shown in the lower half of Table 1. Roughly one-sixth of the questions are about whether anything with a particular property exists (Existence), another sixth are about whether a particular event occurred (Event Occurrence), and another sixth ask whether an object is known by a particular name, or belongs to a particular category (Definitional). The questions that do not fall into these three categories were split between requesting facts about a specific entity, or requesting more general factual information.

We do find a correlation between the nature of the question and the likelihood of a "yes" answer. However, this correlation is too weak to help outperform the majority baseline because, even if the topic or type is known, it is never best to guess the minority class. We also found that question-only models perform very poorly on this task (see Section 5.3), which helps confirm that the questions do not contain sufficient information to predict the answer on their own.

Paraphrasing (38.7%): The passage explicitly asserts or refutes what is stated in the question.
Q: Is Tim Brown in the Hall of Fame?
P: Brown has also played for the Tampa Bay Buccaneers. In 2015, he was inducted into the Pro Football Hall of Fame.
A: Yes. ["inducted into" directly implies he is in Hall of Fame.]

By Example (11.8%): The passage provides an example or counter-example to what is asserted by the question.
Q: Are there any nuclear power plants in Michigan?
P: ...three nuclear power plants supply Michigan with about 30% of its electricity.
A: Yes. [Since there must be at least three.]

Factual Reasoning (8.5%): Answering the question requires using world knowledge to connect what is stated in the passage to the question.
Q: Was designated survivor filmed in the White House?
P: The series is...filmed in Toronto, Ontario.
A: No. [The White House is not located in Toronto.]

Implicit (8.5%): The passage mentions or describes entities in the question in a way that would not make sense if the answer was not yes/no.
Q: Is static pressure the same as atmospheric pressure?
P: The aircraft designer's objective is to ensure the pressure in the aircraft's static pressure system is as close as possible to the atmospheric pressure...
A: No. [It would not make sense to bring them "as close as possible" if those terms referred to the same thing.]

Missing Mention (6.6%): We can conclude the answer is yes or no because, if this was not the case, it would have been mentioned in the passage.
Q: Did Bonnie Blair's daughter make the Olympic team?
P: Blair and Cruikshank have two children: a son, Grant, and daughter, Blair.... Blair Cruikshank competed at the 2018 United States Olympic speed skating trials at the 500 meter distance.
A: No. [The passage describes Blair Cruikshank's daughter's skating accomplishments, so it would have mentioned it if she had qualified.]

Other Inference (25.9%): The passage states a fact that can be used to infer whether the answer is true or false, and does not fall into any of the other categories.
Q: Is the sea snake the most venomous snake?
P: ...the venom of the inland taipan, drop by drop, is the most toxic among all snakes.
A: No. [If inland taipan is the most venomous snake, the sea snake must not be.]

Table 2: Kinds of reasoning needed in the BoolQ dataset.

# 3.5 Types of Inference

Finally, we categorize the kinds of inference required to answer the questions in BoolQ (note that the dataset has been updated since we carried out this analysis, so it might be slightly out of date). The definitions and results are shown in Table 2.

Less than 40% of the examples can be solved by detecting paraphrases. Instead, many questions require making additional inferences (categories "Factual Reasoning", "By Example", and "Other Inference") to connect what is stated in the passage to the question. There is also a significant class of questions (categories "Implicit" and "Missing Mention") that require a subtler kind of inference based on how the passage is written.

# 3.6 Discussion

Why do natural yes/no questions require inference so often? We hypothesize that there are several factors. First, we notice factoid questions that ask about simple properties of entities, such as "Was Obama born in 1962?", are rare. We suspect this is because people will almost always prefer to phrase such questions as short-answer questions (e.g., "When was Obama born?"). Thus, there is a natural filtering effect where people tend to use yes/no questions exactly when they want more complex kinds of information.
Second, both the passages and questions rarely include negation. As a result, detecting a "no" answer typically requires understanding that a positive assertion in the text excludes, or makes unlikely, a positive assertion in the question. This requires reasoning that goes beyond paraphrasing (see the "Other Inference" or "Implicit" examples).

We also think it was important that annotators only had to answer questions, rather than generate them. For example, imagine trying to construct questions that fall into the categories of "Missing Mention" or "Implicit". While possible, it would require a great deal of thought and creativity. On the other hand, detecting when a yes/no question can be answered using these strategies seems much easier and more intuitive. Thus, having annotators answer pre-existing questions opens the door to building datasets that contain more inference and have higher quality labels.

# 4 Training Yes/No QA Models

Models on this dataset need to predict an output class given two pieces of input text, which is a well studied paradigm (Wang et al., 2018). We find training models on our train set alone to be relatively ineffective. Our best model reaches 69.6% accuracy, only 8% better than the majority baseline. Therefore, we follow the recent trend in NLP of using transfer learning. In particular, we experiment with pre-training models on related tasks that have larger datasets, and then fine-tuning them on our training data. We list the sources we consider for pre-training below.

Entailment: We use two entailment datasets, MultiNLI (Williams et al., 2018) and SNLI (Bowman et al., 2015). We choose these datasets since they are widely used and large enough to use for pre-training. We also experiment with ablating classes from MultiNLI. During fine-tuning we use the probability the model assigns to the "entailment" class as the probability of predicting a "yes" answer.

Multiple-Choice QA: We use a multiple-choice reading comprehension dataset, RACE (Lai et al., 2017), which contains stories or short essays paired with questions built to test the reader's comprehension of the text. Following what was done in SciTail (Khot et al., 2018), we convert questions and answer-options to statements by either substituting the answer-option for the blanks in fill-in-the-blank questions, or appending a separator token and the answer-option to the question. During training, we have models independently assign a score to each statement, and then apply the softmax operator across all statements for each question to get statement probabilities. We use the negative log probability of the correct statement as a loss function. To fine-tune on BoolQ, we apply the sigmoid operator to the score of the question given its passage to get the probability of a "yes" answer.

Extractive QA: We consider several methods of leveraging extractive QA datasets, where the model must answer questions by selecting text from a relevant passage. Preliminary experiments found that simply transferring the lower-level weights of extractive QA models was ineffective, so we instead consider three methods of constructing entailment-like data from extractive QA data. First, we use the QNLI task from GLUE (Wang et al., 2018), where the model must determine if a sentence from SQuAD 1.1 (Rajpurkar et al., 2016) contains the answer to an input question or not. Following previous work (Hu et al., 2018), we also try building entailment-like training data from SQuAD 2.0 (Rajpurkar et al., 2018).
We concatenate questions with either the correct answer, or with the incorrect "distractor" answer candidate provided by the dataset, and train the model to classify which is which given the question's supporting text. Finally, we also experiment with leveraging the long-answer portion of NQ, where models must select a paragraph containing the answer to a question from a document. Following our method for Multiple-Choice QA, we train a model to assign a score to (question, paragraph) pairs, apply the softmax operator on paragraphs from the same document to get a probability distribution over the paragraphs, and train the model on the negative log probability of selecting an answer-containing paragraph. We only train on questions that were marked as having an answer, and select an answer-containing paragraph and up to 15 randomly chosen non-answer-containing paragraphs for each question. On BoolQ, we compute the probability of a "yes" answer by applying the sigmoid operator to the score the model gives to the input question and passage.

Paraphrasing: We use the Quora Question Paraphrasing (QQP) dataset, which consists of pairs of questions labelled as being paraphrases or not (data.quora.com/First-Quora-Dataset-Release-Question-Pairs). Paraphrasing is related to entailment since we expect, at least in some cases, passages will contain a paraphrase of the question.

Heuristic Yes/No: We attempt to heuristically construct a corpus of yes/no questions from the MS Marco corpus (Nguyen et al., 2016). MS Marco has free-form answers paired with snippets of related web documents. We search for answers starting with "yes" or "no", and then pair the corresponding questions with snippets marked as being related to the question. We call this task Y/N MS Marco; in total we gather 38k examples, 80% of which are "yes" answers.

Unsupervised: It is well known that unsupervised pre-training using language-modeling objectives (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018) can improve performance on many tasks. We experiment with these methods by using the pre-trained models from ELMo, BERT, and OpenAI's Generative Pre-trained Transformer (OpenAI GPT) (see Section 5.2).

# 5 Results

# 5.1 Shallow Models

First, we experiment with using a linear classifier on our task. In general, we found features such as word overlap or TF-IDF statistics were not sufficient to achieve better than the majority-class baseline accuracy (62.17% on the dev set). We did find there was a correlation between the number of times question words occurred in the passage and the answer being "yes", but the correlation was not strong enough to build an effective classifier. "Yes" is the most common answer even among questions with zero shared words between the question and passage (with a 51% majority), and more common in other cases.

# 5.2 Neural Models

For our experiments that do not use unsupervised pre-training (except the use of pre-trained word vectors), we use a standard recurrent model with attention. Our experiments using unsupervised pre-training use the models provided by the authors. In more detail:

Our Recurrent model follows a standard recurrent-plus-attention architecture for text-pair classification (Wang et al., 2018).
It embeds the premise/hypothesis text using fasttext word vectors (Mikolov et al., 2018) and learned character vectors, applies a shared bidirectional LSTM to both parts, applies co-attention (Parikh et al., 2016) to share information between the two parts, applies another bi-LSTM to both parts, pools the result, and uses the pooled representation to predict the final class. See Appendix A.2 for details.

Our Recurrent +ELMo model uses the language model from Peters et al. (2018) to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors.

Our OpenAI GPT model fine-tunes the 12 layer, 768 dimensional uni-directional transformer from Radford et al. (2018), which has been pre-trained as a language model on the Books corpus (Zhu et al., 2015).

Our BERTL model fine-tunes the 24 layer, 1024 dimensional transformer from Devlin et al. (2018), which has been trained on next-sentence-selection and masked language modelling on the Book Corpus and Wikipedia.

We fine-tune the BERTL and the OpenAI GPT models using the optimizers recommended by the authors, but found it important to tune the optimization parameters to achieve the best results. We use a batch size of 24, a learning rate of 1e-5, and 5 training epochs for BERT, and a learning rate of 6.25e-5, a batch size of 6, a language model loss of 0.5, and 3 training epochs for OpenAI GPT.

# 5.3 Question/Passage Only Results

Following the recommendation of Gururangan et al. (2018), we first experiment with models that are only allowed to observe the question or the passage. The pre-trained BERTL model reached 64.48% dev set accuracy using just the question and 66.74% using just the passage. Given that the majority baseline is 62.17%, this suggests there is little signal in the question by itself, but that some language patterns in the passage correlate with the answer. Possibly, passages that present more straightforward factual information (like Wikipedia introduction paragraphs) correlate with "yes" answers.

# 5.4 Transfer Learning Results

The results of our transfer learning methods are shown in Table 3. All results are averaged over five runs. For models pre-trained on supervised datasets, both the pre-training and the fine-tuning stages were repeated. For unsupervised pre-training, we use the pre-trained models provided by the authors, but continue to average over five runs of fine-tuning.

Transfer Task Model Transfer Data #Examples Source Acc. BoolQ Acc. - - 79.66 69.45 71.78 89.58 87.26 78.23 84.26 81.16 89.72 88.17 42.30 - - - - - QNLI SQuAD 2.0 NQ Long Answer QQP Y/N MS Marco MultiNLI Majority Recurrent - - 108k 130k 93k 364k 39k 392k 262k 262k 262k 351k 549k 1000M 800M 3,300M N/A N/A Extractive QA Recurrent Recurrent Paraphrasing Heuristic Y/N Recurrent - w/o Entail - w/o Contradict - w/o Neutral Entailment Recurrent SNLI RACE Recurrent Recurrent +ELMo Billion Word OpenAI GPT BERTL MC QA Unsupervised Books Books/Wikipedia 62.17 69.60 71.36 69.83 72.78 71.30 71.40 75.57 72.95 72.85 74.83 73.16 68.40 71.41 72.87 76.90

Table 3: Transfer learning results on the BoolQ dev set after fine-tuning on the BoolQ training set. Results are averaged over five runs. In all cases directly using the pre-trained model without fine-tuning did not achieve results better than the majority baseline, so we do not include them here.

QA Results: We were unable to transfer from RACE or SQuAD 2.0. For RACE, the problem might be domain mismatch. In RACE the passages are stories, and the questions often query for passage-specific information such as the author's intent or the state of a particular entity from the passage, instead of general knowledge. We would expect SQuAD 2.0 to be a better match for BoolQ since it is also Wikipedia-based, but it is possible that detecting the adversarially-constructed distractors used for negative examples does not relate well to yes/no QA. We got better results using QNLI, and even better results using NQ. This shows the task of selecting text relevant to a question is partially transferable to yes/no QA, although we are only able to gain a few points over the baseline.

Entailment Results: The MultiNLI dataset out-performed all other supervised methods by a large margin. Remarkably, this approach is only a few points behind BERT despite using orders of magnitude less training data and a much more light-weight model, showing high-quality pre-training data can help compensate for these deficiencies. Our ablation results show that removing the neutral class from MultiNLI hurt transfer slightly, and removing either of the other classes was very harmful, suggesting the neutral examples had limited value. SNLI transferred better than other datasets, but worse than MultiNLI. We suspect this is due to limitations of the photo-caption domain it was constructed from.

Other Supervised Results: We obtained a small amount of transfer using QQP and Y/N MS Marco. Although Y/N MS Marco is a yes/no QA dataset, its small size and class imbalance likely contributed to its limited effectiveness. The web snippets it uses as passages also present a large domain shift from the Wikipedia passages in BoolQ.

Unsupervised Results: Following results on other datasets (Wang et al., 2018), we found BERTL to be the most effective unsupervised method, surpassing all other methods of pre-training.

# 5.5 Multi-Step Transfer Results

Our best single-step transfer learning results were from using the pre-trained BERTL model and MultiNLI. We also experiment with combining these approaches using a two-step pre-training regime. In particular, we fine-tune the pre-trained BERTL on MultiNLI, and then fine-tune the resulting model again on the BoolQ train set. We found decreasing the number of training epochs to 3 resulted in a slight improvement when using the model pre-trained on MultiNLI.

We show the test set results for this model, and some other pre-training variations, in Table 4. For these results we train five versions of each model using different training seeds, and show the model that had the best dev-set performance.

| Model | Dev Acc. | Test Acc. |
| --- | --- | --- |
| Majority Class | 62.17 | 62.31 |
| Recurrent | 70.28 | 67.52 |
| Recurrent +MultiNLI | 76.15 | 74.24 |
| Pre-trained BERTL | 78.09 | 76.70 |
| BERTL +MultiNLI | 82.20 | 80.43 |

Table 4: Test set results on BoolQ. "+MultiNLI" indicates models that were additionally pre-trained on MultiNLI before being fine-tuned on the train set.
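To make the two-step regime concrete, the following is a minimal sketch of such a pipeline. It is illustrative only and is not the authors' code: it assumes the HuggingFace transformers library (not used in the paper), uses placeholder hyper-parameters rather than the exact settings reported above, and simply attaches a fresh two-way classification head for BoolQ after the MultiNLI stage.

```python
# Sketch of the two-step transfer recipe: (1) fine-tune a pre-trained BERT model on
# MultiNLI, then (2) fine-tune the resulting encoder again on BoolQ.
# Assumes the HuggingFace `transformers` library; data loading is omitted.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-large-uncased"  # stands in for the paper's BERT-large model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def encode_pairs(first_texts, second_texts, labels):
    """Tokenize text pairs (premise/passage, hypothesis/question) into one batch."""
    batch = tokenizer(first_texts, second_texts, truncation=True, padding=True,
                      max_length=256, return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    return batch

def fine_tune(model, batches, lr=1e-5, epochs=3):
    """Minimal training loop: one cross-entropy update per batch."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in batches:
            optimizer.zero_grad()
            loss = model(**batch).loss   # the loss is returned when labels are provided
            loss.backward()
            optimizer.step()
    return model

# Step 1: fine-tune on MultiNLI (entailment / neutral / contradiction).
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)
# mnli_batches = [encode_pairs(premises, hypotheses, nli_labels), ...]
# nli_model = fine_tune(nli_model, mnli_batches)

# Step 2: fine-tune again on BoolQ (yes / no), re-using the encoder weights from
# step 1 but with a freshly initialised two-way classification head.
boolq_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
boolq_model.bert.load_state_dict(nli_model.bert.state_dict())  # copy the shared encoder
# boolq_batches = [encode_pairs(passages, questions, yes_no_labels), ...]
# boolq_model = fine_tune(boolq_model, boolq_batches)
```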
Given how extensively the BERTL model has been pre-trained, and how successful it has been across many NLP tasks, the additional gain of 3.5 points due to using MultiNLI is remarkable. This suggests MultiNLI contains signal orthogonal to what is found in BERT's unsupervised objectives.

# 5.6 Sample Efficiency

In Figure 2, we graph model accuracy as more of the training data is used for fine-tuning, both with and without initially pre-training on MultiNLI. Pre-training on MultiNLI gives at least a 5-6 point gain, and nearly a 10 point gain for BERTL when only using 1000 examples. For small numbers of examples, the recurrent model with MultiNLI pre-training actually out-performs BERTL.

[Figure 2: Accuracy for various models on the BoolQ dev set as the number of training examples varies. The x-axis shows the number of training examples (1,000 to 9,000) and the y-axis BoolQ dev accuracy; the plotted models are Recurrent, Recurrent +MultiNLI, BERTL, and BERTL +MultiNLI.]

# 5.7 Discussion

A surprising result from our work is that the datasets that more closely resemble the format of BoolQ, meaning they contain questions and multi-sentence passages, such as SQuAD 2.0, RACE, or Y/N MS Marco, were not very useful for transfer. The entailment datasets were stronger despite consisting of sentence pairs. This suggests that adapting from sentence-pair input to question/passage input was not a large obstacle to achieving transfer. Preliminary work found attempting to convert the yes/no questions in BoolQ into declarative statements did not improve transfer from MultiNLI, which supports this hypothesis.

The success of MultiNLI might also be surprising given recent concerns about the generalization abilities of models trained on it (Glockner et al., 2018), particularly related to "annotation artifacts" caused by using crowd workers to write the hypothesis statements (Gururangan et al., 2018). We have shown that, despite these weaknesses, it can still be an important starting point for models being used on natural data.

We hypothesize that a key advantage of MultiNLI is that it contains examples of contradictions. The other sources of transfer we consider, including the next-sentence-selection objective in BERT, are closer to providing examples of entailed text vs. neutral/unrelated text. Indeed, we found that our two-step transfer procedure only reaches 78.43% dev set accuracy if we remove the contradiction class from MultiNLI, regressing its performance close to the level of BERTL when just using unsupervised pre-training.

Note that it is possible to pre-train a model on several of the suggested datasets, either in succession or in a multi-task setup. We leave these experiments to future work. Our results also suggest pre-training on MultiNLI would be helpful for other corpora that contain yes/no questions.

# 6 Conclusion

We have introduced BoolQ, a new reading comprehension dataset of naturally occurring yes/no questions. We have shown these questions are challenging and require a wide range of inference abilities to solve. We have also studied how transfer learning performs on this task, and found crowd-sourced entailment datasets can be leveraged to boost performance even on top of language model pre-training. Future work could include building a document-level version of this task, which would increase its difficulty and its correspondence to an end-user application.
# References Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question An- swering. In Proceedings of the IEEE international conference on computer vision. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The Sixth PASCAL Recogniz- ing Textual Entailment Challenge. In TAC. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2011. The Seventh PASCAL Recog- nizing Textual Entailment Challenge. In TAC. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A Large Anno- tated Corpus for Learning Natural Language Infer- ence. In EMNLP. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for Natural Language Inference. In ACL. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. QuaC: Question Answering in Con- text. In EMNLP. Alexis Conneau and Douwe Kiela. 2018. Senteval: An Evaluation Toolkit for Universal Sentence Rep- resentations. In LREC. Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming Question Answering Datasets Into Natural Language Inference Datasets. Comput- ing Research Repository, arXiv:1809.02922. Ver- sion 2. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Computing Research Repository, arXiv:1810.04805. Version 1. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In ACL. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018. Annotation Artifacts in Natu- ral Language Inference Data. In NAACL. Minghao Hu, Yuxing Peng, Zhen Huang, Nan Yang, Ming Zhou, et al. 2018. Read+ Verify: Machine Reading Comprehension with Unanswerable Ques- tions. In CoRR. Robin Jia and Percy Liang. 2017. Adversarial Ex- amples for Evaluating Reading Comprehension Sys- tems. In EMNLP. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A Large Scale Dis- tantly Supervised Challenge Dataset for Reading Comprehension. In ACL. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A Textual Entailment Dataset from Science Question Answering. In AAAI. Diederik P Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In ICLR. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral Questions: a Benchmark for Question Answering Research. In TACL. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-Scale Read- ing Comprehension Dataset from Examinations. In EMNLP. R Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. Comput- ing Research Repository, arXiv:1902.01007. Ver- sion 1. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Ques- tion Answering. In EMNLP. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. 
Ad- vances in Pre-Training Distributed Word Represen- tations. In LREC. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated Machine Reading Comprehension Dataset. Computing Re- search Repository, arXiv:1611.09268. Version 3. Ankur P Parikh, Oscar T¨ackstr¨om, Dipanjan Das, and Jakob Uszkoreit. 2016. A Decomposable Attention Model for Natural Language Inference. In EMNLP. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In NAACL. Jason Phang, Thibault F´evry, and Samuel R Bowman. 2018. Sentence Encoders on STILTs: Supplemen- tary Training on Intermediate Labeled-data Tasks. Computing Research Repository, arXiv:1811.01088. Version 2. Adam Poliak, Aparajita Haldar, Rachel Rudinger, J Ed- ward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation. In EMNLP. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Under- standing by Generative Pre-training. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don’t Know: Unanswerable Ques- tions for SQuAD. In ACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ Questions for Machine Comprehension of Text. In EMNLP. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. CoQA: A Conversational Question Answer- ing Challenge. In TACL. Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rockt¨aschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpreta- tion of Natural Language Rules in Conversational Machine Reading. In EMNLP. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional Attention Flow for Machine Comprehension. In ICLR. Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Plat- form for Natural Language Understanding. In EMNLP. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing Datasets for Multi-hop Reading Comprehension Across Documents. In ACL. Jason Weston, Antoine Bordes, Sumit Chopra, Alexan- der M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks. In ICLR. Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2018. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In NAACL. Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual Question Answering: A Survey of Methods In Computer Vision and Image Un- and Datasets. derstanding. Elsevier. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A Dataset for Diverse, Explainable Multi-hop Question An- swering. In EMNLP. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference. In EMNLP. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension. Computing Research Repository, arXiv:1810.12885. Version 1. 
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In Proceedings of the IEEE international conference on computer vision, pages 19–27.

# A Appendices

# A.1 Randomly Selected Examples

We include a number of randomly selected examples from the BoolQ train set in Figure 3. For each example we show the question in bold, followed by the answer in parentheses, and then the passage below.

# A.2 Recurrent Model

Our recurrent model is a standard model from the text-pair classification literature, similar to the one used in the GLUE baseline (Wang et al., 2018) and the model from Chen et al. (2017). Our model has the following stages:

Embed: Embed the words using a character CNN, following what was done by Seo et al. (2017), and the fasttext crawl word embeddings (Mikolov et al., 2018). Then run a BiLSTM over the results to get context-aware hypothesis word embeddings ⟨u_1, u_2, u_3, ...⟩ and premise word embeddings ⟨v_1, v_2, v_3, ...⟩.

Co-Attention: Compute a co-attention matrix, A, between the hypothesis and premise, where A_ij = w_1 · u_i + w_2 · v_j + w_3 · (u_i ◦ v_j), ◦ is elementwise multiplication, and w_1, w_2, and w_3 are weights to be learned.

Attend: For each row in A, apply the softmax operator and use the results to compute a weighted sum of the hypothesis embeddings, resulting in attended vectors ⟨ũ_1, ũ_2, ...⟩. We use the transpose of A to compute vectors ⟨ṽ_1, ṽ_2, ...⟩ from the premise embeddings in a similar manner.

Pool: Run a BiLSTM over ⟨[v_1; ṽ_1; ṽ_1 ◦ v_1], [v_2; ṽ_2; ṽ_2 ◦ v_2], ...⟩ to get embeddings ⟨h_1, h_2, ...⟩. Then pool these embeddings by computing attention scores a_i = w · h_i, p = softmax(a), and then the weighted sum v* = Σ_i p_i h_i. Likewise we compute p* from the premise.

Classify: Finally we feed [v*; p*] into a fully connected layer, and then through a softmax layer to predict the output class.

We apply dropout at a rate of 0.2 between all layers, and train the model using the Adam optimizer (Kingma and Ba, 2014). The learning rate is decayed by 0.999 every 100 steps. We use 200 dimensional LSTMs and a 100 dimensional fully connected layer.

# Is there a catalytic converter on a diesel? (Y)
A catalytic converter is an exhaust emission control device that converts toxic gases and pollutants in exhaust gas from an internal combustion engine into less-toxic pollutants by catalyzing a redox reaction (an oxidation and a reduction reaction). Catalytic converters are usually used with internal combustion engines fueled by either gasoline or diesel–including lean-burn engines as well as kerosene heaters and stoves.

# Is there a season 2 of Pride and Prejudice? (N)
Pride and Prejudice is a six-episode 1995 British television drama, adapted by Andrew Davies from Jane Austen's 1813 novel of the same name. Jennifer Ehle and Colin Firth starred as Elizabeth Bennet and Mr. Darcy. Produced by Sue Birtwistle and directed by Simon Langton, the serial was a BBC production with additional funding from the American A&E Network. BBC1 originally broadcast the 55-minute episodes from 24 September to 29 October 1995. The A&E Network aired the series in double episodes on three consecutive nights beginning 14 January 1996. There are six episodes in the series.

# Is Saving Private Ryan based on a book? (N)
In 1994, Robert Rodat wrote the script for the film.
Rodat’s script was submitted to producer Mark Gordon, who liked it and in turn passed it along to Spielberg to direct. The film is loosely based on the World War II life stories of the Niland brothers. A shooting date was set for June 27, 1997. # Is The Talk the same as The View? (N) In November 2008, the show’s post-election day telecast garnered the biggest audience in the show’s history at 6.2 million in total viewers, becoming the week’s most-watched program in daytime television. It was surpassed on July 29, 2010, during which former President Barack Obama first appeared as a guest on The View, which garnered a total of 6.6 mil- lion viewers. In 2013, the show was reported to be averaging 3.1 million daily viewers, which outpaced rival talk show The Talk. # Does the concept of a contact force apply to both a macroscopic scale and an atomic scale? (N) In the Standard Model of modern physics, the four fundamental forces of nature are known to be non-contact forces. The strong and weak interaction primarily deal with forces within atoms, while gravitational effects are only obvious on an ultra-macroscopic scale. Molecular and quantum physics show that the electromagnetic force is the fundamental interaction responsible for contact forces. The interaction between macroscopic objects can be roughly described as resulting from the electromagnetic interactions between protons and electrons of the atomic constituents of these objects. Everyday objects do not actually touch; rather, contact forces are the result of the interactions of the electrons at or near the surfaces of the objects. # Legal to break out of prison in Germany? (Y) In Mexico, Belgium, Germany and Austria, the philosophy of the law holds that it is human nature to want to escape. In those countries, escapees who do not break any other laws are not charged for anything and no extra time is added to their sentence. However, in Mexico, officers are allowed to shoot prisoners attempting to escape, and an escape is illegal if violence is used against prison personnel or property, or if prison inmates or officials aid the escape. # Is the movie sand pebbles based on a true story? (N) The Sand Pebbles is a 1966 American war film directed by Robert Wise in Panavision. It tells the story of an independent, rebellious U.S. Navy machinist’s mate, first class aboard the fictional gunboat USS San Pablo in 1920s China. # Is Burberrys of London the same as Burberry? (Y) Burberry was founded in 1856 when 21-year-old Thomas Burberry, a former draper’s apprentice, opened his own store in Basingstoke, Hampshire, England. By 1870, the business had established itself by focusing on the development of outdoors attire. In 1879, Burberry introduced in his brand the gabardine, a hardwearing, water-resistant yet breathable fabric, in which the yarn is waterproofed before weaving. “Burberry” was the original name until it became “Burberrys”, due to many customers from around the world began calling it “Burberrys of London”. In 1999, the name was reverted to In the original, “Burberry”. However, the name “Burberrys of London” is still visible on many older Burberry products. 1891, Burberry opened a shop in the Haymarket, London. Before being termed as trench, it was known as the Tielocken worn by the British officers and featured a belt with no buttons, was double breasted, and protected the body from neck to knees. # Is the Saturn Vue the same as the Chevy Equinox? 
(N) Riding on the GM Theta platform, the unibody is mechanically similar to the Saturn Vue and the Suzuki XL7. However, the Equinox and the Torrent are larger than the Vue, riding on a 112.5 in (2,858mm) wheelbase, 5.9 in (150mm) longer than the Vue. Front-wheel drive is standard, with optional all-wheel drive. They are not designed for serious off-roading like the truck-based Chevrolet Tahoe and Chevrolet TrailBlazer. # Is Destin FL on the Gulf of Mexico? (Y) The city is located on a peninsula separating the Gulf of Mexico from Choctawhatchee Bay. The peninsula was originally an island; hurricanes and sea level changes gradually connected the island to the mainland. Figure 3: Randomly sampled examples from the BoolQ train set.
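As a companion to the recurrent model described in Appendix A.2, the sketch below shows one possible PyTorch rendering of its embed / co-attend / pool / classify stages. It is not the authors' implementation: the character CNN and pre-trained fasttext vectors are replaced by a single embedding layer, and dropout, Adam, and the learning-rate decay are omitted; sizes are illustrative only.

```python
# Minimal PyTorch sketch of a co-attention recurrent classifier in the spirit of
# Appendix A.2. Dimensions and details are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=200, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # shared BiLSTM encoder for hypothesis (question) and premise (passage)
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        d = 2 * hidden
        # w1, w2, w3 for the co-attention matrix A_ij = w1·u_i + w2·v_j + w3·(u_i ∘ v_j)
        self.w1 = nn.Linear(d, 1, bias=False)
        self.w2 = nn.Linear(d, 1, bias=False)
        self.w3 = nn.Linear(d, 1, bias=False)
        # second BiLSTM over [x; x~; x~ ∘ x], then attention pooling and a classifier
        self.fuse = nn.LSTM(3 * d, hidden, bidirectional=True, batch_first=True)
        self.pool_att = nn.Linear(d, 1, bias=False)
        self.out = nn.Sequential(nn.Linear(2 * d, 100), nn.ReLU(), nn.Linear(100, n_classes))

    def _attend_pool(self, x, x_aligned):
        h, _ = self.fuse(torch.cat([x, x_aligned, x_aligned * x], dim=-1))
        p = F.softmax(self.pool_att(h), dim=1)       # attention weights over time steps
        return (p * h).sum(dim=1)                    # weighted sum -> one vector per example

    def forward(self, hyp_ids, prem_ids):
        u, _ = self.encoder(self.embed(hyp_ids))     # hypothesis states u_1..u_n
        v, _ = self.encoder(self.embed(prem_ids))    # premise states v_1..v_m
        # co-attention matrix A of shape (batch, n, m)
        A = (self.w1(u) + self.w2(v).transpose(1, 2)
             + torch.einsum("bnd,bmd->bnm", u * self.w3.weight, v))
        u_aligned = torch.bmm(F.softmax(A, dim=2), v)                   # premise summary per u_i
        v_aligned = torch.bmm(F.softmax(A, dim=1).transpose(1, 2), u)   # hypothesis summary per v_j
        pooled = torch.cat([self._attend_pool(u, u_aligned),
                            self._attend_pool(v, v_aligned)], dim=-1)
        return self.out(pooled)                      # class scores, e.g. yes/no

model = CoAttentionClassifier(vocab_size=30000)
question = torch.randint(0, 30000, (4, 12))   # toy batch of question token ids
passage = torch.randint(0, 30000, (4, 80))    # toy batch of passage token ids
logits = model(question, passage)             # shape: (4, 2)
```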
{ "id": "1811.01088" }
1905.09866
Fair is Better than Sensational:Man is to Doctor as Woman is to Doctor
Analogies such as "man is to king as woman is to X" are often used to illustrate the amazing power of word embeddings. Concurrently, they have also been used to expose how strongly human biases are encoded in vector spaces built on natural language, like "man is to computer programmer as woman is to homemaker". Recent work has shown that analogies are in fact not such a diagnostic for bias, and other methods have been proven to be more apt to the task. However, beside the intrinsic problems with the analogy task as a bias detection tool, in this paper we show that a series of issues related to how analogies have been implemented and used might have yielded a distorted picture of bias in word embeddings. Human biases are present in word embeddings and need to be addressed. Analogies, though, are probably not the right tool to do so. Also, the way they have been most often used has exacerbated some possibly non-existing biases and perhaps hid others. Because they are still widely popular, and some of them have become classics within and outside the NLP community, we deem it important to provide a series of clarifications that should put well-known, and potentially new cases into the right perspective.
http://arxiv.org/pdf/1905.09866
Malvina Nissim, Rik van Noord, Rob van der Goot
cs.CL
null
null
cs.CL
20190523
20191109
# Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor

Malvina Nissim, University of Groningen
Rik van Noord, University of Groningen
Rob van der Goot, University of Groningen

Analogies such as man is to king as woman is to X are often used to illustrate the amazing power of word embeddings. Concurrently, they have also been used to expose how strongly human biases are encoded in vector spaces built on natural language, like man is to computer programmer as woman is to homemaker. Recent work has shown that analogies are in fact not such a diagnostic for bias, and other methods have been proven to be more apt to the task. However, beside the intrinsic problems with the analogy task as a bias detection tool, in this paper we show that a series of issues related to how analogies have been implemented and used might have yielded a distorted picture of bias in word embeddings. Human biases are present in word embeddings and need to be addressed. Analogies, though, are probably not the right tool to do so. Also, the way they have been most often used has exacerbated some possibly non-existing biases and perhaps hid others. Because they are still widely popular, and some of them have become classics within and outside the NLP community, we deem it important to provide a series of clarifications that should put well-known, and potentially new cases into the right perspective.

# 1 Introduction

Word embeddings are distributed representations of texts which capture similarities between words. Besides improving a wide variety of NLP tasks, the power of word embeddings is often also tested intrinsically. Mikolov et al. (2013) introduced the idea of testing the soundness of embedding spaces via the analogy task. Analogies are equations of the form A : B :: C : D, or simply A is to B as C is to D. Given the terms A, B, C, the model must return the word that correctly stands for D in the given analogy. A most classic example is man is to king as woman is to X, where the model is expected to return queen, by subtracting "manness" from the concept of king to obtain some general royalty, and then re-adding some "womanness" to obtain the concept of queen (king − man + woman = queen).

Beside showing this kind of magical power, analogies have been extensively used to show that embeddings carry worrying biases present in our society and thus encoded in language. This bias is often demonstrated by using the analogy task to find stereotypical relations, such as the classic man is to doctor as woman is to nurse or man is to computer programmer as woman is to homemaker.

The potential of the analogy task has been recently questioned, though. It has been argued that what is observed through the analogy task might be mainly due to irrelevant neighborhood structure rather than to the vector offset that supposedly captures the analogy itself (Linzen 2016; Rogers, Drozd, and Li 2017). Also, Drozd, Gladkova, and Matsuoka (2016) have shown that the original and classically used 3COSADD method (Mikolov et al. 2013) is not able to capture all linguistic regularities present in the embeddings. And regarding bias, Gonen and Goldberg (2019) have shown that analogies are not a good diagnostic for bias in embeddings, and have in a way misled bias identification, with consequences on debiasing efforts.
Indeed, on the recent contextualised embeddings (Peters et al. 2018; Devlin et al. 2019), the analogy task is not used anymore to either evaluate their soundness, or to detect bias (Zhao et al. 2019; Basta, Costa-jussà, and Casas 2019; May et al. 2019). While research indicates that this is the direction that should be pursued to deal with bias in word embeddings, analogies are not only still widely used, but have also left a strong footprint, with some by-now-classic examples often brought up as proof of human bias in language models. A case in point is the opening speech by the ACL President at ACL 2019 in Florence, Italy, where the issue of bias in embeddings is brought up showing biased analogies from a 2019 paper (Manzini et al. 2019b).1 This contribution thus aims at providing some clarifications over the past use of analogies to hopefully raise further and broader awareness of their potential and their limitations, and put well-known and possibly new ones in the right perspective. First, we take a closer look at the concept of analogy together with requirements and expectations. We look at how the original analogy structure was used to query embeddings, and some misconceptions that a simple implementation choice has caused. More specifically, in the original proportional analogy implementation, all terms of the equation A : B :: C : D are distinct (Mikolov et al. 2013). In other words, the model is forced to return a different concept than any of the original ones. Given an analogy of the form A : B :: C : D, the model is not allowed to yield any term D such that D == B, D == A, or D == C, since the code explicitly prevents this. While this constraint is helpful when all terms of the analogy are expected to be different, it becomes a problem, and even a dangerous artifact, when the terms could or even should be the same. Second, we discuss different analogy detection strategies/measures that have been proposed, namely the original 3COSADD measure, the revised 3COSMUL measure (Levy and Goldberg 2014), and the Bolukbasi et al. (2016) formula, which introduces a different take on the analogy construction, reducing the impact of subjective choices. Third, we highlight the role played by human biases in choosing which analogies to search for, and which results to report. We also show that even when subjective choices are minimised in input (like in Bolukbasi et al. 2016), parameter tuning might have consequences on the results, which should not go unnoticed or underestimated. This work does not mean at all to downplay the presence and danger of human biases in word embeddings. On the contrary: embeddings do encode human biases (Caliskan, Bryson, and Narayanan 2017; Garg et al. 2018; Kozlowski, Taddy, and Evans 2018; Gonen and Goldberg 2019), and we agree that this issue deserves the full attention of the field (Hovy and Spruit 2016). This is the main reason why transparency over existing and possibly future analogies is crucial. 1 https://www.microsoft.com/en-us/research/uploads/prod/2019/08/ACL-MingZhou- 50min-ming.v9-5d5104dcbe73c.pdf, slide 29. 2 M. Nissim, R. van Noord and R. van der Goot Fair is Better than Sensational # 2. What counts as analogy? In linguistics, analogies of the form A : B :: C : D can be conceived on two main levels of analysis (Fischer 2019). The first one is morphological (so-called strict proportional analogies), and they account for systematic language regularities. 
The second one is more at the lexico-semantic level, and similarities can get looser and more subject to interpretation (e.g. traffic is to street as water is to riverbed (Turney 2012)). The original, widely used, analogy test set introduced by Mikolov et al. (2013) consists indeed of two main categories: morpho-syntactic analogies (car is to cars as table is to tables) and semantic analogies (Paris is to France as Tokyo is to Japan). Within these, examples are classified in more specific sub-categories. There are two important aspects that must be considered following the above. First, analogies are (traditionally) mostly conceived as featuring four distinct terms. Second, we need to distinguish between cases where there is one specific, expected, correct fourth term, and cases where there isn’t. Both aspects bear important methodological consequences in the way we query and analyse (biased) analogies in word embeddings. # 2.1 Should all terms be different? Of the four constraints introduced by Turney in formally defining analogies, the last two constraints indirectly force the terms B and D to be different (Turney 2012, p. 540). In practice, also all the examples of the original analogy test (Mikolov et al. 2013) expect four different terms. Is this always the case? Are expressions featuring the same term twice non-analogies? Because most out-of-the-box word embeddings have no notion of senses, homo- graphs are modelled as one unit. For example, the infinitive form and the past tense of the verb “to read”, will be represented by one single vector for the word “read”. A consequence of this is that for certain examples, two terms would be identical, though they would be conceptually different. In strong verbs, infinitive and simple past can be homographs (e.g. split/split), and countries or regions can be homographs with their capitals (e.g. Singapore/Singapore). Other cases where all terms are not necessarily dis- tinct include “is-a” relations (hypernyms, cat:animal :: dog:animal), and ordered concepts (silver:gold :: bronze:silver). Moreover, the extended analogy test set created by Gladkova, Drozd, and Matsuoka (2016) also includes examples where B is the correct answer, for example country:language and thing:color. While these examples might not be conceived as standard analogies, the issue with homographs remains. # 2.2 Is there a correct answer? In Mikolov’s analogy set, for both macro-categories (morpho-syntactic and semantic), all of the examples are structured such that given the first three terms, there is one specific, correct (expected) fourth term. We can call such analogies “factual”. While morphosyntactic analogies are in general indeed factual (but there are exceptions due to homographical ambiguities), the picture is rather different for the semantic ones. If we take man:computer_programmer :: woman:X as a semantic analogy, what is the “correct” answer? Is there an expected, unbiased completion to this query? Compare it to the case of he:actor :: she:X – it seems quite straightforward to assume that X should be resolved as actress. However, such resolution easily rescales the analogy to a morphosyntactic rather than semantic level, thereby also ensuring a factual, unbiased answer. 3 Computational Linguistics Volume 0, Number 0 The morpho-syntactic and semantic levels are indeed not always distinct. When querying man:computer_programmer :: woman:X, or man:doctor :: woman:X, is one after a morphosyntactic or a semantic answer? 
Morpho-syntactically, we should resolve to doctor, thereby violating the all-terms-different constraint. If we take the semantic interpretation, there is no single predefined term that "correctly" completes the analogy (or maybe doctor does here too). (In this sense, it is admirable that Caliskan, Bryson, and Narayanan (2017) try to better understand their results by checking them against actual job distributions between the two genders.) In such non-factual, more creative analogies, various terms could be used for completion depending on the implied underlying relation (Turney 2012), which could be unclear or unspecified in the query. For the analogies used by Manzini et al. (2019b) (see Table 3), for example, it is rather unclear what one would expect to find. Some of the returned terms might be biased, but in order to claim bias, one should also conceive the expected unbiased term. So, if doctor is not eligible by violating the distinction constraint, what would the unbiased answer be to the semantic analogy man:doctor :: woman:X?

When posing queries, all such aspects should be considered, and one should be aware of what analogy algorithms and implementations are designed to detect. If the correct or unbiased answer to man:woman :: doctor:X is expected to be doctor and the model is not allowed to return any of the input terms as it would otherwise not abide to the definition of analogy, then such a query should not be asked. If asked anyway under such conditions, the model should not be charged with bias for not returning doctor.

# 3. Algorithms

We consider three strategies that have been used to capture analogies. We use the standard 3COSADD function (Eq. 1) from Mikolov et al. (2013), and 3COSMUL, introduced by Levy and Goldberg (2014) to overcome some of the shortcomings of 3COSADD, mainly ensuring that a single large term cannot dominate the expression (Eq. 2):

argmax_d ( cos(d, c) − cos(d, a) + cos(d, b) )    (1)

argmax_d [ cos(d, c) · cos(d, b) ] / [ cos(d, a) + 0.001 ]    (2)

Bolukbasi et al. (2016) designed another formula, specifically focused on finding pairs B : D with a similar direction as A : C:

S_(a,c)(b, d) = cos(a − c, b − d) if ||b − d|| ≤ δ, and 0 otherwise    (3)

They do not assume that B is known beforehand, and generate a ranked list of B : D pairs, with the advantage of introducing less subjective bias in the input query (see Section 4.2). To ensure that B and D are related, the threshold δ is introduced, and set to 1.0 in Bolukbasi et al. (2016). This corresponds to π/3 and in practice means that B and D have to be closer together than two random embedding vectors. If B is chosen beforehand (as in classic analogies), it is more efficient to transform the formula to only search for D:

argmax_d { cos(a − c, b − d) if ||b − d|| ≤ δ, 0 otherwise }    (4)

Note that the resulting scores for the same analogy are exactly the same as in Equation 3. This formula allows one to examine the top-N B : D candidates for a specific query, while Equation 3 will only return one single B : D pair for a given B.

Even though it is not part of the equations, in practice most implementations of these optimization functions specifically ignore one or more input vectors. Most likely, this is because the traditional definitions of analogy require all terms to be different (see Section 2), and the original analogy test set reflects this. However, we have seen that this is a strong constraint, both in morphosyntactic and semantic analogies.
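To make the effect of this implementation detail concrete, below is a small self-contained sketch (not the authors' code) of 3COSADD on a toy embedding space, with and without excluding the input words; the constrained variant corresponds to what off-the-shelf implementations such as gensim's most_similar do, as discussed in Section 4.1. The toy vectors are made up purely for illustration.

```python
# Toy illustration of 3COSADD (Equation 1) with and without the implicit
# "never return an input word" constraint. Vectors are invented for illustration.
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def three_cos_add(emb, a, b, c, exclude_inputs=True):
    """Return the d maximising cos(d, c) - cos(d, a) + cos(d, b) for A:B :: C:D."""
    banned = {a, b, c} if exclude_inputs else set()
    scores = {w: cos(vec, emb[c]) - cos(vec, emb[a]) + cos(vec, emb[b])
              for w, vec in emb.items() if w not in banned}
    return max(scores, key=scores.get)

emb = {
    "man":    np.array([1.0, 0.8, 0.1]),
    "woman":  np.array([0.8, 1.0, 0.1]),
    "doctor": np.array([0.6, 0.6, 0.9]),
    "nurse":  np.array([0.4, 0.7, 0.8]),
    "king":   np.array([0.9, 0.5, 0.2]),
}

# Query: man : doctor :: woman : X   (a=man, b=doctor, c=woman)
constrained = three_cos_add(emb, "man", "doctor", "woman", exclude_inputs=True)
unconstrained = three_cos_add(emb, "man", "doctor", "woman", exclude_inputs=False)
print(constrained)    # can never be "man", "doctor" or "woman", by construction
print(unconstrained)  # "doctor" is now a legal answer; on real embeddings the
                      # unconstrained query typically returns the B term (Section 4.1)
```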
Moreover, even though this constraint is mentioned in the original paper (Mikolov et al. 2013) and in follow-up work (Linzen 2016; Bolukbasi et al. 2016; Rogers, Drozd, and Li 2017; Goldberg 2017; Schluter 2018), we believe this is not common knowledge in the field (analogy examples are still widely used), and even more so outside the field. (This was confirmed by the response we got when we uploaded a first version of the paper.)

# 4. Is the bias in the models, in the implementation, or in the queries?

In addition to preventing input vectors from being returned, other types of implementation choices (such as punctuation, capitalization, or word frequency cutoffs), and subjective decisions play a substantial role. So, what is the actual influence of such choices on obtaining biased responses?

In what follows, unless otherwise specified, we run all queries on the standard GoogleNews embeddings (https://code.google.com/archive/p/word2vec/). All code to reproduce our experiments is available: https://bitbucket.org/robvanderg/w2v.

# 4.1 Ignoring or allowing the input words

In the default implementation of word2vec (Mikolov et al. 2013), gensim (Řehůřek and Sojka 2010), as well as the code from Bolukbasi et al. (2016), the input terms of the analogy query are not allowed to be returned. (In Equation 3, in practice B will almost never be returned, as it will always be assigned a score of 0.0, making it the last ranked candidate.) We adapted all these code-bases to allow for the input words to be returned. (The 3COSADD unconstrained setting can be tested in an online demo: www.robvandergoot.com/embs.)

To compare the different methods, we evaluated all of them on the test set from Mikolov et al. (2013). The results in Table 1 show a large drop in performance for 3COSADD and 3COSMUL in the unconstrained setting. In most cases, this is because the second term is returned as answer (man is to king as woman is to king, thus D == B), but in some cases it is the third term that gets returned (short is to shorter as new is to new, thus D == C). A similar drop in performance was observed before by Linzen (2016) and Schluter (2018). The Bolukbasi et al. (2016) method shows very low scores, but this was to be expected, since their formula was not specifically designed to capture factual analogies. But what is so different between factual and biased analogies?

| | 3COSADD | uncon. | 3COSMUL | uncon. | BOLUKBASI | uncon. |
| --- | --- | --- | --- | --- | --- | --- |
| Google Analogy - micro | 0.74 | 0.21 | 0.75 | 0.47 | 0.04 | 0.11 |
| Google Analogy - macro | 0.71 | 0.21 | 0.73 | 0.45 | 0.06 | 0.11 |

Table 1: Accuracies of the three formulas on the Google Analogy test set (Mikolov et al. 2013), comparing the constrained version (original code) with the unconstrained version (uncon.). For all formulas, unconstrained means also taking the input vectors into account. For BOLUKBASI, more constraints were removed (see Section 4.3).

| | man:woman :: doctor:X | he:she :: doctor:X | man:woman :: computer_programmer:X |
| --- | --- | --- | --- |
| 3COSADD | gynecologist | nurse | homemaker |
| 3COSADD unconstrained | doctor | doctor | computer_programmer |
| 3COSMUL | gynecologist | nurse | homemaker |
| 3COSMUL unconstrained | doctor | doctor | computer_programmer |
| BOLUKBASI | midwife | nurse | – |
| BOLUKBASI unconstrained | gynecologist | nurse | schoolteacher |

Table 2: Example output of the three algorithms for their regular and unconstrained implementations for three well-known gender bias analogies.

In Table 2, we report the results using the same settings for a small selection of mainstream examples from the literature on embedding bias.
It directly becomes clear that removing constraints leads to different (and arguably less biased) results.7 More precisely, for 3COSADD and 3COSMUL we get word B as answer, and using the method described by Bolukbasi et al. (2016) we get different results because with the vocabulary cutoff they used (50,000 most frequent words, see Section 4.3), gynecologist (51,839) and computer_programmer (57,255) were excluded.8 The analogy man is to doctor as woman is to nurse in Table 2 is a classic showcase of human bias in word embeddings. This biased analogy reflecting gendered stereotypes in our society, is however truly meaningful only if the system were allowed to yield doctor (arguably the expected answer in absence of bias, see Section 2) instead of nurse, and it doesn’t. But using the original analogy code it is impossible to obtain man is to doctor as woman is to doctor (where D == B). Under such commonly used settings, it is not exactly fair to claim the embedding space is biased because it does not return doctor. 7 This was noticed before: https://medium.com/artists-and-machine-intelligence/ami-residency-part-1- exploring-word-space-andprojecting-meaning-onto-noise-98af7252f749 and https://www.youtube.com/watch?v=25nC0n9ERq4 8 Though man is to computer programmer as woman is to homemaker is used in the title of Bolukbasi et al. (2016), this analogy is obtained using 3COSADD. 6 M. Nissim, R. van Noord and R. van der Goot Fair is Better than Sensational # 4.2 Subjective factors Let us take a step back though, and ask: Why do people query man is to doctor as woman is to X? In fairness, one should wonder how much bias leaks in from our own views, preconceptions, and expectations. In this section we aim to show how these affect the queries we pose and the results we get, and how the inferences we can draw depend strongly on the choices we make in formulating queries and in reporting the outcome. To start with, the large majority of the queries we pose and find in the literature imply human bias. People usually query for man:doctor :: woman:X, which in 3COSADD and 3COSMUL is different than querying for woman:doctor :: man:X, both in results and in assumptions (often expecting to find a biased answer). This issue also raises the major, usually unaddressed question as to what would the unbiased, desired, D term be? Such bias-searching queries do not pose factual, one-correct-answer, analogies, unless interpreted morpho-syntactically (See Section 2). Another subjective decision has to do with reporting results. One would think that the top returned term should always be reported, or possibly the top five, if willing to provide a broader picture. However, subjective biases and result expectation might lead to discard returned terms that are not viewed as biased, and report biased terms that are however appearing further down in the list. This causes a degree of arbitrariness in reporting results that can be substantially misleading. As a case in point, we discuss here the recent “Manzini et al.” paper, which is the work from which the examples used in the opening presidential speech of ACL 2019 were taken (see Footnote 1). This paper was published in three subsequent versions, differing only in the analogy queries used and the results reported. We discuss this to show how subjective the types of choices above can be, and that unless very transparent about methodology and implementation, one can almost claim anything they desire. In the first version of their paper (Manzini et al. 
2019a), the authors accidentally searched for the inverse of the intended query: instead of asking the model A is to B as C is to X (black is to criminal as caucasian is to X), they queried C is to B as A is to X (caucasian Analogy Reported Idx Top-5 answers (averaged) ) caucasian lawful black asian yuppie caucasian black killer asian jew liberal christian jew journalist muslim muslim regressive christian b 9 1 0 2 ( . l a t e i n i z n a M criminal hillbilly engineer conservative terrorist conservative 2.0 lawful criminal defamation libel vigilante 5.0 yuppie knighting pasty hipster hillbilly 5.2 addict aspie impostor killer engineer 2.0 liberal conservative progressive heterodox secular 1.6 terrorist purportedly journalist watchdog cia 9.2 regressive progressive milquetoast liberal neoliberal ) black homeless caucasian caucasian hillbilly asian asian laborer black jew greedy muslim christian familial muslim muslim uneducated christian intellectually servicemen suburban landowner powerless warzone c 9 1 0 2 ( . l a t e i n i z n a M 3.0 laborer landowner fugitive worker millionaire 8.8 greedy corrupt rich marginalized complacent 7172 familial domestic marital bilateral mutual 16.6 uneducated uninformed idealistic elitist arrogant Table 3: Overview of reported biased analogies in Manzini et al. (2019b) and Manzini et al. (2019c), obtained using the 3COSADD method without constraints, but their em- beddings as they are (with constraints on the vocabulary). “Idx” refers to the average position of the reported biased word as we find it in their five embedding sets (trained on Reddit data) trained with different seeds (i.e., the same space they used). 7 Computational Linguistics Volume 0, Number 0 is to criminal as black is to X).9 What is surprising is that they still managed to find biased examples by inspecting the top-N returned D terms. In other words, they reported the analogy black is to criminal as caucasian is to police to support the hypothesis that there is cultural bias against the black, but the analogy they had in fact found was caucasian is to criminal as black is to police, so the complete opposite. This should make us extremely wary of how easy it can be to find biased analogies when specifically looking for them. They fixed this mistake in their second and third version (Manzini et al. 2019b,c). However, it is unclear from the text which algorithm is used to obtain these analogies. We tried the three algorithms described in Section 3, and in Table 3 we show the results of 3COSADD, for which we could most closely reproduce their results (for both versions). For their second version, in 5 out of their 6 examples the input word B would actually be returned before the reported answer D. For three of the six analogies, they pick a term from the returned top-10 rather than the top returned one. In their third version (Manzini et al. 2019c), the authors change the list of tested analogies, especially regarding the B terms. It is unclear under which assumption some of these ‘new’ terms were chosen to be tested (greedy associated to jew, for example: what is one expecting to get – biased or non-biased – considering this is a negative stereotype to start with, and the C term is muslim?). However, for each of the analogy algorithms, we cannot reasonably reproduce four out of six analogies, even when inspecting the top 10 results. 
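Reproduction checks of this kind essentially reduce to recording where a reported completion actually sits in the unconstrained ranking (the "Idx" column of Table 3). A small helper for this, building on the hypothetical analogy sketch given earlier, might look as follows.

```python
def rank_of_reported(E, vocab, inv_vocab, a, b, c, reported, topn=100):
    """1-based rank of a reported D term in the unconstrained 3COSADD ranking
    (cf. the 'Idx' column of Table 3); None if it falls outside the top `topn`."""
    ranking = [w for w, _ in analogy(E, vocab, inv_vocab, a, b, c,
                                     exclude_inputs=False, topn=topn)]
    return ranking.index(reported) + 1 if reported in ranking else None

# Example query taken from Table 3 (embeddings assumed to be loaded as before):
# rank_of_reported(E, vocab, inv_vocab, "caucasian", "lawful", "black", "criminal")
```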
While qualitatively observing and weighing the bias of a large set of returned answers can make sense, it can be misleading to cherry-pick and report very biased terms in sensitive analogies. At the very least, when reporting term-N, one should report the top-N terms to provide a more complete picture. For example, in the top- 10 results for man is to doctor as woman is to X using 3COSADD, some terms refer to medical professions that have only women as patients (gynecologist, obstetrician, ob_gyn, midwife), going to show that it is not always clear which semantic relation is implied in the queries (in this case: professions or patients?), and more than one can be present and confounded. After all, if we query the Google embeddings using unconstrained 3COSADD for man (or woman) is to doctor as dog (or animal) is to X, we get veterinarian. # 4.3 Other constraints Using the BOLUKBASI formula is much less prone to subjective choices. It takes as input only two terms (A and C, like man and woman), thus reducing the bias present in the query itself, and consequently the impact of human-induced bias expectation. At the same time, though, starting with A : C, the formula requires some parameter tuning in order to obtain (a) meaningful B : D pair(s). Parameter values, together with other pre- processing choices, also affect the outcome, possibly substantially, and must be weighed in when assessing bias. As shown in Eq. 3, Bolukbasi et al. (2016) introduce a threshold δ to ensure that B and D are semantically similar. In their work, δ is set to 1 to ensure that B and D are closer than two random vectors (see Section 3). Choosing alternative values for δ will however yield quite different results, and it is not a straightforward parameter to tune, since it cannot be done against some gold standard, “correct” examples. Another common constraint that can have a substantial impact on the results is lim- iting the embedding set to the top-N most frequent words. Both Bolukbasi et al. (2016) and Manzini et al. (2019a) filter the embeddings to only the 50,000 most frequent words, 9 We confirmed this with the authors. 8 M. Nissim, R. van Noord and R. van der Goot Fair is Better than Sensational Threshold (δ) Voc. size 0.8 0.9 1.0 1.1 1.2 10,000 25,000 50,000 100,000 250,000 500,000 full vocab. doctors doctors doctors gynecologist gynecologist gynecologist gynecologist nurse nurse nurse gynecologist gynecologist gynecologist gynecologist nurse nurse midwife gynecologist gynecologist gynecologist gynecologist nurse nurse midwife gynecologist gynecologist nurse_midwife nurse_midwife Table 4: Influence of vocabulary size and threshold value for the method of Bolukbasi et al. (2016). With extreme values for the threshold, and allowing to return query words, the answer becomes “doctor” (≤0.5) and “she” (≥1.5). Italics: original settings. though no motivation for this need or this specific value is provided. Setting such an arbitrary value might result in the exclusion of valid alternatives. Further processing can also rule out potentially valid strings. For example, Manzini et al. (2019a) lowercase all words before training, and remove words containing punctuation after training, whereas Bolukbasi et al. (2016) keep only words that are shorter than 20 characters and do not contain punctuation or capital letters (after training the embeddings). 
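Both choices — the threshold δ and the vocabulary cutoff — can be exposed as explicit parameters. The sketch below implements Eq. 4 over the same hypothetical E/vocab/inv_vocab structures as before, with the cutoff assuming rows are ordered by corpus frequency (as word2vec-format files typically are); it is an illustration, not the original authors' code. Sweeping delta and max_vocab should reproduce the kind of variation reported in Table 4 below.

```python
import numpy as np

def bolukbasi_completions(E, vocab, inv_vocab, a, c, b,
                          delta=1.0, max_vocab=50000, topn=5):
    """Rank D candidates for a known B by cos(a - c, b - d), zeroing pairs with
    ||b - d|| > delta (Eq. 4). delta and max_vocab are the two choices whose
    effect is illustrated in Table 4."""
    sub = E[:max_vocab]                          # vocabulary cutoff (rows assumed frequency-ordered)
    ac = E[vocab[a]] - E[vocab[c]]
    ac /= np.linalg.norm(ac)
    diffs = E[vocab[b]] - sub                    # b - d for every candidate d
    norms = np.linalg.norm(diffs, axis=1)
    cos = (diffs @ ac) / np.maximum(norms, 1e-8)
    scores = np.where(norms <= delta, cos, 0.0)  # the threshold from Eqs. 3-4
    # Note that d == b gets score 0 and so is in practice never returned (Footnote 5).
    top = np.argsort(-scores)[:topn]
    return [(inv_vocab[i], float(scores[i])) for i in top]
```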
In order to briefly illustrate the impact of varying the values of the δ threshold and the vocabulary size when using the BOLUKBASI formula, in Table 4 we show the results when changing them for the query man is to doctor as woman is to X.10 The variety of answers, ranging from what can be considered to be biased (nurse) to not biased at all (doctors), illustrates how important it is to be aware of the influence of choices concerning implementation and parameter values. # 5. Final remarks If analogies might not be the most appropriate tool to capture certain relations, surely matters have been made worse by the way that consciously or not they have been used. Gonen and Goldberg (2019) have rightly dubbed them sensational “party tricks”, and this is harmful for at least two reasons. One is that they get easily propagated both in science itself (Jha and Mamidi 2017; Gebru et al. 2018; Mohammad et al. 2018; Hall Maudslay et al. 2019), also outside NLP and AI (McQuillan 2018) and in popularised articles (Zou and Schiebinger 2018), where readers are usually in no position to verify the reliability or significance of such examples. The other is that they might mislead the search for bias and the application of debiasing strategies. And while it is debatable whether we should aim at removal or rather at transparency and awareness (Caliskan, Bryson, and Narayanan 2017; Gonen and Goldberg 2019), it is crucial that we are clear and transparent about what analogies can and cannot do as a diagnostic for embeddings bias, and about all the implications of subjective and implementation choices. This is a strict pre-requisite to truly understand how and to what extent embeddings encode and reflect the biases of our society, and how to cope with this, both socially and computationally. 10 Given that B is known, we use the formula in Eq. 4. 9 Computational Linguistics # References Basta, Christine, Marta Ruiz Costa-jussà, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. In Proceedings of the 1st ACL Workshop on Gender Bias for Natural Language Processing. Bolukbasi, Tolga, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in Neural Information Processing Systems 29, pages 4349–4357, Curran Associates, Inc. Caliskan, Aylin, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Association for Computational Linguistics, Minneapolis, Minnesota. # Drozd, Aleksandr, Anna Gladkova, and Satoshi Matsuoka. 2016. Word embeddings, analogies, and machine learning: Beyond king - man + woman = queen. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3519–3530, Osaka, Japan. Fischer, Olga. 2019. Analogy in Language and Linguistics, Oxford Bibliographies in Linguistics. Garg, Nikhil, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644. 
Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumeé III, and Kate Crawford. 2018. Datasheets for datasets. arXiv preprint arXiv:1803.09010 V4. Gladkova, Anna, Aleksandr Drozd, and Satoshi Matsuoka. 2016. Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn’t. In Proceedings of the NAACL Student Research Workshop, 10 Volume 0, Number 0 pages 8–15, Association for Computational Linguistics, San Diego, California. Goldberg, Yoav. 2017. Neural network methods for natural language processing. Synthesis Lectures on Human Language Technologies, 10(1):1–309. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, Association for Computational Linguistics, Minneapolis, Minnesota. Hall Maudslay, Rowan, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5270–5278, Association for Computational Linguistics, Hong Kong, China. Hovy, Dirk and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 591–598, Association for Computational Linguistics, Berlin, Germany. Jha, Akshita and Radhika Mamidi. 2017. When does a compliment become sexist? analysis and classification of ambivalent sexism using Twitter data. In Proceedings of the Second Workshop on NLP and Computational Social Science, pages 7–16, Association for Computational Linguistics, Vancouver, Canada. Kozlowski, Austin C, Matt Taddy, and James A Evans. 2018. The geometry of culture: Analyzing meaning through word embeddings. arXiv preprint arXiv:1803.09288. Levy, Omer and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 171–180, Association for Computational Linguistics, Ann Arbor, Michigan. M. Nissim, R. van Noord and R. van der Goot semantic spaces using word analogies. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 13–18, Association for Computational Linguistics, Berlin, Germany. Manzini, Thomas, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019b. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047 V2. Manzini, Thomas, Yao Chong Lim, Yulia Tsvetkov, and Alan W Black. 2019c. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047 V3. Manzini, Thomas, Lim Yao Chong, Alan W. Black, and Yulia Tsvetkov. 2019a. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 615–621, Association for Computational Linguistics, Minneapolis, Minnesota. May, Chandler, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Association for Computational Linguistics, Minneapolis, Minnesota. McQuillan, Dan. 2018. People’s councils for ethical machine learning. Social Media+ Society, 4(2). Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of Workshop at ICLR. Mohammad, Saif, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1–17, Association for Computational Linguistics, New Orleans, Louisiana. Peters, Matthew, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Fair is Better than Sensational Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, Association for Computational Linguistics, New Orleans, Louisiana. ˇReh ˚uˇrek, Radim and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, ELRA, Valletta, Malta. Li. 2017. The (too many) problems of analogical reasoning with word vectors. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 135–148, Association for Computational Linguistics, Vancouver, Canada. Schluter, Natalie. 2018. The word analogy testing caveat. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 242–246, Association for Computational Linguistics, New Orleans, Louisiana. Turney, Peter D. 2012. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533–585. Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Association for Computational Linguistics, Minneapolis, Minnesota. Zou, James and Londa Schiebinger. 2018. AI can be sexist and racist-it’s time to make it fair. Nature, 559(7714):324. 11
{ "id": "1904.04047" }
1905.09165
A framework for the extraction of Deep Neural Networks by leveraging public data
Machine learning models trained on confidential datasets are increasingly being deployed for profit. Machine Learning as a Service (MLaaS) has made such models easily accessible to end-users. Prior work has developed model extraction attacks, in which an adversary extracts an approximation of MLaaS models by making black-box queries to it. However, none of these works is able to satisfy all the three essential criteria for practical model extraction: (1) the ability to work on deep learning models, (2) the non-requirement of domain knowledge and (3) the ability to work with a limited query budget. We design a model extraction framework that makes use of active learning and large public datasets to satisfy them. We demonstrate that it is possible to use this framework to steal deep classifiers trained on a variety of datasets from image and text domains. By querying a model via black-box access for its top prediction, our framework improves performance on an average over a uniform noise baseline by 4.70x for image tasks and 2.11x for text tasks respectively, while using only 30% (30,000 samples) of the public dataset at its disposal.
http://arxiv.org/pdf/1905.09165
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, Shirish Shevade, Vinod Ganapathy
cs.LG, cs.AI, cs.CR, stat.ML
null
null
cs.LG
20190522
20190522
9 1 0 2 y a M 2 2 ] G L . s c [ 1 v 5 6 1 9 0 . 5 0 9 1 : v i X r a # A FRAMEWORK FOR THE EXTRACTION OF DEEP NEURAL NETWORKS BY LEVERAGING PUBLIC DATA A PREPRINT # Soham Pal1∗, Yash Gupta1∗, Aditya Shukla1∗, Aditya Kanade1,2 †, Shirish Shevade1†, Vinod Ganapathy1† 1 Department of Computer Science and Automation, IISc Bangalore, India 2 Google Brain, USA {sohampal,yashgupta,adityashukla,kanade,shirish,vg}@iisc.ac.in # ABSTRACT Machine learning models trained on confidential datasets are increasingly being deployed for profit. Machine Learning as a Service (MLaaS) has made such models easily accessible to end-users. Prior work has developed model extraction attacks, in which an adversary extracts an approximation of MLaaS models by making black-box queries to it. However, none of these works is able to satisfy all the three essential criteria for practical model extraction: (i) the ability to work on deep learning models, (ii) the non-requirement of domain knowledge and (iii) the ability to work with a limited query budget. We design a model extraction framework that makes use of active learning and large public datasets to satisfy them. We demonstrate that it is possible to use this framework to steal deep classifiers trained on a variety of datasets from image and text domains. By querying a model via black-box access for its top prediction, our framework improves performance on an average over a uniform noise baseline by 4.70× for image tasks and 2.11× for text tasks respectively, while using only 30% (30,000 samples) of the public dataset at its disposal. Keywords model extraction · active learning · machine learning · deep neural networks · black-box attacks 1 # Introduction Due to their success in recent years, deep neural networks (DNNs) are increasingly being deployed in production software. The security of these models is thus of paramount importance. The most common attacks against DNNs focus on the generation of adversarial examples [1, 2, 3, 4, 5, 6, 7, 8], where attackers add an imperceptible perturbation to inputs (typically images) that cause DNNs to misclassify them. In this paper, we turn our attention to privacy vulnerabilities. Today, Machine Learning as a Service (MLaaS) providers like Google, Amazon and Azure make ML models available through APIs to developers of web and mobile applications. These services are monetized by billing queries pro rata. The business model of these services rests on the privacy of the model. If it was possible for a potential competitor or end user to create a copy of these models with access only to the query API, it would pose a great threat to their business. By extracting a copy of a ML model, not only would an adversary have the ability to make unlimited free queries to it, they would also be able to implement applications requiring gradient information, such as crafting adversarial examples that fool the secret MLaaS model [7], performing model inversion [9] (discovering the training data on which the model was originally trained) and exploring the explainability of proprietary ML models (e.g., by training an explainable substitute model such as a decision tree classifier [10]). ∗All three authors contributed equally. †All three authors contributed equally. 
A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT MLAAS PROVIDER Secret dataset Secret model Train 3 ADVERSARY Query Prediction Thief dataset Substitute model Choose x y Train 3 Figure 1: Overview of model extraction Model privacy is also important to developers of other ML products (such as self-driving vehicles and translation tools). Datasets are expensive to gather and curate, and models require expertise to design and implement – thus, it is in the best interest of corporations to protect their ML models to maintain a competitive edge. Tramèr et al. [11] define the concept of model extraction (see Figure 1). In model extraction, the adversary is an agent that can query a secret model (e.g., a MLaaS provider via APIs) to obtain predictions on any supplied input vector of its choosing. The returned predictions may either be label probability distributions, or just the Top-1 prediction – we assume the latter. Using the obtained predictions, the adversary trains a substitute model to approximate the secret model function. The adversary may not know the secret model architecture or associated hyperparameters. The adversary has access to a thief dataset of the same media type (i.e. images or text) from which it draws samples to query the secret model. The data in this thief dataset may be drawn from a different distribution than the secret dataset on which the secret model was originally trained. Prior work has used the following thief datasets: • Uniform noise: Tramèr et al. [11] perform model extraction by querying the secret model with inputs sampled i.i.d. uniformly at random. They demonstrate their method on logistic regression models, SVMs, shallow (1 hidden layer) feedforward neural networks and decision trees. According to our experiments, this approach does not scale well to deeper neural networks (such as our architecture for image classification with 12 convolutional layers; see Section 6.1 for further details). • Hand-crafted examples: Papernot et al. [7] design a model extraction framework that can be used to extract DNNs. However, this technique assumes domain knowledge on the part of the attacker. The adversary should either have access to a subset of the secret dataset, or create data (such as by drawing digits using a pen tablet) that closely resembles it. • Unlabeled non-problem domain data: Correia-Silva et al. [12] demonstrate that convolutional neural networks (CNNs) can be copied by querying them with a mix of non-problem domain and problem domain data. For example, they demonstrate that a DNN trained using European crosswalk images [13] as the secret dataset can be copied using a mix of ImageNet (non-problem domain data) and crosswalk images from America and Asia (problem domain data) as the thief dataset. They do not consider a query budget in their work. In this work, we investigate the feasibility of implementing a practical approach to model extraction, viz. one that deals with the following criteria: • Ability to extract DNNs: Most state of the art ML solutions use DNNs. Thus, it is critical for a model extraction technique to be effective for this class of models. • No domain knowledge: The adversary should be expected to have little to no domain knowledge related to task implemented by the secret model. In particular, they should not be expected to have access to samples from the secret dataset. 
• Ability to work within a query budget: Queries made to MLaaS services are billed pro rata, and such services are often rate limited. Thus, it is in an attacker’s best interest to minimize the number of queries they make to the secret model. We compare our approach to the three approaches described above on these three criteria [11, 7, 12] in Table 1. As can be seen, we can extract DNNs with no domain knowledge, while working with a limited query budget. To achieve these criteria, our paper introduces two novel techniques: 2 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT # Table 1: Comparison of Model Extraction approaches Model extraction Works on Nodomain Limited # technique DNNs knowledge _ of queries Tramér et al. Papernot et al. Copycat CNN [12] Our framework NNN S&S K&NN • Universal thief datasets: These are large and diverse public domain datasets, analogous to the non-problem domain (NPD) data of Correia-Silva et al. [12]. For instance, we show that ImageNet constitutes a universal thief for vision tasks, whereas a dataset of Wikipedia articles constitutes a universal thief for NLP tasks. Our key insight is that universal thief datasets provide a more natural prior than uniform noise, while not requiring domain knowledge to obtain. Active learning strategies: Active learning is a technique used in scenarios where labeling is expensive. It strives to select a small yet informative set of training samples to maximize accuracy while minimizing the total labeling cost. In this paper, we use pool-based active learning, where the algorithm has access to a large set of unlabeled examples (i.e. the thief dataset) from which it picks the next sample(s) to be labeled. Although universal thief datasets constitute an excellent prior for model extraction, their size makes them unsuitable for use when the query budget is limited. We make use of active learning to construct an optimal query set, thus reducing the number of queries made to the MLaaS model. This ensures that the attacker stays within the query budget. Our contributions include: 1. We define the notion of universal thief datasets for different media types such as images and text. 2. We propose a framework for model extraction that makes use of universal thief datasets in conjunction with active learning strategies. We demonstrate our framework on DNNs for image and text classification tasks. 3. Finally, we introduce the notion of ensemble active learning strategies as a combination of existing active learning strategies. We design and leverage one such ensemble strategy to improve performance. Overall, we demonstrate that by leveraging public data and active learning, we improve agreement between the secret model and the substitute model by, on an average, 4.70× (across image classification tasks) and 2.11× (across text classification tasks) over the uniform noise baseline of Tramèr et al. [11], when working with a total query budget of 30K. We plan to release the source code for our framework under an open source license soon. # 2 Background In this section, we introduce the active learning set of techniques from the machine learning literature. We also briefly discuss adversarial example generation, which is later used as the crux of the DeepFool Active Learning (DFAL) strategy [14] used by our framework. 
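As a schematic preview of how these pieces fit together, a single pool-based round — score the unlabeled thief pool with the current substitute, spend k secret-model queries on the highest-scoring samples, retrain — can be sketched as below. Every callable here is a placeholder for components defined in later sections, not the authors' released code, and the concrete acquisition functions are introduced in Section 4.1.

```python
import numpy as np

def active_learning_round(pool_X, labeled_idx, labels, substitute, k,
                          acquisition_score, query_secret_model, train_substitute):
    """One pool-based sampling round (placeholder callables; schematic only).

    pool_X      -- array of unlabeled thief-dataset samples (assumed)
    labeled_idx -- indices already queried (starts as random seed indices)
    labels      -- dict: index -> label returned by the secret model so far
    """
    unlabeled = np.setdiff1d(np.arange(len(pool_X)), labeled_idx)
    # Rank unlabeled thief samples by how informative the current substitute finds them.
    scores = np.array([acquisition_score(substitute, pool_X[i]) for i in unlabeled])
    chosen = unlabeled[np.argsort(-scores)[:k]]          # k = per-iteration query budget
    for i in chosen:
        labels[i] = query_secret_model(pool_X[i])        # one billed API call per sample
    labeled_idx = np.concatenate([labeled_idx, chosen])
    substitute = train_substitute(pool_X[labeled_idx],
                                  [labels[i] for i in labeled_idx])
    return labeled_idx, substitute
```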
# 2.1 Preliminaries In machine learning, a dataset D consists of labeled examples (x, y), where x ∈ X is an example and y ∈ Y is its associated label, where X is said to be the instance space and Y is the label space. It is assumed that there is an underlying unknown mapping φ : X → Y from which D is generated (i.e. (x, y) ∈ D implies that y = φ(x)). In this paper, we restrict ourselves to the classification setting, where Y = {e1, e2, . . . , eJ }3. 5; represents the j" standard basis vector, i.e. (0,0,...,0,1,0,...,0,0) € R’, a vector with aa 1 in the j" position, and 0 elsewhere. Such a vector is said to be one-hot. A pair (x, y) where y is a vector with 1 in the j" position indicates that the sample x belongs to the j" class (out of J classes). 3 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT In passive machine learning, the learner has access to a large training dataset Dtrain of labeled examples and must learn a hypothesis function f that minimizes a loss function. A typical loss function is mean squared error (MSE): Luse(fsPrsin) == — SD lly -F@)IB [Praia (ay) €Duain The better the hypothesis (i.e. when predictions f (x) match labels y), the lower the value of the loss function L. Other loss functions such as cross-entropy (CE) are also used. Machine learning models such as DNNs learn a function by minimizing this loss function on the training dataset. DNNs, when trained on a large corpus of training examples, have been shown to exhibit good generalization ability across a diversity of tasks in various domains [15], i.e. provided a previously unseen test example xtest, the prediction that they make, f (xtest) approximates the value of φ(xtest) well, i.e. f (xtest) ≈ φ(xtest). However, to achieve good generalization performance, such DNNs require a very large training dataset. The labeling effort required is massive, and learning may be intractable in scenarios where there is a high cost associated with each label, such as paying crowd workers. In the context of model extraction, this may involve querying a MLaaS model, which are billed pro rata by the MLaaS service provider. # 2.2 Active learning Active learning [16] is useful in scenarios where there is a high cost associated with labeling instances. In active learning, the learner does not use the full labeled dataset D. Rather, the learner starts with either an unlabeled dataset X of samples x; or, alternatively, the learner can itself generate samples x de novo. Following this, an oracle fO is used to label the sample, which assigns it the true label y = fO(x). Active learning can be broadly classified into one of the following scenarios: • Stream-based selective sampling: In this scenario, the learner is presented with a stream of unlabeled samples x1, x2, x3, . . . , drawn from the underlying distribution. The learner must decide to either accept or reject an individual sample xn for querying. This can be done by checking, e.g., the “uncertainty” of the prediction (we will formally define this in Section 4.1) made by the classifier on a specific sample xn. For samples that are accepted by the learner, the oracle is queried to label them. Once rejected, a sample cannot be queried in the future. • Pool-based sampling: In this scenario, the learner has access to a full unlabeled dataset X of samples {x1, x2, . . . x|X|}. Unlike in stream-based selective sampling, the learner does not have to consider each sample xn in isolation. 
The learner’s objective is thus to select a subset S ⊆ X of samples to be queried. While it is possible to do this in one shot, pool-based sampling may also be done incrementally, either choosing one sample at a time, or an entire batch of samples in each iteration. Correspondingly, the oracle may be queried on one sample at a time, or the entire batch of selected samples. • Query synthesis: Here, the learner generates samples x de novo without first approximating the underlying distribution. This process could be entirely uninformed – for instance, the learner could generate data points by sampling uniformly at random from a multivariate uniform or Gaussian distribution – or, it could be more informed: such as by using a generative model. The oracle is then queried with the generated sample. In this work, we make use of pool-based sampling. In particular, we consider the scenario where the learner adds a batch of samples in each iteration of the algorithm. We grow the subset Sg CS, € Sy ¢ --- € Sy over N iterations, such that each subset S; is a selection of samples from the full dataset S$; C X. # 2.3 Adversarial example generation and the DeepFool technique We introduce the notion of adversarial example generation, in particular the DeepFool [6] technique. This technique will be used while introducing the DeepFool Active Learning (DFAL) [14] active learning strategy in Section 4.1. It is known that DNNs can be easily fooled as demonstrated by, the Fast Gradient Sign Method (FGSM) of Goodfellow et al. [1], the C&W attack of Carlini and Wagner [3], the Jacobian-based Saliency Map Attack (JSMA) of Papernot et al. [5] and many others [2, 4, 6, 7, 8]. In particular, neural networks trained to perform image classification tasks have been shown to be vulnerable to adversarial examples. An adversary can add a small amount of noise to input images, which, while being imperceptible to the human eye, can change the classification decision made by the neural network, as shown in Figure 2. These techniques typically work as follows – given an innocuous image x, they compute a small, typically imperceptible additive noise δ. This noise is then added to the original image to produce an adversarial image, ˆx = x + δ. The 4 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT + = f (x) = MNIST Digit 2 97.09% Confidence Adversarial Noise δ (Generated by DeepFool) f (ˆx) = MNIST Digit 8 60.95% Confidence Figure 2: Adversarial example generation using DeepFool [6]. objective is that, given a machine learning model f, the prediction of the perturbed image no longer matches the prediction made for the original image, viz. f(x) 4 f(). DeepFool [6] is one such technique for the generation of adversarial examples. It solves the following problem iteratively: 5* = argmin||d|lo st. f(x +9) 4 f(x) In the binary classification setting (i.e. where range f = {−1, 1}), it uses a first order approximation of the analytical solution for the linearly-separable case: i= ___ Fe) _ IVF(@I5 Lyi = M+ VF (a1) The process is started by setting x9 = x, and terminates at the lowest index L for which f(x,) # f(x). The total perturbation is obtained by taking the sum of the individual perturbations at each step, 6 = an 6,. This algorithm can be extended to work in the multiclass classification setting. We refer interested readers to [8] for further details. # 3 Threat model Before we describe the proposed algorithm, we first state the threat model under which it operates. Attack surface. 
We assume that the adversary cannot directly access the secret model, but can only query it in a black-box fashion via an API. We assume that there is a query cost associated with each API query made by the adversary. While there is no limit on the number of queries that can be made theoretically, the ability of the adversary to make queries is restricted in practice by the total query budget. This query cost model can be used to model rate-limiting defenses. For example, each query can have an associated cost, and a defense would be to limit queries from a source that has exceeded its cost threshold. Capabilities. The adversary has black-box access to the secret model via an API, by which it can query it with any image or text of its choosing. It thus has full knowledge of both the input specification (i.e. the type of media – images or text) and the output specification (the set of possible labels). Note that the adversary does not have direct access to the exact gradient information of the model, but only the final prediction. We consider two scenarios – one where a Top-1 prediction is returned (as a one-hot standard basis vector), and another where the model returns a softmax4 probability distribution over the target output classes. Our primary experiments assume the weaker capability of receiving only the Top-1 predictions, and not the softmax probability distributions.5 Information of the secret model architecture and model hyperparameters need not be known to the adversary, as we show in Section 6.2. However, as peak performance is achieved when the adversary is aware of the architecture of the secret model, and since it is possible to detect these hyperparameters and architectural choices by a related line of work (model reverse-engineering [17, 18, 19, 20, 21, 22]), we report our main results using the same architecture for both the secret and substitute models. Further, the adversary has no knowledge of the secret dataset D on which the model was originally trained. It can however make use of unlabeled public data, i.e. the thief dataset Xthief. Note that this data needs to be labeled first by the secret model before it can be used to train the substitute model. 4Given unnormalized scores a1,@2,...ay, over J classes, the softmax function computes the normalized quantities pj = exp(ai)/ wy exp(a;). The resulting p; values constitute a valid probability distribution. 5However, in Table 4 we also consider the situation where the softmax probability distribution is available to the adversary. 5 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT Thief dataset Xthief 1 Random selection Seed samples S0 s Query 3 Secret Model f s Query Next query set Si+1 2 (s, f (s)) Collect True labeled samples Di 5 Subset selection strategy (e.g., K-center, adversarial, etc.) x Query 3 Train 3 Substitute Model ˜f (x, ˜f (x)) Collect 4 Approx. labeled samples ˜Di+1 Figure 3: Our framework for model extraction (see Section 4 for explanation of steps 1-5). Adversary’s goal. The goal of the adversary is to obtain a substitute model function that closely resembles (i.e. approximates) the secret model function: # ˜f ≈ f To do so, it trains a substitute model ˜f on a subset of the thief dataset, S ⊆ Xthief, fe argmin £(f", {(2, f(a)) : x € S}) where CL is the chosen loss function. As there is a cost associated with querying f and |X nier|, the adversary would want |S < |XMnier|. The resulting model f is treated as the extracted model at the end of the process. 
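A minimal sketch of the black-box interface this threat model assumes — pro rata billed queries, a hard total budget B, and only the Top-1 label exposed — is given below; remote_predict is a purely hypothetical stand-in for the provider's API, and the class only illustrates the restricted view available to the adversary.

```python
import numpy as np

class SecretModelAPI:
    """Adversary-side view of the MLaaS model: every query is counted against
    the budget and only the Top-1 class index is passed on (schematic sketch)."""

    def __init__(self, remote_predict, budget):
        self.remote_predict = remote_predict   # placeholder for the provider's endpoint
        self.budget = budget                   # total query budget B
        self.spent = 0

    def query_top1(self, x):
        if self.spent >= self.budget:
            raise RuntimeError("query budget exhausted")
        self.spent += 1                        # models pro rata billing / rate limiting
        probs = np.asarray(self.remote_predict(x))
        return int(probs.argmax())             # only the Top-1 label reaches the adversary
```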
As it is not possible to arrive at analytical optimum in the general case, the quality of the extracted model is judged using the following Agreement metric. Definition (Agreement): Two models f and f agree on the label for a sample «x if they predict the same label for the same sample, i.e. f(a) = f(x). The agreement of two networks f and f is the fraction of samples x from a dataset D on which they agree, i.e. for which f(x) = f(x) Agreement(f, f,D) = Di > I[f(x) = f(#)] (z,y)ED where 1(·) is the indicator function. Note that the agreement score does not depend on the true label y. Agreement is penalized for every sample for which the predicted labels by the two models f (x) and ˜f (x) do not match. The higher the agreement between two models on a held-out test set, the more likely it is that the extracted model approximates the secret model well. # 4 Technical details We start with a high-level description of the framework with reference to Figure 3. 1. The adversary first picks a random subset S0 of the unlabeled thief dataset Xthief to kickstart the process. 2. In the ith iteration (i = 0, 1, 2, . . . , N ), the adversary queries the samples in Si against the secret model f and obtains the correctly labeled subset Di = {(x, f (x)) : x ∈ Si}. 3. Using Di, it trains the substitute model ˜f . 4. The trained substitute model is then queried with all samples in Xthief to form the approximately labeled dataset ˜Di+1. 5. A subset selection strategy uses ˜Di+1 to select the points Si+1 to be queried next. The process is repeated for a fixed number of iterations, with the substitute model ˜f being refined in each iteration. The procedure is formally described in Algorithm 1. The training procedure followed by TRAINNETWORK is described in Section 5.3. The details of SUBSETSELECTION follow. 6 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT Algorithm 1: Model extraction by active learning Input Parameters :iteration count N ; total query budget B; Input secret model f; unlabeled thief dataset X thier Parameters : iteration count NV; total query budget B; seed size ko; validation fraction 7 Output : Substitute model, f Svatid Dyaiia So Do k «+ 7B random datapoints from Xy#lit; & {(a, f(x): © © Svaia}s + ko random datapoints from X, fee & {(2, fle)) :4 € So} + ((1-n)B-ko) +7; for i€ {1...N}do f «+ TRAINNETWORK(D;-1, Dyaiia); Di {(x, f(a)) : © € XB A (x,-) ¢ Dia}: Si end e SUBSETSELECTION(D;, D,-1,k); D, — Di-1 U { (a, f(x)) sx € Si}; f <— TRAINNETWORK(Dw, Dyatia); # 4.1 Active learning subset selection strategies In each iteration, the adversary selects a new set of k thief dataset samples Si ⊆ Xthief to label by querying the secret model f . This is done using a strategy from the active learning literature: • Random strategy: A subset of size k consisting of samples xn is selected uniformly at random, corresponding to pairs (xn, ˜yn) in ˜Di. Uncertainty strategy: This method is based on uncertainty sampling [23]. For every pair (xn, ˜yn) ∈ ˜Di, the entropy Hn of predicted probability vectors ˜yn = ˜f (xn) is computed: Hn = − ˜yn,j log ˜yn,j j where j is the label index. The k samples xn corresponding to the highest entropy values Hn (i.e. those that the model is least certain about) are selected, breaking ties arbitrarily. Ducoffe and Precioso [14] demonstrate that the uncertainty strategy does not work well on DNNs. 
Thus, we also consider two state-of-the-art active learning strategies for DNNs: • K-center strategy: We use the greedy K-center algorithm of Sener and Savarese [24] to construct a core-set of samples. This strategy operates in the space of probability vectors produced by the substitute model. The predicted probability vectors ˜ym = ˜f (xm) for samples (xm, ym) ∈ Di−1 are considered to be cluster centers. In each iteration, the strategy selects k centers by picking, one at a time, pairs (xn, ˜yn) ∈ ˜Di such that ˜yn is the most distant from all existing centers: (x∗ ek) any . ~ ~ 1/2 (v0.90) =arg, max min | |l9n — Ymlle (@n,Gn) ED; (tm sym )EDi-1 ~ p ~ ~ 2 (ej.9f) = arg max min jn — Fld (@n,Gn)€D? (@m.ym)ED}_ 4 where: ~ . D; —D;\ {(29, 95)} 1 Fj Dj_, —Di-1U {(2; f(2))} i.e. (x∗ 0, x∗ x∗ 0, ˜y∗ 1, . . . x∗ 0) is moved to the set of selected centers. This process is repeated to obtain k pairs. The samples k corresponding to the chosen pairs are selected. • Adversarial strategy: We use the DeepFool Active Learning (DFAL) algorithm by Ducoffe and Precioso [14]. In this strategy, DeepFool [6] (explained in Section 2.3) is applied to every sample xn ∈ ˜Di to obtain a 7 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT Table 2: Details of datasets for image and text classification tasks. # Train, # Val and # Test refer to the number of samples in the train, validation and test folds respectively. Note that the thief datasets (ImageNet subset and WikiText-2) do not have predefined folds, but the fractions used for training and validation have been tabulated for reference. (a) Details of datasets for image classification tasks. Image Dataset Dimensions # Train # Val # Test # Classes MNIST F-MNIST CIFAR-10 GTSRB 28 × 28 × 1 10K 48K 28 × 28 × 1 10K 48K 32 × 32 × 3 10K 40K 32 × 32 × 3 ∼ 31K ∼ 8K ∼ 12K 12K 12K 10K 10 10 10 43 ImageNet subset 64 × 64 × 3 100K 50K – – (b) Details of datasets for text classification tasks. # t u p n I Text Dataset # Train # Val # Test # Classes MR IMDB AG News QC 7,676 20K 96K ∼ 12K 1,066 1,920 5K 25K 24K ∼ 7K .5K 3K 2 2 5 6 WikiText-2 ∼ 89K ∼ 10K – – 1 . 1 l o o p a 2 . 1 v n o c b 2 . 1 v n o c 2 . 1 l o o p a 1 . 2 v n o c b 1 . 2 v n o c 1 . 2 l o o p a 2 . 2 v n o c b 2 . 2 v n o c 2 . 2 l o o p a 1 . 3 v n o c b 1 . 3 v n o c 1 . 3 l o o p a 2 . 3 v n o c b 2 . 3 v n o c 2 . 3 l o o p C F 32 filters each 64 filters each 64 filters each 128 filters each 128 filters each Projection # s2]- _Ielels sls/2/ # b 1 . 1 v n o c # a 1 . 1 v n o c # C F # Stim | # Softmax # b o r P 32 filters each # Figure 4: Network architecture for image classification tasks . perturbed Z,, that gets misclassified by the substitute model fe ie. f (an) # flan): (Note that this does not involve querying the secret model.) Let: On = ||2n — Fnll3 DFAL is a margin-based approach to active learning, i.e. it identifies samples that lie close to the decision boundary. To do this, it prefers samples xn corresponding to lower values of αn, i.e. smallest distance between xn and its adversarially perturbed neighbor ˆxn that lies across the decision boundary. Thus, this strategy selects the k samples xn corresponding to the lowest perturbation αn. # 4.2 Ensemble of subset selection strategies While the K-center strategy maximizes diversity, it does not ensure that each individual sample is helpful to the learner. 
On the contrary, while the adversarial strategy ensures that each individual sample is informative, it does nothing to eliminate redundancy across the samples selected. Inspired by this observation, we introduce the following ensemble subset selection strategy called Adversarial+K-center strategy. In this ensemble strategy, the adversarial strategy is first used to pick ρ points (ρ is a configurable parameter). Of these, k points are selected using the K-center strategy.The adversarial strategy first picks samples that lie close to the decision boundary. Following this, the K-center strategy selects a subset of these points with an aim to maximize diversity. We demonstrate the effectiveness of this strategy experimentally in Section 6.1. # 5 Experimental setup We now describe the datasets and DNN architecture used in our experiments. 8 A framework for the extraction of Deep Neural Networks by leveraging public data A PREPRINT A PREPRINT # 5.1 Datasets The details of each dataset can be found in Table 2. Secret datasets. For image classification, we use the MNIST dataset of handwritten digits [25], the Fashion-MNIST (F-MNIST) dataset of small grayscale images of fashion products across 10 categories [26], the CIFAR-10 dataset of tiny color images [27] and the German Traffic Sign Recognition Benchmark (GTSRB) [28]. For text classification, we use the MR dataset [29] of 5,331 positive and 5,331 statements from movie reviews, the IMDB [30] dataset of movie reviews, AG News corpus 6 of news from 5 categories and the QC question classification dataset [31]. Thief dataset. For images, we use a subset of the ILSVRC2012-14 dataset [32] as the thief dataset. In particular, we use a downsampled version of this data prepared by Chrabaszcz et al. [33]. The training and validation splits are reduced to a subset of size 100,000, while the test split is left unchanged. For text, we use sentences extracted from the WikiText-2 [34] dataset of Wikipedia articles. # 5.2 DNN architecture The same base complexity architectures are used for both the secret and the substitute model for our primary evaluation in Sections 6.1 and 6.3. We also conduct additional experiments on image classification tasks where the model complexities are varied between the secret and substitute models in Section 6.2. We first describe the base complexity architectures for image and text classification: Image classification. We use a multi-layered CNN, shown in Figure 4. The input is followed by 3 convolution blocks. Each convolution block consists of 2 repeated units – a single repeated unit consists of 2 convolution (3 × 3 kernel with stride 1) and 1 pooling (2 × 2 kernel with stride 2) layers. Each convolution is followed by a ReLU activation and batch normalization layer. Pooling is followed by a dropout. Convolution layers in each block use 32, 64 and 128 filters respectively. No two layers share parameters. The output of the final pooling layer is flattened and passed through fully connected and softmax layers to obtain the vector of output probabilities. Text classification. We use the CNN for sentence classification by Kim [35]. In the secret model, word2vec [36] is first used to obtain the word embeddings. The embeddings are then concatenated and 100 1-dimensional filters each of sizes 3, 4 and 5 are applied to convolve over time. This is followed by max-over-time pooling, which produces a 300-dimensional vector. This vector is then passed through fully connected and softmax layers to obtain the vector of output probabilities. 
# 5.3 Training Regime For training, we use the Adam optimizer with default hyperparameters (3; = 0.9, 62 = 0.999, € = 10~® anda learning rate of 0.001). In each iteration, the network is trained starting from the same random initialization for at most 1,000 epochs with a batch size of 150 (for images) or 50 (for text). Early stopping is used with a patience of 100 epochs (for images) or 20 epochs (for text). An L» regularizer is applied to all the model parameters with a loss term multiplier of 0.001, and dropout is applied at a rate of 0.1 for all datasets other than CIFAR-10. For CIFAR-10, a dropout of 0.2 is used. At the end of each epoch, the model is evaluated and the F measure on the validation split is recorded. The model with the best validation F; measure is selected as f in that iteration. Our experiments are run on a server with a 24-core Intel Xeon Gold 6150 CPU and NVIDIA GeForce GTX 1080Ti GPUs. We use the algorithm parameters kg = 0.1B (where B is the total query budget, as in Algorithm[Ip and 7 = 0.2 across all our experiments. For the ensemble strategy, we set p = B, the total query budget. # 6 Experimental results In our experiments we seek to obtain answers to the following questions: 1. How do the various active learning algorithms compare in their performance, i.e., in terms of the agreement between the secret model and substitute model? # 6https://di.unipi.it/~gulli/AG_corpus_of_news_articles.html 9 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT 1 0.8 0.9 0.8 0.7 Uncertainty K-center Adversarial Adv+K-cen Random 0.6 0.4 0.6 0 2 4 6 8 10 0 2 4 6 8 10 (a) MNIST dataset (b) F-MNIST dataset 0.7 0.8 0.6 0.6 0.5 0.4 0.4 0 2 4 6 8 10 0.2 0 2 4 6 8 10 (c) CIFAR-10 dataset (d) GTSRB dataset Figure 5: The improvement in agreement for image classification experiments with a total budget of 20K over 10 iterations. Since random is not run iteratively, it is indicated as a line parallel to the X-axis. 2. How does the query budget affect agreement? 3. What is the impact of using universal thief datasets over using uniform noise samples to query the secret model? 4. What is the impact of the DNN architectures (of the secret and substitute models) on the agreement obtained? The first three questions are answered in the context of image datasets in Section 6.1 and text datasets in Section 6.3. The fourth question is answered in Section 6.2. In our experiments, for all but the random strategy, training is done iteratively. As the choice of samples in random strategy is not affected by the substitute model ˜f obtained in each iteration, we skip iterative training. We also train a substitute model using the full thief dataset for comparison. The metric used for evaluation of the closeness between the secret model f and the substitute model ˜f is agreement between f and ˜f , evaluated on the test split of the secret dataset. # Image classification For each image dataset (described in Section 5.1), we run our framework across the following total query budgets: 10K, 15K, 20K, 25K and 30K (K = 1,000). For a budget of 20K, we show the agreement at the end of each iteration for every strategy and each dataset in Figure 5. We tabulate the agreement obtained at the end of the final iteration for each experiment in Table 3. Our observations across these 20 experiments are as follows: Effectiveness of active learning. 
The benefits of careful selection of thief dataset samples can be clearly seen: there is no dataset for which the random strategy performs better than all of the other strategies. In particular, K-center underperforms only once, while adversarial and adversarial+K-center underperform twice. Uncertainty underperforms 6 times, but this is in line with the findings of Ducoffe and Precioso [14]. Effectiveness of the ensemble method. The agreement of the models is improved by the ensemble strategy over the basic adversarial strategy in 14 experiments. Of these, the ensemble strategy emerges as the winner in 13 experiments – a clear majority. This improvement in agreement bears evidence to the increased potential of the combined strategy in extracting information from the secret model. The other competitive method is the K-center method, which wins in 5 experiments. This is followed by the adversarial strategy which won in 2 experiments. 10 A framework for the extraction of Deep Neural Networks by leveraging public data A PREPRINT Table 3: The agreement on the secret test set for image classification tasks. Each row corresponds to a subset selection strategy, while each column corresponds to a query budget. # (a) MNIST dataset # (b) F-MNIST dataset Strategy 10K 15K 20K 25K 30K Strategy 10K 15K 20K 25K 30K Random Uncertainty K-center Adversarial Adv+K-cen 91.64 94.64 95.80 95.75 95.40 95.19 97.43 95.66 95.59 97.64 95.90 96.77 96.47 96.84 97.65 97.48 97.29 97.81 97.74 97.60 97.36 97.38 97.95 97.80 98.18 Random Uncertainty K-center Adversarial Adv+K-cen 62.36 71.18 71.37 67.61 73.51 67.61 72.19 77.03 69.89 81.45 69.32 77.39 81.21 80.84 83.24 71.76 77.88 79.46 80.28 80.83 71.57 82.63 82.90 81.17 83.38 Using the full thief dataset (100K): Using uniform noise samples (100K): 98.81 20.56 Using the full thief dataset (100K): Using uniform noise samples (100K): 84.17 17.55 # (c) CIFAR-10 dataset # (d) GTSRB dataset Strategy 10K 15K 20K 25K 30K Strategy 10K 15K 20K 25K 30K Random Uncertainty K-center Adversarial Adv+K-cen 63.75 63.36 64.20 62.49 61.52 68.93 69.45 70.95 68.37 71.14 71.38 72.99 72.97 71.52 73.47 75.33 74.22 74.71 77.41 74.23 76.82 76.75 78.26 77.00 78.36 Random Uncertainty K-center Adversarial Adv+K-cen 67.72 67.30 70.89 72.71 70.79 77.71 73.92 81.03 79.44 79.55 79.49 80.07 83.59 83.43 84.29 82.14 83.61 85.81 84.41 85.41 83.84 85.49 85.93 83.98 86.71 Using the full thief dataset (100K): Using uniform noise samples (100K): 81.57 10.62 Using the full thief dataset (100K): Using uniform noise samples (100K): 91.42 45.53 Table 4: Agreement on the secret test set for each dataset (total budget of 10K). Here, N refers to the number of iterations (as in Algorithm 1). Top-1 refers to the adversary having access only to the top prediction, while in Softmax, they have access to the output probability distribution. The agreement is reported for the winning strategy in each case. Dataset Substitute model agreement (%) Top-1 N = 10 N = 20 Top-1 Softmax N = 10 MNIST F-MNIST CIFAR-10 GTSRB 95.80 73.51 64.20 72.71 96.74 78.84 64.23 72.78 98.61 82.13 77.29 86.90 Impact of the number of iterations. Table 4 shows that with an increase in the number of iterations, there is an improvement in agreement for the same budget. Thus, the substitute model agreement can be improved by increasing the number of iterations (at the expense of increased training time). Impact of access to output probability distribution. 
Table 4 demonstrates that access to the output probabilities of the secret model results in an improvement in agreement. We believe that this is because the substitute model receives a signal corresponding to every output neuron for each thief dataset sample that it is trained on. Consequently, the substitute model learns a better approximation. However, as many MLaaS models return only the Top-K or often Top-1 prediction, we run our experiments in the more restricted setting with access to only the Top-1 prediction.

Impact of the query budget. As is evident from Table 3, there is almost always a substantial improvement in agreement when increasing the total query budget.

Effectiveness of universal thief datasets. We see that uniform noise (as used in prior work by Tramèr et al. [11]) achieves a low agreement on all datasets. The reason for failure is as follows: in our experiments, we observe that when the secret model is queried with uniform noise, there are many labels which are predicted extremely rarely, while others dominate, e.g., the digit 6 dominates in MNIST and Frog dominates in CIFAR-10 (see Figure 6). In other words, it is difficult for an adversary to discover images belonging to certain classes using uniform noise. This problem is alleviated via the use of universal thief datasets like ImageNet. On average, using the full thief dataset (100K) leads to an improvement in agreement by 4.82× over the uniform baseline. Even with a budget of 30K, an improvement of 4.70× is retained with active learning.

Figure 6 (bar charts; panels: (a) MNIST with classes 0–9, (b) F-MNIST with classes Tee, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Boot, (c) CIFAR-10 with classes Plane, Auto, Bird, Cat, Deer, Dog, Frog, Horse, Ship, Truck): The distribution of labels (frequency in %) assigned by the secret model to uniform noise input.

Table 5: The agreement on the secret test set for image classification tasks, when architectures of different complexity are used as the secret model and substitute model. Each row corresponds to a secret model architecture, while each column corresponds to a substitute model architecture.

(a) MNIST dataset

Secret model Substitute model BC LC HC Lower Complexity (LC) Base Complexity (BC) Higher Complexity (HC) 98.73 97.21 96.75 98.15 98.81 98.05 97.63 98.10 98.36

(b) F-MNIST dataset

Secret model Substitute model BC LC HC Lower Complexity (LC) Base Complexity (BC) Higher Complexity (HC) 87.15 81.50 79.83 80.15 84.17 73.35 75.26 79.88 84.01

(c) CIFAR-10 dataset

Secret model Substitute model BC LC HC Lower Complexity (LC) Base Complexity (BC) Higher Complexity (HC) 78.34 80.66 74.34 76.83 81.57 79.17 74.48 81.80 78.82

(d) GTSRB dataset

Secret model Substitute model BC LC HC Lower Complexity (LC) Base Complexity (BC) Higher Complexity (HC) 95.02 90.08 80.95 92.30 91.42 86.50 86.88 91.28 84.69
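For concreteness, the following is a minimal sketch of the pool-based uncertainty strategy evaluated above (our own illustration, not the code used for these experiments; the function and variable names are assumptions): the substitute model's softmax outputs on the unlabeled thief pool are ranked by prediction entropy, and only the k most uncertain samples are sent to the secret model in that iteration.

```python
import numpy as np

def uncertainty_select(probs: np.ndarray, k: int) -> np.ndarray:
    """probs: (n_pool, n_classes) softmax outputs of the substitute model on
    the unlabeled thief pool. Returns indices of the k most uncertain samples."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)  # per-sample prediction entropy
    return np.argsort(-entropy)[:k]                         # top-k highest-entropy samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_probs = rng.dirichlet(np.ones(10), size=1000)  # stand-in for substitute-model outputs
    print(uncertainty_select(fake_probs, k=5))
```

The K-center, adversarial and combined strategies differ only in how this per-iteration selection step is performed; the surrounding query-and-retrain loop is the same.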
# 6.2 Influence of substitute model architecture

To check the influence of the architecture on the substitute model, we consider the following three options:

• Lower complexity (LC) architecture: This DNN architecture has two convolution blocks, with two repeated units each (consisting of two convolution layers, followed by a pooling layer). The convolution layers in each block have 32 and 64 filters, respectively.

• Base complexity (BC) architecture: This architecture has three convolution blocks, with three repeated units each (of the same configuration). The convolution layers in each block have 32, 64 and 128 filters, respectively. This is the architecture described in Section 5.2 and used in all the other experiments.

• Higher complexity (HC) architecture: This architecture has four convolution blocks, with two repeated units each (of the same configuration). The convolution layers in each block have 32, 64, 128 and 256 filters, respectively.

We consider all possible combinations of the above DNN architectures applied to both the secret and substitute models. The results of our experiments on the image classification tasks using all possible combinations of the above architectures as the secret and substitute model are tabulated in Table 5.

Figure 7 (panels: (a) MR dataset, (b) IMDB dataset, (c) AG News dataset, (d) QC dataset; y-axis: agreement, x-axis: iteration; strategies: Uncertainty, K-center, Random): The improvement in agreement for text classification experiments with a total budget of 20K over 10 iterations. Since random is not run iteratively, it is indicated as a line parallel to the X-axis.

As is obvious from the table, the agreements along the principal diagonal, i.e. corresponding to scenarios where the secret model and substitute model architectures are identical, are in general high. These results also corroborate the findings of [38]. We believe that the performance degradation from using a less or more complex substitute model than the secret model results from underfitting or overfitting, respectively. A less complex model may not have the required complexity to fit the constructed dataset, as it is generated by querying a more complex function. Conversely, a more complex model may readily overfit to the constructed dataset, leading to poor generalization and thus a lower agreement score. Even though the agreements are higher in general for identical complexities, they are still reasonably high even when there is a mismatch in model complexities.

Model reverse-engineering can be used to recover information about the architecture and hyperparameters of the secret model. Using this information, the adversary can then construct a substitute model that has a similar architecture and a comparable set of hyperparameters, with the hope that the trained substitute model will achieve a better agreement.

# 6.3 Text classification

In addition to the image domain, we also present the results of running our framework on datasets from the text domain. For each text dataset (described in Section 5.1), we run our framework across the following total query budgets: 10K, 15K, 20K, 25K and 30K. As it is non-trivial to modify DeepFool to work on text, we omit the strategies that make use of it. The results of our experiments on the text classification tasks are shown in Table 6.
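As a reminder of how the numbers reported in Tables 3–6 are computed, agreement is simply the fraction of samples in the secret model's test split on which the secret and substitute models predict the same label. A minimal sketch (illustrative names, not the authors' evaluation script):

```python
import numpy as np

def agreement(secret_preds: np.ndarray, substitute_preds: np.ndarray) -> float:
    """Fraction of secret-test samples on which both models predict the same label.
    Both arguments are 1-D arrays of predicted class indices."""
    assert secret_preds.shape == substitute_preds.shape
    return float(np.mean(secret_preds == substitute_preds))

if __name__ == "__main__":
    f_out = np.array([1, 2, 3, 4, 0])   # secret model predictions (example values)
    f_sub = np.array([1, 2, 0, 4, 0])   # substitute model predictions (example values)
    print(f"agreement = {agreement(f_out, f_sub):.2%}")  # 80.00%
```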
Like for the image classification tasks, for a budget of 20K, we show the agreement at the end of each iteration for every strategy and each dataset in Figure 7. Effectiveness of active learning. As in the case of images, the use of intelligent selection of thief dataset samples peforms better: there is no dataset for which the random strategy performs better than all of the other strategies. In particular, K-center and uncertainty underperform only once each. Furthermore, incremental improvement over iterations is evident in the case of text, as seen in Figure 7. Impact of the query budget. As in the case of images we observe a similar pattern in the text results where there is usually an improvement in agreement when increasing the total query budget. Effectiveness of the universal thief. Once again, all 3 experiments using the thief dataset (random included) perform significantly better than the uniform noise baseline. On an average, using the full thief dataset (89K) leads to an 13 A framework for the extraction of Deep Neural Networks by leveraging public data A PREPRINT Table 6: The agreement on the secret test set for text classification tasks. Each row corresponds to a subset selection strategy, while each column corresponds to a query budget. # (a) MR dataset # (b) IMDB dataset 10K 15K 20K 25K 30K 10K 15K 20K 25K 30K Random Uncertainty K-center 76.45 77.19 77.12 78.24 80.39 81.24 79.46 81.24 81.96 81.33 84.15 83.95 82.36 83.49 83.96 Random Uncertainty K-center 71.67 73.48 77.67 78.79 78.12 78.96 74.70 81.78 80.24 80.71 82.10 81.58 79.23 82.17 82.90 Using the full thief dataset (89K): Using discrete uniform noise samples (100K): 86.21 75.79 Using the full thief dataset (89K): Using discrete uniform noise samples (100K): 86.38 53.23 # (c) AG News dataset # (d) QC dataset 10K 15K 20K 25K 30K 10K 15K 20K 25K 30K Random Uncertainty K-center 74.51 75.47 75.87 80.39 82.08 79.63 82.76 83.47 84.21 83.97 84.96 84.97 84.20 87.04 85.96 Random Uncertainty K-center 53.00 58.60 56.80 58.00 65.20 65.60 57.20 64.40 68.60 64.40 65.60 67.40 60.40 69.20 71.80 Using the full thief dataset (89K): Using discrete uniform noise samples (100K): 90.07 35.50 Using the full thief dataset (89K): Using discrete uniform noise samples (100K): 77.80 21.60 improvement in agreement by 2.22× over the discrete uniform baseline. Even with a budget of 30K, an improvement of 2.11× is retained with active learning. In summary, our experiments on the text dataset illustrate that the framework is not restricted to image classification, but may be used with tangible benefits for other media types as well. # 7 Related work We discuss related work in three broad areas: model extraction, model reverse-engineering and active learning. # 7.1 Model extraction Attacks. Tramèr et al. [11] present the first work on model extraction, and the one that is closest to our setting of an adversary with a limited query budget. They introduce several methods for model extraction across different classes of models – starting with exact analytical solutions (where feasible) to gradient-based approximations (for shallow feedforward neural networks). However, as we demonstrated in Section 6.1, their approach of using random uniform noise as a thief dataset for DNNs fails for deeper networks. Shi et al. [39] perform model extraction by train a deep learning substitute model to approximate the functionality of a traditional machine learning secret model. 
In particular, they demonstrate their approach on naïve Bayes and SVM secret models trained to perform text classification. They also show that the reverse is not true – models of lower complexity, viz., naïve Bayes or SVM are unable to learn approximations of more complex deep learning models. Sethi and Kantardzic [40] present a Seed-Explore-Exploit framework whereby an adversary attempts to fool a security mechanism with a ML-based core, e.g., a CAPTCHA system that uses click time to determine whether the user is benign or a bot. They use model extraction to inform the generation of adversarial examples that allows the attacker to perturb inputs to bypass detection. To do this, they use the Seed-Explore-Exploit framework, which starts with a benign and malicious seed, and proceeds by using the Gram-Schmidt process to generate orthonormal samples near the mid-points of any two randomly selected seed points of opposite classes in the exploration phase. These are then used to train a substitute model, which provides useful information in the generation of adversarial examples (during the exploitation phase). Chandrasekaran et al. [41] draw parallels between model extraction and active learning. They demonstrate that query synthesis (QS) active learning can be used to steal ML models such as decision trees by generating queries de novo, independent of the original dataset distribution. They implement two QS active learning algorithms and use them to 14 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT extract binary classification models (d-dimensional halfspaces). In contrast to their approach, ours uses pool-based active learning. Shi et al. [42] make use of active learning in conjunction with problem domain data to extract a shallow feedforward neural network for text applications, when interfacing with the secret model through APIs with strict rate limits. Shi et al. [43] design an exploratory attack that uses a generative adversarial network (GAN) trained on a small number of secret dataset samples, which is then able to generate informative samples to query the secret model with. In both these works, the extracted model is then used to launch evasion attacks (i.e. finding samples which the secret model incorrectly labels) and causative attacks (i.e. exploiting classifiers trained by user feedback by intentionally providing it mislabeled data). Defenses. Quiring et al. [44] show that when the secret model is a decision tree, defenses against model watermarking can also be used as defenses for model extraction attacks. This defense is only applicable to decision trees, and does not apply to DNNs. Lee et al. [45] apply a perturbation to the predicted softmax probability scores to dissuade model extraction adversaries. Of course, such a defense would still leave the secret model vulnerable to attacks that can work with only Top-1 predictions to be returned, such as ours. Of course, we speculate that it may lead to a lower agreement in our approach if the adversary does not identify the defense ahead of time and continues to operate on the perturbed softmax outputs directly. Juuti et al. [38] design PRADA, a framework to detect model extraction attacks by computing the empirical distribution of pairwise distances between samples. They demonstrate that for natural samples (i.e. benign inputs to an MLaaS API), the distribution of pairwise distances is expected to fit a bell curve, whereas for noisy samples a peaky distribution is observed instead. 
The queries made by a client can be logged and the distribution can be analyzed to detect a potential model extraction attack. We speculate that our approach will break this defense, as the universal thief datasets that it pulls from – while not from the same domain – are indeed otherwise natural, and we expect pairwise distances between samples to fit a bell curve. Hanzlik et al. [46] design MLCapsule, a guarded offline deployment of MLaaS using Intel SGX. This allows providers of machine learning services to serve offline models with the same security guarantees that are possible for a custom server-side deployment, while having the additional benefit that the user does not have to trust the service provider with their input data. They demonstrate an implementation of PRADA [38] within MLCapsule as a defense against model extraction attacks. Xu et al. [47] obfuscate CNN models by replacing the complex CNN feature extractors with shallow, sequential convolution blocks. Networks with 10s or 100s of layers are simulated with a shallow network with 5-7 convolution layers. The obfuscated secret model is shown to be more resilient to both structure piracy (i.e. model reverse- engineering) and parameter piracy, thus dissuading model extraction attackers. We speculate that our approach will still be able to extract the model if it is given access to the obfuscated model through the same API interface. Kesarwani et al. [48] design a model extraction monitor that logs the queries made by users of a MLaaS service. They use two metrics – total information gain and coverage of the input feature space by the user’s queries – in order to detect a possible model extraction attack, while minimizing computational overhead. They demonstrate their monitor for decision tree and neural network secret models. We speculate that our approach may be detected by such a model extraction monitor, however an informed adversary could choose to tweak the active learning subset selection strategy to avoid detection by picking samples with lower information gain, and covering only a limited portion of the feature space. Applications. Papernot et al. [7] use model extraction for the generation of adversarial examples. They query a limited subset of the training data, or hand-crafted samples that resemble it, against the secret model. The resulting labels are then used to train a crude substitute model with a low test agreement. A white-box adversarial example generation technique is used to generate adversarial examples, which are then used to attack the original secret model by leveraging the transferability of adversarial examples. # 7.2 Model reverse-engineering As we show in Section 6.2, while the agreement obtained by us is respectable even when the secret model and substitute model architectures do not match, agreement is improved when they match. Thus, it is in the best interest of the adversary to try to obtain information about the secret model architecture – this is possible through model reverse-engineering. 15 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT Oh et al. [17] train a meta-model which takes as input the softmax prediction probabilities returned by the secret model and predicts, with statistically significant confidence, secret model hyperparameters such as the number of convolution layers, the filter size of CNNs, the activation function used, the amount of dropout, batch size, optimizer, etc. 
To do this, they first randomly generate and trains networks of varying complexity and queries them to create a dataset to train the meta-model on. Wang and Gong [18] estimate the regularizer scale factor λ for linear regression (ridge regression and LASSO), kernel regression (kernel ridge regression), linear classification (SVM with hinge loss, SVM with squared hinge loss, L1-regularized logistic regression and L2-regularized logistic regression) and kernel classification algorithms (kernel SVM with hinge loss, kernel SVM with squared hinge loss). Duddu et al. [19] use a timing side channel for model reverse-engineering, i.e. they use the execution time of the forward pass of the secret model, averaged across queries, to infer model architecture and hyperparameters. This information is then used to reduce the search space by querying a pretrained regressor trained to map execution time to hyperparameters (such as the number of layers). Further search is performed using a reinforcement learning algorithm that predicts the best model architecture and hyperparameters in this restricted search space. Yan et al. [20] use a similar insight that the forward pass of DNNs rely on GeMM (generalized matrix multiply) library operations. They use information from cache side channels to reverse engineer information about DNN architectures, such as the number of layers (for a fully connected network) and number of filters (for a CNN). However, such an attack cannot determine the presence and configuration of parameter-free layers such as activation and pooling layers. Hong et al. [22] present another attack using cache side channels that monitors the shared instruction cache. The attacker periodically flushes the cache lines used by the victim secret model and measures access time to the target instructions. This side channel information is then used to reconstruct the architecture, including parameter-free layers. Hu et al. [21] use bus snooping techniques (passively monitoring PCIe and memory bus events). Using this information, they first infer kernel features such as read and write data volume of memory requests. This information is then used to reconstruct the layer topology and predict the network architecture. # 7.3 Active learning There is an existing body of work on active learning, applied traditionally to classic machine learning models such as naïve Bayes and SVMs. We refer the reader to the survey by Settles [16] for details. Active learning methods engineered specifically for deep neural networks include the following: • Sener and Savarese [24] present an active learning strategy based on core-set construction. The construction of core-sets for CNNs is approximated by solving a K-center problem. The solution is further made robust by solving a mixed integer program that ensures the number of outliers does not exceed a threshold. They demonstrate significant improvements, when training deep CNNs, over earlier active learning strategies (such as uncertainty) and over a K-median baseline. • Ducoffe and Precioso [14] present a margin-based approach to active learning. The DeepFool [6] method for generation of adversarial examples is used to generate samples close to the decision boundary by perturbing an input image until the class predicted by the image classification model changes. They demonstrate that their method is competitive to that of [24] for image classification tasks on CNNs, while significantly outperforming classical methods (such as uncertainty). 
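As a rough illustration of the core-set idea of Sener and Savarese [24] — a simplification under our own assumptions, not their implementation — the greedy K-center rule repeatedly adds the pool point whose embedding is farthest from all points selected so far:

```python
import numpy as np

def k_center_greedy(features: np.ndarray, selected: list, k: int) -> list:
    """features: (n, d) embeddings of the unlabeled pool; selected: indices of points
    already labeled; returns k new indices chosen greedily (farthest-first traversal)."""
    # distance of every pool point to its nearest already-selected center
    min_dist = np.full(features.shape[0], np.inf)
    for s in selected:
        min_dist = np.minimum(min_dist, np.linalg.norm(features - features[s], axis=1))
    new_points = []
    for _ in range(k):
        idx = int(np.argmax(min_dist))          # farthest point from all current centers
        new_points.append(idx)
        min_dist = np.minimum(min_dist, np.linalg.norm(features - features[idx], axis=1))
    return new_points

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = rng.normal(size=(500, 32))           # stand-in for feature embeddings
    print(k_center_greedy(pool, selected=[0], k=5))
```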
# 8 Conclusion In this paper, we introduce three criteria for practical model extraction. Our primary contribution is a novel framework that makes careful use of unlabeled public data and active learning to satisfy these criteria. We demonstrate the effectiveness of our framework by successfully applying it to a diverse set of datasets. Our framework is able to extract DNNs with high test agreement and on a limited query budget, using only a fraction (10-30%) of the data available to it. Future work on developing this method of attack includes the development of better active learning strategies and the exploration of other novel combinations of existing active learning strategies. 16 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT # Acknowledgement We would like to thank Somesh Jha for his helpful inputs. We thank NVIDIA for providing us computational resources, and Sonata Software Ltd. for partially funding this work. # References [1] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. [2] Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems, 2019. [3] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (S&P), pages 39–57. IEEE, 2017. [4] Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 2019. [5] Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372–387. IEEE, 2016. [6] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2574–2582, 2016. [7] Nicolas Papernot, Patrick D. McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In AsiaCCS, 2017. [8] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 427–436, 2015. [9] Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence infor- mation and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 1322–1333. ACM, 2015. [10] Osbert Bastani, Carolyn Kim, and Hamsa Bastani. Interpretability via model extraction. 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning, 2017. [11] Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction APIs. In USENIX Security Symposium, 2016. [12] Jacson Rodrigues Correia-Silva, Rodrigo F. Berriel, Claudine Badue, Alberto F. de Souza, and Thiago Oliveira- Santos. Copycat CNN: Stealing knowledge by persuading confession with random non-labeled data. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–8, July 2018. [13] Rodrigo F. 
Berriel, André Teixeira Lopes, Alberto F. de Souza, and Thiago Oliveira-Santos. Deep learning-based large-scale automatic satellite crosswalk classification. IEEE Geoscience and Remote Sensing Letters, 14(9): 1513–1517, Sep. 2017. ISSN 1545-598X. doi: 10.1109/LGRS.2017.2719863. [14] Melanie Ducoffe and Frédéric Precioso. Adversarial active learning for deep networks: a margin based approach. CoRR, abs/1802.09841, 2018. URL http://arxiv.org/abs/1802.09841. [15] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436, 2015. [16] Burr Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009. [17] Seong Joon Oh, Max Augustin, Mario Fritz, and Bernt Schiele. Towards reverse-engineering black-box neural networks. In ICLR, 2018. URL https://openreview.net/forum?id=BydjJte0-. [18] Binghui Wang and Neil Zhenqiang Gong. Stealing hyperparameters in machine learning. 2018 IEEE Symposium on Security and Privacy (S&P), pages 36–52, 2018. [19] Vasisht Duddu, Debasis Samanta, D. Vijay Rao, and Valentina E. Balas. Stealing neural networks via timing side channels. CoRR, abs/1812.11720, 2018. URL http://arxiv.org/abs/1812.11720. [20] Mengjia Yan, Christopher Fletcher, and Josep Torrellas. Cache telepathy: Leveraging shared resource attacks to learn DNN architectures. arXiv preprint arXiv:1808.04761, 2018. 17 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT [21] Xing Hu, Ling Liang, Lei Deng, Shuangchen Li, Xinfeng Xie, Yu Ji, Yufei Ding, Chang Liu, Timothy Sherwood, and Yuan Xie. Neural network model extraction attacks in edge devices by hearing architectural hints. arXiv preprint arXiv:1903.03916, 2019. [22] Sanghyun Hong, Michael Davinroy, Yiˇgitcan Kaya, Stuart Nevans Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, and Tudor Dumitra¸s. Security analysis of deep neural networks operating in the presence of cache side-channel attacks. arXiv preprint arXiv:1810.03487, 2018. [23] David D. Lewis and William A. Gale. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3–12, 1994. URL http://dl.acm.org/citation.cfm?id=188490.188495. [24] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. In ICLR, 2018. URL https://openreview.net/forum?id=H1aIuk-RW. [25] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [26] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. CoRR, abs/1708.07747, 2017. URL http://arxiv.org/abs/1708.07747. [27] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. [28] Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 32:323–332, 2012. [29] Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the ACL, 2005. [30] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learn- ing word vectors for sentiment analysis. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 142–150, 2011. [31] Xin Li and Dan Roth. Learning question classifiers. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, pages 1–7, 2002. [32] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. [33] Patryk Chrabaszcz, Ilya Loshchilov, and Frank Hutter. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. CoRR, abs/1707.08819, 2017. URL http://arxiv.org/abs/1707.08819. [34] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In ICLR, 2017. URL https://openreview.net/forum?id=Byj72udxe. [35] Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014. [36] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In ICLR, 2013. URL https://openreview.net/forum?id=idpCdOWtqXd60. [37] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. URL https: //arxiv.org/abs/1412.6980. [38] Mika Juuti, Sebastian Szyller, Alexey Dmitrenko, Samuel Marchal, and N. Asokan. PRADA: Protecting against DNN model stealing attacks. In 2019 IEEE European Symposium on Security and Privacy (EuroS&P), 2019. [39] Yi Shi, Yalin E. Sagduyu, and Alexander Grushin. How to steal a machine learning classifier with deep learning. 2017 IEEE International Symposium on Technologies for Homeland Security (HST), pages 1–5, 2017. [40] Tegjyot Singh Sethi and Mehmed M. Kantardzic. Data driven exploratory attacks on black box classifiers in adversarial domains. Neurocomputing, 289:129–143, 2018. [41] Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Somesh Jha, and Songbai Yan. Exploring connections between active learning and model extraction. CoRR, abs/1811.02054, 2018. URL http://arxiv. org/abs/1811.02054. [42] Yi Shi, Yalin E. Sagduyu, Kemal Davaslioglu, and Jason H. Li. Active deep learning attacks under strict rate limitations for online API calls. In 2018 IEEE International Symposium on Technologies for Homeland Security (HST), pages 1–6, Oct 2018. doi: 10.1109/THS.2018.8574124. 18 A framework for the extraction of Deep Neural Networks by leveraging public data # A PREPRINT [43] Yi Shi, Yalin E. Sagduyu, Kemal Davaslioglu, and Jason H. Li. Generative adversarial networks for black-box API attacks with limited training data. In 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pages 453–458, Dec 2018. doi: 10.1109/ISSPIT.2018.8642683. [44] Erwin Quiring, Daniel Arp, and Konrad Rieck. Fraternal twins: Unifying attacks on machine learning and digital watermarking. arXiv preprint arXiv:1703.05561, 2017. [45] Taesung Lee, Benjamin Edwards, Ian Molloy, and Dong Su. Defending against model stealing attacks using deceptive perturbations. CoRR, abs/1806.00054, 2018. URL http://arxiv.org/abs/1806.00054. [46] Lucjan Hanzlik, Yang Zhang, Kathrin Grosse, Ahmed Salem, Max Augustin, Michael Backes, and Mario Fritz. MLCapsule: Guarded offline deployment of machine learning as a service. CoRR, abs/1808.00590, 2018. URL http://arxiv.org/abs/1808.00590. 
[47] Hui Xu, Yuxin Su, Zirui Zhao, Yangfan Zhou, Michael R Lyu, and Irwin King. DeepObfuscation: Securing the structure of convolutional neural networks via knowledge distillation. arXiv preprint arXiv:1806.10313, 2018. [48] Manish Kesarwani, Bhaskar Mukhoty, Vijay Arya, and Sameep Mehta. Model extraction warning in MLaaS paradigm. In Proceedings of the 34th Annual Computer Security Applications Conference, pages 371–380. ACM, 2018. 19
{ "id": "1810.03487" }
1905.08232
Adversarially robust transfer learning
Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learning, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain robust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of fine tuning a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the generalization of adversarially trained models, while maintaining their robustness.
http://arxiv.org/pdf/1905.08232
Ali Shafahi, Parsa Saadatpanah, Chen Zhu, Amin Ghiasi, Christoph Studer, David Jacobs, Tom Goldstein
cs.LG, cs.CR, cs.CV, stat.ML
null
null
cs.LG
20190520
20200221
0 2 0 2 b e F 1 2 ] G L . s c [ 2 v 2 3 2 8 0 . 5 0 9 1 : v i X r a Published as a conference paper at ICLR 2020 # ADVERSARIALLY ROBUST TRANSFER LEARNING Ali Shafahi∗†, Parsa Saadatpanah∗†, Chen Zhu∗†, Amin Ghiasi†, Cristoph Studer‡, ashafahi,parsa,chenzhu,amin { David Jacobs†, Tom Goldstein† djacobs,tomg { @cs.umd.edu ; [email protected] } # @cs.umd.edu } # ABSTRACT Transfer learning, in which a network is trained on one task and re-purposed on another, is often used to produce neural network classifiers when data is scarce or full-scale training is too costly. When the goal is to produce a model that is not only accurate but also adversarially robust, data scarcity and computational limitations become even more cumbersome. We consider robust transfer learn- ing, in which we transfer not only performance but also robustness from a source model to a target domain. We start by observing that robust networks contain ro- bust feature extractors. By training classifiers on top of these feature extractors, we produce new models that inherit the robustness of their parent networks. We then consider the case of “fine tuning” a network by re-training end-to-end in the target domain. When using lifelong learning strategies, this process preserves the robustness of the source network while achieving high accuracy. By using such strategies, it is possible to produce accurate and robust models with little data, and without the cost of adversarial training. Additionally, we can improve the gener- alization of adversarially trained models, while maintaining their robustness. # INTRODUCTION Deep neural networks achieve human-like accuracy on a range of tasks when sufficient training data and computing power is available. However, when large datasets are unavailable for training, or pracitioners require a low-cost training strategy, transfer learning methods are often used. This process starts with a source network (pre-trained on a task for which large datasets are available), which is then re-purposed to act on the target problem, usually with minimal re-training on a small dataset (Yosinski et al., 2014; Pan & Yang, 2009). While transfer learning greatly accelerates the training pipeline and reduces data requirements in the target domain, it does not address the important issue of model robustness. It is well-known that naturally trained models often completely fail under adversarial inputs (Biggio et al., 2013; Szegedy et al., 2013). As a result, researchers and practitioners often resort to adversarial training, in which adversarial examples are crafted on-the-fly during network training and injected into the training set. This process greatly exacerbates the problems that transfer learning seeks to avoid. The high cost of creating adversarial examples increases training time (often by an order of magnitude or more). Furthermore, robustness is known to suffer when training on a small dataset (Schmidt et al., 2018). To make things worse, high-capacity models are often needed to achieve good robustness (Madry et al., 2017; Kurakin et al., 2016; Shafahi et al., 2019b), but these models may over-fit badly on small datasets. CONTRIBUTIONS The purpose of this paper is to study the adversarial robustness of models produced by transfer learning. We begin by observing that robust networks contain robust feature extractors, which are resistant to adversarial perturbations in different domains. 
# ∗equal contribution †University of Maryland ‡Cornell University

Such robust features can be used as a basis for semi-supervised transfer learning, which only requires re-training the last layer of a network. To demonstrate the power of robust transfer learning, we transfer a robust ImageNet source model onto the CIFAR domain, achieving both high accuracy and robustness in the new domain without adversarial training. We use visualization methods to explore properties of robust feature extractors. Then, we consider the case of transfer learning by “fine-tuning.” In this case, the source network is re-trained end-to-end using a small number of epochs on the target domain. Unfortunately, this end-to-end process does not always retain the robustness of the source domain; the network “forgets” the robust feature representations learned on the source task. To address this problem, we use recently proposed lifelong learning methods that prevent the network from forgetting the robustness it once learned. Using our proposed methods, we construct robust models that generalize well. In particular, we improve the generalization of a robust CIFAR-100 model by roughly 2% while preserving its robustness.

# 2 BACKGROUND

Adversarial examples fall within the category of evasion attacks—test-time attacks in which a perturbation is added to a natural image before inference. Adversarial attacks are most often crafted using a differentiable loss function that measures the performance of a classifier on a chosen image. In the case of norm-constrained attacks (which form the basis of most standard benchmark problems), the adversary solves

max_δ l(x + δ, y, θ)   s.t.   ||δ||_p ≤ ε,   (1)

where θ are the (already trained and frozen) parameters of the classifier c(x, θ) that maps an image to a class, l is the proxy loss used for classification (often cross-entropy), δ is the image perturbation, (x, y) is the natural image and its true class, and ||·||_p is some norm.1 The optimization problem in Eq. 1 aims to find a bounded perturbation that maximizes the cross-entropy loss given the correct label. There are many variants of this process, including DeepFool (Moosavi-Dezfooli et al., 2016), L-BFGS (Szegedy et al., 2013), and CW (Carlini & Wagner, 2017).

Many researchers have studied methods for building a robust network which have been later shown to be ineffective when attacked with stronger adversaries (Athalye et al., 2018). Adversarial training (Szegedy et al., 2013) is one of the defenses that was not broken by Athalye et al. (2018). While adversarial training using a weak adversary such as the FGSM attack (Goodfellow et al., 2015) can be broken even by single step attacks which add a simple random step prior to the FGSM step (Tramèr et al., 2017), adversarial training using a strong attack has successfully improved robustness. Madry et al. (2017) showed that a PGD attack (which is a BIM attack (Kurakin et al., 2016) with an initial random step and projection) is a strong enough attack to achieve promising adversarial training results. We will refer to this training method as PGD adversarial training. PGD adversarial training achieves good robustness on bounded attacks for MNIST (LeCun et al., 1998) and acceptable robustness on CIFAR-10 (Krizhevsky & Hinton, 2009) classifiers. Tsipras et al. (2018) show that adversarial training with strong PGD adversaries has many benefits in addition to robustness.
They also state that while adversarial training may improve generalization in regimes where training data is limited (especially on MNIST), it may be at odds with generalization in regimes where data is available. This trade-off was also recently studied by Zhang et al. (2019), Su et al. (2018), and Shafahi et al. (2019a). While, to the best of our knowlegde, the transferability of robustness has not been studied in depth, Hendrycks et al. (2019) studied the case of adversarially training models that were pre-trained on different domains. Our work is fundamentally different in that we seek to transfer robustness with- out resorting to costly and data-hungry adversarial training. We train the target model on natural examples only, which allows us to directly study how well robustness transfers. Additionally, this allows us to have better generalization and achieve higher accuracy on validation examples. While as Hendrycks et al. (2019) state, fine-tuning on adversarial examples built for the target domain can improve robustness of relatively large datasets such as CIFAR-10 and CIFAR-100 compared to adversarial training from scratch on the target domain, we show that in the regimes of limited data (where transfer learning is more common), adversarially robust transfer learning can lead to better results measured in terms of both robustness and clean validation accuracy. 'By default we will use the @..-norm in this paper. 2 Published as a conference paper at ICLR 2020 Table 1: Accuracy and robustness of natural and adversarially trained models on CIFAR-10+ and CIFAR-100+. The “+” sign denotes standard data augmentation (€ = 8). Dataset CIFAR-10+ CIFAR-100+ model natural robust natural robust validation accuracy 95.01% 87.25% 78.84% 59.87% accuracy on PGD-20 0.00% 45.84% 0.00% 22.76% accuracy on CW-20 0.00% 46.96% 0.00% 23.16% # 3 THE ROBUSTNESS OF DEEP FEATURES In this section, we explore the robustness of different network layers, and demonstrate that robust networks rely on robust deep features. To do so, we start from robust classifiers (c(θr)) for the CIFAR-100 and CIFAR-10 datasets (Krizhevsky & Hinton, 2009), and update θ by training on natural examples. In each experiment, we re-initialize the last k layers/blocks of the network, and re-train just those layers. We start by re-initializing just the last layer, then the last two, and so on until we re-initialize all the layers. We use the adversarially trained Wide-ResNet 32-10 (Zagoruyko & Komodakis, 2016) for CIFAR- 10 from Madry et al. (2017) as our robust model for CIFAR-10. We also adversarially train our own robust classifier for CIFAR-100 using the code from Madry et al. (2017). To keep things consistent, we use the same hyper-parameters used by Madry et al. (2017) for adversarially training CIFAR-10 to adversarially train the CIFAR-100 model.2 The performance of the CIFAR-10 and CIFAR-100 models on natural and adversarial examples are summarized in Table 1. To measure robustness, we evaluate the models on adversarial examples built using PGD attacks. We break the WRN 32-10 model into 17 blocks, which are depicted in Fig. 2] In each experiment, we first re-initialize the k deepest blocks (blocks 1 through k) and then train the parameters of those blocks on natural imageq?| We train for 20,000 iterations using Momentum SGD and a learning rate of 0.001. We then incrementally unfreeze and train more blocks. 
For each experiment, we evaluate the newly trained model’s accuracy on validation adversarial examples built with a 20-step PGD 0, attack with « = 8. Fig. 1 shows that robustness does not drop if only the final layers of the networks are re-trained on natural examples. In fact, there is a slight increase in robustness compared to the baseline PGD- 7 adversarially trained models when we just retrain the last batch-normalization block and fully connected block. As we unfreeze and train more blocks, the network’s robustness suddenly drops. This leads us to believe that a hardened network’s robustness is mainly due to robust deep feature representations and robustness is preserved if we re-train on top of deep features. # image ‘Be cony, 160 ‘Be cony, 160 ‘xa con, 160 Â¥ ‘3x3 conv, 160 ‘3x3 conv, 160 Â¥ 3S conv 160 ‘Bea conv, 320 Â¥ BS conv 320 ‘BxS conv, 320 Â¥ ‘xa conv, 320 “Bea conv, 640 Â¥ BS conv 640 BS conv, 640 Â¥ BB conv 640 batch norm Â¥ avg poo! -- Â¥ FC ‘xa conv, 320 Â¥ ‘3x3 conv, 320 3x8 conv, 640 ‘3x3 conv, 160 - ‘Bea com, 160 Be conv, 160 3x3 conv, 640 Ba conv, 640 Â¥ BS conv 640 block 12 blocks blocks Figure 2: Wide Resnet 32-10 and the blocks used for freezing/retraining Now that we have identified feature extractors as a source of robustness, it is natural to investigate whether robustness is preserved when transfer learning using robust feature extractors. We will We adv. train the WRN 32-10 on CIFAR-100 using a 7-step foc PGD attack with step-size=2 and € = 8. We train for 80,000 iterations with a batch-size of 128. 3In this experiment, we use standard data augmentation techniques. 3 Published as a conference paper at ICLR 2020 (a) CIFAR-10 PGD-20 accuracy (b) CIFAR-100 PGD-20 accuracy Figure 1: Robustness is preserved when we retrain only the deepest block(s) of robust CIFAR- 10 and CIFAR-100 models using natural examples. The vertical axis is the accuracy on PGD-20 generated adversarial examples (i.e. robustness) after re-training deep layers. The robustness of the adversarially trained models if all layers are frozen are shown with dashed lines. study two different approaches for transferring robustness across datasets: one in which only the last layer is re-trained, and one with end-to-end re-training. # 4 TRANSFER LEARNING: RECYCLING FEATURE EXTRACTORS We study how robustness transfers when the feature extractor layers of the source network are frozen, and we retrain only the last fully connected layer (i.e. the classification layer) for the new task. Formally, the transfer learning objective is: min w l(z(x, θ∗), y, w) (2) where z is the deep feature extractor function with pre-trained and now “frozen” parameters θ∗, and w represents the trainable parameters of the last fully connected layer. To investigate how well robustness transfers, we use two source models: one that is hardened by adversarial training and another that is naturally trained. We use models trained on CIFAR-100 as source models and perform transfer learning from CIFAR- 100 to CIFAR-10. The results are summarized in Table 2. Compared to adversarial/natural training the target model, transferring from a source model seems to result in a drop in natural accuracy (compare first row of Table 1 to the first row of Table 2). This difference is wider when the source and target data distributions are dissimilar (Yosinski et al., 2014). 
To evaluate our method on two datasets with more similar attributes, we randomly partition CIFAR- 100 into two disjoint subsets where each subset contains images corresponding to 50 classes. Table 2 shows the accuracy of transferring from one of the disjoint sets to the other (second row) and to the same set (third row). We can compare results of transfer learning with adversarial training on CIFAR-100 by averaging the results in the second and third rows of Table 2 to get the accuracy across all 100 classes of CIFAR-100.4 By doing so, we see that the accuracy of the transferred classifier matches that of the adversarially trained one, even though no adversarial training took place in the target domain. For completeness, we have also included experiments where we use CIFAR-10 as the source and CIFAR-100 as the target domain. We make the following observations from the transfer-learning results in Table 2. 1) robustness transfers: when the source model used for transfer learning is robust, the target model is also robust (although less so than the source), 2) robustness transfers between models that are more similar: If 4The robust CIFAR-100 classifier has 59.87% validation accuracy and 22.76% accuracy on PGD-20 adver- sarial examples. The average validation accuracy of the two half-CIFAR-100 classifiers on validation examples is 64.96%+58.48% 2 4 Published as a conference paper at ICLR 2020 Table 2: Transfer learning by freezing the feature extractor layers (€ = 8). Source Dataset CIFAR-100 CIFAR-100 (50% of classes) CIFAR-100 (50% of classes) CIFAR-10 Target Dataset CIFAR-10 CIFAR-100 (other 50% of classes) CIFAR-100 (same 50% of classes) CIFAR-100 Source Model natural robust natural robust natural robust natural robust PGD-20 CW-20 83.05% 0.00% 0.00% 72.05% 17.70% 17.43% 71.44% 0.00% 0.00% 58.48% 15.86% 15.30% 80.20% 0.00% 0.00% 64.96% 25.16% 25.56% 0.00% 49.66% 0.00% 41.59% 11.63% 9.68% val. Table 3: Transfer learning from ImageNet (€ = 5). Architecture and Source Dataset | Target Dataset | Source Model val. PGD-20 | CW-20 natural 90.49% | 0.01% 0.00% CIFAR-10+ | robust (« = 5) | 88.33% | 22.66% | 26.01% ResNet-50 ImageNet natural | 72.84% | 0.05% | 0.00% CIFAR-100+ | robust (« = 5) | 68.88% | 15.21% | 18.34% Robust u-ResNet-50 (€ = 5) for CIFAR-10+ 82.00% | 53.11% Robust u-ResNet-50 (€ = 5) for CIFAR-100+ 59.90% | 29.54% the source and target models are trained on datasets which have similar distributions (and number of classes), robustness transfers better, and 3) validation accuracy is worst if we use a robust model as the source compared to using a conventionally trained source model: if the source model is naturally trained, the natural validation accuracy is better, although the target model is then vulnerable to adversarial perturbations. 4.1 TRANSFER LEARNING WITH IMAGENET MODELS Transfer learning using models trained on ImageNet (Russakovsky et al., 2015) as the source is a common practice in industry because ImageNet feature extractors are powerful and expressive. In this section we evaluate how well robustness transfers from these models. 4.1.1 TRANSFER LEARNING USING IMAGENET Starting from both a natural and robust ImageNet model, we perform the same set of experiments we did in section|4} Robust ImageNet models do not withstand untargeted ¢,, attacks using as large an ¢ as those that can be used for simpler datasets like CIFAR. Following the method |Shafahi et al.| (2019b), we “free train” a robust ResNet-50 on ImageNet using replay hyper-parameter m = 4. 
The hardened ImageNet classifier withstands attacks bounded by « = 5. Our robust ImageNet achieves 59.05% top-1 accuracy and roughly 27% accuracy against PGD-20 ¢,, € = 5 attacks on validation examples. We experiment with using this robust ImageNet model and a conventionally trained ResNet-50 ImageNet model as the source models. Using the ImageNet source models, we train CIFAR classifiers by retraining the last layer on natural CIFAR examples. We up-sample the 32 224 before feeding them into the ResNet-50 source models that are trained on ImageNet. For evaluation purposes, we also train robust ResNet-50 models from scratch using (Shafahi et al., 2019b) for the CIFAR models. To ensure that the transfer learning models and the end-to-end trained robust models have the same capacity and dimensionality, we first upsample the CIFAR images before feeding them to the ResNet-50 model. To distinguish between the common case of training ResNet models on CIFAR images that are 32 32-dimensional, we call our models that are trained on the upsampled CIFAR datasets the upsample-first ResNets or “u-ResNets”. Table 3 illustrates that using a robust ImageNet model as a source results in high validation accuracy for the transferred CIFAR target models. Also, given that the ImageNet classifier by itself is 27% robust, the CIFAR-10 model maintains the majority of that 27% robustness. When we compare the end-to-end hardened classifiers (robust u-ResNets) with the transferred classifier, we can see that 5 Published as a conference paper at ICLR 2020 (a) Clean validation (b) Adversarial validation (c) Average validation Figure 3: When the number of training data per-class is very limited (right bars), adversarially robust transfer learning [Transferred] is better in all metrics. However, as the number of training data increases (left bars), fine-tuning with adversarial examples of the target domain [Fine-tuned with AT] results in more robustness. Adversarially robust transfer learning always results in models that work better on natural examples and is 3 faster than fine-tuning with adversarial examples of the target domain. Using a pre-trained robust ImageNet improves both robustness and generalization. while the robustness is less for the transferred case, transferred models result in considerably better performance on clean validation examples. # 4.2 LOW-DATA REGIME As touched on before, transfer learning is more common in situations where the number of training points in the target domain is limited. Up until now, as a proof of concept, we have illustrated the majority of our experiments on the CIFAR target domains where we have many training points per- class. Hendrycks et al. (2019) show that starting from a pre-trained robust ImageNet model and fine-tuning on adversarial examples of the CIFAR domain can improve robustness beyond that of simply adversarial training CIFAR. Here, we illustrate the effect of training data size on robustness and natural performance by running various experiments on subsets of CIFAR-100 where we vary the number of training points per-class (N ). We compare three different hardening methods: (1) Free-training/adversarial training the target do- main (Shafahi et al., 2019b); (2) fine-tuning using adversarial examples of the target task starting from the Free-4 robust ImageNet model similar to (Hendrycks et al., 2019); and (3) training a fully connected layer on top of the frozen feature extractors of the Free-4 robust ImageNet model using natural examples from the target task. 
For comparing the three different approaches, we look at three metrics: (a) clean validation accuracy; (b) robustness against PGD-20 validation adversarial exam- ples; and (c) average of robustness and clean performance (((a)+(b))/2.) The results are summarized in Fig. 3. In the regimes where transfer learning is more common, adversarially robust transfer learn- ing results in the best overall performance. Adversarially/Free training the target domain results in less robustness and validation accuracy compared to fine-tuning which highlights the importance of pre-training (Hendrycks et al., 2019). Note that in terms of computational resources required, the cost of fine-tuning on adversarial examples of the target domain is about k our method since it requires generation of adversarial examples using k-step PGD attacks (we set k = 3). 4.2.1 TRAINING DEEPER NETWORKS ON TOP OF ROBUST FEATURE EXTRACTORS The basic transfer learning setting of section 4.1.1 only re-trains one layer for the new task. In section 4.1.1, when we transferred from the robust ImageNet to CIFAR-100, the natural train- ing accuracy was 88.84%. Given the small number of trainable parameters left for the network ( 100) and the fixed feature extractor, the network was not capable of completely fitting ≈ the training data. This means that there is potential to improve natural accuracy by learning more complex non-linear features and increasing the number of trainable parameters. To increase representation capacity and the number of trainable parameters, instead of training a 1- layer network on top of the feature extractor, we train a multi-layer perceptron (MLP) network on top of the robust feature extractor. To keep things simple and prevent bottle-necking, every hidden layer we add has 2048 neurons. We plot the training and validation accuracies on the natural examples and 6 Published as a conference paper at ICLR 2020 the robustness (i.e. PGD-20 validation accuracy) in Fig. 4 for various numbers of hidden layers. As can be seen, adding one layer is enough to achieve 100% training accuracy. However, doing so does not result in an increase in validation accuracy. To the contrary, adding more layers can result in a slight drop in validation accuracy due to overfitting. As illustrated, we can improve generalization using simple but effective methods such as dropout (Srivastava et al., 2014) (with probability 0.25) and batch-normalization (Ioffe & Szegedy, 2015). However, the most interesting behavior we observe in this experiment is that, as we increase the number of hidden layers, the robustness to PGD-20 attacks improves. Note, this seems to happen even when we transfer from a naturally trained ImageNet model. While for the case where we have no hidden layers, robustness is 0.00% on CIFAR100 when we use a naturally trained ImageNet model as source, if our MLP has 1, 2, 3, or 5 hidden layers, our robustness against PGD attacks would be 0.03%, 0.09%, 0.31% and 6.61%, respectively. This leads us to suspect that this behavior may be an artifact of vanishing gradients for adversary as the softmax loss saturates when the data is fit perfectly (Athalye et al., 2018). Therefore, for this case we change our robustness measure and use the CW attack (Carlini & Wagner, 2017) which will encounter fewer numerical issues because its loss function does not have a softmax component and does not saturate. Attacking the model from the natural source with CW-20 completely breaks the model and achieves 0.00% robustness. 
4.2.1 TRAINING DEEPER NETWORKS ON TOP OF ROBUST FEATURE EXTRACTORS

The basic transfer learning setting of section 4.1.1 only re-trains one layer for the new task. In section 4.1.1, when we transferred from the robust ImageNet to CIFAR-100, the natural training accuracy was 88.84%. Given the small number of trainable parameters left for the network (≈ 2048 × 100, the weights of a single linear layer over 100 classes) and the fixed feature extractor, the network was not capable of completely fitting the training data. This means that there is potential to improve natural accuracy by learning more complex non-linear features and increasing the number of trainable parameters.

To increase representation capacity and the number of trainable parameters, instead of training a 1-layer network on top of the feature extractor, we train a multi-layer perceptron (MLP) network on top of the robust feature extractor. To keep things simple and prevent bottle-necking, every hidden layer we add has 2048 neurons. We plot the training and validation accuracies on the natural examples and the robustness (i.e. PGD-20 validation accuracy) in Fig. 4 for various numbers of hidden layers. As can be seen, adding one layer is enough to achieve 100% training accuracy. However, doing so does not result in an increase in validation accuracy. To the contrary, adding more layers can result in a slight drop in validation accuracy due to overfitting. As illustrated, we can improve generalization using simple but effective methods such as dropout (Srivastava et al., 2014) (with probability 0.25) and batch-normalization (Ioffe & Szegedy, 2015).

However, the most interesting behavior we observe in this experiment is that, as we increase the number of hidden layers, the robustness to PGD-20 attacks improves. Note that this seems to happen even when we transfer from a naturally trained ImageNet model. While for the case where we have no hidden layers, robustness is 0.00% on CIFAR-100 when we use a naturally trained ImageNet model as source, if our MLP has 1, 2, 3, or 5 hidden layers, our robustness against PGD attacks would be 0.03%, 0.09%, 0.31% and 6.61%, respectively. This leads us to suspect that this behavior may be an artifact of vanishing gradients for the adversary as the softmax loss saturates when the data is fit perfectly (Athalye et al., 2018). Therefore, for this case we change our robustness measure and use the CW attack (Carlini & Wagner, 2017), which will encounter fewer numerical issues because its loss function does not have a softmax component and does not saturate. Attacking the model from the natural source with CW-20 completely breaks the model and achieves 0.00% robustness. Most interestingly, attacking the model transferred from a robust source using the CW objective maintains robustness even when the number of hidden layers increases.

Figure 4: Training an MLP for CIFAR-100 on top of the robust feature extractors from ImageNet. The x-axis corresponds to the number of hidden layers (0 is a linear classifier and corresponds to the experiments in section 4.1.1). Robustness stems from robust feature extractors: adding more layers on top of this extractor does not hurt robustness. Interestingly, simply adding more layers does not improve the validation accuracy and just results in more overfitting (i.e. training accuracy becomes 100%). We can slightly improve generalization using batch norm (BN) and dropout (DO).

# 5 ANALYSIS: ROBUST FEATURE EXTRACTORS ARE FILTERS

Our experiments suggest that the robustness of neural networks arises in large part from the presence of robust feature extractors. We have used this observation to transfer both robustness and accuracy between domains using transfer learning. However, we have not yet fully delved into what it means to have a robust feature extractor. Through visualizations, Tsipras et al. (2018) studied how adversarial training causes the image gradients of neural networks to exhibit meaningful generative behavior. In other words, adversarial perturbations on hardened networks "look like" the class into which the image is perturbed. Given that optimization-based attacks build adversarial examples using the image gradient, we also visualize the image gradients of our transferred models to see if they exhibit the same generative behavior as adversarially trained nets.

Fig. 5 plots the gradient of the loss w.r.t. the input image for models obtained by re-training only the last layer, and also for the case where we train MLPs on top of a robust feature extractor. The gradients for the transfer-learned models with a robust source are interpretable and "look like" the adversarial object class, while the gradients of models transferred from a natural source do not. This interpretability comes despite the fact that the source model was hardened against attacks on one dataset, and the transferred model is being tested on object classes from another. Also, we see that adding more layers on top of the feature extractor, which often leads to over-fitting, does not make gradients less interpretable. This latter observation is consistent with our observation that added layers preserve robustness (Fig. 4). These observations, together with the success of robust transfer learning, lead us to speculate that a robust model's feature extractors act as a "filter" that ignores irrelevant parts of the image.

Figure 5: Gradients of the loss w.r.t. input images for the CIFAR-100 transfer learning experiments of sections 4.1.1 & 4.2.1. The top row contains sample CIFAR-100 images. Other rows contain image gradients of the model loss. The second row is for a model transferred from a naturally trained ImageNet source. Rows 3-5 are for models transferred from a robust ImageNet source. These rows correspond to an MLP with 0 (row 3), 1 (row 4), and 2 (row 5) hidden layers on top of the robust feature extractor. The gradients in the last three rows all show interpretable generative behavior.
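As a rough illustration of how such gradient visualizations can be produced, the snippet below computes the loss gradient with respect to an input image and rescales it for display. The normalization convention and names are our own assumptions; the authors' plotting code may differ.

```python
# Minimal sketch (assumptions noted above): visualize the loss gradient
# w.r.t. an input image, as in the qualitative analysis of Section 5.
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return grad

def to_displayable(grad):
    # Rescale each gradient image to [0, 1] for display; one common convention.
    g = grad - grad.amin(dim=(1, 2, 3), keepdim=True)
    return g / (g.amax(dim=(1, 2, 3), keepdim=True) + 1e-12)
```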
# 6 END-TO-END TRAINING WITHOUT FORGETTING

As discussed in section 4, transfer learning can preserve the robustness of the robust source model. However, it comes at the cost of decreased validation accuracy on natural examples compared to the case where we use a naturally trained source model. Consequently, there seems to be a trade-off between generalization and robustness based on the choice of the source model. For any given classifier, the trade-off between generalization and robustness is the subject of recent research (Tsipras et al., 2018; Zhang et al., 2019; Shafahi et al., 2019a). In this section, we intend to improve the overall performance of classifiers transferred from a robust source model by improving their generalization on natural images. To do so, unlike previous sections where we froze the feature extractor mainly to preserve robustness, we fine-tune the feature extractor parameters θ. Ideally, we should learn to perform well on the target dataset without catastrophically forgetting the robustness of the source model. To achieve this, we utilize lifelong learning methods.

Learning without Forgetting (LwF) (Li & Hoiem, 2018) is a method for overcoming catastrophic forgetting. The method is based on distillation. In this framework, we train the target model with a loss that includes a distillation term from the previous model:

    min_{w, θ}  ℓ(z(x, θ), y, w) + λd · d(z(x, θ), z0(x, θ∗r))        (3)

where, in our method, λd is the feature-representation similarity penalty, and d is some distance metric between the robust model's feature representations z0(x, θ∗r) and the current model's feature representations z(x, θ). Unlike the original LwF paper, which used the distillation loss from Hinton et al. (2015) and applied it to the logits, we simply choose d to be the ℓ2-norm and apply the distillation to the penultimate layer.5 Our loss is designed to make the feature representations of the source and target network similar, thus preserving the robust feature representations (Fig. 6).

5 We do so since in section 3 we found the source of robustness to be the feature extractors, and this observation was later reinforced by the empirical results in section 4.

Figure 6: Our LwF loss has a term that enforces the similarity of feature representations (i.e. penultimate layer activations) between the source model and the fine-tuned model.

Ideally, z(x, θ) stays close to z0(x, θ∗r). In practice, we store z0(x, θ∗r) for the images of the target task and load it from memory (i.e., offline) instead of performing a forward pass through the robust source network online. Therefore, in the experiments related to LwF, we do not train with data augmentation, because we have not pre-computed z0(xa, θ∗r) for the augmented images xa, and the distance between z0(x, θ∗r) and z0(xa, θ∗r) was not negligible.6

6 The analysis is in the supplementary.

To improve performance, we follow a warm-start scheme and only train the fully connected parameters w early in training. We then cut the learning rate and continue fine-tuning both the feature extractor parameters (θ) and w. In our experiments, we use a learning rate of 0.001, and the warm-start makes up half of the total training iterations. Starting from the pre-trained source model, we train for a total of 20,000 iterations with batch-size 128.
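The following is a minimal sketch of how the objective in Eq. 3 can be implemented, taking d to be a squared ℓ2-type distance between penultimate-layer activations. The helper name, the use of cached source features, and the mean-squared form of the distance are our assumptions; the warm-start schedule and other training details are omitted.

```python
# Minimal sketch (assumptions noted above) of the LwF-style loss in Eq. 3:
# task loss + lambda_d * distance between the current features and the
# (cached) robust source-model features.
import torch
import torch.nn.functional as F

def lwf_feature_distill_loss(features, logits, y, cached_robust_features, lambda_d):
    # features: penultimate-layer activations z(x, theta) of the model being tuned
    # cached_robust_features: z0(x, theta_r*) precomputed offline with the robust source
    task_loss = F.cross_entropy(logits, y)
    distill = F.mse_loss(features, cached_robust_features)   # l2-type distance d(.,.)
    return task_loss + lambda_d * distill
```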
The results with an adversarially trained CIFAR-100 model as source and CIFAR-10 as target are summarized in Table 4.7 As can be seen, having a LwF-type regularizer helps in maintaining robustness and also results in a considerable increase in validation accuracy. The trade-off between robustness and generalization can be controlled by the choice of λd. It seems that for some choices of λd, such as 0.1, robustness also increases. However, in hindsight, the increase in accuracy on PGD-20 adversarial examples is not solely due to an improvement in robustness: it is due to the fact that the validation accuracy has increased and we have a better classifier overall. For easier comparison, we have provided the transfer results without LwF at the bottom of Table 4. Note that using LwF, we can keep the robustness of the source model and also achieve clean validation accuracy comparable to a model that uses naturally trained feature extractors. In the supplementary, we show that similar conclusions can be drawn for the split CIFAR-100 task.

7 Source code for LwF-based experiments: https://github.com/ashafahi/RobustTransferLWF

Table 4: Distilling robust features using learning without forgetting. The bottom rows show results from transfer learning with a frozen feature extractor. The '+' sign refers to using augmentation.

Source → Target Dataset      Source Model   λd       val.     PGD-20 (ε = 8)
CIFAR-100+ → CIFAR-10        robust         1e-7     89.07%   0.61%
CIFAR-100+ → CIFAR-10        robust         0.001    86.15%   4.70%
CIFAR-100+ → CIFAR-10        robust         0.0025   81.90%   15.42%
CIFAR-100+ → CIFAR-10        robust         0.005    79.35%   17.61%
CIFAR-100+ → CIFAR-10        robust         0.01     71.73%   17.55%
CIFAR-100+ → CIFAR-10        robust         0.1      73.39%   18.62%
CIFAR-100+ → CIFAR-10+       natural        NA       83.05%   0.00%
CIFAR-100+ → CIFAR-10+       robust         NA       72.05%   17.10%

# 6.1 DECREASING GENERALIZATION GAP OF ADVERSARIALLY TRAINED NETWORKS

We demonstrated in our transfer experiments that using our LwF-type loss can help decrease the generalization gap while preserving robustness. In this section, we assume that the source domain is the adversarial example domain of a dataset and the target domain is the clean example domain of the same dataset. This experiment can be seen as applying transfer learning from the adversarial example domain to the natural example domain while preventing forgetting of the adversarial domain. In the case where the source and target datasets are the same (transferring from a robust CIFAR-100 model to CIFAR-100), applying our LwF-type loss improves the generalization of robust models. Our results are summarized in Table 5.

Table 5: Decreasing the generalization gap by transferring with LwF. For reference, the last row shows results from adversarially training CIFAR-100. The '+' sign refers to using augmentation.

Source → Target Dataset      Source Model   λd      val.     CW-20    PGD-20 (ε = 8)
CIFAR-100+ → CIFAR-100       robust         1e-5    61.53%   21.83%   21.97%
CIFAR-100+ → CIFAR-100       robust         5e-5    61.71%   23.44%   22.94%
CIFAR-100+ → CIFAR-100       robust         1e-4    61.38%   23.95%   23.40%
CIFAR-100+ → CIFAR-100       robust         0.001   60.17%   24.31%   23.17%
CIFAR-100+ → CIFAR-100       robust         0.01    59.87%   24.10%   22.96%
CIFAR-100+                   robust         NA      59.87%   22.76%   23.16%

# 7 CONCLUSION

We identified the feature extractors of adversarially trained models as a source of robustness, and use this observation to transfer robustness to new problem domains without adversarial training. While transferring from a natural model can achieve higher validation accuracy in comparison to transferring from a robust model, we can close the gap and maintain the initial transferred robustness by borrowing ideas from the lifelong learning literature. The success of this method suggests that a robust feature extractor is effectively a filter that sifts out the relevant components of an image that are needed to assign class labels. We hope that the insights from this study enable practitioners to build robust models in situations with limited labeled training data, or when the cost and complexity of adversarial training from scratch is untenable.
Acknowledgements: Goldstein and his students were supported by the DARPA QED for RML program, the DARPA GARD program, and the National Science Foundation.

# REFERENCES

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.

Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387–402. Springer, 2013.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE, 2017.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. International Conference on Learning Representations, 2015.

Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. arXiv preprint arXiv:1901.09960, 2019.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.

Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.

Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935–2947, 2018.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582, 2016.

Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2009.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, pp. 5014–5026, 2018.

Ali Shafahi, W Ronny Huang, Christoph Studer, Soheil Feizi, and Tom Goldstein. Are adversarial examples inevitable? ICLR, 2019a.

Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! arXiv preprint arXiv:1904.12843, 2019b.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.

Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy? – a comprehensive study on the robustness of 18 deep image classification models. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631–648, 2018.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.

Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. stat, 1050:11, 2018.

Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, pp. 3320–3328, 2014.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.

Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019.

A EXPERIMENT DETAILS

A.1 LWF-BASED EXPERIMENTS

In our LwF-based experiments, we use a batch-size of 128, a fixed learning rate of 1e-2, and fine-tune for an additional 20,000 iterations. The first 10,000 iterations are used for the warm-start, during which we only update the final fully connected layer's weights. During the remaining 10,000 iterations, we update all of the weights but do not update the batch-normalization parameters.

A.2 IMAGENET TO CIFAR EXPERIMENTS

When freezing the feature extractor and fine-tuning on adversarial examples, we train the last fully connected layer's weights for 50 epochs using batch-size 128. We start with an initial learning rate of 0.01 and drop the learning rate to 0.001 at epoch 30. In the case of fine-tuning on adversarial examples, we generate the adversarial examples using a 3-step PGD attack with step-size 3 and a perturbation bound ε = 5.

A.3 FREE TRAINING EXPERIMENTS

In all of our free-training experiments where we train the u-ResNet-50, we train for 90 epochs using a batch-size of 128. The initial learning rate used is 0.1 and we drop it by a factor of 10 at epochs 30 and 60. We use a replay parameter m = 4 and a perturbation bound ε = 5.

B THE DISTANCE BETWEEN FEATURE REPRESENTATIONS OF NATURAL IMAGES AND AUGMENTED IMAGES

To speed up the LwF experiments, we did not use data augmentation during training. Instead of computing the robust feature representations on the fly, before starting training on the new target task, we passed the entire training data of the target task through the robust network and stored the feature representation vectors. If we were doing data augmentation, we would have to pass the entire augmented training data through the network, which would be slow and memory intensive. Alternatively, we could use the robust feature representations of the non-augmented images instead. The latter would have been feasible if the distance between the robust feature representations of the non-augmented and augmented images were very small. However, as shown in Fig. 7, this quantity is often not negligible.
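As a small illustration of the offline caching described above, the sketch below precomputes and stores the robust source model's feature representations for the (non-augmented) target images. The function name, file path, and the assumption that feature_extractor returns penultimate-layer activations are ours.

```python
# Minimal sketch (assumptions noted above): cache z0(x, theta_r*) once,
# so the LwF regularizer can read it from memory instead of running the
# robust source network online during fine-tuning.
import torch

@torch.no_grad()
def cache_robust_features(feature_extractor, loader, path="robust_feats.pt"):
    feature_extractor.eval()
    feats = []
    for x, _ in loader:   # loader must iterate in a fixed order, with no augmentation
        feats.append(feature_extractor(x).cpu())
    feats = torch.cat(feats)
    torch.save(feats, path)
    return feats
```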
Figure 7: (a) Histogram of ∥z(x, θ∗) − z(xa, θ∗)∥2 for the CIFAR-10 dataset, given θ∗ for the CIFAR-100 dataset; the mean for both training and test examples is ≈ 5.53. (b) Histogram of ∥z(x, θ∗) − z(xa, θ∗)∥2 for the CIFAR-100 dataset, given θ∗ for the CIFAR-100 dataset; the mean for both training and test examples is ≈ 5.57. Both panels show that these distances are large most of the time; consequently, LwF is better done without data augmentation.

Figure 8: When the number of training data per class is very limited (right bars), adversarially robust transfer learning [Transferred] is better overall in terms of (a) clean validation, (b) adversarial validation, and (c) average validation accuracy. However, as the number of training data increases (left bars), fine-tuning with adversarial examples of the target domain [Fine-tuned with AT] results in an overall better performing model. Adversarially robust transfer learning is 3× faster than fine-tuning with adversarial examples of the target domain.

C LOW-DATA REGIME TRANSFER LEARNING FROM IMAGENET TO CIFAR-10

In section 4.2, we illustrated that robust transfer learning is most beneficial where we have limited data (i.e., a limited number of training points per class). We used CIFAR-100 as an illustrative target dataset. However, the overall results are not dataset dependent. When we have a limited number of training points, robust transfer learning results in the highest overall performance (see Fig. 8 for transferring from a robust ImageNet model to smaller instances of CIFAR-10).

D LWF-BASED ROBUST TRANSFER LEARNING FOR SIMILAR SOURCE AND TARGET DATASETS

In Table 6 we conduct LwF experiments on the split CIFAR-100 task, which is more suited for transfer learning due to the similarities between the source and target datasets. In these situations, the LwF regularizer on the feature representations still works and can improve generalization without becoming vulnerable to adversarial examples. If we take the average performance of the robust classifiers on the split tasks (the average of the robust half CIFAR-100 model and the LwF-setting model for λd = 0.01), we get (63.32 + 64.96)/2 = 64.14% average validation accuracy and 20.42% average robustness, which is comparable with the case where we had adversarially trained the entire CIFAR-100 dataset (Table 1).

Table 6: Distilling robust features using LwF for the split CIFAR-100 task. For reference, we have included the results from transfer learning by freezing the features at the bottom of the table.

Source → Target Dataset                       Source Model   λd      val.     PGD-20
CIFAR-100+ (1/2) → CIFAR-100 (other 1/2)      robust         0.001   73.30%   1.92%
CIFAR-100+ (1/2) → CIFAR-100 (other 1/2)      robust         0.005   66.96%   10.52%
CIFAR-100+ (1/2) → CIFAR-100 (other 1/2)      robust         0.01    63.32%   15.68%
CIFAR-100+ (1/2) → CIFAR-100 (other 1/2)      robust         0.1     55.14%   17.26%
CIFAR-100+ (1/2) → CIFAR-100+ (other 1/2)     natural        NA      71.44%   0.00%
CIFAR-100+ (1/2) → CIFAR-100+ (other 1/2)     robust         NA      58.48%   15.86%

E IMPROVING GENERALIZATION OF THE CIFAR-10 ADVERSARIALLY TRAINED MODEL

Similar to the case of improving the generalization for CIFAR-100, we use our LwF-based loss function to transfer from the robust CIFAR-10 domain to the natural CIFAR-10 domain. We summarize the results in Table 7.

Table 7: Decreasing the generalization gap by transferring with LwF.
For reference, the last row shows results from adversarially training CIFAR-10. The '+' sign refers to using augmentation.

Source → Target Dataset     Source Model   λd      val.     CW-20    PGD-20 (ε = 8)
CIFAR-10+ → CIFAR-10        robust         1e-5    88.16%   45.31%   46.15%
CIFAR-10+ → CIFAR-10        robust         5e-5    88.08%   46.24%   47.06%
CIFAR-10+ → CIFAR-10        robust         1e-4    87.81%   46.54%   47.29%
CIFAR-10+ → CIFAR-10        robust         5e-4    87.44%   46.36%   47.16%
CIFAR-10+ → CIFAR-10        robust         0.001   87.31%   46.27%   47.06%
CIFAR-10+ → CIFAR-10        robust         0.01    87.27%   46.09%   46.92%
CIFAR-10+ → CIFAR-10        robust         0.1     87.49%   46.11%   46.84%
CIFAR-10+                   robust         NA      87.25%   45.84%   46.96%
{ "id": "1502.03167" }
1905.07830
HellaSwag: Can a Machine Really Finish Your Sentence?
Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely followup: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.
http://arxiv.org/pdf/1905.07830
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, Yejin Choi
cs.CL
ACL 2019. Project page at https://rowanzellers.com/hellaswag
null
cs.CL
20190519
20190519
# HellaSwag: Can a Machine Really Finish Your Sentence?

Rowan Zellers♠ Ari Holtzman♠ Yonatan Bisk♠ Ali Farhadi♠♥ Yejin Choi♠♥
♠Paul G. Allen School of Computer Science & Engineering, University of Washington
♥Allen Institute for Artificial Intelligence
https://rowanzellers.com/hellaswag

# Abstract

Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely followup: "She sets her fingers on the keys." With the introduction of BERT (Devlin et al., 2018), near human-level performance was reached. Does this mean that machines can perform human level commonsense inference?

In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models.

Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.

Figure 1: Models like BERT struggle to finish the sentences in HellaSwag, even when they come from the same distribution as the training set. While the wrong endings are on-topic, with words that relate to the context, humans consistently judge their meanings to be either incorrect or implausible. For example, option A of the WikiHow passage suggests that a driver should stop at a red light for no more than two seconds.
ActivityNet example: "A woman is outside with a bucket and a dog. The dog is running around trying to avoid a bath. She..." A. rinses the bucket off with soap and blow dry the dog's head. B. uses a hose to keep it from getting soapy. C. gets the dog wet, then it runs away again. D. gets into a bath tub with the dog.
WikiHow example (How to determine who has right of way): "Come to a complete halt at a stop sign or red light. At a stop sign, come to a complete halt for about 2 seconds or until vehicles that arrived before you clear the intersection. If you're stopped at a red light, proceed when the light has turned green. ..." A. Stop for no more than two seconds, or until the light turns yellow. A red light in front of you indicates that you should stop. B. After you come to a complete stop, turn off your turn signal. Allow vehicles to move in different directions before moving onto the sidewalk. C. Stay out of the oncoming traffic. People coming in from behind may elect to stay left or right. D. If the intersection has a white stripe in your lane, stop before this line. Wait until all traffic has cleared before crossing the intersection.

# 1 Introduction

Imagine a woman chasing a dog around outside, trying to give it a bath. What might happen next?
Humans can read a narrative like this, shown in Figure 1, and connect it to a rich model of the world: the dog is currently dry and not soapy, and it actively doesn't want to be bathed. Thus, one plausible next event is option C—that she'll get the dog wet and it will run away again.

When the SWAG dataset was first announced (Zellers et al., 2018), this new task of commonsense natural language inference seemed trivial for humans (88%) and yet challenging for then-state-of-the-art models (<60%), including ELMo (Peters et al., 2018). However, BERT (Devlin et al., 2018) soon reached over 86%, almost human-level performance. One news article on this development was headlined "finally, a machine that can finish your sentence."1

1 A New York Times article at https://nyti.ms/2DycutY.

In this paper, we investigate the following question: How well do deep pretrained models, like BERT, perform at commonsense natural language inference (NLI)? Our surprising conclusion is that the underlying task remains unsolved. Indeed, we find that deep models such as BERT do not demonstrate robust commonsense reasoning ability by themselves. Instead, they operate more like rapid surface learners for a particular dataset. Their strong performance on SWAG is dependent on the finetuning process, wherein they largely learn to pick up on dataset-specific distributional biases. When the distribution of language shifts slightly, performance drops drastically – even if the domain remains identical.

We study this question by introducing HellaSwag,2 a new benchmark for commonsense NLI. We use Adversarial Filtering (AF), a data-collection paradigm in which a series of discriminators is used to select a challenging set of generated wrong answers. AF is surprisingly effective towards this goal: the resulting dataset of 70k problems is easy for humans (95.6% accuracy), yet challenging for machines (<50%). This result holds even when models are given a significant number of training examples, and even when the test data comes from the exact same distribution as the training data. Machine performance slips an additional 5% when evaluated on examples that cover novel concepts from the same domain.

2 Short for Harder Endings, Longer contexts, and Low-shot Activities for Situations With Adversarial Generations. Dataset and code at https://rowanzellers.com/hellaswag.

To construct a dataset that is challenging to deep pretrained models, we use a trifecta of state-of-the-art generators (Radford et al., 2018), state-of-the-art discriminators (BERT), and high quality source text. We expand on SWAG's original video-captioning domain by using WikiHow articles, greatly increasing the context diversity and generation length. Our investigation reveals a Goldilocks zone – roughly three sentences of context, and two generated sentences – wherein generations are largely nonsensical, even though state-of-the-art discriminators cannot reliably tell the difference between these generations and the ground truth.

More broadly, our paper presents a case-study towards a future of verified progress in NLP, via iterative rounds of building and breaking datasets. If our ultimate goal is to provide reliable benchmarks for challenging tasks, such as commonsense NLI, these benchmarks cannot be static. Instead, they must evolve together with the evolving state-of-the-art. Continued evolution in turn requires principled dataset creation algorithms. Whenever a new iteration of a dataset is created, these algorithms must leverage existing modeling advancements to filter out spurious biases. Only once this cycle becomes impossible can we say that the underlying task – as opposed to an individual dataset – is solved.

Figure 2: An overview of Adversarial Filtering. On each iteration, a new classifier is trained on a dummy training set Dtrain to replace easily-classified negative endings on the dummy test set Dtest with adversarial endings. This process is repeated iteratively, to obtain a challenging dataset regardless of the final split.

# 2 Background

SWAG is a dataset for commonsense NLI. For each question, a model is given a context from a video caption and four ending choices for what might happen next. Only one choice is right – the actual next caption of the video.

Obtaining interesting negatives is challenging. Prior work (e.g. Gururangan et al., 2018; Poliak et al., 2018) has found that when humans write the endings to NLI questions, they introduce subtle yet strong class-conditional biases known as annotation artifacts.3

3 These biases simply inflate model performance, but past work has also shown that unwanted social biases are induced when humans write the endings, in terms of gender and race (Rudinger et al., 2015).

To address this, Zellers et al. (2018) introduced Adversarial Filtering (AF). An overview is shown in Figure 2. The key idea is to produce a dataset D which is adversarial for any arbitrary split of (Dtrain, Dtest). This requires a generator of negative candidates (i.e., wrong endings that violate human notions about how the world works), which we achieve by using a language model.
generated = Real eee & | |eontert 2) | nang V0 g 8 8 g H | S| 8 $ Q | | comes] | nea sas nN || ending 8 Real Gen'd | = : : : p\m) 3 3] | context || eat Gen'a ob Replace Q M ‘ending ‘ending 1 easily-classified + generations with adversarial ones that currently aren't included == Figure 2: An overview of Adversarial Filtering. On each iteration, a new classifier is trained on a dummy training set Dtrain to replace easily-classified negative endings on the dummy test set Dtest with adversarial endings. This process is repeated iteratively, to obtain a challenging dataset regardless of the final split. the-art. Continued evolution in turn requires prin- cipled dataset creation algorithms. Whenever a new iteration of a dataset is created, these algo- rithms must leverage existing modeling advance- ments to filter out spurious biases. Only once this cycle becomes impossible can we say that the un- derlying task – as opposed an individual dataset – is solved. # 2 Background SWAG is a dataset for commonsense NLI. For each question, a model is given a context from a video caption and four ending choices for what might happen next. Only one choice is right – the actual next caption of the video. Obtaining interesting negatives is challenging. Prior work (e.g. Gururangan et al., 2018; Poliak et al., 2018) has found that when humans write the endings to NLI questions, they introduce subtle yet strong class-conditional biases known as an- notation artifacts.3 To address this, Zellers et al. (2018) intro- duced Adversarial Filtering (AF). An overview is shown in Figure 2. The key idea is to produce a dataset D which is adversarial for any arbitrary split of pDtrain, Dtestq. This requires a generator of negative candidates (i.e., wrong endings that vi- 3These biases simply inflate model performance, but past work has also shown that are unwanted social biases induced when humans write the endings, in terms of gender and race (Rudinger et al., 2015). 100 = -=-= Human 39 BERTLarge Se ESIM+ELMo SWAG‘1 Accuracy (%) 25 WAP erp 16 64 256 1024 4096 16384 65536 Training examples Figure 3: Validation accuracy on SWAG for BERT- Large versus training set size. The baseline (25% accu- racy) is random chance. BERT does well given as few as 16 training examples, but requires tens of thousands of examples to approach human performance. olate human notions about how the world works), which we achieve by using a language model. Po- tential candidates of incorrect answers were mas- sively oversampled from a language model trained on in-domain data, and then selected using an en- semble of adversaries. The selection process hap- pens iteratively: on each iteration, the dataset is randomly partitioned into Dtrain and Dtest. The ensemble is trained to classify endings as real or generated on Dtrain, then, AF replaces easy-to- classify generations in Dtest. This process con- tinues until the accuracy of these adversaries con- verges. Last, humans validate the data to remove adversarial endings that seem realistic. that is challenging to models regardless of the final dataset split. In Section 4, we will use AF as the underlying workhorse to construct an NLI dataset that is easy for humans, yet challenging for ma- chines. This difficulty persists even when mod- els are provided significant training data, and even when this data comes from the same distribution as the test set. This contrasts with past work on adversarial examples (e.g. 
# 3 Investigating SWAG

In this section, we investigate why SWAG was solved. We focus on BERT, since it is the best known approach at the time of writing.4 Core to our analysis is investigating how a model trained on Wikipedia and books can be so effectively finetuned for SWAG, a dataset from video captions.

4 See the appendix for a discussion of the BERT architecture and hyperparameter settings we used in our experiments.

Figure 4: BERT validation accuracy when trained and evaluated under several versions of SWAG, with the new dataset HellaSwag as comparison. We compare: Ending Only – no context is provided, just the endings; Shuffled – endings that are individually tokenized, shuffled, and then detokenized; Shuffled+Ending Only – no context is provided and each ending is shuffled.

Figure 5: Adversarial Filtering (AF) results with BERT-Large as the discriminator. Left: AF applied to ActivityNet generations produced by Zellers et al. (2018)'s language model versus OpenAI GPT. While GPT converges at random, the LM used for SWAG converges at 75%. Right: AF applied to WikiHow generations from GPT, while varying the ending length from one to three sentences. They converge to random, ~40%, and ~50%, respectively.

# 3.1 How much innate knowledge does BERT have about SWAG?

We investigate this question by measuring BERT's performance on SWAG while varying the size of the training dataset; results are shown in Figure 3. While the best known ELMo NLI model (ESIM+ELMo; Chen et al., 2017) requires the entire training set to reach 59%, BERT outperforms this given only 64 examples. However, BERT still needs upwards of 16k examples to approach human performance, around which it plateaus.

# 3.2 What is learned during finetuning?

Figure 4 compares BERT's performance when trained and evaluated on variants of SWAG.

Context: BERT's performance only slips 11.9 points (86.7% → 74.8%) when context is omitted (Ending Only), suggesting a bias exists in the endings themselves.5 If a followup event seems unreasonable absent of context, then there must be something markedly different between the space of human-written and machine-generated endings.

5 These biases are similar to those in NLI datasets, as found by Gururangan et al. (2018); Poliak et al. (2018).

Structure: To distinguish word usage from structural patterns, we consider a new scenario, Shuffled. Here the shared context is provided, but the words in each ending choice are randomly permuted. Surprisingly, this reduces BERT performance by less than 10%. Even though BERT was never exposed to randomly shuffled text during pretraining, it easily adapts to this setting, which suggests that BERT is largely performing lexical reasoning over each (context, answer) pair.

Finally, when the context is removed and the words in each ending are shuffled, performance drops to 60.4%.
While low, this is still higher than ELMo's performance (<60% from Zellers et al., 2018). As neither context nor structure is needed to discriminate between human and machine-written endings in a majority of cases, it is likely that systems primarily learn to detect distributional stylistic patterns during finetuning.

# 3.3 Where do the stylistic biases come from?

SWAG was constructed via Adversarial Filtering (AF). Endings were generated via a language model, and then selected to fool a discriminator. To understand why it was solved requires understanding the interplay of AF with respect to SWAG's generators and discriminators. Zellers et al. (2018) used a two-layer LSTM for generation, with shallow stylistic adversarial filters.6 This setup was robust against ELMo models, but has the shallow LM in particular produced distributional artifacts that BERT picks up on?

6 The discriminator was an ensemble that featured a bag of words model, a shallow CNN, and a multilayer perceptron operating on language model perplexities.

To investigate this, we perform AF using BERT-Large as the discriminator7 in two settings, comparing generations from Zellers et al. (2018) with those from a finetuned GPT (Radford et al., 2018). Strikingly, the results, Figure 5 (left), show that the generations used in SWAG are so different from the human-written endings that AF never drops the accuracy to chance; instead, it converges to roughly 75%. On the other hand, GPT's generations are good enough that BERT accuracy drops below 30% over many random subsplits of the data, revealing the importance of the generator.

7 On each iteration, BERT-Large is re-initialized from its pretrained checkpoint, finetuned, and then evaluated in a four-way setting on the dummy test set of held-out data. See Supp. A for details of our BERT-Large AF setup.

# 4 HellaSwag

The success of BERT implies that high-quality generators and discriminators are crucial to AF's success. However, it does not imply that the underlying task of commonsense NLI – as opposed to a single dataset – is solved. To evaluate this claim requires us to try making a new evolution of the SWAG dataset, one in which artifacts are removed. In this section, we do just that by introducing HellaSwag.

# 4.1 ActivityNet Captions

We start by including video captions from the ActivityNet Captions dataset (Krishna et al., 2017). The original SWAG dataset contains these, along with captions from LSMDC (Rohrbach et al., 2017), but for HellaSwag we solely used
While the one-sentence case converges to slightly higher than random – 35% when it converges – the two and three sentence cases are higher, at 40% and 50% respectively. Given more context, it becomes easier to classify an ending as machine- or human- written. We compromise and use two-sentence generations. Particularly in the two-sentence case, we find ourselves in a Goldilocks zone wherein generations are challenging for deep models, yet as we shall soon see, easy for humans. # 4.3 Obtaining high human agreement How well can humans distinguish human-written endings from machine generations refined with Adversarial Filtering? In Figure 6, we com- pare human performance with that of BERT on a random 80%/20% split. We see a contrast between the ActivityNet and WikiHow perfor- mance. While ActivityNet starts off harder for BERT (25.5%), it also proves difficult for humans (60%). In contrast, WikiHow starts easier for BERT (41.1%) and humans find the domain al- most trivial (93.5%). We hypothesis this discrep- ancy is due to the lengths of both datasets (Fig- ure 7). WikiHow’s 2-sentence generations average 41 tokens, versus 13 for ActivityNet. This gives WikiHow generations three times as many oppor- tunities to make a detectable mistake. To ensure high agreement on ActivityNet, we in- 5 ActivityNet WikiHow 100 - 94.0 935 995 965 85.0 g 75 - + > 3 60 -—@ Human 8 57.1 -@- BERT Fi 8 < 50- 4g | 45.4 46.0 411 LO 25 25 -" Y ' 4 t Y 0 1 2 0 1 2 Number of annotators during validation Figure 6: For HellaSwag, we ensure high human agree- ment through several rounds of annotation. By collect- ing how likely each ending is we can filter false nega- tive endings – machine generations that sound realistic – and replace them with true negatives. On both sub- datasets, BERT performance increases during valida- tion, but the gap to human performance remains wide. p08 Context lengths for. 0.08 ActvityNet WikiHow (2sent) Ending lengths for. ActivityNet 0.04 WikiHow (2sent) 20 40 60 80 100 Length (## WordPiece tokens) 20 40 60 80 100 0 Length (## WordPiece tokens) ~ ~ 0 Figure 7: Lengths of ActivityNet and WikiHow; the latter with two-sentence generations. WikiHow is much longer, which corresponds to being easier for hu- mans, while taking longer for AF to converge. creasing human performance to 94%. During hu- man validation, crowd workers are given a context and six ending choices, of which one is the true ending, and the other five are from AF. On each iteration, we replace machine-written endings that the worker rated as realistic with new samples. In the end, we keep the 25k best ActivityNet contexts (i.e. those with highest agreement among workers 8) and the 45k best WikiHow contexts. # 4.4 Zero-shot categories for evaluation To evaluate a model’s ability to generalize to new situations, we use category labels from WikiHow and ActivityNet to make ‘zero-shot’ evaluation sets. For each set (validation or test), we craft two subsets: one containing 5k ‘in-domain’ examples that come from categories as seen during training (Figure 8), and another with 5k ‘zero-shot’ exam- ples from randomly chosen held-out categories. In total, there are 70k dataset examples. 8See the appendix for details about how we estimate this. 
Table 1: Performance of models, evaluated with accuracy (%). We report results on the full validation and test sets (Overall), as well as results on informative subsets of the data: evaluated on in-domain versus zero-shot situations, along with performance on the underlying data sources (ActivityNet versus WikiHow). All models substantially underperform humans: the gap is over 45% on in-domain categories, and 50% on zero-shot categories.

Model            Overall (Val/Test)   In-Domain (Val/Test)   Zero-Shot (Val/Test)   ActivityNet (Val/Test)   WikiHow (Val/Test)
Split size       10K / 10K            5K / 5K                5K / 5K                3.2K / 3.5K              6.8K / 6.5K
Chance           25.0 on every split
fastText         30.9 / 31.6          33.8 / 32.9            28.0 / 30.2            27.7 / 28.4              32.4 / 33.3
LSTM+GloVe       31.9 / 31.7          34.3 / 32.9            29.5 / 30.4            34.3 / 33.8              30.7 / 30.5
LSTM+ELMo        31.7 / 31.4          33.2 / 32.8            30.4 / 30.0            33.8 / 33.3              30.8 / 30.4
LSTM+BERT-Base   35.9 / 36.2          38.7 / 38.2            33.2 / 34.1            40.5 / 40.5              33.7 / 33.8
ESIM+ELMo        33.6 / 33.3          35.7 / 34.2            31.5 / 32.3            37.7 / 36.6              31.6 / 31.5
OpenAI GPT       41.9 / 41.7          45.3 / 44.0            38.6 / 39.3            46.4 / 43.8              39.8 / 40.5
BERT-Base        39.5 / 40.5          42.9 / 42.8            36.1 / 38.3            48.9 / 45.7              34.9 / 37.7
BERT-Large       46.7 / 47.3          50.2 / 49.7            43.3 / 45.0            54.7 / 51.7              42.9 / 45.0
Human            95.7 / 95.6          95.6 / 95.6            95.8 / 95.7            94.0 / 94.0              96.5 / 96.5

Figure 8: Examples on the in-domain validation set of HellaSwag, grouped by category label. Our evaluation setup equally weights performance on categories seen during training as well as out-of-domain.

# 5 Results

We evaluate the difficulty of HellaSwag using a variety of strong baselines, with and without massive pretraining. The models share the same format: given a context and an ending, return a logit for that ending. Accordingly, we train our models using a four-way cross-entropy loss, where the objective is to predict the correct ending. In addition to BERT-Large, our comparisons include:

a. OpenAI GPT (Radford et al., 2018): A finetuned 12-layer transformer that was pretrained on the BookCorpus (Zhu et al., 2015).
b. BERT-Base: A smaller version of the BERT model whose architecture size matches GPT.
c. ESIM+ELMo (Chen et al., 2017; Peters et al., 2018): This is the best-performing ELMo model for NLI, modified slightly so the final output layer is now a four-way softmax over endings.
d. LSTM sentence encoder: This is a randomly initialized two-layer bi-LSTM; the second layer's hidden states are max-pooled and fed into an MLP to predict the logit. We consider three variations: GloVe embeddings, ELMo embeddings, or (frozen) BERT-Base embeddings.9
e. FastText (Joulin et al., 2017): An off-the-shelf library for bag-of-words text classification.10

9 For ELMo and BERT-Base, the model learns scalar weights to combine each internal layer of the encoder.
10 This model is trained with binary cross entropy loss.

We compare all models to human performance by asking five independent crowd workers to solve the same four-way multiple choice problems; their predictions are combined via majority vote. A sketch of the shared scoring format follows.
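As a rough sketch of the shared format, the snippet below scores each (context, ending) pair with a single logit and trains with a four-way cross-entropy over the candidate endings. The encoder argument stands in for whichever model (BERT, GPT, an LSTM, etc.) is being evaluated; it and the surrounding names are our assumptions, not the released implementation.

```python
# Minimal sketch (assumptions noted above): four-way multiple-choice scoring.
# Each (context, ending) pair is encoded to a vector, mapped to one logit,
# and the four logits per question are trained with cross-entropy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultipleChoiceHead(nn.Module):
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder                  # hypothetical: returns [batch, hidden]
        self.scorer = nn.Linear(hidden_size, 1)

    def forward(self, pair_inputs):
        # pair_inputs: [batch, 4, ...] -- four encoded (context, ending) pairs per example
        b, n = pair_inputs.shape[:2]
        flat = pair_inputs.reshape(b * n, *pair_inputs.shape[2:])
        logits = self.scorer(self.encoder(flat)).view(b, n)
        return logits

def loss_fn(logits, correct_idx):
    return F.cross_entropy(logits, correct_idx)   # four-way cross-entropy
```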
Our results, shown in Table 1, hint at the difficulty of the dataset: human performance is over 95%, while overall model performance is below 50% for every model. Surprisingly, despite BERT-Large having been used as the adversarial filter, it still performs the strongest at 47.3% overall. By making the dataset adversarial for BERT, it seems to also have become adversarial for every other model. For instance, while ESIM+ELMo obtained 59% accuracy on SWAG, it obtains only 33.3% accuracy on HellaSwag.

In addition to pretraining being critical, so too is end-to-end finetuning. Freezing BERT-Base and adding an LSTM on top lowers its overall performance by 4.3%. This may help explain why models such as ESIM+ELMo struggled on SWAG, as ELMo isn't updated during finetuning.

While BERT is the best model, it still struggles on HellaSwag, and especially so on zero-shot categories. Performance drops roughly 5% on the test fold, which suggests that the finetuning is not enough for BERT to learn to generalize to novel activities or how-to categories.

Last, we see that WikiHow is a much harder domain than ActivityNet for machines: 45% BERT-Large performance, versus 96.5% for humans. Curiously, it is on this source dataset that we see the smallest gap between OpenAI GPT and BERT. In fact, OpenAI GPT outperforms BERT on WikiHow, but the reverse is true for ActivityNet. One possibility is that the left-to-right structure of GPT is the right inductive bias for WikiHow – perhaps reasoning bidirectionally over long contexts is too much for a 12-layer transformer to learn.

Figure 9: Transfer experiments from SWAG to HellaSwag and vice versa, evaluated on the validation sets. Overall, a BERT-Large that is trained on SWAG hardly generalizes to HellaSwag: it scores 34.6%.

# 5.1 SWAG to HellaSwag transfer

Given the shared goals and partial domains of SWAG and HellaSwag, it is natural to ask to what extent models can transfer between the two datasets. In Figure 9 we show the results from transfer experiments: models are trained on one dataset and evaluated on the other.11 The best models are trained on the same dataset that they are evaluated on: training on SWAG and evaluating on HellaSwag lowers performance by 12%; vice versa lowers performance by 15%. The missing domain for HellaSwag models is movie descriptions (LSMDC); still, HellaSwag models obtain 69% accuracy. On the other hand, SWAG models do not generalize at all to their missing domain, WikiHow (28%), suggesting that learning general commonsense reasoning was hardly necessary to solve SWAG.

11 Note that the ActivityNet splits are different for each dataset. To avoid skewing the results, we report only on the validation video captions that are not in the training sets of either dataset. The overall accuracy is then a weighted average, where ActivityNet examples are weighted proportionately more. This gives a slight advantage to training on SWAG, as it sees all the ActivityNet categories when training.

Category: Shaving (ActivityNet; In-domain)
A bearded man is seen speaking to the camera and making several faces. the man
a) then switches off and shows himself via the washer and dryer rolling down a towel and scrubbing the floor. (0.0%)
b) then rubs and wipes down an individual's face and leads into another man playing another person's flute. (0.0%)
c) is then seen eating food on a ladder while still speaking. (0.0%)
d) then holds up a razor and begins shaving his face. (100.0%)

Category: Sharpening knives (ActivityNet; Zero-Shot)
Two men are in a room and the man with a blue shirt takes out a bench stone and with a little lubricant on the stone takes an knife and explains how to sharpen it. then he
a) uses a sharpener to smooth out the stone using the knife. (100.0%)
b) shows how to cut the bottom with the knife and place a tube on the inner and corner. (0.0%)
c) bends down and grabs the knife and remove the appliance. (0.0%)
d) stops sharpening the knife and takes out some pieces of paper to show how sharp the knife is as he cuts slivers of paper with the knife. (0.0%)

Category: Youth (WikiHow; In-Domain)
How to make up a good excuse for your homework not being finished
Blame technology. One of the easiest and most believable excuses is simply blaming technology. You can say your computer crashed, your printer broke, your internet was down, or any number of problems.
a) Your excuses will hardly seem believable. [substeps] This doesn't mean you are lying, just only that you don't have all the details of how your computer ran at the time of the accident. (0.0%)
b) The simplest one to have in a classroom is to blame you entire classroom, not just lab. If you can think of yourself as the victim, why not blame it on technology. (9.4%)
c) Most people, your teacher included, have experienced setbacks due to technological problems. [substeps] This is a great excuse if you had a paper you needed to type and print. (29.1%)
d) It may also be more believable if you are fully aware that you may be flying at high speed on a plane and need someone to give you traffic report. Your problem might be your laptop failing to charge after a long flight. (61.5%)

Figure 10: Example questions answered by BERT-Large. Correct model predictions are blue, incorrect predictions are red. The right answers are bolded.

# 5.2 Qualitative examples

We show several qualitative examples in Figure 10, along with BERT-Large's predictions. BERT does well on some ActivityNet contexts, such as in the first row, where it correctly predicts the ending for a shaving caption. Whereas shaving is in-domain, the second example about sharpening knives is zero-shot. In this context, BERT's answer suggests that one would use a knife to sharpen a stone, rather than vice versa. The last example comes from WikiHow, which appears to be incredibly challenging for BERT. BERT picks answer d, which has more words that match the context of technology (planes, traffic, laptop), but is incoherent.12

12 Among other issues, why would someone suddenly be aware that they are 'flying at high speed on a plane...?'

Figure 11: Performance on the WikiHow subset of alternative variations of HellaSwag, where different Adversarial Filters are used (but without human validation). We consider the shallow stylistic adversaries used by Zellers et al. (2018) (Stylistic Ensemble), as well as an LSTM with ELMo embeddings, GPT, BERT-Base, and BERT-Large. For each adversarial filtering model, we record the accuracy of that model before and after AF is used. We also evaluate each alternative dataset using BERT-Large. The results suggest that using a stronger model at test time (over the model used for AF) improves performance, but is not enough to solve the task.

# 6 Discussion

Our results suggest that HellaSwag is a challenging testbed for state-of-the-art NLI models, even those built on extensive pretraining. The question still remains, though, of where will the field go next?

# 6.1 How easy might HellaSwag be for future discriminators?
In this paper, we showed the existence of a Goldilocks zone of text complexity – in which generations are nonsensical, but existing state-of-the-art NLP models cannot tell the difference. How hard will the dataset be for future, even more powerful, models?

Answering this question is challenging because these models don't exist (or are unavailable) at the time of writing. However, one remedy is to perform an ablation study on the Adversarial Filtering model used, comparing weaker filters with stronger discriminators. We present our results in Figure 11, and find that while weak discriminators (like the stylistic ensemble used to make SWAG) only marginally reduce the accuracy of BERT-Large, increasing the gap between the filter and the final discriminator is not enough to solve the task. For instance, using a discriminator with 3x the parameters as the adversarial filter (BERT-Large vs. BERT-Base) results in 63% machine accuracy.

Figure 12: Estimated pretraining hours required to reach a desired accuracy on HellaSwag. We estimate performance with respect to an RTX 2080 Ti – a modern, fast GPU – and fit a log-linear regression line. An extrapolation suggests that reaching human-level performance on HellaSwag, without algorithmic or computational improvements, would require 10^9 GPU-hours of pretraining (over 100k GPU-years).

# 6.2 How well does pretraining scale?

Overall, the current paradigm of pretraining large models on lots of data has made immense progress on NLP benchmarks. Though we expect this trend to continue, it also behooves us to consider its limits. If more compute is indeed the answer for human-level commonsense inference, what would the compute requirements of this hypothetical massive model look like?

We investigate this in Figure 12 by comparing the accuracies of known models on HellaSwag with their computational needs. This estimation is rough: we convert reported TPU runtimes to our benchmark RTX 2080 Ti GPU using the Roofline model (Williams et al., 2009), which focuses primarily on the bottleneck of loading tensors into GPU memory. Extrapolating from an exponential fit suggests that reaching human-level performance on our dataset would require 10^9 GPU-hours, or roughly 100k years – unless algorithmic improvements are made.

What might these algorithmic improvements look like? These could include architectural advances, better pretraining objectives, and beyond. However, these improvements share the bottleneck of the data source. To answer some HellaSwag questions correctly without reasoning deeply – like knowing that it is a bad idea to stop at a red light for 'at most two seconds' – might require an exponential number of samples, due to problems of reporting bias (Gordon and Van Durme, 2013). Alternatively, future models might answer correctly only by picking up on spurious patterns, in which case a new development of the benchmark – using these models as adversaries – would place us in the same position as we are right now.

Put another way, for humans to answer HellaSwag questions requires abstracting away from language and modeling world states instead. We postulate that this is what separates solving the task of commonsense NLI, as opposed to a particular dataset. Indeed, we find that existing deep methods often get fooled by lexical false friends.
For example, in the WikiHow example from Figure 10, BERT chooses an ending that matches the technology words in the context, rather than matching the deeper topic: using technology as an excuse for not doing homework.

# 6.3 Towards a future of evolving benchmarks

What happens when HellaSwag gets solved? We believe the answer is simple: crowdsource another dataset, with the same exact format, and see where models fail. Indeed, in our work we found this to be straightforward from an algorithmic perspective: by throwing in the best known generator (GPT) and the best known discriminator (BERT-Large), we made a dataset that is adversarial – not just to BERT, but to all models we have access to.

While this was easy algorithmically, care must be taken from a data curation standpoint. Indeed, we find success exists within a Goldilocks zone: the data source must be complex enough that state-of-the-art generators often make mistakes, while simple enough such that discriminators often fail to catch them. This ties the future of SWAG-style benchmarks to progress on language generation: until generation is solved, commonsense NLI will remain unsolved. Even recent promising results on scaling up language models (Radford et al., 2019) find problems in terms of consistency, with the best curated examples requiring 25 random seeds.

# 7 Conclusion

In this paper, we presented HellaSwag, a new dataset for physically situated commonsense reasoning. By constructing the dataset through adversarial filtering, combined with state-of-the-art models for language generation and discrimination, we produced a dataset that is adversarial to the most robust models available – even when models are evaluated on items from the training distribution. In turn, we provided insight into the inner workings of pretrained models, and suggest a path for NLP progress going forward: towards benchmarks that adversarially co-evolve with evolving state-of-the-art models.

# Acknowledgments

We thank the reviewers, as well as Jesse Thomason, for their helpful feedback. We thank the Mechanical Turk workers for their great work during dataset collection. Thanks also to Zak Stone and the Google Cloud TPU team for help with the computing infrastructure. This work was supported by the National Science Foundation through a Graduate Research Fellowship (DGE-1256082) and NSF grants (IIS-1524371, 1637479, 165205, 1703166), the DARPA CwC program through ARO (W911NF-15-1-0543), the IARPA DIVA program through D17PC00343, the Sloan Research Foundation through a Sloan Fellowship, the Allen Institute for Artificial Intelligence, the NVIDIA Artificial Intelligence Lab, and gifts by Google and Facebook. The views and conclusions contained herein are those of the authors and should not be interpreted as representing endorsements of IARPA, DOI/IBC, or the U.S. Government.

# References

Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In ICLR.

Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1657–1668.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018.
Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650–655.

Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, pages 25–30. ACM.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proc. of NAACL.

Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751.

Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 427–431.

Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-Captioning Events in Videos. In International Conference on Computer Vision (ICCV).

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237.

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report, OpenAI.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI.

Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon, Christopher Pal, Hugo Larochelle, Aaron Courville, and Bernt Schiele. 2017. Movie Description. International Journal of Computer Vision, 123(1):94–120.

Rachel Rudinger, Vera Demberg, Ashutosh Modi, Benjamin Van Durme, and Manfred Pinkal. 2015. Learning to predict script events from domain-specific text. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 205–210.

Samuel Williams, Andrew Waterman, and David Patterson. 2009. Roofline: An insightful visual performance model for floating-point programs and multicore architectures. Technical report, Lawrence Berkeley National Lab (LBNL), Berkeley, CA (United States).

Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015.
Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv preprint arXiv:1506.06724.

# Supplemental Material

# A Adversarial Filtering Setup

In this subsection, we provide some more details regarding the Adversarial Filtering experiments. Our version of Adversarial Filtering is mostly the same as Zellers et al. (2018). Details:

a. On each iteration, we split the dataset up into 80% training and 20% testing. We don't do anything special for this split (like looking at the video/article IDs).

b. For ActivityNet, we use k = 9 assigned indices for every example. (This corresponds to the number of red columns in Figure 2). For WikiHow, we used k = 5, since we found that there were fewer good endings produced by the generators after scaling up the sequence length.

c. Similarly to Zellers et al. (2018), we train the AF models in a multi-way fashion. Since we use BERT-Large as the discriminator, this matches Devlin et al. (2018)'s model for SWAG: on each training example, the model is given exactly one positive ending and several negative endings, and the model computes a probability distribution over the endings through a softmax. However, we also wanted to always report 4-way probability for simplicity. To do this, we train in a 4-way setting (the training set is constructed by subsampling 3 wrong answers from the set of k that are currently assigned to each example). The accuracy values that are reported are done so using the first 3 assigned negatives in dataset Dtest.

d. Sometimes, BERT never converges (accuracy around 25%), so when this happens, we don't do the reassignment.

# B GPT Setup

We generate our dataset examples from OpenAI GPT. We finetune the model for two epochs on WikiHow, and 5 epochs on ActivityNet, using the default learning rate of (Radford et al., 2018). Importantly, we generate randomly according to the language model distribution, rather than performing beam search – this would bias the generations towards common words. For the WikiHow endings, we used Nucleus Sampling with p = 0.98, which means that the probability weights for the tail (those tokens with cumulative probability mass < 0.02) are zeroed out (Holtzman et al., 2019).

# C BERT setup

We extensively study BERT in this paper, and make no changes to the underlying architecture or pretraining. For all of the experiments where we provide context, we set up the input to the BERT model like this:

[CLS] A woman is outside with a bucket and a dog. The dog is running around trying to avoid a bath. [SEP] She gets the dog wet, then it runs away again [SEP]

In the case where only the ending is provided, we adopt the BERT-style 'single-span' setting:

[CLS] She gets the dog wet, then it runs away again [SEP]

# D A discussion on BERT Hyperparameters and Instability

It is worth noting that many of our experiments showed some instability. On the SWAG experiments, we use the same hyperparameters as (Devlin et al., 2018) – these generally work very well.13 However, we find that they become a bit unstable when crossing over to make HellaSwag. Here, we discuss some strategies and insight that we picked up on.

a. We use a batch size of 64 examples rather than 16, and warm the model up for 20% of the dataset (rather than 10%). This helps the model adapt to SWAG more gradually, without diverging early on.

b. For the Adversarial Filtering experiments (for both WikiHow and ActivityNet), we randomize some of the hyperparameters on each iteration.
We sample a learning rate between 1e-5 and 4e-5, using a log-uniform distribution. These outer ranges were recommended from the original BERT paper. Additionally, with probability 0.5 we use the cased model (where the input isn't originally lowercased before tokenization), rather than the uncased model.

c. During adversarial filtering, we used 3 epochs. However, we found that adding more epochs helped the model during fine-tuning on the final dataset HellaSwag. Our best configuration uses 10 epochs.

d. While fine-tuning on HellaSwag we used a learning rate of 2e-5.

13 The only exception is for the plots where we vary the number of training examples. In this case, we don't want to disadvantage the trials without much training data (since this would allow for fewer parameter updates). To remedy this, we continue training for 10 epochs and report the best validation performance over the entire training history.

# E Human validation

We performed human validation using the same setup as (Zellers et al., 2018). Humans get six answers to choose from, of which exactly one is the true ending and the other five are from AF. We found that multiple rounds of human validation were especially helpful on ActivityNet. However, it helps to do the human validation in an intelligent way: if the first worker is confused, the answer should be replaced before it goes to the next worker. This is a hard problem, so we adopt the following approach:

a. We use best practices on Mechanical Turk, paying workers fairly (up to 37 cents per HIT on WikiHow). We also used a qualification HIT that was autograded to help filter for workers who are good at the task. Workers who tended to prefer the generated endings over the real ones were dequalified from participating.

b. For each worker, we use the summary to estimate P(answer i is right | worker rates i as best). We can then use this to estimate how confident we are in each answer choice: we want to be confident that workers will not prefer the wrong answers. Also, this allows us to aggregate performance across crowd workers, by multiplying the probabilities for each answer choice.

c. On each round of filtering, we keep the 3 wrong endings that workers least prefer (based on the probability scores), along with the right ending. The other two endings are new ones.

Particularly on ActivityNet, we found that there are some contexts where the ground truth answer isn't liked by workers. To fix this, we end up taking the best 25k examples from ActivityNet and the best 45k from WikiHow. (By best, we mean the ones with the highest probability that workers will predict the true answer, versus the three easiest-to-guess negatives, as judged by the Naive Bayes model). We make Figure 7 ('The road to HellaSwag') by doing this process (taking the best examples) for each dataset, while varying the number of annotators that are used for getting the scores for each ending. (In the case where there are 0 annotators, we get a random sample).

# F Human Evaluation

We do a human evaluation while giving workers the exact same task as is given to the models. Workers are given five endings, and must pick the best one. We obtain human evaluation numbers by combining 5 turkers together, with a majority vote. We found that the biggest differences in difficulty in humans were due to domain (WikiHow is easier than ActivityNet).
To account for this, we did the human evaluation over 200 examples from WikiHow, and 200 examples from ActivityNet, for each number of previous validators as shown in Figure 7 (0, 1, or 2). To report the accuracy of a split that's mixed between WikiHow and ActivityNet, we use the following formula:

(acc_WikiHow · N_WikiHow + acc_ActivityNet · N_ActivityNet) / (N_WikiHow + N_ActivityNet)

Here, acc refers to the accuracy on each dataset as judged by humans, and N is the number of examples from that dataset in the split.

# G More examples

We additionally have more validation examples, shown in Table 2.

# H In-Domain and Zero-Shot categories

See Figure 13 for a closer look at the dataset categories.

Category: Preparing pasta (activitynet; indomain) A kitchen is shown followed by various ingredients and a woman speaking to the camera. She begins showing the ingredients and putting them into a hot boiling pot and stirring around. she
a) shows off the oven and begins assembling the cookies in the oven by pushing a button on the oven. (2.2%)
b) continues mixing up more ingredients and then puts them all together in a bowl, serving the dish and sprinkling olive oil around it. (97.8%)
c) shows raising and lowering the pot until adding more water and corn syrup. (0.0%)
d) places an omelette onto the screen and puts it in the oven to bake. (0.0%)

Category: Doing crunches (activitynet; indomain) We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. the man
a) demonstrates how to increase efficient exercise work by running up and down balls. (0.0%)
b) moves all his arms and legs and builds up a lot of muscle. (80.9%)
c) then plays the ball and we see a graphics and hedge trimming demonstration. (0.0%)
d) performs sits ups while on the ball and talking. (19.1%)

Category: Sharpening knives (activitynet; zeroshot) A man is seen spinning a blade with his foot on a machine and moving his hands up with down holding a knife. the camera
a) pans around and shows a woman moving around in a jump rope machine. (0.0%)
b) captures him from several angles while he sharpens the knife with complete concentration. (81.6%)
c) pans around and points to a man standing inside the machine as the man continues to move on the machine. (18.4%)
d) then pans around to a woman and her daughter who also dance at the show. (0.0%)

Category: Layup drill in basketball (activitynet; zeroshot) A female basketball coach is seen instructing a group of girl basketball players who are standing in line on a basketball court. the first girl
a) passes to another coach and then runs to the net and takes a layup. (0.0%)
b) trying to get the ball to go far past the basket and hit it back towards the basket while her coach continues teaching her. (100.0%)
c) walks across the court with the ball and keeps walking then pulling the girls to the other side of the court and the girls begin playing volleyball rhythmically rolling on the floor as the coach helps them follow how to properly do things. (0.0%)
d) line up and stand behind a dummy dummy. (0.0%)

Category: Family Life (wikihow; zeroshot) [header] How to raise your children to be helpers [title] Call them helpers when you ask for things. [step] Instead of asking for help, ask your child to " be a helper. " all people, children included, are more motivated when their identity is in play.
a) You can start doing this with your children as early as two years old. [substeps] You might say, " jayden, can you be a helper and clean your bedroom before grandma comes over? " or " please be a helper and stay quiet while your sister naps. (0.1%)
b) When you call your child helpers, describe what they do and what they need to be helped for. [substeps] You could say, " i need you to help dad during his lunch break at work. (99.9%)
c) If you ask your child for things they have access to, it encourages them to put more effort into making things happen. [substeps] To make sure they understand exactly what's expected of them, you could try saying, " i'm looking for helpers who can be helpers. (0.0%)
d) Call them when you need them for help or for monetary help. [substeps] For example, if you need help with something you don't know how to do, let your child know you're excited to help with this. (0.0%)

Category: Youth (wikihow; indomain) [header] How to make up a good excuse for your homework not being finished [title] Blame technology.
[step] One of the easiest and most believable excuses is simply blaming technology. You can say your computer crashed, your printer broke, your internet was down, or any number of problems.
a) Your excuses will hardly seem believable. [substeps] This doesn't mean you are lying, just only that you don't have all the details of how your computer ran at the time of the accident. (0.0%)
b) The simplest one to have in a classroom is to blame you entire classroom, not just lab. If you can think of yourself as the victim, why not blame it on technology. (9.4%)
c) Most people, your teacher included, have experienced setbacks due to technological problems. [substeps] This is a great excuse if you had a paper you needed to type and print. (29.1%)
d) It may also be more believable if you are fully aware that you may be flying at high speed on a plane and need someone to give you traffic report. Your problem might be your laptop failing to charge after a long flight. (61.5%)

Table 2: Example questions answered by BERT-Large. Correct model predictions are in blue, incorrect model predictions are red. The right answers are bolded.

[Figure 13 chart: category counts only partially recoverable from extraction – wikihow 3,192 (including Food and Entertaining (500), Youth (161), Finance and Business (265), Education and Communications); activitynet 1,809 (labels unrecoverable); wikihow 3,607 (including Personal Care and Style (2,627), Family Life (980)); plus activity categories such as Having an ice cream (116), Basketball layups (111), and Playing harmonica (97).]

Figure 13: Examples on the in-domain validation set of HellaSwag, grouped by category label. Our evaluation setup equally weights performance on categories seen during training as well as out-of-domain.
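The mixture formula in Appendix F is a count-weighted average of the per-domain human accuracies. A minimal sketch follows; the accuracy values and counts here are placeholders for illustration, not numbers reported in the paper.

```python
# Minimal sketch of the Appendix F formula: overall accuracy on a mixed
# WikiHow/ActivityNet split is the count-weighted average of the per-domain
# accuracies. The inputs below are made-up illustrative values.
def mixed_accuracy(acc_wikihow, n_wikihow, acc_activitynet, n_activitynet):
    return (acc_wikihow * n_wikihow + acc_activitynet * n_activitynet) / (
        n_wikihow + n_activitynet
    )

print(mixed_accuracy(acc_wikihow=0.95, n_wikihow=200,
                     acc_activitynet=0.93, n_activitynet=200))
# -> 0.94 (up to float rounding)
```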
{ "id": "1506.06724" }
1905.07504
Story Ending Prediction by Transferable BERT
Recent advances, such as GPT and BERT, have shown success in incorporating a pre-trained transformer language model and fine-tuning operation to improve downstream NLP systems. However, this framework still has some fundamental problems in effectively incorporating supervised knowledge from other related tasks. In this study, we investigate a transferable BERT (TransBERT) training framework, which can transfer not only general language knowledge from large-scale unlabeled data but also specific kinds of knowledge from various semantically related supervised tasks, for a target task. Particularly, we propose utilizing three kinds of transfer tasks, including natural language inference, sentiment classification, and next action prediction, to further train BERT based on a pre-trained model. This enables the model to get a better initialization for the target task. We take story ending prediction as the target task to conduct experiments. The final result, an accuracy of 91.8%, dramatically outperforms previous state-of-the-art baseline methods. Several comparative experiments give some helpful suggestions on how to select transfer tasks. Error analysis shows what are the strength and weakness of BERT-based models for story ending prediction.
http://arxiv.org/pdf/1905.07504
Zhongyang Li, Xiao Ding, Ting Liu
cs.CL, cs.LG
Accepted and to appear in IJCAI 2019
null
cs.CL
20190517
20190521
9 1 0 2 y a M 1 2 ] L C . s c [ 2 v 4 0 5 7 0 . 5 0 9 1 : v i X r a # Story Ending Prediction by Transferable BERT Zhongyang Li , Xiao Ding and Ting Liu∗ Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology {zyli, xding, tliu}@ir.hit.edu.cn # Abstract Recent advances, such as GPT and BERT, have shown success in incorporating a pre-trained trans- former language model and fine-tuning operation to improve downstream NLP systems. However, this framework still has some fundamental problems in effectively incorporating supervised knowledge from other related tasks. In this study, we inves- tigate a transferable BERT (TransBERT) training framework, which can transfer not only general lan- guage knowledge from large-scale unlabeled data but also specific kinds of knowledge from vari- ous semantically related supervised tasks, for a tar- get task. Particularly, we propose utilizing three kinds of transfer tasks, including natural language inference, sentiment classification, and next action prediction, to further train BERT based on a pre- trained model. This enables the model to get a better initialization for the target task. We take story ending prediction as the target task to con- duct experiments. The final result, an accuracy of 91.8%, dramatically outperforms previous state-of- the-art baseline methods. Several comparative ex- periments give some helpful suggestions on how to select transfer tasks. Error analysis shows what are the strength and weakness of BERT-based models for story ending prediction. It was my final performance in marching band. I was playing the snare drum in the band. We played Thriller and Radar Love. The performance was flawless. Right ending: I was very proud of my performance. |contiic>{ Wrong ending: I was very ashamed of myperformance. Figure 1: This figure shows a typical example from the develop- ment set of Story Cloze Test. There is an obvious entailment relation between the story context and the right ending, and a contradiction relation between the context and the wrong ending. the human performance, demonstrating the hardness of this task. Until very recently, GPT [Radford et al., 2018] and BERT [Devlin et al., 2018] have shown that a two-stage framework — pre-training a language model on large-scale unsupervised corpora and fine-tuning on target tasks — can bring promising improvements to various natural language understanding tasks, such as reading comprehension [Rad- ford et al., 2018] and natural language inference (NLI) [De- vlin et al., 2018]. Benefiting from these advances, the SCT performance has been pushed to a new level [Radford et al., 2018], though there is still a gap with the human performance. However, we argue that the general knowledge obtained from unsupervised language model pre-training is not suffi- cient for learning a set of perfect initial parameters for every target task. Inspired by transfer learning techniques [Pan and Yang, 2009], we consider incorporating supervised knowl- edge into this conventional pre-training framework to find a better initialization for the target task. Nevertheless, there still remain two fundamental problems that should be addressed: • How can the pre-training framework better utilize super- 1 Introduction Story ending prediction, also known as the Story Cloze Test (SCT) [Mostafazadeh et al., 2016], is an open task for eval- uating story comprehension. 
This task requires a model to select the right ending from two candidate endings (one is wrong and the other is right) given a story context. The goal behind SCT is to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, which is essential for Artificial Intel- ligence. There have been a variety of models trying to solve SCT so far [Schwartz et al., 2017; Chaturvedi et al., 2017; Zhou et al., 2019; Li et al., 2018b]. However, these stud- ies did not achieve very salient progress compared with vised knowledge? • What basic rules need to follow to find appropriate super- vised knowledge for a target task? Recently, [Phang et al., 2018] gave a possible solution for the first question. With a lot of crossing experiments over four intermediate tasks and nine GLUE tasks [Wang et al., 2018], they demonstrate that further pre-training on super- vised datasets can improve the performance of GPT on down- stream tasks. The MT-DNN model [Liu et al., 2019] also tries to answer the first question by incorporating the multi- task learning framework into BERT. However, we still have no idea for answering the second challenging question from their experiments. ∗Contact Author In this study, we take SCT as an example and try to answer the above two challenging questions through extensive ex- periments. We follow the idea from [Phang et al., 2018] and present a three-stage transferable BERT (TransBERT) frame- work to transfer knowledge from semantically related tasks for SCT. As shown in Figure 1, the reader can easily find that the story context entails the right story ending. In contrast, the story context conflicts with the wrong ending. This suggests that the SCT task has a strong correlation with NLI. In addi- tion, we also notice that a lot of candidate story endings in SCT are about describing human mental states and the next action following the story context. Hence, we propose uti- lizing three semantically related supervised tasks, including NLI, sentiment classification, and next action prediction to further pre-train the BERT model. Then the model is fine- tuned with minimal task-specific parameters to solve SCT. This paper makes the following three contributions: • This study presents a TransBERT framework which en- ables the BERT model to transfer knowledge from both unsupervised corpora and existing supervised tasks. We achieve new state-of-the-art results on the widely used SCT v1.0 dataset and recently revised SCT v1.5 blind test dataset, which are much closer to the human performance. • Based on extensive comparative experiments, we give some helpful suggestions on how to select transfer tasks to improve BERT. Error analysis shows what are the strength and weakness of BERT-based models for SCT. 2 Background Language model pre-training has shown to be very effective for learning universal language representations by leveraging large amounts of unlabeled data. Some of the most promi- nent models are ELMo [Peters et al., 2018], GPT [Radford et al., 2018], and BERT [Devlin et al., 2018]. Among these, ELMo uses a bidirectional LSTM architecture, GPT exploits a left-to-right transformer architecture, while BERT uses the bidirectional transformer architecture. There are two existing strategies for applying pre-trained language models to down- feature-based and fine-tuning. The feature- stream tasks: based approach, such as ELMo, uses task-specific architec- tures that include the pre-trained representations as input fea- tures. 
The fine-tuning approaches, such as GPT and BERT, introduce minimal task-specific parameters and train on the downstream tasks by jointly fine-tuning the pre-trained pa- rameters and task-specific parameters. This two-stage frame- work has been demonstrated to be very effective in various natural language processing tasks, such as reading compre- hension [Radford et al., 2018] and NLI [Devlin et al., 2018]. In this paper, our TransBERT training framework is based on the BERT encoder [Devlin et al., 2018], which exploits transformer block [Vaswani et al., 2017] as the basic compu- tational unit. Here, we describe the main components of the BERT encoder shown in Figure 2. The input X, which is a word sequence (either a sentence or two sentences concatenated together) is first represented as a sequence of input embeddings, one for each word, in L1. Then the BERT encoder captures the contextual information for each word via self-attention and generates a sequence of output contextual embeddings in L2. ( softmax output layer C L, : task-specific linear layer ) ( L,: output contextual embeddings for each token C L,: input embeddings for each token J ( Lexicon Encoder layer (word, position, segment) ) ( Input X: one sentence or two [SEP] concatenated sentences Figure 2: The BERT model has a lexicon encoder (L1), a bidirec- tional transformer encoder (L2), and a task specific linear layer (L3). Lexicon Encoder (L1): The input X = {x1, ..., xn} is a sequence of tokens of length n. The first token x1 is always a special [CLS] token. If X is concatenated by two sentences X1 and X2, they will be separated by a special token [SEP]. The lexicon encoder maps X into a sequence of input em- beddings, one for each token, constructed by summing the corresponding word, segment, and position embeddings. Bidirectional Transformer Encoder (L2): BERT uses a multilayer bidirectional transformer encoder [Vaswani et al., 2017] to map the input embeddings from L1 into a sequence of contextual embeddings V ∈ Rd·n (d is the word embed- ding size). The BERT model [Devlin et al., 2018] learns the lexicon encoder and transformer encoder parameters by language model pre-training, and applies it to each down- stream task by fine-tuning with minimal task-specific param- eters (L3). Suppose v1 is the output contextual embedding of the first token [CLS], which can be seen as the semantic representa- tion of the whole input X. Take the NLI task as an example, the probability that X is labeled as class c (i.e., the entail- ment) is computed by a logistic regression with softmax: P(c|X) = softmax(Wypy + U1) where Wy, is the task-specific parameter matrix in L3. For the task of SCT, just take the whole story context and a candidate ending as an input sentence pair, and get the output score S via the BERT model. The right ending can be selected by comparing the two output scores Sr and Sw, and choosing the ending with a higher score as the answer. 3 The TransBERT Training Framework Figure 3 shows the three-stage TransBERT training frame- work. The bottom task is unsupervised pre-training with lan- guage modeling and other related tasks, such as the next sen- tence prediction. In the middle of the architecture are various semantically target-related supervised tasks, which are used to further pre-train the pre-trained BERT encoder. We call such a supervised task as a Transfer Task. On the top is the target task, specifically, SCT in this paper. 
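To make the pair-scoring scheme described at the end of Section 2 above concrete, here is a hedged sketch in Python. It uses the Hugging Face `transformers` API rather than the older pytorch-pretrained-BERT fork cited in the paper's footnote, and it loads the generic pre-trained checkpoint with a freshly initialized single-output head, so the scores are only meaningful after fine-tuning on SCT-style data; it is an interface illustration, not the authors' released model.

```python
# Sketch of scoring (story context, candidate ending) pairs with BERT and
# picking the higher-scoring ending, per the scheme described in Section 2.
# Assumes the Hugging Face `transformers` library; the classification head
# here is untrained, so this only illustrates the interface.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # one ranking score per pair
model.eval()

def score(context, ending):
    enc = tokenizer(context, ending, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**enc).logits[0, 0].item()

context = ("It was my final performance in marching band. I was playing the "
           "snare drum. We played Thriller and Radar Love. "
           "The performance was flawless.")
endings = ["I was very proud of my performance.",
           "I was very ashamed of my performance."]
scores = [score(context, e) for e in endings]
print(endings[scores.index(max(scores))])  # predicted ending
```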
The three cor- responding stages can be summarized as unsupervised pre- training, supervised pre-training, and supervised fine-tuning. ( Supervised Target Task: Story Ending Prediction LY Supervised Task 1: Supervised Task 2: ) .... {Supervised Task N: MNLI MC _MNLI SWAG Large-scale Unsupervised Pre-training Tasks: Language Modeling and Next Sentence Prediction co Figure 3: The three-stage TransBERT training framework. In this framework, we only care about the performance of the target task. The BERT model walk through a single path from bottom to the top, such as the red path shown in the figure. Hence, the model utilizes one kind of supervised knowledge each time. 3.1 Transfer Tasks for SCT We believe that only when the source and the target tasks are semantically associated with each other, then the source task can be used as a transfer task. In other words, they need to share common knowledge, and this knowledge can be ex- ploited to solve both of them. Here, we give more intuitions why we choose NLI, senti- ment classification, and next action prediction as the transfer tasks for SCT. Elementary analysis of randomly selected ex- amples suggests that there are three typical story evolvement styles: 1. The preceding part of the context entails the wrong ending while conflicts the right ending, the succeeding part is just the opposite; 2. The preceding part of the story con- text has a neutral relation to both the right and wrong ending, while the succeeding part entails the right and conflicts with the wrong ending; 3. The whole story context consistently entails the right and conflicts with the wrong ending. Natu- rally, our intuition is that a model that can well solve the NLI task tends to have a good performance on SCT. In addition, a lot of stories especially the story endings describe human mental states or the next action following the story context. Hence, we suppose that a model that can handle the senti- ment or predict the next action well, tends to have a good performance on SCT. Figure 4 shows three typical examples from the development set of SCT v1.0, which are annotated with entailment, mental states, and actions information. Natural Language Inference Given a premise-hypothesis pair, the goal of NLI is to predict whether the hypothesis has an entailment, a contradiction or a neutral relation with the premise. SNLI (Stanford Natural Language Inference) dataset con- tains 570k human annotated sentence pairs, in which the premises are drawn from captions of Flickr30 corpus and hypotheses are manually annotated [Bowman et al., 2015]. • MNLI (Multi-Genre Natural Language Inference) is a 410k crowd-sourced multi-genre entailment classification dataset [Williams et al., 2018]. • MC NLI stands for Multiple-Choice Natural Language In- ference. This dataset is a recast version of the MNLI dataset. Given a premise, we construct three kinds of hy- pothesis pairs: {entailment, neutral}, {entailment, contra- diction}, and {neutral, contradiction}. The problem is to _——— ental conflict, onttick ‘A Negative: Next Action conflict, anal} postive: Next Action noultal Right: Amber was so hurried a she left the list at home, ‘Wrong: Amber enjoyed a relaxing two hour brunch, | made a list ofall the places she needed to | She inurried to get ready. She was worried that she | would not have enough tim Positive: Mental State Right: I was very proud of my performance. Wrong: I was very ashamed of my performance. 
Negative: Mental State Figure 4: Three typical examples from the development set, which are annotated with entailment, mental states and actions information. choose the entailment, entailment, and neutral hypothesis as the ‘right’ hypothesis from the three kinds of hypothe- sis pairs, respectively. This dataset is used to investigate whether the transfer task having the same problem defini- tion with the target task can provide additional benefits. Sentiment Classification • IMDB [Maas et al., 2011] contains 25K polar movie re- views for a binary sentiment classification. • Twitter is a sentiment classification dataset1 containing 1.6M tweets, which are labeled with positive and negative sentiment polarity labels. Next Action Prediction SWAG (Situations With Adversarial Generations) contains 113k sentence-pair completion examples that evaluate com- monsense inference [Zellers et al., 2018]. Given a sentence from a video captioning dataset, the task is to decide among four choices the most plausible continuation. 3.2 Training Process The training procedure of TransBERT consists of three stages: unsupervised pre-training, supervised pre-training, and supervised fine-tuning. the BERT model [Devlin et al., 2018]. The parameters of the lexicon en- coder and transformer encoder are learned using two unsuper- vised prediction tasks: masked language modeling and next sentence prediction. This stage allows the model to capture general knowledge and representations about the language. In this study, we use the publicly released pre-trained BERT models [Devlin et al., 2018]. In the second stage, we apply the pre-trained BERT model from the first stage on various supervised tasks proposed above. For each task, minimal task-specific parameters will be introduced. These parameters will be updated jointly with the parameters of the lexicon encoder and Transformer en- coder. When the model achieves the best performance on the corresponding development dataset, we save the parameters of the lexicon encoder and Transformer encoder. This stage enables the model to transfer different task-specific knowl- edge from various supervised tasks, and get a better model 1http://help.sentiment140.com/ Dataset Training Development Test SCT v1.0 SCT v1.5 1,771 1,871 100 1,571 1,871 1,571 Table 1: Statistics of the datasets used in our experiments. initialization for the target task. Finally, the model is fine- tuned to solve SCT with new task-specific parameters, similar to the second stage. We train each transfer task and the SCT with 3 epochs monitoring on the development set, using a cross-entropy ob- jective2. Other hyper parameters follow [Devlin et al., 2018]. 4 Evaluation We evaluate the effectiveness of our model by comparing with several state-of-the-art baseline methods. Accuracy (%) of choosing the right ending is used as the evaluation metric. 4.1 Baselines We compare our model with the following baseline methods. To the best of our knowledge, most of the recent advances on Story Cloze Test are listed here. • DSSM [Huang et al., 2013] measures the cosine similarity between the vector representations of the story context and the candidate endings. • CGAN [Wang et al., 2017] encodes the story context and each ending by GRUs and computes an entail score. • HBiLSTM [Cai et al., 2017] uses LSTM to encode both the word sequence and the sentence sequence, and get the modified context vector via attention. 
Msap [Schwartz et al., 2017] trains a logistic regression model which uses language model and stylistic features, including length, word n-grams, and character n-grams. • HCM [Chaturvedi et al., 2017] trains a logistic regres- sion model which combines features from event sequence, sentiment-trajectory, and topic consistency of the story. • HintNet [Zhou et al., 2019] exploits the hints which can be obtained by comparing two candidate endings to help select the correct story ending. • SeqMANN [Li et al., 2018a] uses a multi-attention neu- ral network and introduces semantic sequence information extracted from SemLM as external knowledge. • ISCK [Chen et al., 2019] is a neural model that integrates narrative sequence, sentiment evolution, and commonsense knowledge. • GPT [Radford et al., 2018] and BERT [Devlin et al., 2018] both can solve the SCT by pre-training a language model using a multilayer transformer on open domain un- labeled corpora, followed by discriminative fine-tuning. 4.2 Dataset To evaluate the effectiveness of our method, we experiment on two-version SCT datasets. SCT v1.0 [Mostafazadeh et 2All of our experiments are based https://github.com/huggingface/pytorch-pretrained-BERT. on Methods BERTBASE (multilingual, uncased) BERTBASE (multilingual, cased) BERTBASE (monolingual, cased) BERTBASE (monolingual, uncased) BERTLARGE (monolingual, uncased) BERTLARGE (monolingual, cased) Accuracy (%) 75.9 80.2 87.4 88.1 89.2 90.0 Table 2: Experimental results with all the publicly released pre- trained BERT models on SCT v1.0 test dataset. ‘Uncased’ means all words in the training corpus will be transformed into lower case form. ‘Cased’ means keeping all the words in their original form. al., 2016] is the widely used version. It contains 98,162 five-sentence coherent stories in the training dataset (a large unlabeled stories dataset), 1,871 four-sentence story contexts along with a right ending and a wrong ending in the devel- opment and test datasets, respectively. Here we only use the development and test datasets, and split development set into 1,771 instances for training and 100 instances for de- velopment purposes. SCT v1.5 [Sharma et al., 2018] is a recently released revised version in order to overcome the human-authorship biases [Schwartz et al., 2017] discovered in SCT v1.0. It contains 1,571 four-sentence story contexts along with a right ending and a wrong ending in the devel- opment and the blind test datasets, respectively. Here we use the 1,871 SCT v1.0 test dataset for training purpose. Actually, with the released SCT v1.5 dataset, this paper treats the SCT v1.0 as a development dataset, while treats the whole SCT v1.5 as the real test dataset. The detailed dataset statistics are shown in Table 1. 5 Results and Analysis There are several pre-trained BERT models available to the community [Devlin et al., 2018]. They differ in how many layers and parameters are used in the model (the basic ver- sion has 12-layer transformer blocks, 768 hidden-size, and 12 self-attention heads, totally 110M parameters; the large ver- sion has 24-layer transformer blocks, 1024 hidden-size, and 16 self-attention heads, totally 340M parameters), and what kind of datasets are used to pre-train the model (multilingual or monolingual). We first conduct several comparative exper- iments on SCT v1.0 dataset to study the effectiveness of dif- ferent BERT versions. Results are shown in Table 2. We find that the two multilingual models perform much worse than the monolingual models. 
An uncased BERTBASE performs better than the cased BERTBASE, but a cased BERTLARGE is better than the uncased BERTLARGE. The reasons are that the multilingual BERT model doesn’t improve the performance on the monolingual SCT english dataset; the BERTLARGE model can handle a larger cased vocabulary with much more parameters but the BERTBASE model cannot. In the following experiments, BERTBASE refers to the uncased monolingual version of BERTBASE model, and BERTLARGE refers to the cased monolingual version of BERTLARGE model. 5.1 Overall Results Table 3 shows the overall experimental results on SCT v1.0 test dataset. The best previously reported result is from Method DSSM [Huang et al., 2013] CGAN [Wang et al., 2017] HBiLSTM [Cai et al., 2017] Msap [Schwartz et al., 2017] HCM [Chaturvedi et al., 2017] HintNet [Zhou et al., 2019] SeqMANN [Li et al., 2018a] GPT [Radford et al., 2018] ISCK [Chen et al., 2019] BERTBASE (Our Implementation) BERTLARGE (Our Implementation) BERTBASE + SNLI (Ours) BERTBASE + IMDB (Ours) BERTBASE + SWAG (Ours) BERTBASE + Twitter (Ours) BERTBASE + MC MNLI (Ours) BERTBASE + MNLI (Ours) BERTLARGE + MNLI (Ours) Human [Mostafazadeh et al., 2016] Accuracy (%) 58.5 60.9 74.7 75.2 77.6 79.2 84.7 86.5 87.6 88.1 90.0 85.9 87.6 88.6 88.7 89.5 90.6 91.8 100.0 Table 3: Experimental results of story ending prediction on SCT v1.0 test dataset. Differences between our best method and all baseline methods are significant (p < 0.01) using t-test. BERTLARGE + MNLI also gets the SOTA performance of 90.3% on the newly re- leased SCT v1.5 blind test dataset, which is not shown in this table. ISCK [Chen et al., 2019], which is an accuracy of 87.6%. We implemented the same BERT model as [Devlin et al., 2018] and got the best baseline results on SCT, which are 88.1% and 90.0% from BERTBASE and BERTLARGE models, respectively. This is because the BERT model can obtain gen- eral language knowledge from pre-training. From Table 3 we can also find that most of our transfer tasks can further im- prove BERT, except SNLI and IMDB. The MNLI-enhanced BERT models achieved the best accuracies of 90.5% and 91.8%, which are the new state-of-the-art performances on SCT v1.0. This is because our method can learn task-specific knowledge from transfer tasks, which is helpful for SCT, and MNLI is the most informative task. Table 3 also shows some interesting results. Comparing the SNLI and MNLI-enhanced BERT models, we find that though NLI can help SCT intuitively, the data source still plays an important role. MNLI is a multi-genre dataset, while SNLI data is from the specific image caption domain. Hence, the MNLI tends to help the open domain SCT but SNLI does not. Comparing the IMDB and Twitter-enhanced BERT mod- els, we can get similar conclusions that the open domain Twit- ter can improve the performance of BERT on SCT, while the specific domain IMDB hurts the model’s performance. Com- paring the MC MNLI and MNLI-enhanced BERT models, we find that MNLI helps more for SCT (multiple choice task). Hence, we can get the conclusion that the transfer task doesn’t need to have the same problem definition as the target task. This is mainly because the model can get a better knowledge about entailment when NLI is formulated as a classification task (MNLI), other than a multiple choice task (MC MNLI). 5.2 Comparative Experiments Several comparative experiments are conducted to investigate some fine-grained aspects. 
Method BERTBASE (ending only) BERTBASE (4) BERTBASE (3,4) BERTBASE (2,3,4) BERTBASE (1,2,3,4) BERTBASE + MNLI (ending only) BERTBASE + MNLI (4) BERTBASE + MNLI (3,4) BERTBASE + MNLI (2,3,4) BERTBASE + MNLI (1,2,3,4) Accuracy (%) 77.9 86.4 87.4 87.7 88.1 78.3 88.5 88.7 88.6 90.6 Table 4: Experimental results with different sentences combination as the story context. (3,4) means only the third and the fourth sen- tences are used as the story context, and other settings are similar. Method BERTBASE BERTBASE + MNLI (EN only) BERTBASE + MNLI (NC only) BERTBASE + MNLI (EC only) BERTBASE + MNLI Accuracy (%) 88.1 86.2 88.8 89.2 90.6 Table 5: Experimental results with different natural language in- ference categories on SCT v1.0 test dataset. (EN only) means this setting only considers the Entailment and Neutral realtions, with the Contradiction relation filtered out. Whether All Four Sentences in the Story Context Are Useful for BERT to Choose the Right Ending? Different from NLI and SWAG, in which there are only two sentences in an instance, the SCT has a longer four-sentence context. We experiment to investigate whether the BERT- based models can make full use of the long story context. Experimental results are shown in Table 4. We find that all the sentences in the story context are useful and the BERT- based models can make full use of them to infer the correct ending. This is mainly because the BERT-based models have the ability to handle long distance dependency with the self- attention mechanism [Vaswani et al., 2017]. Explore the Effectiveness of Different MNLI Categories Our experiments suggest that we can achieve the best perfor- mance when using MNLI as the transfer task. But we also want to know which category among the Entailment, Neutral and Contradiction is the most informative for SCT. The re- sults are shown in Table 5. We find that the contradiction re- lation is the most informative one, then entailment, and neu- tral the least. It’s interesting that the performance even drops a lot without the contradiction. The reason is that the abil- ity to recognize conflict endings enables the model to pick up the right ending more easily. Finally, the best performance is achieved by using all three relations together, demonstrating that each relation can help SCT from different aspects. 5.3 Discussion and Analysis The MNLI-enhanced BERT models push the performance to 91.8% and 90.3% accuracies on SCT v1.0 and SCT v1.5 test datasets, respectively, which are much closer to the human performance (100%). Though very effective in natural lan- guage understanding, there are still about 9% of the test in- stances that the BERT-based models cannot handle. First, we are curious about why MNLI can improve SCT with such a large margin. Hence, we trained a model on the MNLI dataset and directly applied it to solve the SCT task. Surprisingly, this simple method got a relatively high accuracy of 63.4% on SCT v1.0 test set, even better than the DSSM and CGAN models which were trained on the SCT dataset. This demonstrates the high correlation between MNLI and SCT. We argue that the SCT task can be seen as a more complicated NLI task, where the premise is a more complex four-sentence evolving context. The goal is to find the right ending that can be entailed with a higher probability than the wrong ending, with respect to the story context. 
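The zero-shot probe described above (train on MNLI only, then apply the model directly to SCT) can be sketched as follows. This is not the authors' released code: the checkpoint path is a placeholder for whatever MNLI-fine-tuned BERT is available locally, and the entailment label index depends on that checkpoint's label ordering.

```python
# Hedged sketch of the zero-shot MNLI -> SCT probe: score each candidate
# ending by the entailment probability assigned to (story context, ending)
# and keep the higher one. MNLI_CHECKPOINT and ENTAILMENT_INDEX are
# placeholders/assumptions, not values from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MNLI_CHECKPOINT = "path/to/bert-finetuned-on-mnli"  # placeholder
ENTAILMENT_INDEX = 0  # check the checkpoint's id2label mapping

tokenizer = AutoTokenizer.from_pretrained(MNLI_CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(MNLI_CHECKPOINT)
model.eval()

def entailment_prob(premise, hypothesis):
    enc = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**enc).logits.softmax(dim=-1)
    return probs[0, ENTAILMENT_INDEX].item()

def choose_ending(context, ending_a, ending_b):
    # The four-sentence story context plays the role of a (long) premise.
    if entailment_prob(context, ending_a) >= entailment_prob(context, ending_b):
        return ending_a
    return ending_b
```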
Error analysis of the unsolved instances shows that BERT- based models make a lot of mistakes when one of the two candidate endings is about mental state while the other one describes the next action. This is mainly because BERT- based models are good at distinguishing from two homoge- neous endings (e.g. both describe human mental states). But they cannot handle two heterogeneous endings well. Better models will be needed to handle this properly. Here we try to answer the above two challenging questions: • How can the pre-training framework better utilize su- pervised knowledge: One way is to add a second pre- training stage to integrate knowledge from existing super- vised tasks, like what the STILTs [Phang et al., 2018] and TransBERT do. But this method can only exploit one sin- gle supervised task each time. Another way is to pre-train the transfer tasks in a multi-task learning manner [Liu et al., 2019] (e.g. train MNLI, Twitter, and SWAG simultane- ously). But it’s unknown whether this multi-task learning manner can bring more improvement to SCT, even if each of the three tasks is helpful. We leave this as future work. • What basic rules need to follow to find appropriate super- vised knowledge for a target task: First, the transfer task and the target task need to be semantically associated with each other and share common knowledge between them. This knowledge can be exploited to solve both of them. Second, this paper explores transferring knowledge from different supervised tasks to SCT, showing that a specific domain dataset (SNLI) is not sufficient for improving an open domain target task (SCT), even though they are se- mantically associated with each other. Third, the transfer task doesn’t need to have the same problem definition as the target task. A classification transfer task (MNLI) can help a multiple choice target task (SCT). 6 Related Work The Story Cloze Test Story Cloze Test [Mostafazadeh et al., 2016] is a task for evaluating story understanding. Pre- vious methods on this task can be roughly categorized into feature-based methods [Schwartz et al., 2017; two lines: Chaturvedi et al., 2017] and neural models [Cai et al., 2017]. Feature-based methods for SCT [Schwartz et al., 2017] adopted shallow semantic features, such as n-grams and POS tags, and trained a linear regression model to determine whether a candidate ending is plausible. HCM [Chaturvedi et al., 2017] further integrated event, sentiment and topic into feature-based methods. Neural models [Cai et al., 2017; Zhou et al., 2019] for SCT learn embeddings for the story context and candidate endings, and select the right ending by computing the embeddings’ similarity. SeqMANN [Li et al., 2018a] integrated external knowledge into a multi-attention network. GPT [Radford et al., 2018] pre-trained a transformer language model and fine- tuned the model to solve SCT. ISCK [Chen et al., 2019] used a neural model that integrated narrative sequence, sentiment evolution, and commonsense knowledge. Instead of choosing the right ending, several previous studies aimed to directly generate a reasonable ending [Li et al., 2018c]. Different from the previous commonsense models, we try to incorporate knowledge from other supervised tasks into the most advanced BERT representation model. Learning Universal Language Representations Language model pre-training has shown to be very effective for learn- ing universal language representations. 
Among these mod- els, ELMo [Peters et al., 2018] and ULMFiT [Howard and Ruder, 2018] used a BiLSTM architecture, while GPT [Rad- ford et al., 2018] and BERT [Devlin et al., 2018] utilized the transformer architecture [Vaswani et al., 2017]. Unlike most earlier approaches, such as ELMo, where the weights of the encoder were frozen after pre-training, ULMFiT, GPT and BERT jointly fine-tuned the encoder and task-specific param- eters on the downstream tasks. STILTs [Phang et al., 2018] fine-tuned a GPT model on some intermediate tasks to get better performance on the GLUE [Wang et al., 2018] benchmark. However, they gave little analysis of this transfer mechanism. Take SCT as an ex- ample, we give some helpful suggestions and our insights on how to select transfer tasks. Transfer Learning and Multi-task Learning Transfer learning [Pan and Yang, 2009] is widely adopted in the NLP community, such as dialogue system [Mo et al., 2018] and text style transfer [Fu et al., 2018]. This work is also re- lated to multi-task learning [Liu et al., 2015], where multiple tasks were jointly trained to get an overall performance im- provement. MT-DNN [Liu et al., 2019] extended multi-task learning by incorporating a pre-trained BERT model, which is very close to the work of this paper. 7 Conclusion In this paper, we present a three-stage training framework TransBERT, which can transfer not only general language knowledge from large-scale unlabeled data but also specific kinds of knowledge from various semantically associated su- pervised tasks for a target task, such as SCT. This training framework can enable a better and task-specific initializa- tion for different target tasks, which is superior to the widely used two-stage pre-training and fine-tuning framework. The MNLI-enhanced BERT model pushes the SCT v1.0 task to 91.8% accuracy, which is much closer to human performance. It also gets the SOTA performance of 90.3% on SCT v1.5. Acknowledgments This work is supported by the National Natural Science Foundation of China via grants 61632011, 61702137 and 61772153. Thanks to the reviewers’ insightful comments. # References [Bowman et al., 2015] Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. In EMNLP, pages 632–642, 2015. [Cai et al., 2017] Zheng Cai, Lifu Tu, and Kevin Gimpel. Pay attention to the ending: Strong neural baselines for the roc story cloze task. In ACL, pages 616–622, 2017. [Chaturvedi et al., 2017] Snigdha Chaturvedi, Haoruo Peng, and Dan Roth. Story comprehension for predicting what happens next. In EMNLP, pages 1603–1614, 2017. [Chen et al., 2019] Jiaao Chen, Jianshu Chen, and Zhou Yu. Incorporating structured commonsense knowledge in story completion. AAAI, 2019. [Devlin et al., 2018] Jacob Devlin, Ming-Wei Chang, Ken- ton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805, 2018. [Fu et al., 2018] Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. Style transfer in text: Ex- ploration and evaluation. In AAAI, 2018. [Howard and Ruder, 2018] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text clas- sification. In ACL, volume 1, pages 328–339, 2018. [Huang et al., 2013] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. Learning deep structured semantic models for web search using click- through data. In CIKM, pages 2333–2338. ACM, 2013. 
[Li et al., 2018a] Qian Li, Ziwei Li, Jin-Mao Wei, Yanhui Gu, Adam Jatowt, and Zhenglu Yang. A multi-attention based neural network with external knowledge for story ending predicting task. In Coling, pages 1754–1762, 2018. [Li et al., 2018b] Zhongyang Li, Xiao Ding, and Ting Liu. Constructing narrative event evolutionary graph for script event prediction. In IJCAI, pages 4201–4207, 2018. [Li et al., 2018c] Zhongyang Li, Xiao Ding, and Ting Liu. Generating reasonable and diversified story ending using sequence to sequence model with adversarial training. In Coling, pages 1033–1043. ACL, August 2018. [Liu et al., 2015] Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-yi Wang. Representation learning using multi-task deep neural networks for seman- tic classification and information retrieval. pages 912–921, 2015. [Liu et al., 2019] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural net- works for natural language understanding. arXiv preprint arXiv:1901.11504, 2019. [Maas et al., 2011] Andrew L Maas, Raymond E Daly, Pe- ter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In ACL, pages 142–150. ACL, 2011. [Mo et al., 2018] Kaixiang Mo, Yu Zhang, Shuangyin Li, Ji- ajun Li, and Qiang Yang. Personalizing a dialogue system with transfer reinforcement learning. In AAAI, 2018. Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. A corpus and cloze evaluation for deeper understanding of commonsense stories. NAACL, pages 740–750, 2016. [Pan and Yang, 2009] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. TKDE, 22(10):1345–1359, 2009. [Peters et al., 2018] Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representa- tions. In NAACL, volume 1, pages 2227–2237, 2018. [Phang et al., 2018] Jason Phang, Thibault F´evry, and Samuel R Bowman. Sentence encoders on stilts: Supple- mentary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088, 2018. [Radford et al., 2018] Alec Radford, Karthik Narasimhan, Improving language Tim Salimans, and Ilya Sutskever. understanding by generative pre-training. 2018. [Schwartz et al., 2017] Roy Schwartz, Maarten Sap, Ioan- nis Konstas, Leila Zilles, Yejin Choi, and Noah A Smith. Story cloze task: Uw nlp system. In LSDSem, pages 52– 55, 2017. [Sharma et al., 2018] Rishi Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. Tackling the In ACL, vol- story ending biases in the story cloze test. ume 2, pages 752–757, 2018. [Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, pages 5998–6008, 2017. [Wang et al., 2017] Bingning Wang, Kang Liu, and Jun Zhao. Conditional generative adversarial networks for In IJCAI, pages commonsense machine comprehension. 4123–4129, 2017. [Wang et al., 2018] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Glue: A multi-task benchmark and analysis platform for In EMNLP Workshop, natural language understanding. pages 353–355, 2018. [Williams et al., 2018] Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL, vol- ume 1, pages 1112–1122, 2018. 
[Zellers et al., 2018] Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP, pages 93–104, 2018.

[Zhou et al., 2019] Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. Story ending selection by finding hints from pairwise candidate endings. TASLP, 2019.
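To make the three-stage recipe summarized in the conclusion above (pre-trained language model, then a semantically related supervised task, then the target task) more concrete, the following is a minimal sketch of the idea. It is not the authors' released implementation: it assumes the Hugging Face `transformers` and `torch` packages, and the `load_batches()` helper, checkpoint names and label counts are illustrative placeholders only.

```python
# Illustrative sketch of a three-stage transfer recipe (assumptions noted above).
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def fine_tune(model, batches, lr=2e-5, epochs=3):
    """Generic fine-tuning loop: updates the encoder and the task head jointly."""
    optimizer = AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in batches:
            out = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Stage 1: general language knowledge comes from the released pre-trained weights.
# Stage 2: transfer from a related supervised task (e.g. a 3-way NLI task such as MNLI).
nli_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)
nli_model = fine_tune(nli_model, load_batches("mnli", tokenizer))   # hypothetical data loader

# Stage 3: keep the transferred encoder, attach a fresh task head, and fine-tune
# on the target task (e.g. binary next-ending classification for SCT).
encoder_state = {k: v for k, v in nli_model.state_dict().items() if not k.startswith("classifier")}
target_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
target_model.load_state_dict(encoder_state, strict=False)           # reuse encoder, new head
target_model = fine_tune(target_model, load_batches("sct", tokenizer))
```

The only difference from the usual two-stage setup is the extra supervised fine-tuning pass in the middle, which gives the target task a task-specific initialization of the encoder.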
{ "id": "1811.01088" }
1905.06290
A Surprisingly Robust Trick for Winograd Schema Challenge
The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 strongly improves when fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more robust on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
http://arxiv.org/pdf/1905.06290
Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz
cs.CL
Appeared as part of the ACL 2019 conference
null
cs.CL
20190515
20190804
# A Surprisingly Robust Trick for the Winograd Schema Challenge

Vid Kocijan1, Ana-Maria Cretu2, Oana-Maria Camburu1,3, Yordan Yordanov1, Thomas Lukasiewicz1,3 1University of Oxford 2Imperial College London 3Alan Turing Institute, London [email protected], [email protected]

# Abstract

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 consistently and robustly improves when fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more accurate on the “complex” subsets of WSC273, introduced by Trichelair et al. (2018).

# Introduction

The Winograd Schema Challenge (WSC) (Levesque et al., 2012, 2011) was introduced for testing AI agents for commonsense knowledge. Here, we refer to the most popular collection of such sentences as WSC273, to avoid confusion with other slightly modified datasets, such as PDP60 (Davis et al., 2017) and the Definite Pronoun Resolution dataset (Rahman and Ng, 2012), denoted WSCR in the sequel. WSC273 consists of 273 instances of the pronoun disambiguation problem (PDP) (Morgenstern et al., 2016). Each is a sentence (or two) with a pronoun referring to one of two or more nouns; the goal is to predict the correct one. The task is challenging, since WSC examples are constructed to require human-like commonsense knowledge and reasoning. The best known solutions use deep learning with an accuracy of 63.7% (Opitz and Frank, 2018; Trinh and Le, 2018). The problem is difficult to solve not only because of the commonsense reasoning challenge, but also because the small existing datasets make it difficult to train neural networks directly on the task.

Neural networks have proven highly effective in natural language processing (NLP) tasks, outperforming other machine learning methods and even matching human performance (Hassan et al., 2018; Nangia and Bowman, 2018). However, supervised models require many per-task annotated training examples for a good performance. For tasks with scarce data, transfer learning is often applied (Howard and Ruder, 2018; Johnson and Zhang, 2017), i.e., a model that is already trained on one NLP task is used as a starting point for other NLP tasks. A common approach to transfer learning in NLP is to train a language model (LM) on large amounts of unsupervised text (Howard and Ruder, 2018) and use it, with or without further fine-tuning, to solve other downstream tasks. Building on top of an LM has proven to be very successful, producing state-of-the-art (SOTA) results (Liu et al., 2019; Trinh and Le, 2018) on benchmark datasets like GLUE (Wang et al., 2019) or WSC273 (Levesque et al., 2011). In this work, we first show that fine-tuning existing LMs on WSCR is a robust method of improving the capabilities of the LM to tackle WSC273 and WNLI.
This is surprising, because previous attempts to generalize from the WSCR dataset to WSC273 did not achieve a major im- provement (Opitz and Frank, 2018). Secondly, we introduce a method for generating large-scale WSC-like examples. We use this method to create a 2.4M dataset from English Wikipedia1, which we further use together with WSCR for fine-tuning the pre-trained BERT LM (Devlin et al., 2018). The dataset and the code have been made pub- 1https://dumps.wikimedia.org/enwiki/ dump id: enwiki-20181201 licly available2. We achieve accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous best solutions by 8.8% and 9.6%, respec- tively. # 2 Background This section introduces the main LM used in our work, BERT (Devlin et al., 2018), followed by a detailed description of WSC and its relaxed form, the Definite Pronoun Resolution problem. BERT. Our work uses the pre-trained Bidirec- tional Encoder Representations from Transform- ers (BERT) LM (Devlin et al., 2018) based on the transformer architecture (Vaswani et al., 2017). Due to its high performance on natural language understanding (NLU) benchmarks and the sim- plicity to adapt its objective function to our fine- tuning needs, we use BERT throughout this work. BERT is originally trained on two tasks: masked token prediction, where the goal is to predict the missing tokens from the input sequence, and next sentence prediction, where the model is given two sequences and asked to predict whether the second sequence follows after the first one. We focus on the first task to fine-tune BERT us- ing WSC-like examples. We use masked token prediction on a set of sentences that follow the WSC structure, where we aim to determine which of the candidates is the correct replacement for the masked pronoun. Winograd Schema Challenge. Having introdu- ced the goal of the Winograd Schema Challenge in Section 1, we illustrate it with the following ex- ample: The trophy didn’t fit into the suitcase because it was too [large/small]. Question: What was too [large/small]? Answer: the trophy / the suitcase The pronoun “it” refers to a different noun, based on the word in the brackets. To correct- ly answer both versions, one must understand the meaning of the sentence and its relation to the changed word. More specifically, a text must meet the following criteria to be considered for a Wino- grad Schema (Levesque et al., 2011): 1. Two parties must appear in the text. 2The code can be found at https://github.com/ >The code can be found at https://github.com/ vid-koci/bert-—commonsense. vid-koci/bert-commonsense. The dataset and the models can be obtained from https://ora.ox.ac.uk/objects/uuid: 9b34602b-c982-4b49-b4f4-6555b5a82c3d 2. A pronoun or a possessive adjective appears It in the sentence and refers to one party. would be grammatically correct if it referred to the other. 3. The question asks to determine what party the pronoun or the possessive adjective refers to. 4. A “special word” appears in the sentence. When switched to an “alternative word”, the sentence remains grammatically correct, but the referent of the pronoun changes. Additionally, commonsense reasoning must be required to answer the question. A detailed analysis by Trichelair et al. (2018) shows that not all WSC273 examples are equally difficult. They introduce two complexity mea- sures (associativity and switchability) and, based on them, refine evaluation metrics for WSC273. 
In associative examples, one of the parties is more commonly associated with the rest of the question than the other one. Such examples are seen as “easier” than the rest and represent 13.5% of WSC273. The remaining 86.5% of WSC273 is called non-associative. 47% of the examples are “switchable”, because the roles of the parties can be changed, and ex- amples still make sense. A model is tested on the original, “unswitched” switchable subset and on the same subset with switched parties. The con- sistency between the two results is computed by comparing how often the model correctly changes the answer when the parties are switched. Definite Pronoun Resolution. Since collecting examples that meet the criteria for WSC is hard, Rahman and Ng (2012) relax the criteria and construct the Definite Pronoun Resolution (DPR) dataset, following the structure of WSC, but also accepting easier examples. The dataset, referred throughout the paper as WSCR, is split into a train- ing set with 1322 examples and test set with 564 examples. Six examples in the WSCR training set reappear in WSC273. We remove these examples from WSCR. We use the WSCR training and test sets for fine-tuning the LMs and for validation, re- spectively. WNLI. One of the 9 GLUE benchmark tasks (Wang et al., 2019), WNLI is very similar to the WSC273 dataset, but is phrased as an entailment problem instead. A WSC schema is given as a premise. The hypothesis is constructed by extract- ing the sentence part where the pronoun is, and replacing the pronoun with one candidate. The la- bel is 1, if the candidate is the correct replacement, and 0, otherwise. # 3 Related Work There have been several attempts at solving WSC273. Previous work is based on Google queries for knowledge (Emami et al., 2018) (58%), sequence ranking (Opitz and Frank, 2018) (63%), and using an ensemble of LMs (Trinh and Le, 2018) (63%). A critical analysis (Trichelair et al., 2018) showed that the main reason for success when us- ing an ensemble of LMs (Trinh and Le, 2018) was largely due to imperfections in WSC273, as dis- cussed in Section 2. The only dataset similar to WSC273 is an eas- ier but larger (1886 examples) variation published by Rahman and Ng (2012) and earlier introduced as WSCR. The sequence ranking approach uses WSCR for training and attempts to generalize to WSC273. The gap in performance scores between WSCR and WSC273 (76% vs. 63%) implies that examples in WSC273 are much harder. We note that Opitz and Frank (2018) do not report remov- ing the overlapping examples between WSCR and WSC273. Another important NLU benchmark is GLUE (Wang et al., 2019), which gathers 9 tasks and is commonly used to evaluate LMs. The best score has seen a huge jump from 0.69 to over 0.82 in a single year. However, WNLI is a notoriously diffi- cult task in GLUE and remains unsolved by the ex- isting approaches. None of the models have beaten the majority baseline at 65.1, while human perfor- mance lies at 95.9 (Nangia and Bowman, 2018). # 4 Our Approach WSC Approach. We approach WSC by fine- tuning the pre-trained BERT LM (Devlin et al., 2018) on the WSCR training set and further on a very large Winograd-like dataset that we intro- duce. Below, we present our fine-tuning objective function and the introduced dataset. Given a training sentence s, the pronoun to be resolved is masked out from the sentence, and the LM is used to predict the correct candidate in the place of the masked pronoun. Let c1 and c2 be the two candidates. 
BERT for Masked Token Prediction is used to find P(c1|s) and P(c2|s). If a candidate consists of several tokens, the corresponding number of [MASK] tokens is used in the masked sentence. Then, log P(c|s) is computed as the average of the log-probabilities of each composing token. If c1 is correct, and c2 is not, the loss is:

L = − log P(c1|s) + α · max(0, log P(c2|s) − log P(c1|s) + β),   (1)

where α and β are hyperparameters.

MaskedWiki Dataset. To get more data for fine-tuning, we automatically generate a large-scale collection of sentences similar to WSC. More specifically, our procedure searches a large text corpus for sentences that contain (at least) two occurrences of the same noun. We mask the second occurrence of this noun with the [MASK] token. Several possible replacements for the masked token are given, one for each noun in the sentence different from the replaced noun. We thus obtain examples that are structurally similar to those in WSC, although we cannot ensure that they fulfill all the requirements (see Section 2).

To generate such sentences, we choose the English Wikipedia as source text corpus, as it is a large-scale and grammatically correct collection of text with diverse information. We use the Stanford POS tagger (Manning et al., 2014) for finding nouns. We obtain a dataset with approximately 130M examples. We downsample the dataset uniformly at random to obtain a dataset of manageable size. After downsampling, the dataset consists of 2.4M examples. All experiments are conducted with this downsampled dataset only. To determine the quality of the dataset, 200 random examples are manually categorized into 4 categories:

• Unsolvable: the masked word cannot be unambiguously selected with the given context. Example: Palmer and Crenshaw both used Wilson 8802 putters , with [MASK] ’s receiving the moniker “ Little Ben ” due to his proficiency with it . [Palmer/Crenshaw]

• Hard: the answer is not trivial to figure out. Example: At the time of Plath ’s suicide , Assia was pregnant with Hughes ’s child , but she had an abortion soon after [MASK] ’s death . [Plath/Assia]

• Easy: the alternative sentence is grammatically incorrect or is very visibly an inferior choice. Example: The syllables are pronounced strongly by Gaga in syncopation while her vibrato complemented Bennett’s characteristic jazz vocals and swing . Olivier added , “ [MASK] ’s voice , when stripped of its bells and whistles, showcases a timelessness that lends itself well to the genre . ” [Gaga/syncopation]

• Noise: the example is a result of a parsing error.

In the analyzed subset, 8.5% of examples were unsolvable, 45% were hard, 45.5% were easy, and 1% fell into the noise category.

WNLI Approach. Models are additionally tested on the test set of the WNLI dataset. To use the same evaluation approach as for the WSC273 dataset, we transform the examples in WNLI from the premise–hypothesis format into the masked-words format. Since each hypothesis is just a substring of the premise with the pronoun replaced by the candidate, finding the replaced pronoun and one candidate can be done by finding the hypothesis as a substring of the premise. All other nouns in the sentence are treated as alternative candidates. The Stanford POS tagger (Manning et al., 2014) is used to find the nouns in the sentence. The probability for each candidate is computed to determine whether the candidate in the hypothesis is the best match.
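As an illustration of the candidate scoring and the margin loss in Eq. (1), the following is a minimal Python sketch, not the authors' released code. It assumes the Hugging Face `transformers` and `torch` packages, simplifies the multi-token handling, and scores in evaluation mode; during actual fine-tuning the same log-probabilities would be computed with gradients enabled. The default α = 20 and β = 0.2 mirror the hyperparameter values reported later in the evaluation section.

```python
# Sketch of masked-LM candidate scoring and the Eq. (1) margin loss (assumptions above).
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

def candidate_log_prob(sentence_with_mask: str, candidate: str) -> torch.Tensor:
    """Average log-probability of the candidate's word-pieces in the [MASK] slot."""
    cand_ids = tokenizer.encode(candidate, add_special_tokens=False)
    # one [MASK] token per word-piece of the candidate
    masked = sentence_with_mask.replace("[MASK]", " ".join(["[MASK]"] * len(cand_ids)))
    enc = tokenizer(masked, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**enc).logits[0]                      # (seq_len, vocab)
    log_probs = torch.log_softmax(logits[mask_pos], dim=-1)  # (n_pieces, vocab)
    return log_probs[torch.arange(len(cand_ids)), torch.tensor(cand_ids)].mean()

def margin_loss(logp_correct, logp_wrong, alpha=20.0, beta=0.2):
    """Eq. (1): NLL of the correct candidate plus a hinge term on the wrong one."""
    zero = torch.zeros(())
    return -logp_correct + alpha * torch.max(zero, logp_wrong - logp_correct + beta)

s = "The trophy didn't fit into the suitcase because [MASK] was too large ."
print(candidate_log_prob(s, "the trophy") > candidate_log_prob(s, "the suitcase"))
```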
Only the test set of the WNLI dataset is used, because it does not overlap with WSC273. We do not train or validate on the WNLI training and validation sets, because some of the examples share the premise. Indeed, when upper rephrasing of the examples is used, the training, validation, and test sets start to overlap. # 5 Evaluation In this work, we use the PyTorch implementa- tion3 of Devlin et al.’s (2018) pre-trained model, BERT-large. To obtain BERT WIKI, we train on MaskedWiki starting from the pre-trained BERT. The training procedure differs from the training of BERT (Devlin et al., 2018) in a few points. The model is trained with a single epoch of the MaskedWiki dataset, using batches of size 64 (dis- tributed on 8 GPUs), Adam optimizer, a learn- ing rate of 5.0 · 10−6, and hyperparameter val- 3https://github.com/huggingface/ pytorch-pretrained-BERT ues α = 20 and β = 0.2 in the loss function (Eq. (1)). The values were selected from α ∈ {5, 10, 20} and β ∈ {0.1, 0.2, 0.4} and learning rate from {3 · 10−5, 1 · 10−5, 5 · 10−6, 3 · 10−6} using grid search. To speed up the hyperparame- ter search, the training (for hyperparameter search only) is done on a randomly selected subset of size 100, 000. The performance is then compared on the WSCR test set. Both BERT and BERT WIKI are fine-tuned on the WSCR training dataset to create BERT WSCR and BERT WIKI WSCR. The WSCR test set was used as the validation set. The fine-tuning procedure was the same as the training procedure on MaskedWiki, except that 30 epochs were used. The model was validated af- ter every epoch, and the model with highest per- formance on the validation set was retained. The hyperparameters α and β and learning rate were selected with grid search from the same sets as for MaskedWiki training. For comparison, experiments are also con- ducted on two other LMs, BERT-base (BERT with less parameters) and General Pre-trained Trans- former (GPT) by Radford et al. (2018). The train- ing on BERT-base was conducted in the same way as for the other models. When using GPT, the probability of a word belonging to the sentence P(c|s) is computed as partial loss in the same way as by Trinh and Le (2018). Due to WSC’s “special word” property, exam- ples come in pairs. A pair of examples only differs in a single word (but the correct answers are dif- ferent). The model BERT WIKI WSCR no pairs is the BERT WIKI model, fine-tuned on WSCR, where only a single example from each pair is retained. The size of WSCR is thus halved. The model BERT WIKI WSCR pairs is obtained by fine-tuning BERT WIKI on half of the WSCR dataset. This time, all examples in the subset come in pairs, just like in the unreduced WSCR dataset. We evaluate all models on WSC273 and the WNLI test dataset, as well as the various subsets of WSC273, as described in Section 2. The re- sults are reported in Table 1 and will be discussed next. Discussion. Firstly, we note that models that are fine-tuned on the WSCR dataset consistently out- perform their non-fine-tuned counterparts. The BERT WIKI WSCR model outperforms other lan- guage models on 5 out of 6 sets that they are com- BERT WIKI BERT WIKI WSCR BERT BERT WSCR BERT-base BERT-base WSCR GPT GPT WSCR BERT WIKI WSCR no pairs BERT WIKI WSCR pairs LM ensemble Knowledge Hunter WSC273 non-assoc. assoc. unswitched switched consist. 
WNLI 0.712 0.389 0.550 0.747 0.458 0.658 0.550 0.719 0.443 0.630 0.443 0.705 0.466 0.641 0.511 0.565 0.443 0.901 0.619 0.725 0.619 0.714 0.564 0.623 0.553 0.674 0.663 0.703 0.637 0.571 0.597 0.720 0.602 0.699 0.551 0.606 0.525 0.653 0.669 0.695 0.606 0.583 0.757 0.757 0.730 0.811 0.649 0.730 0.730 0.811 0.622 0.757 0.838 0.5 0.573 0.732 0.595 0.695 0.527 0.611 0.595 0.664 0.672 0.718 0.634 0.588 0.603 0.710 0.573 0.702 0.565 0.634 0.519 0.580 0.641 0.710 0.534 0.588 – – – – – – Table 1: Results on WSC273 and its subsets. The comparison between each language model and its WSCR-tuned model is given. For each column, the better result of the two is in bold. The best result in the column overall is underlined. Results for the LM ensemble and Knowledge Hunter are taken from Trichelair et al. (2018). All models consistently improve their accuracy when fine-tuned on the WSCR dataset. pared on. In comparison to the LM ensemble by Trinh and Le (2018), the accuracy is more consis- tent between associative and non-associative sub- sets and less affected by the switched parties. However, it remains fairly inconsistent, which is a general property of LMs. significance of MaskedWiki’s impact and its ap- plications to different tasks will be investigated. Furthermore, to further improve the results on WSC273, data-filtering procedures may be intro- duced to find harder WSC-like examples. Secondly, the results of BERT WIKI seem to in- dicate that this dataset alone does not help BERT. However, when additionally fine-tuned to WSCR, the accuracy consistently improves. the results of BERT WIKI no pairs and BERT WIKI pairs show that the existence of WSC-like pairs in the training data affects the per- formance of the trained model. MaskedWiki does not contain such pairs. # Acknowledgments This work was supported by the Alan Turing Insti- tute under the UK EPSRC grant EP/N510129/1, by the EPSRC grant EP/R013667/1, by the EPSRC studentship OUCS/EPSRC-NPIF/VK/ 1123106, and by an EPSRC Vacation Bursary. We also acknowledge the use of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1). # 6 Summary and Outlook # References This work achieves new SOTA results on the WSC273 and WNLI datasets by fine-tuning the BERT language model on the WSCR dataset and a newly introduced MaskedWiki dataset. The previ- ous SOTA results on WSC273 and WNLI are im- proved by 8.8% and 9.6%, respectively. To our knowledge, this is the first model that beats the majority baseline on WNLI. Ernest Davis, Leora Morgenstern, and Charles L. Or- tiz. 2017. The first Winograd Schema Challenge at IJCAI-16. AI Magazine, 38(3):97–98. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and BERT: Pre-training Kristina Toutanova. 2018. of deep bidirectional transformers for language understanding. Computing Research Repository, arXiv:1810.04805. We show that by fine-tuning on WSC-like data, the language model’s performance on WSC con- sistently improves. The consistent improvement of several language models indicates the robust- ness of this method. This is particularly surprising, because previous work (Opitz and Frank, 2018) implies that generalizing to WSC273 is hard. In future work, other uses and the statistical Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018. A knowledge hunting framework for common sense reasoning. Computing Research Repository, arXiv:1810.01375. 
Hany Hassan, Anthony Aue, Chang Chen, Vishal Jonathan Clark, Christian Feder- Chowdhary, mann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic Chinese to English news translation. Computing Re- search Repository, arXiv:1803.05567. Jeremy Howard and Sebastian Ruder. 2018. Fine-tuned language models for text classification. Computing Research Repository, arXiv:1801.06146. Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categoriza- tion. In Proceedings of ACL, pages 562–570. ACL. Hector J. Levesque, Ernest Davis, and Leora Mor- genstern. 2011. The Winograd Schema Challenge. AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, 46. Hector J. Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The Winograd Schema Challenge. In Proceedings of KR. AAAI Press. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019. Multi-task deep neural networks for natural language understanding. Computing Re- search Repository, arXiv:1901.11504. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55–60. Leora Morgenstern, Ernest Davis, and Charles L. Ortiz. 2016. Planning, executing and evaluating the Wino- grad Schema Challenge. AI Magazine. Nikita Nangia and Samuel R. Bowman. 2018. A con- servative human baseline estimate for GLUE: Peo- ple still (mostly) beat machines. Juri Opitz and Anette Frank. 2018. Addressing the Winograd Schema Challenge as a sequence rank- ing task. In Proceedings of the First International Workshop on Language Cognition and Computa- tional Models, pages 41–52. ACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Altaf Rahman and Vincent Ng. 2012. Resolving com- plex cases of definite pronouns: The Winograd Schema Challenge. In Proceedings of EMNLP. Paul Trichelair, Ali Emami, Jackie Chi Kit Cheung, Adam Trischler, Kaheer Suleman, and Fernando Diaz. 2018. On the evaluation of common-sense reasoning in natural language understanding. Com- puting Research Repository, arXiv:1811.01778. T. H. Trinh and Q. V. Le. 2018. A Simple Method for Commonsense Reasoning. Computing Research repository, arXiv:1806.02847. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Computing Research Repository, arXiv:1706.03762. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- In Pro- form for natural language understanding. ceedings of ICLR.
{ "id": "1810.04805" }
1905.05965
Autonomous Penetration Testing using Reinforcement Learning
Penetration testing (pentesting) involves performing a controlled attack on a computer system in order to assess its security. Although an effective method for testing security, pentesting requires highly skilled practitioners, and currently there is a growing shortage of skilled cyber security professionals. One avenue for alleviating this problem is to automate the pentesting process using artificial intelligence techniques. Current approaches to automated pentesting have relied on model-based planning; however, the cyber security landscape is rapidly changing, making it a challenge to maintain up-to-date models of exploits. This project investigated the application of model-free Reinforcement Learning (RL) to automated pentesting. Model-free RL has the key advantage over model-based planning of not requiring a model of the environment, instead learning the best policy through interaction with the environment. We first designed and built a fast, low-compute simulator for training and testing autonomous pentesting agents. We did this by framing pentesting as a Markov Decision Process with the known configuration of the network as states, the available scans and exploits as actions, and the reward determined by the value of machines on the network. We then used this simulator to investigate the application of model-free RL to pentesting. We tested the standard Q-learning algorithm using both tabular and neural network based implementations. We found that within the simulated environment both tabular and neural network implementations were able to find optimal attack paths for a range of different network topologies and sizes without having a model of action behaviour. However, the implemented algorithms were only practical for smaller networks and numbers of actions. Further work is needed in developing scalable RL algorithms and testing these algorithms in larger and higher fidelity environments.
http://arxiv.org/pdf/1905.05965
Jonathon Schwartz, Hanna Kurniawati
cs.CR, cs.AI, cs.LG
null
null
cs.CR
20190515
20190515
bo THE UNIVERSITY OF QUEENSLAND AUSTRALIA AUSTRALIA # Autonomous Penetration Testing using Reinforcement Learning # by # Jonathon Schwartz # School of Information Technology and Electrical Engineering, University of Queensland. # Submitted for the degree of Bachelor of Science (Honours) in the division of Computer Science. Date of Submission 16 th November, 2018 1 2 16th November, 2018 Prof Michael Brünig Head of School School of Information Technology and Electrical Engineering The University of Queensland St Lucia QLD 4072 Dear Professor Brünig, In accordance with the requirements of the Degree of Bachelor of Science (Honours) majoring in Computer Science in the School of Information Technology and Electrical Engineering, I submit the following thesis entitled “Autonomous Penetration Testing using Reinforcement Learning” The thesis was performed under the supervision of Dr Hanna Kurniawati. I declare that the work submitted in the thesis is my own, except as acknowledged in the text and footnotes, and that it has not previously been submitted for a degree at the University of Queensland or any other institution. Yours sincerely, ___________________ Jonathon Schwarz 3 4 # Acknowledgements I would like to acknowledge my supervisor, Dr Hanna Kurniawati, for her guidance and patience throughout this project as well as for the support she gave for the direction I took in the project. I would also like to acknowledge post-doctoral fellow in the robotics group, Troy McMahon, for his advice while working on this project. I would also like to thank the School of ITEE administrative staff for their help and coordination, which made this year much easier. 5 6 # Abstract Penetration testing involves performing a controlled attack on a computer system in order to assess it’s security. It is currently one of the key methods employed by organizations for strengthening their defences against cyber threats. However, network penetration testing requires a significant amount of training and time to perform well and presently there is a growing shortage of skilled cyber security professionals. One avenue for trying to solve this problem is to apply Artificial Intelligence (AI) techniques to the cyber security domain in order to automate the penetration testing process. Current approaches to automated penetration testing have relied on methods which require a model of the exploit outcomes, however the cyber security landscape is rapidly changing as new software and attack vectors are developed which makes producing and maintaining up-to-date models a challenge. To try and address the need for exploit models this project investigated the application of Reinforcement Learning (RL) to automated penetration testing. RL is an AI optimization technique that has the key advantage that it does not require a model of the environment in order to produce an attack policy, and instead learns the best policy through interaction with the environment. In the first stage of this study we designed and built a fast, light-weight and open-source network attack simulator that can be used to train and test autonomous agents for penetration testing. We did this by framing penetration testing as a Markov Decision Process (MDP) with the known configuration of the network as states, the available scans and exploits as actions, the reward determined by the value of machines on the network and using non-deterministic actions to model the outcomes of scans and exploits against machines. 
In the second stage of the project we used the network attack simulator to investigate the application of RL to penetration testing. We tested the standard Q-learning RL algorithm using both tabular and neural network based implementations. We found that within the simulated environment both tabular and neural network RL algorithms were able to find optimal attack paths for a range of different network topologies and sizes given only knowledge of the network topology and the set of available scans and exploits. This finding provided some justification for the use of RL for penetration testing. However, the implemented algorithms were only practical for smaller networks and numbers of actions and would not be able to scale to truly large networks, so there is still much room for improvement. This study was the first that the authors are aware of to apply reinforcement learning to automated penetration testing. The versatility of RL for solving MDPs where the model is unknown lends itself well to the changing nature of cyber security and could offer a valuable tool for reducing the workload on cyber security professionals. Further work into developing scalable RL algorithms and testing these algorithms in higher fidelity simulators will be the next steps required before RL is ready to be applied in commercial environments.

# Contents

- Acknowledgements
- Abstract
- List of Figures
- List of Tables
- Chapter 1: Introduction
- Chapter 2: Literature Review
  - 2.1 Penetration testing
  - 2.2 Tools used for penetration testing
  - 2.2 Automated pentesting: attack graphs
  - 2.3 Automated pentesting: using MDP
  - 2.4 Automated pentesting: modelling uncertainty with POMDP
  - 2.5 Automated pentesting: when the world model is unknown
- Chapter 3: The Network Attack Simulator
  - 3.1 Introduction
  - 3.2 Design
    - 3.2.1 The network model
    - 3.2.2 The Environment
  - 3.3 Implementation
  - 3.4 Results
  - 3.5 Discussion
  - 3.5 Conclusion
- Chapter 4: Penetration Testing using Reinforcement Learning
  - 4.1 Introduction
  - 4.2 Background
  - 4.3 Penetration testing using Reinforcement learning
  - 4.4 Experiments
  - 4.5 Results
  - 4.6 Discussion
  - 4.7 Conclusion
- Chapter 5: Conclusions
  - 5.1 Summary and conclusions
  - 5.2 Possible future work
- Appendix A: Program listing
  - A.1 Environment
  - A.2 Agents
  - A.3 Experiments
- Bibliography

# List of Figures

- 2.1.1 The main steps of a penetration test
- 3.2.1 Example network
- 3.2.2 Example network topology
- 3.2.3 Example machine definition on a network
- 3.2.4 Example set of exploitable services on a network
- 3.2.5 Example firewall on a network
- 3.2.6 Example network and state
- 3.2.7 Example scan and exploit actions
- 3.2.8 Example state transition
- 3.3.1 Network Attack Simulator program architecture
- 3.3.2 Example network and initial state
- 3.3.3 Simulator rendering output of example network
- 3.3.4 Example episode rendered using the Network Attack Simulator
- 3.3.5 Example network configuration file and generated network topology
- 3.3.6 Example generated network
- 3.4.1 Network Attack Simulator load times versus number of machines and services
- 3.4.2 Scaling performance of Network Attack Simulator
- 4.2.1 The Reinforcement Learning cycle
- 4.3.1 Schematic illustration of the Deep Q-network
- 4.4.1 The standard network scenario
- 4.4.2 The single site network scenario
- 4.4.3 The multi-site network scenario
- 4.5.1 Mean episode reward versus training episode
- 4.5.2 Mean episode reward versus training time
- 4.5.3 Evaluation performance of Reinforcement Learning algorithms for each scenario
- 4.5.4 Reinforcement Learning algorithm performance versus number of machines
- 4.5.5 Reinforcement Learning algorithm performance versus number of exploitable services

# List of Tables

- 3.3.1 Description of parameters required for defining a custom network
- 4.2.1 Reinforcement learning MDP definition
- 4.4.1 Experiment scenario parameters and their values
- 4.4.2 List of hyperparameters used for each algorithm and their values
- 4.5.1 Number of training episodes for each Reinforcement Learning algorithm and scenario
- 4.5.2 Solve proportion and max reward achieved for each scenario
- A1 Files and description in the program Environment module
- A2 Files and description in the program Agent module
- A3 Files and description in the program Experiment module

# Chapter 1

# Introduction

The increasing interconnection of the digital world via the internet has changed the way businesses, governments and individuals operate and has led to significant economic and social benefits. It has, however, also created more opportunities for cyber criminals to launch malicious attacks in the hopes of gaining access to sensitive data for their own gain. These range from state-sponsored attacks, such as the attempts to disrupt the US election, to simple attacks on individuals in the hopes of gaining password or credit card details for monetary gain [2], [3]. As more organisations and individuals come to rely on globally connected computer systems, the ability to secure these systems against malicious attacks is becoming ever more important. Cyber security, which is the safeguarding of computer systems against unauthorized access or attack, is now a matter of global importance and interest. Many governments have official policies in place regarding cyber security and some are investing significant amounts into the domain. This planned investment into cyber security research over the next seven years by governments and large organizations emphasizes the serious threat cyber crimes pose for businesses, governments and individuals. It is important that effective methods and technologies are developed for securing computer systems against these threats.

One of the key methods used for evaluating the security of a system is penetration testing (pentesting). Pentesting involves performing an authorized controlled attack on a system in order to find any security vulnerabilities that could be exploited by an attacker. This method can be very effective for evaluating a system's security, since it is essentially a simulation of what real world attackers would do in practice. This effectiveness, however, comes with one main drawback, which is that it has a high cost in terms of the time and skills required to perform it. This high cost is becoming more of an issue as digital systems grow in size, complexity and quantity, which is causing a growing demand for security professionals, a demand that is not being met fast enough. In 2015 Cisco, one of the world's leading IT and networking companies, estimated there were more than 1 million unfilled security jobs worldwide. Given this shortage of professionals and the necessity for pentesting in securing systems, it is becoming crucial that tools and methods are developed for making pentesting more efficient. Currently, a number of tools are available to pentesters to help improve their efficiency.
These tools include network and vulnerability scanners as well as libraries of known security vulnerabilities and exploits, such as the open-source Metasploit framework, which has been in development since 2003. This framework contains a rich library of known exploits of system vulnerabilities along with other useful tools such as scanners, which are used for information gathering on a target. Tools such as these allow the pentester to work at a higher level of abstraction, where they are mainly focussed on finding vulnerabilities and selecting exploits rather than having to work at the low level of manually developing exploits. This enables pentesters to work faster and also makes security assessment more accessible to non-experts. These tools have certainly been a boon to the cyber security industry; however, even with the great benefits these tools have provided, they still rely on trained users, who are in short supply. Additionally, as systems grow in complexity the task of manually assessing security will become much harder to do systematically.

One proposed way of addressing this problem in pentesting is to apply techniques from the Artificial Intelligence (AI) planning domain to pentesting in order to automate the process. The original concept for this took the form of "attack graphs", which modeled an existing computer network as a graph of connected computers, where attacks can then be simulated on the network using known vulnerabilities and exploits. Attack graphs can be effective in learning the possible ways an attacker can breach a system; however, using these graphs requires complete knowledge of the system, which is unrealistic from a real world attacker's point of view, and also requires the manual construction of the attack graph for each system being assessed. Another approach taken involved modelling an attack on a computer as a Partially Observable Markov Decision Process (POMDP). Modelling an attack as a POMDP introduces the attacker's incomplete knowledge into the simulation: it removes the assumption that the configuration of each host is known and instead models the observation of the configurations as the attack progresses. This approach can work well in practice against a single host machine but, due to the computational properties of POMDPs, does not scale well to larger networks. To produce an approach that can handle the uncertainty inherent in any system while still being computationally feasible, more research into novel methods is required.

A proposed solution to this problem is to simulate pentesting using an MDP. This approach would ignore the uncertainty about the state of the host computer's configuration and instead introduce the uncertainty into the success probability of each possible attack. This type of model is computationally more feasible than the POMDP approach and does not require complete knowledge of the network and host configurations. However, it requires prior knowledge about the success probabilities of each possible action and it treats each target computer as exactly the same instead of utilizing information gathered about the target to produce more tailored attacks. Consequently, this approach addresses the issues of computational complexity and incomplete knowledge of network and host configurations, but at the cost of accuracy in picking the best actions for each host.

An alternative technique that does not require information about the transition model of the environment is Reinforcement Learning (RL). RL requires only the state space representation, the set of actions that can be performed and a reward function which defines what the RL agent is trying to achieve.
The agent then learns a policy of actions to take from any given state through interaction with its environment. The use of RL has gained a lot of attention in recent years with its use in producing World Go champion beating agents, and although not as widely applied as other supervised machine learning approaches it has been successfully applied in the real world in a number of robotics tasks. RL is well suited to problems where the states and the set of actions that can be performed are known, the task can be framed as a reward function, but a model of the environment is not known. In pentesting, due to the complex and ever changing nature of the cyber network environment, with constant updates to software and available exploits, it becomes very difficult to maintain an accurate up-to-date model for the outcomes of performing any action. This combination makes RL a good candidate approach for automated pentesting. However, RL offers its own challenges. Its generality comes at the cost of requiring a large amount of data in order to learn the best policy of actions. This data is typically gained from using simulations in order to train the RL agent.

In this study we aimed to investigate the use of RL in automated pentesting. The first part of the project was to develop a network attack simulator that can be used for testing and training of automated pentesting agents and algorithms. The second part of the study was to investigate the use of RL algorithms for finding policies for penetration testing in a simulated environment. Chapter 2 provides a review of penetration testing and some background information useful for the later chapters of the thesis. Chapter 3 covers the rationale, design and implementation of the network attack simulator. In chapter 4 we provide a report on my investigation of using RL for automated penetration testing. Finally, chapter 5 provides a brief summary and conclusion of the study and some directions for further research.

# Chapter 2

# Literature review

# 2.1 Penetration testing

Pentesting has been around for more than four decades and is a critical process in the development of secure cyber systems. It involves performing a controlled attack on a system, a piece of software or a network in order to evaluate its security. The process of pentesting, when applied to a network, is typically divided into a sequence of steps in order to methodically assess the given system (fig. 2.1.1). The specific steps can vary from attack to attack, but generally a penetration test will first involve information gathering, where the aim is to find a vulnerability (a flaw in the system) in the network, followed by attack and penetration, where the discovered vulnerability is exploited, and then, lastly, using the newly gained access to repeat this process until the desired target is reached. The information gathering phase uses tools such as traffic monitoring, port scanning and operating system (OS) detection in order to collect relevant information that can be used to determine if the system contains a vulnerability that can be exploited. The attack and penetrate phase involves executing a known exploit, which can be a program or certain data, to take advantage of the discovered vulnerability and to cause unintended behaviour in the system, with the ultimate aim of compromising the target and gaining privileged access to it. Once a successful attack is performed, the specific sequence of attack actions can then be reported and used by system administrators and developers to fix the vulnerabilities. Even though the systems and networks that are evaluated using pentesting can differ immensely, in each case the same general steps are followed.
This has allowed for the development of a number of tools and frameworks to help make pentesting more efficient. 19 \ Information \ \Attack and \ Local . } Information > > FNM Pivot ) ) Clean up Gathering , Penetrate / ; / / escalation / / | / / Gathering , , Gathering of Launch remote Gather relevant Attempt to Use compromised Erase all evidence relevant information| |exploits in order to information about penetrate further computer and of penetration in about the target, compromise the successfully into compromised escalated privileges} |order to avoid including IP target system. compromised computer using to launch attacks detection. address, Operating computer. local exploits in deeper into the system and order to gain network. Repeating available services. administrative previous steps until privileges. desired access is gained Figure 2.1.1 | The main steps of a penetration test # 2.2 Tools used for penetration testing The cyber security industry is a very large and active community where there are numerous tools that exist to aid in pentesting, here we will mainly focus on the most commonly used tools for network pentesting. For the information gathering stage the aim is to find useful information about the network, this is typically done using a network scanner, with the best know network scanner being Nmap ( as OS, open ports and services currently running on the system. This information can then be used to check for any vulnerabilities in the system using vulnerability scanners such as Nessus ( www.openvas.org www.tenable.com/nessus/professional These scanners can be used to determine if a vulnerability exists but in order to test the vulnerability an exploit needs to be used. pentesting frameworks such as Metasploit ( software is a collection of tools and exploits that can be used from a single environment . Metasploit is an open source project that was started in 2003 and was purchased by Rapid7 in 2009 and is regularly updated with new exploits and tools. The user can use information gathered during the information gathering stage to search for and launch exploits from the Metasploit software, allowing the user to focus on the tactical operation (which exploit to launch) rather than technical level (how to find and develop an exploit). This automation at the technical level allows for a huge increase in efficiency during pentesting. The tools now available to pentesters have had great benefits in terms of improved efficiency, however, it still requires a non-trivial amount of expertise and time to execute a 20 successful pentest. With the growing demand for security experts new approaches are necessary to ensure security evaluation demands can be met. # 2.2 Automated pentesting: attack graphs The idea of pentesting automation has been around for many years with the original approaches taking the form of affected by specific exploits. In an attack graph, the nodes are typically the state of a system, where the state is defined by the current system configuration (i.e. OS, permissions, network connections, etc) and the edges connecting the nodes are known exploits . 
Once this graph is constructed it becomes possible to search for sequences of attacker actions (exploits), known as an an these attack paths can be done using classical AI planning techniques The main issue with this approach is that it requires complete knowledge of the network topology and each machines configuration, so is not realistic from an attackers point of view, and also requires manually setting up the graph for each new system being evaluated. # 2.3 Automated pentesting: using MDP Another approach to modelling and planning attacks against a system is to use a Markov Decision Process (MDP) to simulate the environment for modeling discrete decision making problems under uncertainty [23] we do so over the tuple { is the action space, 𝓐 transition function step the system will be in some state, resulting in two things: (1) a transition to a new state transition function attempting to solve a MDP is to find the optimal mapping from total accumulated reward. This mapping is known as the decision policy π. When applied to pentesting the state space of the MDP becomes the possible configurations of the target machines or of the network, the actions are the possible exploits or scans available and the reward will be based on the cost of an action and the value gained when successfully compromising a system. So far there has been limited application of MDPs to automated pentesting. One approach that has been used is to ignore the configuration of the target system entirely and instead rely on formulating the attackers uncertainty in the form of possible action outcomes [13] . Attacks are 21 then planned based on the attack success probabilities, where each action receives a probability of success based on prior knowledge of the given action some non-determinism to an attack graph representation of a system. uncertainty while still being computationally feasible to solve. However, this approach does not take into account known knowledge about the configuration of a system, which is a key step in effective penetration testing, and instead treats all machines as identical. Additionally, it also requires some prior knowledge of the attack outcome probabilities (the transition model) before it can be used and these probabilities can vary widely depending on the systems it is being used against (e.g. Windows or Linux) and will change over time as new software and exploits are developed. # 2.4 Automated pentesting: modelling uncertainty with POMDP Another approach to automating pentesting aimed to address the assumption of full network knowledge required for attack graphs while still accounting for the uncertainty of the attacker by is to model the pentesting problem as a POMDP [12] A POMDP, or partially observable markov decision process, is a MDP in which there is uncertainty about the exact state the system is in and so it models the current state as a probability distribution over all possible states are the same as in an MDP, while } where 𝓢, 𝓐, Ω, 𝓣,𝓞, 𝓡, b { 𝓡 o b and 𝓞 (s’, a, o) = P (o|s’,a) observation space, is the observation function 𝓞 o probability distribution over states . Similar to an MDP, following each time step there will be a transition to a new state s’ and a reward, but in addition taking a step will also result in an observation, . 
The aim of the problem is 𝓞 the same as for an MDP, which is to find an optimal decision policy π When applied to pentesting the state space of the POMDP becomes the possible configurations of the target machine or of the network, the actions are the possible exploits or scans available, the observation space is the possible information that is received when a exploit or scan is used (e.g. ports open, or exploit failure/success), while the reward will be based on the cost of an action and the value gained when successfully compromising a system Using the POMDP approach allows modelling of many relevant properties of real world hacking, since in a real attack the attacker would have incomplete knowledge about what the actual configuration of the system is and also whether certain exploits or scans will succeed. Additionally, it also means that the same automated pentesting approach can be used to test a 22 system even if the system changes since knowledge of the exact system configuration is not assumed . This differs when compared to attack graphs which would need to be updated every time something in the system is changed. [12] implemented this approach using a POMDP in a simplified simulated network environment where the aim of the attack was to penetrate the system in order to reach a number of sensitive machines on the network. The approach was able to learn to intelligently mix scans and exploits and was tested on networks of varying size. Whether this approach performed better than the classical planning approach using attack graphs was uncertain but it definitely had the advantage of no assumed knowledge of the system configurations. attacker, however, it has one critical issue in that POMDP based solvers do not scale very well and quickly become computationally infeasible as the size of the state space grows. In the they had to approach the network penetration test by approach by Sarraute et al. decomposing the network into individual POMDPs against single target machines only. So although the approach was more realistic from attackers point of view it is currently computationally too expensive. # 2.5 Automated pentesting: when the world model is unknown Both MDP and POMDP provide a general framework for modeling the uncertainty inherent when performing penetration testing and have a number of techniques available for solving them. An MDP is the simpler version of the POMDP where the key difference is that there is no uncertainty about the current state of the environment (it is fully observable). The advantage of MDPs is that they are much more computationally tractable to solve when compared to POMDPs and there are many efficient algorithms for solving them allow them to be more useful in practice. However, this efficiency does come at the cost of some of the ability to model uncertainty. There main form of uncertainty that remains for MDPs is the uncertainty relating to non-deterministic actions as dictated by the transition function, or model of the world. Learning (RL) order to optimize performance. The key advantages of RL over classical planning approaches is its ability to handle large environments and when a model of the environment is not known or a solution using the model is not available due to it being computationally intractable 23 major challenge to create and maintain an accurate model of how exploits will affect any given system. This is due to the constant evolution of attacks and the systems themselves. 
This property makes pentesting a good candidate for using RL, since it is possible to define penetration testing as an MDP but using RL we do not require the transition model. The main challenge facing RL is that it requires many interactions with the environment in order to learn an optimal policy. This feature has lead to many of the successful applications of RL being done in simulated or game environments where the RL agents is able to rapidly interact with its surroundings testing that can be used to train and test RL agents. This desire to apply RL to automated pentesting and the lack of training environment lead to the design and aims of this study. In the next chapter, we cover the design and implementation of a network attack simulator that can be used for training RL agents. Then in chapter 4 we cover the application of RL to penetration testing in more detail. 24 # Chapter 3 # The Network Attack Simulator # 3.1 Introduction The recent improvements to AI techniques have been greatly aided by the establishment of well-known, freely available performance benchmarks. These benchmarks can take many forms depending on the application domain, for example there is the arcade learning environment for testing generalised Reinforcement Learning algorithms computer vision compare the performance of algorithms with each other and over time. Currently, for network penetration testing there exists no light-weight, freely available benchmark that can be used for developing automated pentesting algorithms. a network simulator that allowed agents to affect the network through scans and exploits. Presently there are numerous widely-used and freely available network traffic simulators, such as These simulators are light-weight, capable of running effectively on a NS3 single machine, and able to emulate network topology and traffic with high-fidelity by using a very minimal virtualization of an actual OS. They do not, however, allow for modeling an attack on the network through launching exploits and gaining access to machines and so are not suitable for assessing penetration testing performance. Another option is to use a network of virtual machines (VM). This has the advantage of having high fidelity to the real world while also being flexible. The main disadvantage of using a VMs is the relatively high computational cost of running each VM, which can slow down the training of certain types of AI algorithms such as RL, and also the considerable computational power required to run larger networks. The best . This system operates the system currently available appears to be the Core Insight system simulation at the OS level but only supports a subset of system calls, similar to the lightweight 25 network traffic simulators, in this way it is able to scale to networks of hundreds of machines on a single computer. However, this software is not open-source or free to the public and cost tens of thousands of dollars for a licence, which is out of the question for many researchers In this work, we design and build a new open-source network attack simulator (NAS). The NAS is designed to be easy to install with minimal dependencies and able to run fast on a single personal computer. It is also designed to model network pentesting at a higher level of abstraction in order to allow fast prototyping and testing of algorithms. # 3.2 Design The NAS is designed as two components. Firstly, the network, which contains all the data structures and logic for modeling traffic and attacks on the network. 
The second, is the environment which acts as the layer between the attacker and the network and models the attackers knowledge as an MDP. # 3.2.1 The network model The network model defines the organization, connection and configuration of machines on the network and is defined by the tuple . An {subnetworks, topology, machines, services, firewalls} example network made up of five subnetworks, 11 machines and firewalls between each subnetwork is shown in figure 3.2.1. This network could run any number of services as this is defined at the level of machine. We provide more details for each component in the following paragraphs. This model aims to abstract away some details of a real-world network that are not required when developing autonomous agents such as specific types of connections between machines and the location of switches and routers in the network. The reason for this abstraction is to try and keep the simulator as simple as possible and at the level of abstraction that the agent is expected to work at which is determining which scans or exploits to use against which machine and in what order. The specific details of performing each action, for example which port to communicate with, are details that can be handled by application specific implementations when moving towards higher fidelity systems. Penetration testing is already moving in this direction with frameworks such as Metasploit which abstract away exactly how an exploit is performed and simply provide a way to find if the exploit is applicable to the scenario and launch it, taking care of all the lower level details of the exploit network model is also used in order to keep it as general and easily scalable as possible. 26 subnet_1 subnet_3 I BHR External Network (1, 0) (3,0) (3,1) (,2) : = e [ i _ subnet_2 subnet_4 subnet_5 N w ARH HAe (2, 0) (4,0) (4,1) (4,2) (5,0) (5,1) (5,2) Figure 3.2.1 | Example network with five subnets, 11 machines and the sensitive documents located on machines (2, 0) and (5, 0). Subnetworks Each network is made up of multiple sub-networks or subnets. A subnet is a smaller network within the larger network that is composed of a group of one or more machines that are all able to communicate fully with each other. Each subnet has its own subnet address, which is indicated as the first number in any machines address (e.g. the 4 in the address (4, 0)). This is a simplification of IP addresses which use a 32-bit string and a seperate 32-bit subnet mask to define the network, subnet and machine address. For the purpose of the NAS, it makes sense to use a simpler system since we are only dealing with a single network as opposed to IP addresses which deal with millions of machines on thousands of networks across the internet. Although all machines within a subnet can communicate fully, communication between machines on different subnets is restricted. Inter-subnet communication is controlled by the network topology and firewall settings. # Topology The network topology defines how the different subnets are connected and controls which subnets can communicate directly with each other and with the external network. As an example, in the network in figure 3.2.1 subnet 1 is the only network that is connected to the external world and subnets 1, 2 and 3 are all connected to each other while its only possible to communicate with machines on subnets 4 and 5 via a machine on subnet 3. 
In this way an attacker may have to navigate through machines on different subnets in order to be able reach the goal machines. We can view the network topology as an undirected graph with subnets as its vertices and 27 connections as edges. As such we can represent it using an adjacency matrix, with rows and columns representing the different subnets, an example matrix is as shown in figure 3.2.2 example network in figure 3.2.1. subnet 0 1 2 3 4 5 0 1 1 0 0 0 0 1 1 1 1 1 0 0 2 0 1 1 1 0 0 3 0 1 1 1 1 1 4 0 0 0 1 1 0 5 0 0 0 1 0 1 for the network in figure 3.2.1, represented using an Subnet 0 is used to represent the external network, while other subnets are represented . adjacency matrix Figure 3.2.2 | Example network topology by their ID number (i.e. 1 corresponds to subnet_1). Machines The most primitive building block of the network model is the machine. A machine in the NAS represents any device that may be connected to the network and hence be communicated with and exploited. Each machine is defined by its address, in the form of a (subnet_ID, machine_ID) tuple, it value and it’s configuration. An example machine definition can be seen in figure 3.2.3. The value of a machine is defined by the user with higher values given to sensitive machines, that is machines that the attacker wants to gain access to or that the owner wants to protect. Each machine runs services that can be communicated with from other machines within the same subnet or on neighbouring subnets, firewall permitting. The services available on each machine define its configuration and each machine on the network will not necessarily have the same configuration. This is included since not every machine on the network will be the same as some can be expected to be used for different purposes E.g. Web servers, file storage, user machines. The services present on a machine also define its points of vulnerability, since the services are what the attacker is aiming to exploit. 28 Machine: { address: (1, 2), value: 0, configuration: { ftp: true, ssh: true, http: true, Figure 3.2.3 | Example machine definition on the network. The example is for the 2nd machine in subnet 1, which has no value (i.e. not one of the goal machines) and is running ftp, ssh and http services running that the attacker has an exploit for. Services Services are used to represent any software running on a machine that communicates with the network. They are analogous to software that would be listening on an open port on a computer or connected device. Within the NAS services are considered to be the vulnerable points on any given machine, and can be thought of as services that have a known exploit which the attacker is aware of. In a real world scenario it would be the same as keeping track only of services that an attacker has a known exploit for, while ignoring any other non-vulnerable services. Based on this reasoning within the NAS, we assume each service is exploitable by one action, so the agents job is to find which service is running on a machine and select the correct exploit against it. the cost of using the exploit. Figure 3.2.3 shows an example machine in a network scenario where the attacker has exploits for the ftp, ssh and http services, while figure 3.2.4 shows the set of exploitable services and the associated success probability and cost of their exploits. The ID of each service can be any unique value and does not necessarily have to be a name related to a real world service. 
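To make the preceding components more concrete, the sketch below shows one possible in-memory representation of the topology adjacency matrix (figure 3.2.2), an example machine (figure 3.2.3) and the exploitable services (figure 3.2.4). This is a minimal illustration only; the variable and function names are hypothetical and are not taken from the NAS source code.

```python
# Illustrative sketch (not the actual NAS data structures) of the network model
# components described above: topology, machines and exploitable services.
import numpy as np

# Topology as an adjacency matrix over subnets (figure 3.2.2): index 0 is the
# external network, entry [i][j] = 1 means subnets i and j are directly connected.
topology = np.array([
    [1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1],
])

# Machines keyed by their (subnet_ID, machine_ID) address, each with a value and
# a configuration of services (figure 3.2.3); other machines would be defined similarly.
machines = {
    (1, 2): {"value": 0, "services": {"ftp": True, "ssh": True, "http": True}},
}

# Exploitable services with the cost and success probability of their exploit (figure 3.2.4).
exploitable_services = {
    "ftp":  {"probability": 0.8, "cost": 3},
    "ssh":  {"probability": 0.5, "cost": 2},
    "http": {"probability": 0.2, "cost": 1},
}

def subnets_connected(i: int, j: int) -> bool:
    """Return True if traffic can flow directly between subnets i and j."""
    return bool(topology[i][j])

print(subnets_connected(1, 3))  # True: subnets 1 and 3 are directly connected
```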
In this way it is easy for the NAS to generate test scenarios with any number of machines and services to aid in testing the scaling performance of agents by simply generating service IDs as needed. When investigating the application to more real world settings, the ID would be replaced with a specific service name and version, so it would be possible to track vulnerabilities and know what services require patching (e.g. Samba version 3.5.0). 29 Exploitable_services: { ftp: { probability: 0.8, cost: 3 h ssh: { probability: 0.5, cost = 2 h http: { probability: 0.2, cost: 1 } Figure 3.2.4 | Example set of exploitable services on a network. Each service definition contains a probability of its associated exploit working successfully and a cost for launching the exploit. Firewalls The final component of the network model are the firewalls that exist along the connections between any subnets and also between the network and the external environment. Firewalls act to control which services can be communicated with on machines in a given subnet from any other connection point outside of the subnet. They function to allow certain services to be used and accessed from machines within a subnet with the correct permissions, while blocking access to that service from unwanted entry points. Each firewall is defined by a set of rules which dictate which service traffic is permitted for each direction along a connection between any two subnets or from the external network. Figure 3.2.5 shows an example firewall that sits between subnets 1 and 3 and which allows access to the ssh service on machines on subnet 3 from machines on subnet 1 and access to ftp and http services on machines on subnet 1 from machines on subnet 3. In a real world setting firewall rules are typically set by defining which port can be accessed, however for simplicity and since for most cases the same services are run on the same port numbers, we have decided to instead define rules by service rather than port. Firewall_3: { Firewall_3: { connection : (1, 3) connection : (3, 1) permitted: {ssh} permitted: {ftp, http} } } Figure 3.2.5 | Example firewall on a network. The example is for the firewall located on the connection between subnets 1 and 3 and defines which services are permitted in each direction. 30 # 3.2.2 The Environment The environment component of the NAS is built on top of the network model and acts as the interface between the attacker and the network. It is responsible for modeling the attackers current knowledge and position during an attack on the network. For instance it tracks information about which machines the attacker has successfully compromised, which machines they can reach and what knowledge they have about services present on each machine. Its main function is to control the running of an attack from the beginning, where the attacker has not yet interacted with the network and has no information about any machine on the network, to the end of the episode where the attacker either gives up or is successful in compromising the goal machines on the network. We model the environment component of the NAS as an MDP since this framework is highly versatile and a building block used by many AI algorithms. # MDP overview The environment component models network pentesting problem as an MDP and as such is defined by the tuple defined as the current knowledge and position of the attacker in the network. Actions are the available scans and exploits that the attacker can perform for each machine on network. 
The reward function is simply the value of any machines exploited minus the cost of the actions performed. The transition function controls the result of any given action and takes into account the action type, connectivity, firewalls and the probabilistic nature of exploits.

# State space

A state, s ∊ 𝓢, is defined as the collection of all known information for each machine on the network. That is, for each machine the state includes whether the machine is compromised or not, reachable or not, and, for each service, whether the service is present, absent or its existence is unknown. A machine is considered to be compromised if an exploit has successfully been used against it, while a machine is considered to be reachable if it is in a subnet that is publicly accessible (connected directly to the external network), in the same subnet as a compromised machine, or in a subnet directly connected to a subnet that contains a compromised machine. The state space is therefore all possible combinations of compromised, reachable and service knowledge for each service and each machine. Hence, the state space grows exponentially with the number of machines and services on the network. Equation (3.1) shows the size of the state space, where |E| is the number of exploitable services and |M| is the number of machines in the network. The base of the exponential is 3, since for each exploitable service the agent's knowledge can have one of three values: present, absent or unknown.

|𝓢| ∈ O(3^(|E||M|))     (3.1)

Figure 3.2.6 shows an example network and state. In that example the attacker has successfully compromised the first machine in the network and can communicate with machines on subnets 2 and 3, which is indicated by reachable being set to true for each machine on those subnets. Additionally, the configuration of machine (3, 0) is known, which would have been gained through a scan action, while the configuration of machine (2, 0) is still unknown. Note that the state does not include any information about the firewall settings, since this would require privileged access to determine. For this simulator we assume it is not possible for the attacker to gain this information directly; it must instead be learned indirectly through the success and failure of exploit actions.

Figure 3.2.6 | Example network and state, where the first machine in the network has been compromised and machine (3, 0) has been scanned to get service information. No information has been gathered regarding services on machine (2, 0).

# Action space

The action space, 𝓐, contains a scan action and an exploit for each service and each machine on the network. The scan action is designed to mimic the Nmap complete scan, which returns information about which services are running on each port of a given machine and also the version of each service. In reality more targeted scanning may be required to discover complete information about specific services, but for many use cases Nmap scans return the information required to determine which service is running. Scan actions are considered to be deterministic, always returning information about the presence or absence of a service. Exploit actions can be deterministic or non-deterministic depending on the configuration of the environment chosen by the user. A successful exploit action will result in the target machine becoming compromised. The success of any exploit is determined by whether the target machine is reachable, whether the target service is present, whether that service is blocked by the firewall, and the success probability of the action. Each action also has a cost associated with it, which is defined by the environment.
This cost can be used to represent any metric such as the time, skill, monetary cost or noise generated for a given action, depending on what performance metric is trying to be optimized for. Figure 3.2.7, shows example scan and exploit action definitions. Both actions target the same machine at address an exploit for the ssh service and has a cost of 3 and a probability of success of 0.8. Action 1; { Action 2: { target: (1, 0), target: (1, 0) type: scan, type: exploit cost: 1 service: ssh f cost: 3 probability: 0.8 Figure 3.2.7 | Example scan and exploit actions against machine (1, 0). Exploit is targeting the ssh service. Reward The reward function is used to define the goals of the autonomous agent and what is trying to be , so starting from one 𝓡 (s, a, s’) optimized by the agent. The reward is defined over a transition state (eq. 3.2). The reward for any transition s’ and ending in the resulting state a taking action s is equal to the value of any newly compromised machine in the next state # minus the cost of s’ 33 . So if no machine was compromised, then the reward is simply the cost of the action a action performed. With this reward function, the goal of the attacker becomes to try and compromise all machines with positive value on the network while minimizing the number or cost of actions used. This mimics a real world nefarious attacker, whos goal we assume is to to retrieve privileged information or gain privileged access on the system. Using the NAS it is possible to set these goals by changing the value of certain machines on the network, so for example machines that may contain sensitive documents or contain privileged control on the network. 𝓡 (s’, a, s) = value(s’, s) - cost(a) (3.2) Where, returns the value of any newly compromised machines in value(s’, s) new machines were compromised and # returns the cost of action cost(a) State: { NextState: { (, O): compromised: true, reachable: true, fip: present, ssh: absent, http: absent, compromised: false, reachable: true, ip: absent, ssh: absent, http: present, (1, O): § compromised: true, reachable: true, Jip: present, ssh: absent, http: absent, } j (2, O): { Action_2: { (2,0): f compromised: false, target: (3, 0) compromised: false, reachable: true, type: exploit success reachable: true, ftp: unknown, service: http >| Jip: unknown, ssh: unknown, cost: 3 ssh: unknown, http: unknown, probability; 0.8 http: unknown, } } } (3, O): { (3, O): f compromised: true, reachable: true, fip: absent, ssh: absent, http: present, Figure 3.2.8 | Example state transition following a successful exploit launched against the http service on machine (3, 0). 34 # Transition function The transition function, performed. For the NAS the next state depends on whether an action was successful or not, which in turn depends on a number of factors. Specifically, whether the action target is reachable, whether the action is a scan or exploit, if traffic is allowed for the target service between a compromised machine and the target (i.e. if target is on seperate subnet), whether the target machine is running target service and finally, for non-deterministic exploits, the success probability of the exploit being used. Figure 3.2.8 shows an example transition, where an attacker successfully exploits the represents this success through machine # 3.3 Implementation The goals of the NAS are to be fast, easy to install and able to be used for fast prototyping of AI agents. 
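Before turning to the implementation details, the following sketch illustrates the reward computation defined in equation (3.2). The helper names are hypothetical and not taken from the NAS source; it simply shows the value-minus-cost logic described above.

```python
# Illustrative sketch of equation (3.2): R(s', a, s) = value(s', s) - cost(a).
# Function and dictionary layouts are hypothetical, not the actual NAS code.

def newly_compromised_value(state, next_state, machine_values):
    """Sum the value of machines compromised in next_state but not in state."""
    total = 0
    for address, value in machine_values.items():
        if next_state[address]["compromised"] and not state[address]["compromised"]:
            total += value
    return total

def reward(state, action, next_state, machine_values):
    return newly_compromised_value(state, next_state, machine_values) - action["cost"]

# Example: exploiting sensitive machine (2, 0) worth 10 with an exploit costing 3 gives reward 7.
state      = {(2, 0): {"compromised": False}}
next_state = {(2, 0): {"compromised": True}}
print(reward(state, {"cost": 3}, next_state, {(2, 0): 10}))  # 7
```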
To help meet these goals the program was written entirely using the Python programming www.python.org language (Python Software Foundation, source libraries. Specifically, the libraries used were numPy for fast array based computation and Matplotlib and NetworkX for rendering . It would have been possible to build a faster simulator using a lower-level language such as C++, however another reason to use Python was it’s popularity for machine learning research and its well-supported deep learning libraries such as Tensorflow learning agents such as the one used in the next chapter of this thesis. Please see Appendix A for details on and for access to source code for this project. A diagram of the NAS architecture is shown in figure 3.3.1. There are a number of different modules which handle the network model, MDP and other functions with the main component being the Environment module. The Environment module is the main interaction point for an agent and has four main functions: load, reset, step and render. The load function loads a new environment scenario either by generating a network from a standard formula or loading a network from a configuration file (more details of each are provided in the following section). 35 Configuration Files Loader Render | Generator a Agents se Environment x x J v v | Action | | State | | Network Machine | Network Model MDP Simulator # Figure 3.3.1 | Network Attack Simulator program architecture The reset function sets the environment to its initial state and returns the starting state to the agent and is synonymous with starting a new attack on the network. The initial state is where the agent has not compromised any machines on the network, only networks connected to external network are reachable and no information is known about services running on any machines on the network. An example network and it’s associated initial state is shown in figure 3.3.2. an action and performs the transition given the the current state of the simulator and returns the next state, reward for performing the action and whether the goal has been reached. A typical cycle of training or testing of an autonomous agent involves: i) resetting the NAS to get the start state, ii) using the state to choose an action, iii) executing the action against the environment using the step function to receiving the next state and reward, iv) repeat steps (ii) and (iii) until the goal is reached or the time limit expires. This complete cycle would be a single episode and an agent may then reset the environment and repeat this for as many episodes as desired. 36 Figure 3.3.2 | Example network and initial state , where the network contains three exploitable services: and also an attack episode on the network. The first option is to simply render the network as a graph, where each vertices is a machine in the network and machines on the same subnet are grouped together, while edges are connections between each machine within a subnet and between subnets (fig. 3.3.3). This option allows the user to visualize the topology of the network. The second option allows the user to visualize how the state of the network changes over the course of an attack episode (fig. 3.3.4). This mode shows the state of the network, along with the action and reward received for each step and can be used to visualize the attack policy of an agent and identify exploited services along the attack path. 37 Each node represents a Figure 3.3.3 | machine on the network, which machines clustered by subnet. 
Each edge between nodes with the same subnet ID represents connectivity within a subnet, while edges between nodes with different subnet ID represent inter-subnet connectivity. Pink nodes are the goal machines that contain “sensitive documents”, red and pink nodes both represent machines that are not reachable by agent, blue nodes represent machines reachable by agent and the black node represents the position of the agent. 38 t=0 Action: target=(1, 0), cost=1, type=exploit, service=ssh Reward = -1.0 Mmm Agent Ml Sensitive (S) Hl Compromised (C) @@M™H Reachable (R) Mm s&c Mmm s&R MMH not S,CorrR J t=1 t=2 Action: target=(3, 0), cost=1, type=exploit, service=ssh Action: target=(2, 0), cost=1, type=exploit, service=ssh Reward = 9.0 Reward = -1 \ >» \ \ \ el \ — — t=3 t=4 Action: target=(2, 0), cost=1, type=exploit, service=ssh Goal reached Reward = 9.0 total reward = 16.0 The episode shows Figure 3.3.4 | An example episode rendered using the Network Attack Simulator. each timestep from start (t = 0) till the end of the attack (t = 4), showing the action performed and reward received. The different node colours represent changing states of machines on the network. 39 Configuring a new environment The simulator is capable of running any arbitrary network structure as defined by the user in a configuration file. A custom network is defined by the tuple: {s machines, services, service exploits, machine configurations, firewalls }. Table 3.3.1 provides a description of each required parameter and figure 3.3.5 shows an example file. The configuration files are written using the YAML ain’t markup language (YAML, easy to read, write and well supported by Python. # Table 3.3.1 | Description of parameters required for defining a custom network # Parameter # Parameter # Description subnets The number and size of each subnet topology Connectivity of each subnet defined using an adjacency matrix sensitive machines services The number of possible services running on any given machine service exploits The cost and success probability of each service exploit machine configurations Which services are running on each machine in the network firewalls Which service traffic is permitted along each subnet connection on network Generating an environment In order to allow for rapid testing of agents on different sized networks and include a standard network topology that can be utilized as a benchmark for researchers to compare agents performance, we included the option to automatically generate a network for a given number of machines, , and services, M developed by Sarraute et al subsequently used by Backes et al The network is divided into three main subnetworks: i) the demilitarized zone (DMZ), ii) the sensitive subnet and the iii) user subnetworks (fig. 3.3.6). The each subnet as follows, one machine in each of the DMZ and sensitive subnets and the remaining - 2 machines in the user subnetworks which are connected in a binary tree structure with a M machines are divided into M 40 max of five machines per user subnet and connections only between parent and child nodes in the tree. The DMZ, sensitive and root user subnets are all connected, and the DMZ is connected to the external network. There are network contains two sensitive machines to reach; one located in the sensitive subnet and the other on in a leaf subnet of the user tree. 
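The sketch below illustrates how the machines in a generated "standard" network could be split across subnets following the layout just described: one machine in the DMZ, one in the sensitive subnet, and the remaining M - 2 machines spread over user subnets with at most five machines each. It only computes subnet sizes, not the binary-tree connections; the function name is hypothetical and not part of the NAS source.

```python
# Illustrative sketch of the subnet sizes for the generated "standard" network design.
import math

def standard_subnet_sizes(num_machines: int, max_per_user_subnet: int = 5):
    user_machines = num_machines - 2          # 1 machine each for DMZ and sensitive subnet
    num_user_subnets = max(1, math.ceil(user_machines / max_per_user_subnet))
    sizes = {"dmz": 1, "sensitive": 1, "user": []}
    remaining = user_machines
    for _ in range(num_user_subnets):
        size = min(max_per_user_subnet, remaining)
        sizes["user"].append(size)
        remaining -= size
    return sizes

print(standard_subnet_sizes(15))
# {'dmz': 1, 'sensitive': 1, 'user': [5, 5, 3]}  -- matches figure 3.3.6 with M = 15
```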
subnets: [1 topology: [ subnet_1 BREE BRRRO External Network BL 1 [) 1 BS t°) 1 9, , 1)] sensitive_machines: [[2, 0, 10], [3, 9, 10]] num_services: 1 i service_exploits: ssh: - 0.8 =i machine_configurations: (1, 8): [ssh] (2, 0): [ssh] (3, 8): [ssh] subnet_2 subnet_3 firewall: @, 1): h b> w (oo: y., (1, 2): [] (25-722 h bl (4 3}: eed (2,0) (3, 0) CS 2): Lesh (2, 3): [ssh] (3, 2): [ssh] Dae Figure 3.3.5 | Example network configuration file and generated network topology. The file is defined using YAML Nested Dirichlet Process, so that across the network machines will have correlated configurations (i.e. certain services/configurations will be more common across machines on the network) configurations seen in real-world networks, where most machines on a network will be running the same services. randomly sample probabilities from a distribution based on the attack complexity score distribution of the top 10 most common exploits used in 2017 is a metric generated by the Common Vulnerability Scoring System (CVSS) and is used to reflect how hard it is to find an exploitable machine along with the success probability and the skill required to use a given exploit . Specifically, the probabilities were chosen based on the attack complexity score for CVSSv2, which scores exploits as having either ‘low’, ‘medium’ or ‘high’ attack complexity probability of success to be 0.2, 0.5 and 0.8 for ‘low’, ‘medium’ or ‘high’ attack complexity 41 respectively. Using this approach we hope to try and model the distribution of exploit success probabilities found in the real world. subnets permit all traffic while the permitted services for each firewall along connections between the DMZ, sensitive and root user subnets are chosen at random, but in such a way that there is always at least one service that can be exploited on at least one machine for any given subnet from any connected subnet. Permitted services are similarly chosen for the connection between the external network and the DMZ. Additionally, it possible to set the restrictiveness of the firewalls which limits the max number of services that are allowed for any given firewall so a user can model a more or less regulated network. DMZ user_1 External = g 8 geane (1, 0) (3,0) (3,1) (3,2) (3,3) (3,4) _| ot Z SJ SJ -—— | _ sensitive user_2 user_3 N x RH RHih i Mk (2, 0) (4,0) (4,1) (4,2) (4,3) (4,4) (5,0) (5,1) (5, 2) Figure 3.3.6 | Example generated network with M = 15 # 3.4 Results In this section we provide the results of some experiments run on the NAS to test its performance and scaling properties. The metrics used were actions per second and load time and we measured these versus the number of machines and services on the network. All experiments were conducted using a single core on a personal laptop computer running the Linux Ubuntu 18.04 operating system. The test machine had 16GB of RAM. 42 of services from 10 to 1000. Figure 3.4.1 shows the mean load time for the NAS versus the number of machines and services. For mean load time, the minimum time was 0.0007 ± 0.00007 sec (mean ± standard deviation) found in the smallest simulator size tested with 10 machines and 10 services and the maximum time was was 3.557 ± 0.06 sec and was for the largest simulator size of 1000 machines and 1000 services. For comparison the load time of a single VM was also measured. We used Oracle VM VirtualBox virtualization software system was 0.249 ± 0.023 sec, averaged over 10 runs. 
This load time is over 300 times larger than loading 10 machines and services in the NAS. [42] running the linux based Metasploitable 2 operating [43] as the test VM. Mean load time for a single machine with no GUI (headless mode) N Mean Load time (sec) 300 Macp;,. 89° 8Chines 900 0 Figure 3.4.1 | Network Attack Simulator load times versus number of machines and services. were the average of 10 different runs for each tested pair of machines and services, using different random seeds. Times To test the scaling properties of the simulator during regular use we measured the number of actions per second versus NAS size. We ran experiments on simulators using a range of machines (10 to 480) and services (10 to 480) and measured the actions per second and averaged this over a number of runs (fig. 3.4.2). For the mean actions per second of the settings tested, the 43 minimum was 17329 ± 1907 actions per second for the simulator with 480 machines and 10 services, while the maximum was 126383 ± 3843 actions per second for the simulator with 10 machines and 30 services. Services =e ge ON goer — 240 — 480 100000 + 4 3 All z 100000 ° °o uv uv wo vo “ “ ie . vo vo a a a) uw e c a 2 5 5 UV (<] o o < 50000 + & 50000 4 ; 2 4 Machines —— 10 wae 240 — 480 — All 0 150 300 450 (e) 150 300 450 Machines Services Figure 3.4.2 | Scaling performance of Network Attack Simulator in terms of mean actions per second versus the number of machines and services. Results are averaged over 10 runs with 10,000 actions per run, using deterministic actions. Actions performed are selected from action space in sequence, to try and keep consistency across runs. The left figure shows performance versus number of machines for networks running 10, 240 and 480 services. The right figure shows performance versus number of services for networks with 10, 240 and 480 machines. In both figure the red ‘All’ line is averaged over all values of services (left) or machines (right) tested. As a comparison, we measured the mean time required to perform a scan and an exploit between two VMs. We used one attacker VM running Kali Linux ( ) and one vulnerable VM running the Metasploitable OS. We used a standard Nmap scan of a single port, specifically port 21 which was running the ftp service, while the exploit used was for a backdoor in version 2.3.4 of the VSFTPD ftp server https://www.exploit-db.com/exploits/17491/ ( time required for the Nmap scan was 0.253 ± 0.03 sec, averaged over 10 executions. This is equivalent to approximately 4 actions per second. It was not possible to get an exact measure of time required for performing the exploit, as it was being used from within the Metasploit framework, however based on external timing (using a stopwatch) the time required from when 44 the exploit was launched until a shell was opened on the vulnerable machine was between 3 and 5 seconds, or roughly 0.2 to 0.3 actions per second. # 3.5 Discussion The aim for this part of the study was to design and build a fast and easy to install simulator for automated pentesting. The implementation of the NAS which only requires the Python programming language and common open source libraries is easy to deploy and fast, when compared with using VMs. In this section, we further discuss the performance, advantages and disadvantages of the simulator in context with other options that currently exist. We measured the performance of the simulator in regards to two metrics: load time and time taken to perform an action (i.e. actions per second). 
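A sketch of how these two measurements could be scripted is shown below. The `load_environment` function and the environment's reset/step interface are placeholders standing in for the NAS API rather than its actual function names.

```python
# Illustrative benchmarking sketch for the two metrics discussed:
# environment load time and actions executed per second.
import time

def measure_load_time(load_environment, num_machines, num_services, repeats=10):
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        load_environment(num_machines, num_services)
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)

def measure_actions_per_second(env, actions, num_actions=10_000):
    env.reset()
    start = time.perf_counter()
    for i in range(num_actions):
        # Cycle through the action space in sequence, as in the reported experiments.
        _, _, done = env.step(actions[i % len(actions)])
        if done:
            env.reset()
    return num_actions / (time.perf_counter() - start)
```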
In terms of load time, the average time required increased linearly with the number of machines and number of services present in the network (fig 3.4.1). In terms of practical use, for worst case where there was 1000 machines and 1000 services the load time was approximately 3.5 sec which is many orders of magnitude lower than the time that an agent would spend training on the environment and so would not be a bottleneck in terms of time. Additionally, when compared with the load time of a single VM, which was roughly 0.25 seconds to launch the VM and would require additional time for the OS to load, the NAS is significantly faster. The benefits of this rapid load time, means it is possible to gather more training data faster which will speed up training time for interactive agents such as RL. 3.4.2). Performance decreased as the number of machines in the network increased, which is expected due to the state space growing with the number of machines and the simulator having to check more machines and also generate a larger state as the number of machines grows. Conversely, performance improves with the number of services when the number of machines in the network is larger. After further investigation this is mainly due to there being a much smaller number of successful exploit actions when the number of services increases, with successful exploits requiring an update to the state and so requiring more time to process. Based off of this the performance expected would be less than reported assuming the agents are performing a higher proportion of successful actions. In practical terms, the worst case performance of the simulator sizes tested was around 17,000 actions per second for a network with 480 machines and running 10 services, which is orders of magnitude faster than using a VM network which performed could perform roughly 4 Nmap scan actions and 0.3 exploit actions per second for a simple network of two VMs. For AI algorithms which rely on interaction with the environment for learning an attack path, such as RL, the speed in which these interactions occur has significant impact on how fast they are able to generate a useful attack plan. For these algorithms 45 the speed of the NAS compared with what might be expected from a VM will be a great benefit when designing these algorithms. Currently, there do exist fast higher fidelity network simulators that make use of lightweight virtualizations. Most of these simulators are currently applicable only for modelling network traffic (e.g. NS3 Insight simulator lightweight virtualized network system. As mentioned previously, this system however requires a commercial licence and quite expensive. But even ignoring the price tag, the higher fidelity also means more information is required to configure the simulator, so there is some loss in versatility. One advantage of the NAS over these kinds of lightweight virtualizations is its ability to model arbitrary services and exploits and generate networks of any size. This comparison between higher fidelity simulator and the more abstract simpler NAS presents a trade-off between fidelity and versatility. Obviously, higher fidelity is key when it comes to applying technologies to a real-world setting, however during the early stage of development it is useful to have a versatile simulator especially when designing interactive AI agents. # 3.5 Conclusion Overall, the presented NAS offers an easy to deploy, fast and flexible penetration testing testbed for developing automated agents. 
The main weakness of this approach is it’s fidelity to a real environment which contains much more complexity in terms of the exploits, services, machine configurations and network dynamics, however it offers an abstract environment that has a number of use cases. Specifically, for rapid prototyping of agent design, in terms of representing state, action and reward and handling and learning of the world dynamics. Also for investigating the performance properties of algorithms as network topology and size changes. As a next step it will be necessary to develop higher fidelity systems to further test and develop autonomous agents before they can be used in real world applications. 46 # Chapter 4 # Penetration Testing using Reinforcement Learning # 4.1 Introduction The second part of this research aims to investigate the use of Reinforcement Learning (RL) for automated pentesting. RL is a field of study as well as a class of solution methods for learning the mapping from states to actions in order to maximize a reward environment and an agent who interacts with the environment with the aim of learning the optimal actions to take from each state. There are four main components of a RL system, other than the environment and agent, these are the policy function , to the action space, from the state space, . We want the agent to find the optimal policy π* 𝓐 𝓢 , that maximizes the total expected discounted reward. s , from any state, a that chooses the action, The reward defines the immediate reward for the current state and is sent by the environment on each time step. The value function specifies the value of a state over the long run, such that the ) is the total accumulated reward the agent can expect to receive in V(s) value of a state, the long term starting from that state. The model of an environment is something that tells the agent something about how the environment will behave and allows the agent to make inferences. The transition model, information about the expected future state given the current state and chosen action. In RL when a model is present it is know as RL. Model-based problems are typically solved using planning while model-free model-free problems must rely on trial-and-error in order to find the optimal policy for the environment. 47 So far there has been no published applications of RL to automated penetration testing. One of the main advantages of using a RL approach is that it allows us to approach the problem with no assumed prior knowledge, or model, about the action outcome probabilities for any given state and instead allows these to be learned by the agent. This provides a solution to one of the challenges of automated pentesting which is producing and maintaining an up-to-date accurate model of the exploit outcomes. The fast evolving and diverse nature of software systems and exploits means in order to produce an accurate model it would be necessary to test any exploit on a wide range of systems and repeat this process over time. Using RL on the other hand, requires only a definition the state representation, the set of actions an agent can take and a reward function. The agent then explicitly learns the model of the environment through interaction. This means that as the cyber security space evolves it would only be necessary to update the actions that an agent can take and leave the modelling to the agent. 
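The model-free agent–environment interaction described above can be sketched as a simple loop. The `env` and `agent` interfaces here are placeholders, not the project's actual classes; the point is that the agent only ever observes states and rewards and never the transition model.

```python
# Illustrative sketch of the model-free reinforcement learning loop.

def run_episode(env, agent, max_steps=500):
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = agent.choose_action(state)              # e.g. epsilon-greedy over Q-values
        next_state, reward, done = env.step(action)      # environment applies the unknown dynamics
        agent.update(state, action, reward, next_state)  # learn from the received experience
        total_reward += reward
        state = next_state
        if done:
            break
    return total_reward
```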
For this study we will be using Q-learning, which is an RL algorithm for learning an optimal policy in a model-free problem [14]. Q-learning learns an action-value function, Q(s, a), which tells the agent the expected reward if it performs action a from state s. It has been proven that, given enough time and exploration of the environment, this algorithm will converge on the optimal Q-values for each state and action pair. Once the Q-value function converges we can then use it to determine the optimal action for a state simply by choosing the action with the highest Q-value, and hence use it to find the optimal policy for a given environment. RL is a powerful and versatile approach for solving MDPs, however it can be harder to implement and performance can be variable depending on the rate of convergence of the value function. This difficulty may be one of the key reasons it has not yet been utilized in automated pentesting.

This study aims to investigate the applicability of RL to the penetration testing domain. We do this by first modelling penetration testing as an MDP where the transition model 𝓣 is unknown and then using RL in a simulated environment to generate attack policies. Section 4.2 provides background on RL for this study, then in section 4.3 we frame automated pentesting as an RL problem using an MDP. In sections 4.4 and 4.5 we use the network attack simulator (NAS) we developed to test the capabilities of RL in terms of whether it is capable of finding an attack path in a network environment, how optimal the attack path is, scaling performance and generality. Finally, section 4.6 provides a discussion of the experimental results and section 4.7 provides some concluding remarks.

# 4.2 Background

RL algorithms learn optimal policies through interaction with the environment. This is done by starting from some initial, typically random, policy, then iteratively choosing an action for the current state, applying that action to the environment, and updating the state-action value estimates using the received experience (fig. 4.2.1) [14]. RL algorithms differ mainly in how they choose actions, how they update their value estimates, and the form of the value function.

Figure 4.2.1 | The Reinforcement learning cycle.

Action selection strategies must balance exploration and exploitation. The two strategies commonly used, and the ones used for this study, are ε-greedy and upper confidence bound (UCB) action selection. The ε-greedy action selection strategy does this by choosing a random action with ε probability, and choosing the current best action the rest of the time (eq. 4.1). This forces the agent to randomly explore actions that it does not think are valuable given the current experience. It is common to implement ε-greedy along with ε-decay, which decreases the value of ε over time so that as the agent's estimates of action values improve it chooses random actions less frequently. UCB action selection, on the other hand, uses an extra exploration term when doing action selection (eq. 4.2). This extra term increases the value of actions that have been taken less frequently and acts to measure the uncertainty of action value estimates. For this study we implement algorithms that make use of both these action selection methods in order to have a comparison and to investigate how action selection might affect the use of RL in automated pentesting.

a_t = argmax_{a ∊ 𝓐} Q(a) with probability (1 − ε), or a random a ∊ 𝓐 with probability ε     (4.1)

A_t = argmax_a [ Q(a) + c √(ln t / N_t(a)) ]     (4.2)

Q-learning is an off-policy temporal-difference algorithm for learning the action-state values and is defined by the update function in equation (4.3),

Q(S_t, A_t) ← Q(S_t, A_t) + α [ R_{t+1} + γ max_a Q(S_{t+1}, a) − Q(S_t, A_t) ]     (4.3)

where α is the step size, which controls how much to move the current estimate towards the new estimate, and γ is the discount factor, which controls how much to weight immediate versus future rewards. Q-learning has been shown to converge to the optimal state-action values as the number of visits to each state-action pair approaches ∞.

The other major design choice is the form that the value function, Q(s, a), takes. There are two main options: i) tabular methods and ii) function approximation [14]. Tabular methods use a table-like data structure to store the state-action value for each state-action pair, with pairs being updated as the agent receives experience. Function approximation methods, on the other hand, use a function to generate the state-action values and update the parameters of the function to improve these estimates as the agent gains more experience. In this study we implement both tabular and function approximation methods to investigate how both perform when applied to network pentesting. Tabular methods have the advantage that they are simple to implement and can often find exact optimal solutions. Their use, however, is limited to problems with relatively small state sizes since each state-action pair must be stored. They also treat each state independently and so cannot make use of knowledge learned about different but similar states to generalize to unseen states. Function approximation methods, on the other hand, can be used with arbitrarily large state spaces and are capable of generalizing to unseen states. However, they are more complicated to implement, requiring the use of an extra function to approximate the value function, with performance strongly affected by the function representation used. There are many options for how to represent the function, however the method gaining the most attention recently, and for which the biggest RL improvements have been seen in recent years, has been the use of deep neural networks. Q-learning with deep neural network function approximation is known as Deep Q-learning and has been used to produce state-of-the-art results in a number of environments, including the game of Go and many Atari video games [46].

# 4.3 Penetration testing using Reinforcement learning

In order to use model-free RL for automated pentesting we need to frame the problem as an MDP while leaving the model unknown. As discussed previously, an MDP is defined by the tuple {𝓢, 𝓐, 𝓡, 𝓣}. We represent states, actions and reward as presented in Chapter 3, with states being the status and configuration knowledge of the agent for each machine on the network, actions being the available scans and exploits for each machine, and the reward given by the value of newly compromised machines minus the cost of the action (table 4.2.1). The transition model, 𝓣, is left unknown.

Table 4.2.1 | Reinforcement learning MDP definition, where |M| represents the number of machines on the network and |E| represents the number of exploitable services.

Component | Definition
𝓢 | |M| x {compromised} x {reachable} x |E| x {machine service knowledge}, where compromised ∊ {true, false}, reachable ∊ {true, false} and machine service knowledge ∊ {absent, present, unknown}
𝓐 | |M| x {scan, exploit} x |E|
𝓡(s’, a, s) | value(s’, s) - cost(a)
𝓣(s’, a, s) | unknown

Since this is a primary investigation into the use of RL for pentesting, we use one key assumption: the agent has complete knowledge of the network topology. This means the agent knows the address of each machine on the network and their connectivity.
This assumption is made in most attack planning approaches, and is based on the availability of the topology data from a client requesting a penetration testing assessment in principle, but would mean the state representation vector would change over time since we would not know beforehand how many machines are on the network and so the size of a state would grow as more machines are discovered. Applying RL to problems where the state representation is dynamic is much more difficult and the vast majority of work into RL involves stationary state representations. For this reason we believe it is better to have this assumption at this early stage of the research and then if RL proves promising, perhaps attempt to apply it with no knowledge of the network topology and mimic exactly the point of view of a real attacker. # Reinforcement learning algorithms As there are currently no studies investigation the use of RL for automated pentesting we decided to implement a range of RL algorithms. Specifically, we make use of three different Q-learning algorithms: tabular Q-learning using ε-greedy action selection (tabular ε-greedy), tabular Q-learning using UCB action selection (tabular UCB) and deep Q-learning using a single layer neural network and ε-greedy action selection (DQL). These algorithms were chosen as they provide both tabular and function approximation RL implementations, while using tabular UCB also provides the opportunity to investigate how action selection strategy affects performance in the pentesting domain. Algorithm 1 and Algorithm 2, respectively. These are standard implementations of Q-learning based on work presented in Sutton and Barto . The action-value function is stored using a [14] hashmap with states as keys and action-values as values. The key difference between the two tabular implementations is the action selection method along with the use of an extra data structure to store visit counts for each state-action pair in the tabular UCB implementation. Otherwise the algorithms are the same. 52 Algorithm 1 Tabular Q-learning with e-greedy action selection 1: Initialize Q(s, a), for all s € S,a € A(s), arbitrarily 2: for episode — 1, V do: 3: S, = initial state 4 5 for step — 1, T do: With probability € select random action a; 6: otherwise select a; = arg max, Q(s;, a) 7: Execute action a; in simulator and observe reward r; and state 5,41 8: Q(st, a4) = Q(st, az) + a[re + ymaxa Q(st41, a) — Q(S2, at)] 9: if s;41 is terminal then 10: end episode 11: end if 12: St = St4i 13: end for 14: end for Algorithm 2 Tabular Q-learning with UCB action selection 1: Initialize Q(s,a), for all s € S,a € A(s), arbitrarily 2: Initialize N(s,a) = 0, for all s € S,a € A(s) 3: for episode = 1, V do: 4: S$, = initial state 5 for step = 1, T do: 6: a, = arg max,[Q(si,a) +¢ ateeal : Execute action a; in simulator and observe reward r; and state 5444 8: Q(st,44) = Q(s¢, a4) + afr, + ymax, Q(s141,a) — Q(s:, a4)] 9: N(s¢,4t) = N(st,a¢) +1 10: if s,41 is terminal then 11: end episode 12: end if 13: St = St41 14: end for 15: end for as first presented by Mnih et al. algorithm was chosen compared with the original DQL approach, which used only a single neural network, as it had improved performance in terms of learning rate and stability We use a fully-connected single layer neural network for both the main and target neural networks, which takes a state vector as input and outputs the predicted value for each action (fig 4.3.1). 
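A minimal Keras sketch of such a Q-network is shown below. The hidden layer size and optimizer settings follow the hyperparameters reported later in table 4.4.2, and the 0.9 value is assumed here to be the RMSprop rho; the input and output sizes are purely illustrative placeholders, since they depend on |M| and |E|, and the construction in the project source may differ.

```python
# Minimal sketch of the single-hidden-layer Q-network used for Deep Q-learning.
# Both the main and target networks share this architecture.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

def build_q_network(state_size: int, num_actions: int, hidden_size: int = 256):
    model = Sequential([
        Dense(hidden_size, activation="relu", input_shape=(state_size,)),
        Dense(num_actions, activation="linear"),  # one Q-value per scan/exploit action
    ])
    model.compile(optimizer=RMSprop(learning_rate=0.00025, rho=0.9), loss="mse")
    return model

state_size, num_actions = 64, 32  # illustrative sizes only; they depend on |M| and |E|
main_net = build_q_network(state_size, num_actions)
target_net = build_q_network(state_size, num_actions)
target_net.set_weights(main_net.get_weights())  # target network synced every C steps
```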
The state vector is an ordered array of information for each machine on the network, 53 while the output is an ordered array of values for each scan and exploit for each machine on the network. We utilize ε-greedy action selection for the DQL algorithm so that for a given state, the next action is chosen by selecting the action with the highest predicted value with probability 1 - ε, and a random action uniformly at random with ε probability. Algorithm 3 Deep Q-learning with experience replay 1: Initialize replay memory D to capacity Nv 2: Initialize action-value function Q, with random weights 6 3: Initialize target action-value function Q, with weights 0 =0 4 5 : for episode = 1, V do: : S1 = initial state 6: for step = 1, T do: i With probability € select a random action ay otherwise select a; = arg max, Q(s¢, a; 0) 9: Execute action a; in simulator and observe reward r; and state s;41 10: Store transition (s¢,a¢, 74, $¢41) in D 11: Sample random minibatch of transitions (s;,a;,17;,5j41) from D ii Sat i — ( ; ; if sj is tenia rj +ymaxy Q(sj+41,0';0), otherwise 13: Perform a gradient descent step on (y; — Q(s;,a;;9)) with respect to weights 0 14: Every C steps reset Q =i) 15: if s;,, is terminal then 16: end episode 17: end if 18: St = St41 19: end for 20: end for 54 State | Input Hidden Output ( Action | r Ma, 0) [ex] Me, 0) db Mey) Mary) Figure 4.3.1 | Schematic illustration of the Deep Q-network. Input layer is a state represented by a 1D vector of information for each machine in network. The single hidden layer is fully connected to both input and output layers. The output layer is the action value for each scan and exploit for all machines in network. # 4.4 Experiments This study aimed to investigate the application of RL to Automated Penetration Testing. The main questions we wanted to answer were: 1) Can RL be used to find an attack path through a network when one exists? 2) How optimal is the attack path generated by the RL agent, as defined by the reward function? 3) How does RL scale with increased network size and number of exploits? 4) How general is the RL approach? Does it work on different network configurations? 5) How do the different RL approaches compare? exploiting the sensitive machines on the network. For the case of our experiments (2) will be optimizing over the action cost. As discussed in chapter 3, action cost can be used to represent any desired property of an exploit, e.g. time required, monetary cost or chance of detection. For (3) network size means the number of machines, |M|. on the network. It is worth noting that both the state and action space grow with # and the number of exploitable services, |M| # , so |E| 55 increasing affect the dynamics of the system the same so it was important we investigated how performance was affected by both increasing and |M| We investigated the performance of each RL algorithm by measuring performance on a range of scenarios. The general experimental procedure was to choose a scenario (i.e. network size, topology, number of services, etc..), train the agent for a set period of time and then evaluate the final policy generated by the agent. The following sections describe each of these steps in detail. Experiment scenarios We tested the different RL algorithms on a number of different network scenarios using the NAS we developed. In particular we used three different computer network designs: i) the standard network described in chapter 3 ii) single site network and iii) multi-site wide-area network. 
Unfortunately, we were unable to find specific case study network designs to use for our scenarios apart from the standard network design which was based on practical commercial experience of the Sarraute et al. a simple single location network and a multi-location network of the same size. This architecture has been used in previous studies testing automated pentesting agents in simulated environments means of comparison. Additionally, the NAS supports generation of random scenarios using this design based on the number of machines and number of exploits. We use this feature when investigating the scaling properties of the different RL algorithms. The single site and multi-site network scenarios are shown in figure 4.4.2 and 4.4.3. All three network scenarios contain 16 machines, five exploitable services and two sensitive machines. This was chosen so we could investigate the effect of different network architectures on the RL algorithm performance. Similarly, all scenarios require a minimum of three machines to be exploited in order to gain access to all the sensitive documents and complete the scenario. The three fixed scenarios described are used to test questions (1), (2), (4) and (5) outlined at the start of this section. In order to investigate question (3) as well as (1) and (5), we tested the RL algorithms against the standard generated network scenarios while varying either the number of machines in the network and using a constant number of exploitable services or varying the number of exploitable services and keeping the number of machines constant . 56 User-1 a - =Q— QOGe J (3,0) (3,1) (3,2) (G3) (3,4) fe oi = tPtptp r= ii ig! i | 8 8 8 | = ie ‘smtp: | ‘smtp: CE ~~ User-2 User-3 (2,0) 4,0) 41) 4,2) & 3) CS 4) (, 0) ey G,2) 6,3) smtp ssh ssh ssh a ee ssh http ssh ssh Figure 4.4.1 Labels beneath each machine indicate, the exploitable services running on that machine. Labels along edge to firewall from subnet indicates exploitable service traffic allowed through firewall. Documents above a machine indicate a valuable machine. 57 Main Network HHHARGGR a9 @) @2 G3) @4 @5) G6 «a7 tp tp tp tp | Pp tp SQ2aennag (1, 8) (1,9) (4, 10) (1) (1,12) (1,13) (1,14) (1, 15) samba ftp ssh ftp ae ftp ftp http Me & Figure 4.4.2 | The single site network scenario (see figure 4.4.1 for diagram information). elucidate the effect of the variable of interest. Table 4.4.1, provides details of the different parameter values used. The max steps value chosen as this tended to give good performance across a range of scenario sizes during preliminary testing. Similarly for the values of sensitive machines and action costs. # Experiment scenario parameters and their values Parameter Value Description Max steps 500 Exploit cost 1 Scan cost 1 Cost of performing a scan action Sensitive machine value 10 Reward received when sensitive machine is exploited Maximum number of steps allowed per episode, before environment is reset to start state 58 Main Site DMZ User | Bes Boaeoaae 2,0) ean} 3,0) e G2 G3) G4 G5) ssh ssh ssh http ssh ssh ssh ssh il Saver 1 | ~ | 4 8&— BB ~2 (2,0) (2,1) smip smtp ‘smtp——_ an (____smtp—— 9 Sl 9 i i ae Remote-1 Remote-2 Remote-3 (4,0) (4,1) 6,0) 6,1) (6,0) (6,1) ftp ftp ftp ftp 1p, samba | samba » a ) Figure 4.4.3 | The multi-site network scenario (see figure 4.4.1 for diagram information). Training details For each scenario we trained each agent for two minutes before evaluating the learned policy. 
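A sketch of this time-budgeted training procedure is shown below. The agent and environment interfaces are placeholders rather than the project's actual API; the point is that training runs whole episodes until the fixed wall-clock budget is exhausted.

```python
# Illustrative sketch of training an agent for a fixed wall-clock budget (two minutes).
import time

def train_for_time_budget(env, agent, budget_seconds=120, max_steps=500):
    episodes = 0
    start = time.time()
    while time.time() - start < budget_seconds:
        state = env.reset()
        for _ in range(max_steps):
            action = agent.choose_action(state)
            next_state, reward, done = env.step(action)
            agent.update(state, action, reward, next_state)
            state = next_state
            if done:
                break
        episodes += 1
    return episodes  # e.g. comparable to the episode counts reported in table 4.5.1
```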
We chose to train the agents for a set time period since the time per step of the tabular agents is significantly faster than the DQL algorithms and since, in practical terms, if applying RL in a commercial setting we would be most interested in how long it takes an agent to find an attack path in terms of time compared to training episodes. The time limit of two minutes was chosen for practical reasons, as it allowed a large enough variety of scenarios to be tested and solved while still being short enough to not make running of many experiments too time consuming. 59 chosen are shown in table 4.4.2. Hyperparameters selection was done using a mixture of informal search on different size standard generated networks, selecting default choices from the machine learning library used 𝜺-greedy and DQL we also used 𝜺-decay, which acts to reduce 𝜺, and hence the probability of choosing a random action, over time from the initial value, 𝜺 0.05, respectively for both Tabular 𝜺-greedy and DQL algorithms) (eq. 4.4). εt = εmin + ( max − εmin ε ) e −λt # Table 4.4.2 | List of hyperparameters used for each algorithm and their values Hyperparameter Tabular 𝜀𝜀-greedy Tabular UCB Step size, 𝛼 0.1 0.1 - Discount factor, 𝛾 0.99 0.99 0.99 Initial 𝜀 value, 𝜀 Final 𝜀 value, 𝜀 max min 1.0 0.05 - - 1.0 0.05 𝜀 decay rate, 𝜆 0.0001 - 0.0001 Confidence, c - 0.5 - Minibatch size - - 32 Hidden layer size - - 256 Replay memory size - - 10 000 Target network update frequency - - 1000 steps RMSprop learning rate - - 0.00025 - - 0.9 # Deep Q-learning 60 (4.4) # Evaluation procedure Following training the agents performance was evaluated by running its trained policy against the network scenario in the NAS either 10 or 30 times depending on the experiment. The policy was tested using 𝜺-greedy action selection with 𝜺 = 0.05, so to avoid the chance of the agent getting stuck in a state forever and to avoid overfitting of the policy to the scenario. For comparison, where applicable, we also used a random agent which selected actions uniformly at random in each state. For the custom scenarios used (standard, single site and multi-site), we trained the agent once against each scenario and then ran 30 evaluations of the trained agent. For the experiments where we generated scenarios using the NAS, we did 10 separate runs for each scenario using a different random seed each run and evaluated each of the runs 10 times. We chose to generate multiple versions of the same scenario since performance varied significantly depending on the generated configurations, even for the same network topology. This is due to differences in the number of exploitable services available on any given machine and also the exploit probabilities, since both these factors were randomly generated. # Experiment setup All experiments were conducted on single core on a personal laptop computer running the Linux Ubuntu 18.04 operating system. The test machine was running an Intel Core i7 7th Gen CPU at 2.7 GHz and had 16GB of RAM. All RL algorithms were implemented in the Python 3 programming language (Python Software Foundation, well-supported open source libraries. Specifically, the libraries used were numPy array based computation and Pandas [47] results. For the DQL algorithm we utilized the Keras Neural Network library the TensorFlow library backend only, with no GPU acceleration. Please see Appendix A for details on and access to source code for this project. 
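The evaluation procedure described above can be sketched as follows. The environment and agent interfaces (`env.actions`, `agent.best_action`) are hypothetical placeholders; the sketch only illustrates running the trained policy with ε = 0.05 and recording the solved proportion and rewards.

```python
# Illustrative sketch of evaluating a trained policy with epsilon-greedy action selection.
import random

def evaluate_policy(env, agent, runs=30, epsilon=0.05, max_steps=500):
    solved, rewards = 0, []
    for _ in range(runs):
        state = env.reset()
        total = 0.0
        for _ in range(max_steps):
            if random.random() < epsilon:
                action = random.choice(env.actions)  # occasional random action
            else:
                action = agent.best_action(state)    # greedy w.r.t. the learned Q-values
            state, reward, done = env.step(action)
            total += reward
            if done:                                 # all sensitive machines compromised
                solved += 1
                break
        rewards.append(total)
    return solved / runs, max(rewards)
```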
# 4.5 Results

# Custom Scenarios

For each of the different constructed network scenarios, i) standard, ii) single site and iii) multi-site, we measured the episodic reward during training as well as the final performance of the trained policy of each RL algorithm. Figure 4.5.1 shows the mean episodic reward versus the training episode. We averaged the reward over the last 100 episodes in order to produce a smoother reward signal. All three algorithms converged on the approximate optimal solution within the training time limit, shown in the plots by the convergence of the mean episodic reward to the theoretical max (red dashed line). The theoretical max is based on deterministic exploits and is the total value of the sensitive machines minus the total cost of the minimum number of exploits required to compromise them from the starting state. For the single-site and standard network scenarios, convergence occurred after a similar number of episodes for all three algorithms (~1000 episodes for the single-site and ~150 episodes for the standard network). For the multi-site network scenario, the DQL algorithm converged significantly faster, after ~100 episodes compared with >1000 episodes for the two tabular algorithms.

Figure 4.5.1 | Mean episodic reward versus training episode for the different RL algorithms and scenarios tested. The dotted red line represents the theoretical max reward obtainable for each scenario. The mean episodic reward is smoothed by averaging over the last 100 episodes.

We also measured the mean episodic reward as a function of training time (fig. 4.5.2). As opposed to reward versus episode, where DQL tended to learn faster, the two tabular algorithms converged to the approximate optimal solution significantly faster than DQL in terms of time. For the single-site and standard network scenarios, the tabular methods converged in <10 seconds, while DQL took ~50 seconds for the standard network and ~75 seconds for the single-site network. For the multi-site network scenario, convergence time was more similar, with all algorithms converging in <25 seconds; however, DQL was still the slowest by ~5 seconds.

Figure 4.5.2 | Mean episodic reward versus training time (seconds) for the different RL agents and scenarios tested. The dotted red line represents the theoretical max reward obtainable for each scenario. The mean episodic reward is smoothed by averaging over the last 100 episodes.

Another difference between the algorithms is how quickly they can complete an episode. To measure this we recorded the number of training episodes completed by each algorithm within the two minute training period (table 4.5.1).
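For concreteness, the trailing 100-episode average used to smooth the reward curves in figures 4.5.1 and 4.5.2 can be computed as in the following sketch. This is our own illustrative code rather than the thesis implementation, and the function name and input format are assumptions.

```python
import numpy as np

def smooth_rewards(episode_rewards, window=100):
    """Mean episodic reward over the last `window` episodes (trailing average),
    as used for the curves in figures 4.5.1 and 4.5.2."""
    rewards = np.asarray(episode_rewards, dtype=float)
    smoothed = np.empty_like(rewards)
    for i in range(len(rewards)):
        # Average over the current episode and up to the previous window-1 episodes.
        smoothed[i] = rewards[max(0, i - window + 1):i + 1].mean()
    return smoothed
```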
The tabular methods were significantly faster than the DQL algorithm, with Tabular 𝜀-greedy performing 50 times more episodes than DQL in the worst case and Tabular UCB performing 37 times more episodes than DQL in the worst case. This difference is expected due to the additional computation required by the DQL neural network. The increased speed of the tabular methods makes up for their slower learning rate per episode and still allows them to reach optimal performance in the tested scenarios.

Table 4.5.1 | Number of training episodes completed by each Reinforcement Learning algorithm and scenario during the two minute training period

| Scenario | Tabular ε-greedy | Tabular UCB | Deep Q-Learning |
| Standard | 240 999 | 196 728 | 3 959 |
| Single-site | 265 642 | 242 598 | 4 016 |
| Multi-site | 234 042 | 171 942 | 4 636 |

We measured the performance of the final trained policies using ε-greedy action selection with ε = 0.05. We recorded the proportion of the 30 evaluation runs which were solved by the agent, where a run was considered solved when the agent successfully exploited the sensitive machines on the network within the step limit (500 steps). We also recorded the maximum reward achieved by the agent for each scenario, along with the variance in reward over the 30 evaluation runs. Performance of a random policy was also measured for comparison.

The results of the trained agent evaluations are shown in figure 4.5.3 and recorded in table 4.5.2. All three algorithms and the random policy were able to solve each scenario on at least some of the runs, with the three RL agents performing significantly better than random on the multi-site and standard network scenarios. For the single-site network scenario, the Tabular ε-greedy agent actually performed worse than the random agent (0.9 vs 0.97 respectively) while the Tabular UCB and DQL algorithms performed as well as or better than random. The worse performance of Tabular ε-greedy is likely due to the ε-greedy action selection in the policy evaluation causing the agent to get stuck in a state not usually encountered along its normal trajectory to the goal. Additionally, the performance of the random agent is very high for this scenario, due to its simplicity compared with the other scenarios. The only algorithm able to solve 100% of all evaluation runs was the DQL algorithm. All algorithms were able to achieve an approximately optimal max reward for each scenario, as shown by the plot of max reward in figure 4.5.3. The random agent was able to solve each scenario; however, it took significantly more actions each time, as shown by the negative max reward it achieved. The performance of the DQL algorithm was the most consistent across the different scenarios, likely attributable to the better generalization of its policy to unseen states and states not along the optimal attack path.

Figure 4.5.3 | Evaluation performance of the Reinforcement Learning algorithms for each scenario. The left plot shows the proportion of the 30 evaluation runs solved. The right plot shows the max reward achieved over the 30 evaluation runs, with error bars showing the mean standard error of rewards.

Table 4.5.2 | Solve proportion and max reward achieved for each scenario and reinforcement learning algorithm following training.
Each cell gives the solved proportion followed by the max reward (± mean standard error).

| Scenario | Random | Tabular ε-greedy | Tabular UCB | Deep Q-Learning |
| Multi-site | 0.53, -177 (± 18.94) | 1.0, 15 (± 0.38) | 0.97, 15 (± 16.77) | 1.0, 16 (± 0.30) |
| Single-site | 0.97, -49 (± 24.92) | 0.9, 16 (± 28.64) | 0.97, 17 (± 17.18) | 1.0, 17 (± 0.24) |
| Standard | 0.4, -98 (± 25.45) | 1.0, 16 (± 0.32) | 0.97, 16 (± 16.82) | 1.0, 16 (± 0.3) |

# Scaling

We measured the scaling performance of each algorithm in terms of network size and number of exploits. We measured the effects of both using the standard generated network of the NAS. For each network scenario we tested 10 different random configurations and then measured performance on each configuration, following two minutes of training, using 10 evaluation runs with ε-greedy action selection with ε = 0.05. We measured performance using two metrics. The first was the mean solved proportion, which was an average of the proportion of evaluation runs solved for each different configuration, where a network was solved when all sensitive machines were exploited within the step limit of 500. The second metric was the mean reward, which was the average over runs of the reward achieved during evaluation runs. We also evaluated a random policy for comparison.

Figure 4.5.4 | Reinforcement Learning algorithm performance versus number of machines in auto-generated standard networks with the number of exploitable services fixed at 5. For each network size tested, performance was averaged over 10 evaluation runs of trained agents using an ε-greedy policy with ε = 0.05 for 10 different runs of each scenario, where machine configurations change between runs. The shaded areas show the mean standard error.

The effect of network size was tested by increasing the number of machines on the network from 3 to 43 machines in intervals of 5, while keeping the number of exploitable services fixed at 5. Figure 4.5.4 shows the results of the experiments. Performance was equal to or better than a random policy for all three algorithms up to networks containing 18 machines. For the tested networks with more than 18 machines, the performance of the DQL and Tabular UCB algorithms declined rapidly, although both algorithms were still able to solve more than 50% of scenarios for networks with 23 and 28 machines, with mean reward performance still better than random. Performance for Tabular ε-greedy was consistently high for networks up to and including 33 machines, after which performance rapidly dropped to worse than random for networks with 43 machines.

We measured the effect of increasing the number of exploits available to the agents by using a fixed size network of 18 machines and increasing the exploitable services available from 1 to 51 in intervals of 5. The results of the experiment are shown in figure 4.5.5. The effect of an increased number of exploits differed for each algorithm. Performance was relatively unaffected for Tabular ε-greedy, which maintained near optimal performance for all numbers of exploits tested. Tabular UCB had lower than optimal performance, but its performance remained relatively consistent as the number of exploits increased. The most affected was DQL, which had performance comparable with random for all values tested (we leave discussion of why this may have been the case to the next section).
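To make the two scaling metrics precise, the sketch below shows one way the mean solved proportion and mean reward could be aggregated over the 10 generated configurations and 10 evaluation runs per configuration. It is illustrative Python with an assumed input format, not the project's experiment scripts.

```python
import numpy as np

def scaling_metrics(eval_results):
    """Aggregate evaluation results for one scenario setting.

    `eval_results` is assumed to be a list with one entry per generated
    configuration; each entry is a list of (solved, reward) pairs, one per
    evaluation run.  Returns (mean solved proportion, mean reward), the two
    metrics plotted in figures 4.5.4 and 4.5.5.
    """
    # Proportion of runs solved, computed per configuration and then averaged.
    solved_per_config = [np.mean([float(solved) for solved, _ in runs])
                         for runs in eval_results]
    # Reward averaged over every evaluation run of every configuration.
    all_rewards = [reward for runs in eval_results for _, reward in runs]
    return float(np.mean(solved_per_config)), float(np.mean(all_rewards))
```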
Figure 4.5.5 | Reinforcement Learning algorithm performance versus number of exploitable services in auto-generated standard networks with the number of machines fixed at 18. For each setting tested, performance was averaged over 10 evaluation runs of trained agents using an ε-greedy policy with ε = 0.05 for 10 different runs of each scenario, where machine configurations change between runs. The shaded areas show the mean standard error.

# 4.6 Discussion

The aim of this study was to investigate the application of RL to pentesting. We did this by developing and testing a number of RL algorithms using the NAS we developed. In order to assess the applicability of RL we broke the problem down into five avenues of investigation. Specifically:

1) Can RL be used to find an attack path through a network when one exists?
2) How optimal is the attack path generated by the RL agent, as defined by the reward function?
3) How does RL scale with increased network size and number of exploits?
4) How general is the RL approach? Does it work on different network configurations?
5) How do the different RL approaches compare?

We aimed to answer questions (1), (2) and (4) by constructing some custom scenarios and then measuring the performance of the attack policies generated by trained RL agents in those scenarios. The scenarios are shown in figures 4.4.1-3, while the results of our experiments are shown in figures 4.5.1-3 and tables 4.5.1-2. For each scenario, all RL algorithms tested were able to find an attack policy that led to all sensitive machines on the target network being compromised while also minimizing the number of actions taken, performing significantly better than a random policy in terms of max reward achieved. Based on these results, within the simulated environment RL is able to find a valid attack path (1) that is also optimal in the reward achieved and hence action cost (2), and is also able to be applied to different network configurations (4).

Before drawing conclusions about the applicability of RL to real-world pentesting, it is worth noting some limitations of the experiments. In particular, the scenarios were limited to networks containing 16 machines with five exploitable services, and so were relatively small in size compared with the large commercial networks of hundreds of machines that can be found in real-world settings. However, along these lines, the agents were only allowed two minutes of training, with the slowest time to convergence being roughly 80 seconds for the DQL algorithm (fig. 4.5.2). It would therefore be expected that the RL agents could be applied to larger problem sizes given more training time. Another limitation is that we only tested three different network topologies, while the possible topologies encountered in the real world are endless.
One of the key advantages of RL is its generality, as demonstrated by its success in achieving human-level or better performance across a wide range of very distinct games using a single algorithm design [27], [46]. We would therefore expect the approach to generalize to other network topologies, since for the RL algorithms the difference in underlying problem structure and complexity between different network topologies is relatively small; the key difference comes in the size of the networks.

To investigate how RL performance differed with problem size we looked at the scalability of RL (3). The scalability of the different RL algorithms was measured both in terms of the number of machines on the network (fig. 4.5.4) and the number of exploits available to the agent (fig. 4.5.5). We found that for all the algorithms tested, performance dropped rapidly beyond a certain network size (18 machines for Tabular UCB and DQL, and 38 for Tabular ε-greedy). The drop in performance is expected since, for the pentesting problem as we have defined it, the size of the state space grows exponentially with the number of machines on the network (eq. 3.1), while the training time was kept constant at two minutes. For larger networks, increasing training time would likely lead to improved performance, at least up to a point. The performance of tabular RL algorithms is known to suffer when the state space grows very large, since they do not generalize action-values between similar states. This can be seen in the steep drop in performance, to below that of a random policy, of the tabular RL algorithms for networks with 43 machines (fig. 4.5.4), while the DQL algorithm is still able to outperform a random policy for this network size. Using DQL or related function-approximation approaches to RL, it should be possible to scale these algorithms to deal with much larger networks given enough training time. As an example of the scalability of such approaches, the AlphaGo RL agent is able to achieve better than human-level performance in the game of Go despite its enormous state space [15].

Another approach to handling the exponential increase in the state space size would be to decompose the problem into two separate agents. One agent would work at the network topology level, deciding which machine to target next, while the second agent works at the individual machine level, deciding which exploits to use against a given machine. This approach has been shown to work well for model-based automated pentesting using POMDPs. One promising approach for model-free automated pentesting would be the use of hierarchical RL, a technique that can be applied to MDPs to decompose actions into different levels of abstraction; however, this is still an area of active research in the RL domain and so it may be some time before it becomes a reliable and efficient method. Given the possibility of scaling up RL via more efficient function-approximation methods or via problem decomposition, it is possible to imagine RL being applied to very large networks.

Apart from large network size, another of the challenges facing pentesting is the complexity and rapid evolution of the services present on a given system and the set of exploits available. Every year new exploits are found for software, and so in order to be useful, automated pentesting agents will need to be able to handle a large and growing database of exploits. We investigated the performance of the RL algorithms as the number of available exploits increased (fig. 4.5.5).
We found that the performance of the tabular RL algorithms was relatively unaffected by an increased number of exploits, with the Tabular 𝜀-greedy algorithm maintaining near optimal performance and solving almost 100% of runs for every scenario configuration tested. On the other hand, the performance of DQL was not much better than a random policy. We believe this reduced performance of DQL with an increasing number of exploits is due to two reasons. Firstly, increasing the number of actions increases the size of the neural network used by the DQL algorithm, which slows down the number of operations per second and hence the number of steps the algorithm can perform within the two minute training period. This is coupled with the fact that the state space grows as the number of exploits increases, so slower exploration of a larger space means the agent cannot explore sufficiently to find the optimal attack plan within the time limit. The second key reason is that, unlike growth in the number of machines, which mostly adds very similar states that the neural network can generalize across (since many machines on the network can be ignored as they are not along the optimal path), an increase in the number of exploits increases the number of state possibilities along the optimal path itself, since the branching factor for each machine grows with the number of exploits. Decomposing the automated penetration testing problem into a higher level topology agent and a machine exploiting agent would help alleviate this problem, since then adding a new exploit would only mean a single increase in state space size for the machine level agent, rather than an exponential increase as is the case with the current design.

Based on these scaling experiments, it would appear that the approach used in this study would not scale well to large networks and large numbers of exploits. The tabular RL algorithms are able to scale well with the number of actions, while not being able to scale to networks with many machines. Conversely, it may be possible to scale the DQL algorithm to larger networks given more training time and more advanced algorithms; however, this approach does not scale well with an increased number of actions. In terms of action selection strategy, 𝜀-greedy action selection learned faster, produced better policies for the majority of scenarios tested and scaled significantly better than UCB for the tabular algorithms, so it is a clear choice for future research and application.

# 4.7 Conclusion

This study was a preliminary investigation into the application of RL to pentesting. We demonstrated that in a simulated environment it is possible to use RL to generate a policy that is capable of successfully and optimally exploiting a target network. Taking this a step further, it would be possible to use the policy information to determine which components of a network need patching and then repeat the process in order to improve the security of a network. The key advantage of using RL over other AI planning techniques is that it requires no prior knowledge of the transition model of the exploits and network and so is very general in its application, with the same algorithm able to be applied to different network topologies and configurations with varying numbers of exploits available. However, before RL can be practically applied to automated pentesting, there are still a number of hurdles to overcome.
In particular, the algorithms must be scaled with the number of machines on the network and also with the number of exploits available to the agents. Further work on developing more efficient algorithms or on decomposing the pentesting problem will help greatly to solve these issues.

# Chapter 5

# Conclusions

# 5.1 Summary and conclusions

In this project we wanted to investigate solutions for the challenging problem of cybersecurity, in particular the growing shortage of skilled professionals. As with many industries, automation is an obvious choice of solution when tackling a skills shortage. We worked on applying the AI approach of RL to automating network pentesting. This involved two stages, the first of which was to design and build a NAS that could be used to design and test automated pentesting agents in a fast, easy-to-use environment. The second stage was to utilize the NAS to study the application of RL algorithms to automated pentesting.

The main finding of this work was that within a simulated environment it is possible for RL algorithms to find optimal attack paths through a target computer network. We were able to successfully apply RL to a range of network topologies and configurations with the only prior knowledge being the topology of the network and the set of available scans and exploits. This offers a key advantage over current approaches to automated pentesting, which rely on having an accurate model of the exploit outcomes, something that is hard to produce in the rapidly evolving cyber security landscape. However, we also found that the RL algorithms we implemented are limited in their applicability to relatively small networks and numbers of exploits. Additionally, RL relies heavily on having a high-fidelity simulator for training agents before they are capable of being used in a real-world scenario.

# 5.2 Possible future work

This work was a preliminary investigation, and as such has led to many more interesting problems to solve before this technology may be practically used in a commercial environment. Firstly, we need to develop more scalable RL algorithms that are able to scale to the size of modern large networks while also handling hundreds if not thousands of possible exploits. The next step after that will be to apply these algorithms in more realistic environments, such as VM networks using information from real organizational networks, in order to determine how they can be applied in real-world settings.

# Appendix A

# Program listing

All the source code and results for this project can be found at the github repo: https://github.com/Jjschwartz/NetworkAttackSimulator

The code for this project is all written in Python 3 and is split into a number of separate modules. The main modules are the Environment, the Agents and the Experiments. There is also a module for generating a POMDP file, however this was not used for the work covered in this thesis and so is ignored for the sake of clarity.

# A.1 Environment

This module contains all the code for the Network Attack Simulator. The program architecture is shown in figure 3.3.1 in Chapter 3 of this thesis. Table A1 provides a list of each file in the module and a description of its role in the Network Attack Simulator.
Table A1 | Files in the Environment module and their descriptions

| File | Description |
| action.py | Contains the Action class, which defines an action in the NAS |
| environment.py | The main class for the NAS. It controls the network attack model and provides the main functions with which to interact with the NAS |
| generator.py | Contains all the functionality for automatically generating a network configuration given the number of machines and services to run |
| loader.py | Contains all functionality for loading a custom network configuration from a .yaml file |
| machine.py | Contains the Machine class, which defines a machine in the network model |
| network.py | The network model class, which defines and controls the behaviour and configuration of the network in the NAS |
| render.py | Contains functionality for rendering the environment and episodes |
| state.py | Contains the State class, which defines a state in the NAS |

# A.2 Agents

This module contains all the different Q-learning agents that were implemented for this project. Table A2 provides a description of all files in this module.

Table A2 | Files in the Agent module and their descriptions

| File | Description |
| agent.py | Contains the abstract Agent class, which all RL agents inherit from |
| dqn.py | Contains the Deep Q-Learning agent class |
| q_learning.py | Contains the Tabular Q-learning agents (both 𝜀-greedy and UCB) |
| random.py | Contains the Random agent |
| tabular.py | Contains the abstract TabularAgent class, which the Tabular Q-learning agents inherit from |
| tuner.py | Contains some functionality for performing hyperparameter search on the different agents |

# A.3 Experiments

This module contains all the different scripts used to run the experiments on the RL agents and NAS. Table A3 provides a description of all files in this module.

Table A3 | Files in the Experiment module and their descriptions

| File | Description |
| agent_perf_eval_plotter.py | Contains functionality for summarizing and displaying the results generated by evaluating trained agents using agent_perf_exp.py |
| agent_perf_exp.py | Runs performance and scaling experiments on RL agents |
| agent_perf_plotter.py | Contains functionality for plotting the episodic performance of RL agents |
| agent_scaling_plotter.py | Contains functionality for plotting the scaling performance of RL agents |
| env_perf_exp.py | Contains some functionality for testing the running performance of the NAS |
| env_perf_plotter.py | Functionality for plotting the results of experiments run on the NAS |
| experiment_util.py | |
| results/ | Contains the main result files from the study |

# Bibliography

[1] T. J. Holt, O. Smirnova, and Y. T. Chua, "Exploring and Estimating the Revenues and Profits of ...," Deviant Behav., vol. 37, no. 4, pp. 353–367, Apr. 2016.
[2] Symantec Corporation, "Internet Security Threat Report," Volume 22, Apr. 2017.
[3] Australian Cyber Security Centre, "ACSC Threat Report," Australian Government, 2017.
[4] R. von Solms and J. van Niekerk, "From information security to cyber security," Comput. Secur., vol. 38, pp. 97–102, Oct. 2013.
[5] Department of Industry, "$50 million investment into cyber security research and industry solutions," Ministers for the Department of Industry, Innovation and Science, 22-Sep-2017. [Online]. Available: http://minister.industry.gov.au/ministers/craiglaundy/media-releases/50-million-investment-cyber-security-research-and-industry. [Accessed: 13-Mar-2018].
[6] B. Arkin, S. Stender, and G. McGraw, "Software penetration testing," IEEE Secur. Priv., vol. 3, no. 1, pp. 84–87, Jan.–Feb. 2005.
[7] A. IVan, "Why Attacking Systems Is a Good Idea," IEEE Secur. Priv., 2004.
[8] Cisco, "Mitigating the Cyber Skills Shortage," CISCO, 2015.
[9] C. Sarraute, Automated Attack Planning. 2012.
[10] C. Phillips and L. P. Swiler, "A graph-based system for network-vulnerability analysis," in Proceedings of the 1998 Workshop on New Security Paradigms, 1998, pp. 71–79.
[11] C. Sarraute, O. Buffet, and J. Hoffmann, "Penetration Testing == POMDP Solving?," in SecArt, 2011.
[12] C. Sarraute, O. Buffet, and J. Hoffmann, "POMDPs Make Better Hackers: Accounting for Uncertainty in Penetration Testing," in AAAI, 2012.
[13] J. Hoffmann, "Simulated penetration testing: From 'dijkstra' to 'turing test++'," presented at the International Conference on Automated Planning and Scheduling (ICAPS), 2015, vol. 2015-January, pp. 364–372.
[14] R. S. Sutton and A. G. Barto, "Reinforcement learning: An introduction," 2011.
[15] D. Silver et al., Nature, vol. 550, no. 7676, pp. 354–359, Oct. 2017.
[16] J. Kober, J. A. Bagnell, and J. Peters, "Reinforcement learning in robotics: A survey," Int. J. Rob. Res., vol. 32, no. 11, pp. 1238–1274, Sep. 2013.
[17] R. R. Linde, "Operating system penetration," in Proceedings of the May 19-22, 1975, National Computer Conference and Exposition, 1975, pp. 361–368.
[18] F. Holik, J. Horalek, O. Marik, S. Neradova, and S. Zitta, "Effective penetration testing with Metasploit framework and methodologies," presented at CINTI 2014 - 15th IEEE International Symposium on Computational Intelligence and Informatics, 2014, pp. 237–242.
[19] G. Lyon, Nmap Network Scanning: Official Nmap Project Guide to Network Discovery and Security Scanning. Insecure.Com, LLC, 2008.
[20] P. Ammann, D. Wijesekera, and S. Kaushik, "Scalable, graph-based network vulnerability analysis," in Proceedings of the 9th ACM Conference on Computer and Communications Security, 2002, pp. 217–224.
[21] M. S. Boddy, J. Gohde, T. Haigh, and S. A. Harp, "Course of Action Generation for Cyber Security Using Classical Planning," in ICAPS, 2005, pp. 12–21.
[22] K. Durkota and V. Lisý, "Computing Optimal Policies for Attack Graphs with Action Failures and Costs," in STAIRS, 2014, pp. 101–110.
[23] R. Bellman, "A Markovian Decision Process," Journal of Mathematics and Mechanics, vol. 6, no. 5, pp. 679–684, 1957.
[24] G. E. Monahan, "State of the art—a survey of partially observable Markov decision processes: theory, models, and algorithms," Manage. Sci., vol. 28, no. 1, pp. 1–16, 1982.
[25] A. R. Cassandra, Exact and approximate algorithms for partially observable Markov decision processes. Brown University, 1998.
[26] M. Mundhenk, J. Goldsmith, C. Lusena, and E. Allender, "Complexity of finite-horizon Markov decision process problems," J. ACM, vol. 47, no. 4, pp. 681–720, 2000.
[27] V. Mnih et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, p. 529, Feb. 2015.
[28] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling, "The Arcade Learning Environment: An Evaluation Platform for General Agents," vol. 47, pp. 253–279, Jun. 2013.
[29] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.
[30] Modeling and Tools for Network Simulation, K. Wehrle, M. Güneş, and J. Gross, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 15–34.
[31] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: rapid prototyping for software-defined networks," in Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, 2010, p. 19.
[32] A. Futoransky, F. Miranda, J. Orlicki, and C. Sarraute, Simulating Cyber-Attacks for Fun and Profit. 2010.
[33] T. E. Oliphant, Guide to NumPy. USA: Trelgol Publishing, 2006.
[34] J. D. Hunter, "Matplotlib: A 2D Graphics Environment," Comput. Sci. Eng., vol. 9, no. 3, pp. 90–95, 2007.
[35] A. A. Hagberg, D. A. Schult, and P. J. Swart, "Exploring Network Structure, Dynamics, and Function using NetworkX," in Proceedings of the 7th Python in Science Conference, 2008, pp. 11–15.
[36] Martín Abadi et al., "TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems," 2015.
[37] F. Chollet and others, "Keras," 2015. [Online].
[38] M. Backes, J. Hoffmann, R. Künnemann, P. Speicher, and M. Steinmetz, "Simulated Penetration ...," arXiv:1705.05088 [cs], May 2017.
[39] S. Donnelly, "Soft Target: The Top 10 Vulnerabilities Used by Cybercriminals," Recorded Future, 2018.
[40] P. Mell, K. Scarfone, and S. Romanosky, "Common Vulnerability Scoring System," IEEE Secur. Priv., vol. 4, no. 6, pp. 85–89, 2006.
[41] First, "Common Vulnerability Scoring System SIG," First. [Online]. Available: https://www.first.org/cvss/. [Accessed: 31-Oct-2018].
[42] Oracle, Oracle VM VirtualBox. 2017.
[43] Rapid, Metasploitable 2 Virtual Machine. 2015.
[44] C. J. Watkins and P. Dayan, "Q-learning," Mach. Learn., vol. 8, no. 3–4, pp. 279–292, 1992.
[45] P. Auer, N. Cesa-Bianchi, and P. Fischer, "Finite-time Analysis of the Multiarmed Bandit Problem," Mach. Learn., vol. 47, no. 2, pp. 235–256, May 2002.
[46] V. Mnih et al., "Playing Atari with Deep Reinforcement Learning," arXiv [cs.LG], 19-Dec-2013.
[47] W. McKinney, "Data Structures for Statistical Computing in Python," in Proceedings of the 9th Python in Science Conference, 2010, pp. 51–56.
[48] R. S. Sutton, D. Precup, and S. Singh, "Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning," Artif. Intell., vol. 112, no. 1, pp. 181–211, Aug. 1999.
{ "id": "1705.05088" }
1905.05055
Object Detection in 20 Years: A Survey
Object detection, as of one the most fundamental and challenging problems in computer vision, has received great attention in recent years. Over the past two decades, we have seen a rapid technological evolution of object detection and its profound impact on the entire computer vision field. If we consider today's object detection technique as a revolution driven by deep learning, then back in the 1990s, we would see the ingenious thinking and long-term perspective design of early computer vision. This paper extensively reviews this fast-moving research field in the light of technical evolution, spanning over a quarter-century's time (from the 1990s to 2022). A number of topics have been covered in this paper, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and the recent state-of-the-art detection methods.
http://arxiv.org/pdf/1905.05055
Zhengxia Zou, Keyan Chen, Zhenwei Shi, Yuhong Guo, Jieping Ye
cs.CV
Accepted by Proceedings of the IEEE
null
cs.CV
20190513
20230118
Object Detection in 20 Years: A Survey

Zhengxia Zou*, Keyan Chen, Zhenwei Shi, Member, IEEE, Yuhong Guo, and Jieping Ye*, Fellow, IEEE

Abstract—Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Over the past two decades, we have seen a rapid technological evolution of object detection and its profound impact on the entire computer vision field. If we consider today's object detection technique as a revolution driven by deep learning, then back in the 1990s, we would see the ingenious thinking and long-term perspective design of early computer vision. This paper extensively reviews this fast-moving research field in the light of technical evolution, spanning over a quarter-century's time (from the 1990s to 2022). A number of topics have been covered in this paper, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and the recent state-of-the-art detection methods.

Fig. 1: The increasing number of publications in object detection from 1998 to 2021. (Data from Google scholar advanced search: allintitle: "object detection" OR "detecting objects".)

Index Terms—Object detection, Computer vision, Deep learning, Convolutional neural networks, Technical evolution.

# I. INTRODUCTION

Object detection is an important computer vision task that deals with detecting instances of visual objects of a certain class (such as humans, animals, or cars) in digital images. The goal of object detection is to develop computational models and techniques that provide one of the most basic pieces of knowledge needed by computer vision applications: What objects are where? The two most significant metrics for object detection are accuracy (including classification accuracy and localization accuracy) and speed.

Object detection serves as a basis for many other computer vision tasks, such as instance segmentation [1–4], image captioning [5–7], object tracking [8], etc. In recent years, the rapid development of deep learning techniques [9] has greatly promoted the progress of object detection, leading to remarkable breakthroughs and propelling it to a research hot-spot with unprecedented attention. Object detection has now been widely used in many real-world applications, such as autonomous driving, robot vision, video surveillance, etc. Fig. 1 shows the growing number of publications that are associated with "object detection" over the past two decades.

As different detection tasks have totally different objectives and constraints, their difficulties may vary from each other. In addition to some common challenges in other computer vision tasks such as objects under different viewpoints, illuminations, and intraclass variations, the challenges in object detection include but are not limited to the following aspects: object rotation and scale changes (e.g., small objects), accurate object localization, dense and occluded object detection, speed up of detection, etc. In Sec. IV, we will give a more detailed analysis of these topics.

This survey seeks to provide novices with a complete grasp of object detection technology from many viewpoints, with an emphasis on its evolution.
The key features are three-folds: a comprehensive review in the light of technical evolutions, an in-depth exploration of the key technologies and the recent state of the arts, and a comprehensive analysis of detection speed-up techniques. The main clue focuses on the past, present, and future, complemented with some other necessary components in object detection, like datasets, metrics, and acceleration techniques. Standing on the technical highway, this survey aims to present the evolution of related technologies, allowing readers to grasp the essential concepts and find potential future directions, while neglecting their technical specifics.

The work was supported by the National Natural Science Foundation of China under Grant 62125102, the National Key Research and Development Program of China (Titled "Brain-inspired General Vision Models and Applications"), and the Fundamental Research Funds for the Central Universities. (Corresponding Author: Zhengxia Zou ([email protected]) and Jieping Ye ([email protected])). Zhengxia Zou is with the Department of Guidance, Navigation and Control, School of Astronautics, Beihang University, Beijing 100191, China, and also with Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China. Keyan Chen and Zhenwei Shi are with the Image Processing Center, School of Astronautics, and with the Beijing Key Laboratory of Digital Media, and with the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China, and also with the Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China. Yuhong Guo is with the School of Computer Science, Carleton University, Ottawa, Ontario, K1S 5B6, Canada. Jieping Ye is with the Alibaba Group, Hangzhou 310030, China.

The rest of this paper is organized as follows. In Section II, we review the 20 years' evolution of object detection. In Section III, we review the speed-up techniques in object detection. The state-of-the-art detection methods of the recent three years are reviewed in Section IV. In Section V, we conclude this paper and make a deep analysis of the further research directions.

# II. OBJECT DETECTION IN 20 YEARS

In this section, we will review the history of object detection from multiple views, including milestone detectors, datasets, metrics and the evolution of key techniques.

Fig. 2: A road map of object detection. Milestone detectors in this figure: VJ Det. [10, 11], HOG Det. [12], DPM [13–15], RCNN [16], SPPNet [17], Fast RCNN [18], Faster RCNN [19], YOLO [20–22], SSD [23], FPN [24], Retina-Net [25], CornerNet [26], CenterNet [27], DETR [28].

# A.
A Road Map of Object Detection In the past two decades, it is widely accepted that the progress of object detection has generally gone through two historical periods: “traditional object detection period (be- fore 2014)” and “deep learning based detection period (after 2014)”, as shown in Fig. 2. In the following, we will summa- rize the milestone detectors of this period, with the emergence time and performance serving as the main clue to highlight the behind driving technology, seeing Fig. 3. 1) Milestones: Traditional Detectors: If we consider to- day’s object detection technique as a revolution driven by deep learning, then back in the 1990s, we would see the ingenious design and long-term perspective of early computer vision. Most of the early object detection algorithms were built based on handcrafted features. Due to the lack of effective image representation at that time, people have to design sophisticated feature representations and a variety of speed-up skills. Viola Jones Detectors: In 2001, P. Viola and M. Jones achieved real-time detection of human faces for the first time without any constraints (e.g., skin color segmentation) [10, 11]. Running on a 700MHz Pentium III CPU, the detector was tens or even hundreds of times faster than other algorithms in its time under comparable detection accuracy. The VJ detector follows a most straightforward way of detection, i.e., sliding windows: to go through all possible locations and scales in an image to see if any window contains a human face. Although it seems to be a very simple process, the calculation behind it was far beyond the computer’s power of its time. The VJ detector has dramatically improved its detection speed by incorporating three important techniques: “integral image”, “feature selection”, and “detection cascades” (to be introduced in section III). dense grid of uniformly spaced cells and use overlapping local contrast normalization (on “blocks”). Although HOG can be used to detect a variety of object classes, it was motivated primarily by the problem of pedestrian detection. To detect objects of different sizes, the HOG detector rescales the input image for multiple times while keeping the size of a detection window unchanged. The HOG detector has been an important foundation of many object detectors [13, 14, 32] and a large variety of computer vision applications for many years. (DPM): DPM, as the winners of VOC-07, -08, and -09 detection challenges, was the epitome of the traditional object detection methods. DPM was originally proposed by P. Felzenszwalb [13] in 2008 as an extension of the HOG detector. It follows the detection philosophy of “divide and conquer”, where the training can be simply considered as the learning of a proper way of de- composing an object, and the inference can be considered as an ensemble of detections on different object parts. For example, the problem of detecting a “car” can be decomposed to the detection of its window, body, and wheels. This part of the work, a.k.a. “star-model”, was introduced by P. Felzenszwalb et al. [13]. Later on, R. Girshick has further extended the star model to the “mixture models” to deal with the objects in the real world under more significant variations and has made a series of other improvements [14, 15, 33, 34]. Although today’s object detectors have far surpassed DPM in detection accuracy, many of them are still deeply influenced by its valuable insights, e.g., mixture models, hard negative mining, bounding box regression, context priming, etc. 
In 2010, P. Felzenszwalb and R. Girshick were awarded the “lifetime achievement” by PASCAL VOC. HOG Detector: In 2005, N. Dalal and B. Triggs proposed Histogram of Oriented Gradients (HOG) feature descriptor [12]. HOG can be considered as an important improvement of the scale-invariant feature transform [29, 30] and shape contexts [31] of its time. To balance the feature invariance (including translation, scale, illumination, etc) and the nonlin- earity, the HOG descriptor is designed to be computed on a 2) Milestones: CNN based Two-stage Detectors: As the performance of hand-crafted features became saturated, the research of object detection reached a plateau after 2010. In 2012, the world saw the rebirth of convolutional neural networks [35]. As a deep convolutional network is able to learn robust and high-level feature representations of an image, a natural question arises: can we introduce it to object detection? R. Girshick et al. took the lead to break the deadlocks in 2 # Object detection accuracy improvements 8500 23.40 A 2000 —o=voC07 mar 76.20 c= voci2 mar o- COCO mAPeLS, 95] 7000 =O=COCO mares 70.00 7500 TST 7130 7080 7 cas 6440 65:70 Nn 61.0 37.70 6500 6000 5850 53.70 5500 __ 5000 dese 4710 om Z 4210 30.10 35:90 of 3620 44.70 4500 4000 a1g0 3500 3000 2500 21.90 2000 19.70 1500 . oe on oes pass ®yo2hio3oyo8\gsya se HSRC Sete eeasee o Se a oo # maP Fig. 3: Accuracy improvement of object detection on VOC07, VOC12 and MS-COCO datasets. Detectors in this figure: DPM-v1 [13], DPM-v5 [37], RCNN [16], SPPNet [17], Fast RCNN [18], Faster RCNN [19], SSD [23], FPN [24], Retina- Net [25], RefineDet [38], TridentNet [39] CenterNet [40], FCOS [41], HTC [42], YOLOv4 [22], Deformable DETR [43], Swin Transformer [44]. 2014 by proposing the Regions with CNN features (RCNN) [16, 36]. Since then, object detection started to evolve at an unprecedented speed. There are two groups of detectors in the deep learning era: “two-stage detectors” and “one-stage detectors”, where the former frames the detection as a “coarse- to-fine” process while the latter frames it as to “complete in one step”. RCNN: The idea behind RCNN is simple: It starts with the extraction of a set of object proposals (object candidate boxes) by selective search [45]. Then each proposal is rescaled to a fixed size image and fed into a CNN model pretrained on ImageNet (say, AlexNet [35]) to extract features. Finally, linear SVM classifiers are used to predict the presence of an object within each region and to recognize object categories. RCNN yields a significant performance boost on VOC07, with a large improvement of mean Average Precision (mAP) from 33.7% (DPM-v5 [46]) to 58.5%. Although RCNN has made great progress, its drawbacks are obvious: the redundant fea- ture computations on a large number of overlapped proposals (over 2000 boxes from one image) lead to an extremely slow detection speed (14s per image with GPU). Later in the same year, SPPNet [17] was proposed and has solved this problem. SPPNet: In 2014, K. He et al. proposed Spatial Pyramid Pooling Networks (SPPNet) [17]. Previous CNN models re- quire a fixed-size input, e.g., a 224x224 image for AlexNet [35]. The main contribution of SPPNet is the introduction of a Spatial Pyramid Pooling (SPP) layer, which enables a CNN to generate a fixed-length representation regardless of the size of the image/region of interest without rescaling it. 
When using SPPNet for object detection, the feature maps can be computed from the entire image only once, and then fixed- length representations of arbitrary regions can be generated for training the detectors, which avoids repeatedly computing the convolutional features. SPPNet is more than 20 times faster than R-CNN without sacrificing any detection accuracy (VOC07 mAP=59.2%). Although SPPNet has effectively im- proved the detection speed, it still has some drawbacks: first, the training is still multi-stage, second, SPPNet only fine-tunes its fully connected layers while simply ignoring all previous layers. Later in the next year, Fast RCNN [18] was proposed and solved these problems. Fast RCNN: In 2015, R. Girshick proposed Fast RCNN detector [18], which is a further improvement of R-CNN and SPPNet [16, 17]. Fast RCNN enables us to simultaneously train a detector and a bounding box regressor under the same network configurations. On VOC07 dataset, Fast RCNN increased the mAP from 58.5% (RCNN) to 70.0% while with a detection speed over 200 times faster than R-CNN. Although Fast-RCNN successfully integrates the advantages of R-CNN and SPPNet, its detection speed is still limited by the proposal detection (see Section II-C1 for more details). Then, a question naturally arises: “can we generate object proposals with a CNN model?” Later, Faster R-CNN [19] answered this question. Faster RCNN: In 2015, S. Ren et al. proposed Faster RCNN detector [19, 47] shortly after the Fast RCNN. Faster RCNN is the first near-realtime deep learning detector (COCO [email protected]=42.7%, VOC07 mAP=73.2%, 17fps with ZF-Net [48]). The main contribution of Faster-RCNN is the introduc- tion of Region Proposal Network (RPN) that enables nearly cost-free region proposals. From R-CNN to Faster RCNN, most individual blocks of an object detection system, e.g., pro- posal detection, feature extraction, bounding box regression, etc, have been gradually integrated into a unified, end-to-end learning framework. Although Faster RCNN breaks through the speed bottleneck of Fast RCNN, there is still computation redundancy at the subsequent detection stage. Later on, a variety of improvements have been proposed, including RFCN [49] and Light head RCNN [50]. (See more details in Section III.) Feature Pyramid Networks (FPN): In 2017, T.-Y. Lin et al. proposed FPN [24]. Before FPN, most of the deep learning based detectors run detection only on the feature maps of the networks’ top layer. Although the features in deeper layers of a CNN are beneficial for category recognition, it is not conducive to localizing objects. To this end, a top- down architecture with lateral connections is developed in FPN for building high-level semantics at all scales. Since a CNN naturally forms a feature pyramid through its forward propagation, the FPN shows great advances for detecting objects with a wide variety of scales. Using FPN in a basic Faster R-CNN system, it achieves state-of-the-art single model detection results on the COCO dataset without bells and whistles (COCO [email protected]=59.1%). FPN has now become a basic building block of many latest detectors. 3) Milestones: CNN based One-stage Detectors: Most of the two-stage detectors follow a coarse-to-fine processing paradigm. The coarse strives to improve recall ability, while the fine refines the localization on the basis of the coarse detection, and places more emphasis on the discriminate ability. 
They can easily attain a high precision without any bells and whistles, but rarely employed in engineering due to 3 the poor speed and enormous complexity. On the contrary, one- stage detectors can retrieve all objects in one-step inference. They are well-liked by mobile devices with real-time and easy- deployed features, but their performance suffers noticeably when detecting dense and small objects. You Only Look Once (YOLO): YOLO was proposed by R. Joseph et al. in 2015. It was the first one-stage detector in the deep learning era [20]. YOLO is extremely fast: a fast version of YOLO runs at 155fps with VOC07 mAP=52.7%, while its enhanced version runs at 45fps with VOC07 mAP=63.4%. YOLO follows a totally different paradigm from two-stage de- tectors: to apply a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region simultaneously. In spite of its great improvement of detection speed, YOLO suffers from a drop of localization accuracy compared with two- stage detectors, especially for some small objects. YOLO’s subsequent versions [21, 22, 51] and the latter proposed SSD [23] has paid more attention to this problem. Recently, YOLOv7 [52], a follow-up work from YOLOv4 team, has been proposed. It outperforms most existing object detectors in terms of speed and accuracy (range from 5 FPS to 160 FPS) by introducing optimized structures like dynamic label assignment and model structure reparameterization. Single Shot MultiBox Detector (SSD): SSD [23] was proposed by W. Liu et al. in 2015. The main contribution of SSD is the introduction of the multi-reference and multi- resolution detection techniques (to be introduced in Section II-C1), which significantly improves the detection accuracy of a one-stage detector, especially for some small objects. SSD has advantages in terms of both detection speed and accuracy (COCO [email protected]=46.5%, a fast version runs at 59fps). The main difference between SSD and previous detectors is that SSD detects objects of different scales on different layers of the network, while the previous ones only run detection on their top layers. RetinaNet: Despite its high speed and simplicity, the one- stage detectors have trailed the accuracy of two-stage detectors for years. T.-Y. Lin et al. have explored the reasons behind and proposed RetinaNet in 2017 [25]. They found that the extreme foreground-background class imbalance encountered during the training of dense detectors is the central cause. To this end, a new loss function named “focal loss” has been introduced in RetinaNet by reshaping the standard cross entropy loss so that detector will put more focus on hard, misclassified examples during training. Focal Loss enables the one-stage detectors to achieve comparable accuracy of two- stage detectors while maintaining a very high detection speed (COCO [email protected]=59.1%). CornerNet: Previous methods primarily used anchor boxes to provide classification and regression references. Objects frequently exhibit variation in terms of number, location, scale, ratio, etc. They have to follow the path of setting up a large number of reference boxes to better match ground truths in order to achieve high performance. However, the network would suffer from further category imbalance, lots of hand-designed hyper-parameters, and a long convergence time. To address these problems, H. 
Law et al [26] discard the previous detection paradigm, and view the task as a keypoint (corners of a box) prediction problem. After obtaining the key points, it will decouple and re-group the corner points using extra embedding information to form the bounding boxes. CornerNet outperforms most one-stage detectors at that time (COCO [email protected]=57.8%). CenterNet: X. Zhou et al proposed CenterNet [40] in 2019. It also follows a keypoint-based detection paradigm, but elim- inates costly post-processes such as group-based keypoint as- signment (in CornerNet [26], ExtremeNet [53], etc) and NMS, resulting in a fully end-to-end detection network. CenterNet considers an object to be a single point (the object’s center) and regresses all of its attributes (such as size, orientation, location, pose, etc) based on the reference center point. The model is simple and elegant, and it can integrate 3-D object detection, human pose estimation, optical flow learning, depth estima- tion, and other tasks into a single framework. Despite using such a concise detection concept, CenterNet can also achieve comparative detection results (COCO [email protected]=61.1%). DETR: In recent years, Transformers have deeply affected the entire field of deep learning, particularly the field of com- puter vision. Transformers discard the traditional convolution operator in favor of attention-alone calculation in order to overcome the limitations of CNNs and obtain a global-scale receptive field. In 2020, N. Carion et al proposed DETR [28], where they viewed object detection as a set prediction problem and proposed an end-to-end detection network with Transformers. So far, object detection has entered a new era in which objects can be detected without the use of anchor boxes or anchor points. Later, X. Zhu et al proposed Deformable DETR [43] to address the DETR’s long convergence time and limited performance on detecting small objects. It achieves state-of-the-art performance on MSCOCO dataset (COCO [email protected]=71.9%). B. Object Detection Datasets and Metrics 1) Datasets: Building larger datasets with less bias is es- sential for developing advanced detection algorithms. A num- ber of well-known detection datasets have been released in the past 10 years, including the datasets of PASCAL VOC Chal- lenges [54, 55] (e.g., VOC2007, VOC2012), ImageNet Large Scale Visual Recognition Challenge (e.g., ILSVRC2014) [56], MS-COCO Detection Challenge [57], Open Images Dataset [58, 59], Objects365 [60], etc. The statistics of these datasets are given in Table I. Fig. 4 shows some image examples of these datasets. Fig. 3 shows the improvements of detection accuracy on VOC07, VOC12 and MS-COCO datasets from 2008 to 2021. Pascal VOC: The PASCAL Visual Object Classes (VOC) Challenges1 (from 2005 to 2012) [54, 55] was one of the most important competitions in the early computer vision community. Two versions of Pascal-VOC are mostly used in object detection: VOC07 and VOC12, where the former consists of 5k tr. images + 12k annotated objects, and the latter consists of 11k tr. images + 27k annotated objects. 20 classes 1http://host.robots.ox.ac.uk/pascal/VOC/ 4 (c) Fig. 4: Some example images and annotations in (a) PASCAL-VOC07, (b) ILSVRC, (c) MS-COCO, and (d) Open Images. 
B. Object Detection Datasets and Metrics

1) Datasets: Building larger datasets with less bias is essential for developing advanced detection algorithms. A number of well-known detection datasets have been released in the past 10 years, including the PASCAL VOC Challenges [54, 55] (e.g., VOC2007, VOC2012), the ImageNet Large Scale Visual Recognition Challenge (e.g., ILSVRC2014) [56], the MS-COCO Detection Challenge [57], the Open Images Dataset [58, 59], Objects365 [60], etc. The statistics of these datasets are given in Table I. Fig. 4 shows some example images from these datasets, and Fig. 3 shows the improvement of detection accuracy on the VOC07, VOC12 and MS-COCO datasets from 2008 to 2021.

Pascal VOC: The PASCAL Visual Object Classes (VOC) Challenges (from 2005 to 2012, http://host.robots.ox.ac.uk/pascal/VOC/) [54, 55] were among the most important competitions in the early computer vision community. Two versions of Pascal-VOC are mostly used in object detection: VOC07 and VOC12, where the former consists of 5k training images with 12k annotated objects, and the latter consists of 11k training images with 27k annotated objects. Twenty classes of objects that are common in everyday life are annotated in these two datasets, e.g., "person", "cat", "bicycle", "sofa", etc.

Fig. 4: Some example images and annotations in (a) PASCAL-VOC07, (b) ILSVRC, (c) MS-COCO, and (d) Open Images.

TABLE I: Some well-known object detection datasets and their statistics.

| Dataset | train images | train objects | val images | val objects | trainval images | trainval objects | test images | test objects |
| VOC-2007 | 2,501 | 6,301 | 2,510 | 6,307 | 5,011 | 12,608 | 4,952 | 14,976 |
| VOC-2012 | 5,717 | 13,609 | 5,823 | 13,841 | 11,540 | 27,450 | 10,991 | - |
| ILSVRC-2014 | 456,567 | 478,807 | 20,121 | 55,502 | 476,688 | 534,309 | 40,152 | - |
| ILSVRC-2017 | 456,567 | 478,807 | 20,121 | 55,502 | 476,688 | 534,309 | 65,500 | - |
| MS-COCO-2015 | 82,783 | 604,907 | 40,504 | 291,875 | 123,287 | 896,782 | 81,434 | - |
| MS-COCO-2017 | 118,287 | 860,001 | 5,000 | 36,781 | 123,287 | 896,782 | 40,670 | - |
| Objects365-2019 | 600,000 | 9,623,000 | 38,000 | 479,000 | 638,000 | 10,102,000 | 100,000 | 1,700,00 |
| OID-2020 | 1,743,042 | 14,610,229 | 41,620 | 303,980 | 1,784,662 | 14,914,209 | 125,436 | 937,327 |

ILSVRC: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC, http://image-net.org/challenges/LSVRC/) [56] pushed forward the state of the art in generic object detection. ILSVRC was organized each year from 2010 to 2017 and contains a detection challenge using ImageNet images [61]. The ILSVRC detection dataset contains 200 classes of visual objects; the number of its images/object instances is two orders of magnitude larger than VOC.

MS-COCO: MS-COCO (http://cocodataset.org/) [57] is one of the most challenging object detection datasets available today. The annual competition based on the MS-COCO dataset has been held since 2015. It has fewer object categories than ILSVRC but more object instances; for example, MS-COCO-17 contains 164k images and 897k annotated objects from 80 categories. Compared with VOC and ILSVRC, the biggest advance of MS-COCO is that, apart from the bounding box annotations, each object is further labeled with a per-instance segmentation to aid precise localization. In addition, MS-COCO contains more small objects (whose area is smaller than 1% of the image) and more densely located objects. Just like ImageNet in its time, MS-COCO has become the de facto standard for the object detection community.

Open Images: The year 2018 saw the introduction of the Open Images Detection (OID) challenge (https://storage.googleapis.com/openimages/web/index.html) [62], following MS-COCO but at an unprecedented scale. There are two tasks in Open Images: 1) standard object detection, and 2) visual relationship detection, which detects paired objects in particular relations. For the standard detection task, the dataset consists of 1,910k images with 15,440k annotated bounding boxes on 600 object categories.

2) Metrics: How can we evaluate the accuracy of a detector? This question has had different answers at different times. In early detection research, there were no widely accepted evaluation metrics for detection accuracy. For example, in early research on pedestrian detection [12], the "miss rate vs. false positives per window (FPPW)" was commonly used as the metric. However, the per-window measurement can be flawed and fails to predict full-image performance [63]. In 2009, the Caltech pedestrian detection benchmark was introduced [63, 64], and since then the evaluation metric has changed from FPPW to false positives per image (FPPI). In recent years, the most frequently used evaluation for detection is "Average Precision (AP)", which was originally introduced in VOC2007. AP is defined as the average detection precision under different recalls and is usually evaluated in a category-specific manner. The mean AP (mAP), averaged over all categories, is usually used as the final metric of performance. To measure object localization accuracy, the IoU between the predicted box and the ground truth is checked against a predefined threshold, say, 0.5: if it is greater, the object is counted as "detected", otherwise as "missed". The 0.5-IoU mAP has then become the de facto metric for object detection. After 2014, due to the introduction of the MS-COCO dataset, researchers started to pay more attention to the accuracy of object localization. Instead of using a fixed IoU threshold, MS-COCO AP is averaged over multiple IoU thresholds between 0.5 and 0.95, which encourages more accurate object localization and may be of great importance for some real-world applications (e.g., imagine a robot trying to grasp a spanner).
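As a concrete illustration of how AP is computed for a single class from ranked detections, the following is a simplified sketch (single image, single class). It is not the exact VOC or COCO implementation: it ignores "difficult" flags and crowd regions, and it uses a 101-recall-point interpolation similar in spirit to the COCO evaluator.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """detections: list of (score, box); gt_boxes: list of boxes, one class."""
    detections = sorted(detections, key=lambda d: -d[0])  # rank by confidence
    matched, tp = set(), np.zeros(len(detections))
    for i, (_, box) in enumerate(detections):
        # Greedily match each detection to the best still-unmatched ground truth.
        best_j, best_iou = -1, iou_thr
        for j, gt in enumerate(gt_boxes):
            if j not in matched and iou(box, gt) >= best_iou:
                best_j, best_iou = j, iou(box, gt)
        if best_j >= 0:
            tp[i] = 1
            matched.add(best_j)
    cum_tp = np.cumsum(tp)
    recall = cum_tp / max(len(gt_boxes), 1)
    precision = cum_tp / (np.arange(len(detections)) + 1)
    # Integrate precision over 101 recall points.
    ap = 0.0
    for t in np.linspace(0, 1, 101):
        p = precision[recall >= t].max() if np.any(recall >= t) else 0.0
        ap += p / 101
    return ap
```

Averaging this quantity over all categories gives mAP, and repeating the computation for IoU thresholds from 0.5 to 0.95 and averaging gives the COCO-style AP described above.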
Fig. 5: Evolution of multi-scale detection techniques in object detection. Detectors in this figure: VJ Det. [10], HOG Det. [12], DPM [13], Exemplar SVM [32], Overfeat [65], RCNN [16], SPPNet [17], Fast RCNN [18], Faster RCNN [19], DNN Det. [66], YOLO [20], SSD [23], Unified Det. [67], FPN [24], RetinaNet [25], RefineDet [38], Cascade R-CNN [68], Swin Transformer [44], FCOS [41], YOLOv4 [22], CornerNet [26], CenterNet [40], Reppoints [69], DETR [28].

# C. Technical Evolution in Object Detection

In this section, we will introduce some important building blocks of a detection system and their technical evolution. First, we describe multi-scale detection and context priming in model design, followed by the sample selection strategy and the design of the loss function in the training process, and lastly non-maximum suppression at inference. The time-stamps in the charts and text are given by the publication time of the papers, and the evolution order shown in the figures is primarily intended to assist readers' understanding; there may be temporal overlap between stages.
1) Technical Evolution of Multi-Scale Detection: Multi-scale detection of objects with "different sizes" and "different aspect ratios" is one of the main technical challenges in object detection. In the past 20 years, multi-scale detection has gone through multiple historical periods, as shown in Fig. 5.

Feature pyramids + sliding windows: After the VJ detector, researchers started to pay more attention to a more intuitive way of detection, i.e., building "feature pyramid + sliding windows". From 2004 on, a number of milestone detectors were built on this paradigm, including the HOG detector, DPM, etc. They typically slide a fixed-size detection window over the image, paying little attention to "different aspect ratios". To detect objects with more complex appearance, R. Girshick et al. began to seek better solutions beyond the feature pyramid. The "mixture model" [15] was one solution at that time, i.e., training multiple detectors for objects of different aspect ratios. Apart from this, exemplar-based detection [32, 70] provided another solution by training individual models for every object instance (exemplar).

Detection with object proposals: Object proposals are a group of class-agnostic reference boxes that are likely to contain objects. Detection with object proposals helps to avoid an exhaustive sliding-window search across the image. We refer readers to the following papers for a comprehensive review of this topic [71, 72]. Early proposal detection methods followed a bottom-up detection philosophy [73, 74]. After 2014, with the popularity of deep CNNs in visual recognition, top-down, learning-based approaches began to show more advantages on this problem [19, 75, 76]. Since the rise of one-stage detectors, proposal detection has gradually slipped out of sight.

Deep regression and anchor-free detection: In recent years, with the increase of GPU computing power, multi-scale detection has become more straightforward and brute-force. The idea of using deep regression to solve multi-scale problems is simple, i.e., to directly predict the coordinates of a bounding box based on deep learning features [20, 66]. After 2018, researchers began to think about the object detection problem from the perspective of keypoint detection. These methods often follow two ideas: one is the group-based approach, which detects keypoints (corners, centers, or representative points) and then conducts object-wise grouping [26, 53, 69, 77]; the other is the group-free approach, which regards an object as one or many points and then regresses the object's attributes (size, ratio, etc.) with respect to those reference points [40, 41].

Multi-reference/-resolution detection: Multi-reference detection is now the most widely used method for multi-scale detection [19, 22, 23, 41, 47, 51]. Its main idea is to first define a set of references (a.k.a. anchors, including boxes and points) at every location of an image, and then predict the detection box based on these references. Another popular technique is multi-resolution detection [23, 24, 44, 67, 68], i.e., detecting objects of different scales at different layers of the network. Multi-reference and multi-resolution detection have now become two basic building blocks in state-of-the-art object detection systems.
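As an illustration of the multi-reference idea, the sketch below generates anchor boxes of several scales and aspect ratios at every location of a feature map. The particular scales, ratios, and stride are arbitrary choices for the example, not the settings of any specific detector.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16,
                     scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return a (feat_h * feat_w * len(scales) * len(ratios), 4) array of
    anchor boxes in (x1, y1, x2, y2) image coordinates."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Center of the feature-map cell mapped back to image coordinates.
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # Keep the anchor area close to s*s while varying its shape.
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(anchors)

anchors = generate_anchors(feat_h=38, feat_w=50)
print(anchors.shape)  # (38 * 50 * 9, 4) = (17100, 4)
```

Each of these reference boxes is later classified and regressed toward a nearby ground truth, which is what the matching rule in Eq. (1) below formalizes.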
Fig. 6: Evolution of context priming in object detection. Detectors in this figure: Face Det. [78], MultiPath [79], GBDNet [80, 81], CC-Net [82], MultiRegion-CNN [83], CoupleNet [84], DPM [14, 15], StructDet [85], ION [86], RFCN++ [87], RBFNet [88], TridentNet [39], Non-local [89], DETR [28], CtxSVM [90], PersonContext [91], SMN [92], RelationNet [93], SIN [94], RescoringNet [95].

2) Technical Evolution of Context Priming: Visual objects are usually embedded in a typical context together with their surrounding environment. Our brain takes advantage of the associations among objects and environments to facilitate visual perception and cognition [96]. Context priming has long been used to improve detection; Fig. 6 shows its evolution in object detection.

Detection with local context: Local context refers to the visual information in the area that surrounds the object to detect. It has long been acknowledged that local context helps improve object detection. In the early 2000s, Sinha and Torralba [78] found that including local contextual regions such as the facial bounding contour substantially improves face detection performance. Dalal and Triggs also found that incorporating a small amount of background information improves the accuracy of pedestrian detection [12]. Recent deep learning based detectors can also be improved with local context by simply enlarging the network's receptive field or the size of the object proposals [79–84, 97].

Detection with global context: Global context exploits the scene configuration as an additional source of information for object detection. For early detectors, a common way of integrating global context was to incorporate a statistical summary of the elements that comprise the scene, like Gist [96]. For recent detectors, there are two methods to integrate global context. The first is to exploit deep convolution, dilated convolution, deformable convolution, or pooling operations [39, 87, 88] to obtain a large receptive field (even larger than the input image); more recently, researchers have applied attention-based mechanisms (non-local blocks, Transformers, etc.) to achieve a full-image receptive field and have obtained great success [28, 89]. The second is to treat the global context as a kind of sequential information and learn it with recurrent neural networks [86, 98].

Context interactive: Context interaction refers to the constraints and dependencies conveyed between visual elements. Some recent research suggests that modern detectors can be improved by considering such interactions. These improvements can be grouped into two categories: the first explores the relationships between individual objects [15, 85, 90, 92, 93, 95], and the second explores the dependencies between objects and scenes [91, 94].

Fig. 7: Evolution of hard negative mining techniques in object detection. Detectors in this figure: Face Det. [99], Haar Det. [100], VJ Det. [10], HOG Det. [12], DPM [13, 15], RCNN [16], SPPNet [17], Fast RCNN [18], Faster RCNN [19], YOLO [20], SSD [23], FasterPed [101], OHEM [102], RetinaNet [25], RefineDet [38], FCOS [41], YOLOv4 [22].

3) Technical Evolution of Hard Negative Mining: The training of a detector is essentially an imbalanced learning problem. In the case of sliding-window based detectors, the imbalance between backgrounds and objects can be as extreme as 10^7:1 [71]. In this case, using all background windows is harmful to training, as the vast number of easy negatives will overwhelm the learning process. Hard negative mining (HNM) aims to overcome this problem; its technical evolution is shown in Fig. 7.
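To illustrate the idea, the following minimal sketch keeps all positive anchors but only the highest-loss ("hardest") negatives, at a fixed negative-to-positive ratio. The 3:1 ratio is a common convention assumed here for the example, not a value prescribed by any particular paper.

```python
import torch

def mine_hard_negatives(cls_loss, labels, neg_pos_ratio=3):
    """Select which anchors contribute to the classification loss.

    cls_loss: per-anchor classification loss, shape (N,)
    labels:   1 for positive (object) anchors, 0 for background, shape (N,)
    Returns a boolean mask over the N anchors.
    """
    pos_mask = labels > 0
    num_pos = int(pos_mask.sum())
    num_neg = min(neg_pos_ratio * max(num_pos, 1), int((~pos_mask).sum()))
    # Rank background anchors by their loss and keep only the hardest ones.
    neg_loss = cls_loss.clone()
    neg_loss[pos_mask] = -1.0          # exclude positives from the ranking
    hard_neg_idx = torch.topk(neg_loss, k=num_neg).indices
    keep = pos_mask.clone()
    keep[hard_neg_idx] = True
    return keep

# Toy usage: 10 anchors, 1 positive; only the 3 hardest negatives are kept.
loss = torch.tensor([0.1, 2.0, 0.3, 0.05, 1.5, 0.2, 0.9, 0.01, 0.4, 3.0])
labels = torch.tensor([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
print(mine_hard_negatives(loss, labels).nonzero().flatten())  # tensor([0, 1, 4, 9])
```

The historical techniques below (bootstrapping, re-weighting, focal loss) are all different answers to the same question this mask answers: which of the overwhelmingly many background samples should actually drive the gradients.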
Bootstrap: Bootstrap in object detection refers to a group of training techniques in which training starts with a small set of background samples and then iteratively adds newly misclassified samples. In early detectors, bootstrap was commonly used to reduce the training computation over millions of background windows [10, 99, 100]. Later it became a standard technique in DPM and HOG detectors [12, 13] for solving the data imbalance problem.

HNM in deep learning based detectors: In the deep learning era, due to the increase of computing power, bootstrap was briefly discarded in object detection during 2014-2016 [16–20]. To ease the data-imbalance problem during training, detectors like Faster RCNN and YOLO simply balanced the weights between the positive and negative windows. However, researchers later noticed that this cannot completely solve the imbalance problem [25]. To this end, bootstrap was re-introduced to object detection after 2016 [23, 38, 101, 102]. An alternative improvement is to design new loss functions [25] by reshaping the standard cross-entropy loss so that it puts more focus on hard, misclassified examples [25].

4) Technical Evolution of Loss Function: The loss function measures the deviation of the model's predictions from the true labels. Calculating the loss yields the gradients of the model weights, which can subsequently be updated by backpropagation to better fit the data. Classification loss and localization loss together make up the supervision of the object detection problem; see Eq. (1). A general form of the loss function can be written as follows:

L(p, p^{*}, t, t^{*}) = L_{cls.}(p, p^{*}) + \beta I(t) L_{loc.}(t, t^{*}), \quad I(t) = \begin{cases} 1, & \mathrm{IoU}\{a, a^{*}\} > \eta \\ 0, & \text{else} \end{cases} \qquad (1)

where t and t^{*} are the locations of the predicted and ground-truth bounding boxes, and p and p^{*} are their category probabilities. IoU{a, a^{*}} is the IoU between the reference box/point a and its ground truth a^{*}, and \eta is an IoU threshold, say, 0.5. If an anchor box/point does not match any object, its localization loss does not count in the final loss.

Classification loss: The classification loss evaluates the divergence of the predicted category from the actual category. It was not thoroughly investigated in early works: YOLOv1 [20] and YOLOv2 [51] employed the MSE/L2 loss (mean squared error), and cross-entropy (CE) loss later became the typical choice [21, 23, 47]. L2 loss is a measure in Euclidean space, whereas CE loss measures distribution differences (a form of likelihood). Since the classification output is a probability, CE loss is preferable to L2 loss, with a larger misclassification cost and a smaller gradient-vanishing effect. To further improve classification, label smoothing has been proposed to enhance model generalization and alleviate over-confidence on noisy labels [103, 104], and focal loss is designed to address category imbalance and differences in classification difficulty [25].

Localization loss: The localization loss is used to optimize position and size deviations. L2 loss was prevalent in early research [16, 20, 51], but it is highly affected by outliers and prone to gradient explosion. Combining the benefits of L1 and L2 loss, researchers proposed the Smooth L1 loss [18], as illustrated in the following formula:

\mathrm{SmoothL1}(x) = \begin{cases} 0.5x^{2}, & |x| < 1 \\ |x| - 0.5, & \text{else} \end{cases} \qquad (2)

where x denotes the difference between the target and predicted values. When calculating the error, the above losses treat the four numbers (x, y, w, h) representing a bounding box as independent variables; however, a correlation exists between them. Moreover, IoU is what is used at evaluation time to determine whether a prediction box corresponds to a ground-truth box, and equal Smooth L1 values can correspond to totally different IoU values. Hence the IoU loss [105] was introduced as follows:

\mathrm{IoU\ loss} = -\log(\mathrm{IoU}) \qquad (3)

Following that, several algorithms improved the IoU loss. G-IoU (Generalized IoU) [106] handles the case where the IoU loss cannot optimize non-overlapping bounding boxes, i.e., IoU = 0. According to Distance-IoU [107], a successful detection regression loss should account for three geometric factors: overlap area, center point distance, and aspect ratio. Based on the IoU and G-IoU losses, DIoU (Distance IoU) therefore penalizes the distance between the center points of the prediction and the ground truth, and CIoU (Complete IoU) [107] further considers the aspect-ratio difference on top of DIoU.
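To make the localization losses above concrete, here is a small sketch of Smooth L1 and of the IoU/GIoU losses for axis-aligned boxes. It is a simplified illustration; box encodings and reduction schemes vary across detectors.

```python
import numpy as np

def smooth_l1(x):
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def iou_and_giou(box_p, box_g):
    """Boxes as (x1, y1, x2, y2). Returns (IoU, GIoU)."""
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    union = area_p + area_g - inter
    iou = inter / union
    # Smallest enclosing box, which lets GIoU penalize non-overlapping pairs.
    cx1, cy1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    cx2, cy2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area
    return iou, giou

pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
iou, giou = iou_and_giou(pred, gt)
print(smooth_l1(np.array([0.3, 2.0])))   # [0.045, 1.5]
print(-np.log(iou), 1.0 - giou)          # IoU loss (Eq. 3) and GIoU loss
```

Unlike the coordinate-wise Smooth L1 term, the IoU-based losses directly optimize the quantity the evaluation metric cares about, which is why they tend to improve localization quality.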
Fig. 8: Evolution of non-maximum suppression (NMS) techniques in object detection from 1994 to 2021: 1) greedy selection, 2) bounding box aggregation, 3) learning to NMS, and 4) NMS-free detection. Detectors in this figure: Face Det. [108], HOG Det. [12], DPM [13, 15], RCNN [16], SPPNet [17], Fast RCNN [18], Faster RCNN [19], YOLO [20], SSD [23], FPN [24], RetinaNet [25], FCOS [41], StrucDet [85], MAP-Det [109], LearnNMS [110], RelationNet [93], Learn2Rank [111], SoftNMS [112], FitnessNMS [113], SofterNMS [114], AdaptiveNMS [115], DIoUNMS [107], Overfeat [65], APC-NMS [116], MAPC [117], WBF [118], ClusterNMS [119], CenterNet [40], DETR [28], POTO [120].

5) Technical Evolution of Non-Maximum Suppression: As neighboring windows usually have similar detection scores, non-maximum suppression is used as a post-processing step to remove replicated bounding boxes and obtain the final detection result. In the early days of object detection, NMS was not always integrated [121], because the desired output of an object detection system was not entirely clear at that time. Fig. 8 shows the evolution of NMS over the past 20 years.

Greedy selection: Greedy selection is an old-fashioned but still the most popular way to perform NMS. The idea behind it is simple and intuitive: for a set of overlapping detections, the bounding box with the maximum detection score is selected while its neighboring boxes are removed according to a predefined overlap threshold.

Bounding box aggregation: BB aggregation is another group of NMS techniques [10, 65, 116, 117], with the idea of combining or clustering multiple overlapping bounding boxes into one final detection. The advantage of this type of method is that it takes full consideration of object relationships and their spatial layout [118, 119]. Some well-known detectors use this method, such as the VJ detector [10] and Overfeat (winner of the ILSVRC-13 localization task) [65].

Learning based NMS: A group of NMS improvements that has recently received much attention is learning-based NMS [85, 93, 109–111, 122]. The main idea is to think of NMS as a filter that re-scores all raw detections, and to either train the NMS as part of the network in an end-to-end fashion or train a network to imitate the behavior of NMS. These methods have shown promising results in improving occlusion and dense object detection over traditional hand-crafted NMS methods.

NMS-free detector: To get rid of NMS and achieve a fully end-to-end detection training pipeline, researchers have developed a series of methods based on one-to-one label assignment (i.e., one object with just one prediction box) [28, 40, 120]. These methods typically follow the rule of using the highest-quality box for training in order to be NMS-free. NMS-free detectors are closer to the human visual perception system and are a possible direction for the future of object detection.
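To make the greedy selection procedure described above concrete, here is a minimal sketch; the 0.5 IoU threshold is just an illustrative choice.

```python
import numpy as np

def greedy_nms(boxes, scores, iou_thr=0.5):
    """boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,). Returns kept indices."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]        # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU between the kept box and all remaining candidates.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop candidates that overlap the selected box above the threshold.
        order = order[1:][iou < iou_thr]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.8, 0.7])
print(greedy_nms(boxes, scores))  # [0, 2] — box 1 is suppressed by box 0
```

Variants such as Soft-NMS replace the hard removal step with a score decay, which is one way of addressing the shortcomings discussed next.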
Although greedy selection has now become the de facto method for NMS, it still has some space for improvement. First, the top-scoring box may not be the best fit. Second, it may suppress nearby objects. Finally, it does not suppress false positives [116]. Many works have been proposed to solve the problems mentioned above [107, 112, 114, 115]. # III. SPEED-UP OF DETECTION The acceleration of a detector has long been a challenging problem. The speed-up techniques in object detection can be divided into three levels of groups: speed up of “detection pipeline”, “detector backbone”, and “numerical computation”. , as shown in Fig. 9. Refer to [123] for a more detailed version. A. Feature Map Shared Computation Among the different computational stages of a detector, feature extraction usually dominates the amount of compu- tation. The most commonly used idea to reduce the feature computational redundancy is to compute the feature map of the whole image only once [18, 19, 124], which have achieved tens or even hundreds of times of acceleration. # B. Cascaded Detection Cascaded detection is a commonly used technique [10, 125]. It takes a coarse to fine detection philosophy: to filter out most of the simple background windows using simple calculations, then to process those more difficult windows with complex ones. In recent years, cascaded detection has been especially applied to those detection tasks of “small objects in large scenes”, e.g., face detection [126, 127], pedestrian detection [101, 124, 128], etc. C. Network Pruning and Quantification “Network pruning” and “network quantification” are two commonly used methods to speed up a CNN model. The former refers to pruning the network structure or weights and the latter refers to reducing their code length. The research of “network pruning” can be traced back to as early as the 1980s [129]. The recent network pruning methods usually take an iterative training and pruning process, i.e., to remove only a small group of unimportant weights after each stage of training, and to repeat those operations [130]. The recent works on network quantification mainly focus on network binarization, which aims to compress a network by quantifying its activations or weights to binary variables (say, 0/1) so that the floating-point operation is converted to logical operations. # D. Lightweight Network Design The last group of methods to speed up a CNN based detector is to directly design lightweight networks. In addition to some general designing principles like “fewer channels and more layers” [131], some other methods have been proposed in recent years [132–136]. 1) Factorizing Convolutions: Factorizing convolutions is the most straightforward way to build a lightweight CNN model. There are two groups of factorizing methods. The first group is to factorize a large convolution filter into a set of small ones [50, 87, 137], as shown in Fig. 10 (b). For example, one can factorize a 7x7 filter into three 3x3 filters, where they share the same receptive field but the latter one is more efficient. The second group is to factorize convolutions in their channel dimension [138, 139], as shown in Fig. 10 (c). 2) Group Convolution: Group convolution aims to reduce the number of parameters in a convolution layer by dividing the feature channels into different groups, and then convolve on each group independently [140, 141], as shown in Fig. 10 (d). 
If we evenly divide the features into m groups without changing other configurations, the computation will theoretically be reduced to 1/m of that before.

Fig. 9: An overview of the speed-up techniques in object detection: speed-up of the detection pipeline and detection engine (feature map shared computation, cascaded detection, network pruning and quantification, lightweight network design) and speed-up of numerical computations (integral image, FFT, vector quantization, reduced rank approximation).

Fig. 10: An overview of speed-up methods for a CNN's convolutional layer and a comparison of their computational complexity: (a) standard convolution: O(dk²c); (b) factoring convolutional filters (k×k → (k'×k')² or 1×k, k×1): O(dk'²c) or O(dkc); (c) factoring convolutional channels: O(d'k²c) + O(dk²d'); (d) group convolution (#groups = m): O(dk²c/m); (e) depth-wise separable convolution: O(ck²) + O(dc).

3) Depth-wise Separable Convolution: Depth-wise separable convolution [142], as shown in Fig. 10 (e), can be viewed as a special case of group convolution in which the number of groups equals the number of channels. Usually, a number of 1×1 filters are then used for a dimension transform so that the final output has the desired number of channels. By using depth-wise separable convolution, the computation can be reduced from O(dk²c) to O(ck²) + O(dc). This idea has recently been applied to object detection and fine-grained classification [143–145].

4) Bottle-neck Design: A bottleneck layer in a neural network contains few nodes compared to the previous layers. In recent years, the bottle-neck design has been widely used for designing lightweight networks [50, 133, 146–148]. Among these methods, the input layer of a detector can be compressed to reduce the amount of computation from the very beginning of detection [133, 146, 147]. One can also compress the feature map to make it thinner so as to speed up subsequent detection [50, 148].

5) Detection with NAS: Deep learning based detectors are becoming increasingly sophisticated and rely heavily on hand-crafted network architectures and training parameters. Neural architecture search (NAS) is primarily concerned with defining a proper space of candidate networks, improving strategies for searching quickly and accurately, and validating the search results at low cost. When designing a detection model, NAS can reduce the need for human intervention in the design of the network backbone and anchor boxes [149–155].

E. Numerical Acceleration

Numerical acceleration aims to accelerate object detectors from the bottom of their implementations.

1) Speed Up with Integral Image: The integral image is an important method in image processing that helps to rapidly calculate summations over image sub-regions. The essence of the integral image is the integral-differential separability of convolution in signal processing:

f(x) \ast g(x) = \left( \int f(x)\,dx \right) \ast \frac{dg(x)}{dx}, \qquad (4)

where, if dg(x)/dx is a sparse signal, the convolution can be accelerated by computing the right-hand side of this equation [10, 156].
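As a small, generic illustration of the integral image idea (a summed-area table over a single-channel image, not tied to any particular detector):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y1, x1, y2, x2):
    """Sum of img[y1:y2, x1:x2] in O(1) using four table lookups."""
    return ii[y2, x2] - ii[y1, x2] - ii[y2, x1] + ii[y1, x1]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3))   # 5 + 6 + 9 + 10 = 30
print(img[1:3, 1:3].sum())       # 30, same result by direct summation
```

Once the table is built, the sum over any rectangle costs four lookups regardless of its size, which is what makes Haar-like features and integral histograms so cheap to evaluate.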
Fig. 11: An illustration of how to compute the "Integral HOG Map" [124] (from the HOG map, with its gradient orientation vectors and cell/block orientation histograms, to the integral orientation image).

With integral image techniques, we can efficiently compute the histogram feature of any location and any size with constant computational complexity. The integral image can also be used to speed up more general features in object detection, e.g., the color histogram and gradient histogram [124, 157–159]. A typical example is to speed up HOG by computing integral HOG maps [124, 157], as shown in Fig. 11. The integral HOG map has been used in pedestrian detection and has achieved dozens of times of acceleration without losing any accuracy [124].

2) Speed Up in Frequency Domain: Convolution is an important type of numerical operation in object detection. The detection with a linear detector can be viewed as a window-wise inner product between the feature map and the detector's weights, which can be implemented by convolutions. The Fourier transform is a very practical way to speed up convolutions; the theoretical basis is the convolution theorem in signal processing, i.e., under suitable conditions, the convolution of two signals I ∗ W is the point-wise product in their Fourier space:

I \ast W = F^{-1}\big(F(I) \odot F(W)\big), \qquad (5)

where F is the Fourier transform, F^{-1} is the inverse Fourier transform, and ⊙ is the point-wise product. The above calculation can be accelerated by using the Fast Fourier Transform (FFT) and the Inverse FFT (IFFT) [160–163].

3) Vector Quantization: Vector quantization (VQ) is a classical quantization method in signal processing that aims to approximate the distribution of a large group of data by a small set of prototype vectors. It can be used for data compression and for accelerating the inner product operation in object detection [164, 165].

# IV. RECENT ADVANCES IN OBJECT DETECTION

The continual appearance of new technologies over the past two decades has had a considerable influence on object detection, while its fundamental principles and underlying logic have remained unchanged. In the above sections, we introduced the evolution of the technology over the past two decades on a large time scale to help readers comprehend object detection; in this section, we focus on state-of-the-art algorithms of recent years on a shorter time scale. Some are expansions of previously discussed techniques (e.g., Sec. IV-A – IV-E), while others are novel crossovers that mix concepts (e.g., Sec. IV-F – IV-H).

Fig. 12: Different training strategies for multi-scale object detection: (a) training on a single-resolution image, back-propagating objects of all scales [17–19, 23]; (b) training on multi-resolution images (image pyramid), back-propagating objects of selected scales; if an object is too large or too small, its gradient will be discarded [39, 176, 177].

A. Beyond Sliding Window Detection

Since an object in an image can be uniquely determined by the upper-left and lower-right corners of its ground-truth box, the detection task can be equivalently framed as a pair-wise keypoint localization problem.
One recent implementation of this idea is to predict a heat-map for the corners [26]. Some other methods follow the idea and utilize more key points (corner and center [77], extreme and center points [53], representative points [69] ) to obtain better performance. Another paradigm views an object as a point/points and directly predicts the object’s attributes (e.g. height and width) without grouping. The advantage of this approach is that it can be implemented under a semantic segmentation framework, and there is no need to design multi- scale anchor boxes. Furthermore, by viewing object detection as a set prediction, DETR [28, 43] completely liberates it in a reference-based framework. B. Robust Detection of Rotation and Scale Changes In recent years, efforts have been made on robust detection of rotation and scale changes. 1) Rotation Robust Detection: Object rotation is common to see in face detection, text detection, and remote sensing object detection. The most straightforward solution to this problem is to perform data augmentation so that an object in any orientation can be well covered by the augmented data distribution [166], or to train independent detectors separately for each orientation [167, 168]. Designing rotation invariant loss functions is a recent popular solution, where a constraint on the detection loss is added so that the feature of rotated objects keeps unchanged [169–171]. Another recent solution is to learn geometric transformations of the objects candidates [172–175]. In two-stage detectors, ROI pooling aims to extract a fixed-length feature representation for an object proposal with any location and size. Since the feature pooling usually is performed in Cartesian coordinates, it is not invariant to rotation transform. A recent improvement is to perform ROI pooling in polar coordinates so that the features can be robust to the rotation changes [167]. 2) Scale Robust Detection: Recent studies have been made for scale robust detection at both training and detection stages. Scale adaptive training: Modern detectors usually re-scale input images to a fixed size and back propagate the loss of the objects in all scales. A drawback of doing this is there will be a “scale imbalance” problem. Building an image pyramid during detection could alleviate this problem but not fundamentally [49, 178]. A recent improvement is Scale Normalization for Image Pyramids (SNIP) [176], which builds image pyramids at both training and detection stages and only backpropagates the loss of some selected scales, as shown in Fig. 12. Some researchers have further proposed a more efficient training strategy: SNIP with Efficient Resampling (SNIPER) [177], i.e. to crop and re-scale an image to a set of sub-regions so that to benefit from large batch training. Scale adaptive detection: In CNN based detectors, the size of and aspect ratio of anchors are usually carefully designed. A drawback of doing this is the configurations cannot be adaptive to unexpected scale changes. To improve the detection of small objects, some “adaptive zoom-in” techniques are proposed in some recent detectors to adaptively enlarge the small objects into the “larger ones” [179, 180]. Another recent improvement is to predict the scale distribution of objects in an image, and then adaptively re-scaling the image according to it [181, 182]. C. Detection with Better Backbones The accuracy/speed of a detector depends heavily on the feature extraction networks, a.k.a, backbones, e.g. 
the ResNet [178], CSPNet [183], Hourglass [184], and Swin Transformer [44]. For a detailed introduction to important detection backbones of the deep learning era, we refer readers to the following survey [185]. Fig. 13 shows the detection accuracy of three well-known detection systems, Faster RCNN [19], R-FCN [49] and SSD [23], with different backbones [186]. Object detection has recently benefited from the powerful feature extraction capabilities of Transformers: on the COCO dataset (https://paperswithcode.com/sota/object-detection-on-coco), the top-10 detection methods are all Transformer-based, and the performance gap between Transformers and CNNs has gradually widened.

D. Improvements of Localization

To improve localization accuracy, there are two groups of methods in recent detectors: 1) bounding box refinement, and 2) new loss functions for accurate localization.

1) Bounding Box Refinement: The most intuitive way to improve localization accuracy is bounding box refinement, which can be considered a post-processing of the detection results. One recent method is to iteratively feed the detection results into a BB regressor until the prediction converges to the correct location and size [187–189]. However, some researchers have also claimed that this method does not guarantee monotonic improvement of localization accuracy [187] and may degrade the localization if the refinement is applied multiple times.

2) New Loss Functions for Accurate Localization: In most modern detectors, object localization is treated as a coordinate regression problem. However, the drawbacks of this paradigm are obvious. First, the regression loss does not correspond to the final evaluation of localization, especially for objects with very large aspect ratios. Second, the traditional BB regression method does not provide a confidence for the localization; when multiple BB's overlap with each other, this may lead to failures in non-maximum suppression. The above problems can be alleviated by designing new loss functions. The most intuitive improvement is to directly use IoU as the localization loss [105–107, 190]. Besides, some researchers have also tried to improve localization under a probabilistic inference framework [191]: different from previous methods that directly predict the box coordinates, this method predicts the probability distribution of the bounding box location.

E. Learning with Segmentation Loss

Object detection and semantic segmentation are two fundamental tasks in computer vision. Recent research suggests that object detection can be improved by learning with semantic segmentation losses. The simplest way to improve detection with segmentation is to treat the segmentation network as a fixed feature extractor and integrate it into the detector as auxiliary features [83, 192, 193]. The advantage of this approach is that it is easy to implement; the disadvantage is that the segmentation network may bring additional computation. Another way is to introduce an additional segmentation branch on top of the original detector and to train this model with multi-task loss functions (seg. + det.) [4, 42, 192]. The advantage is that the segmentation branch is removed at the inference stage, so the detection speed is not affected; the disadvantage is that training requires pixel-level image annotations.
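A minimal sketch of this multi-task idea: a shared backbone feeds both a detection head and an auxiliary segmentation head during training, and the segmentation head is simply not called at inference. All module names, head designs, and the loss weight below are illustrative assumptions rather than a specific published architecture.

```python
import torch
import torch.nn as nn

class DetWithAuxSeg(nn.Module):
    """Toy detector with an auxiliary segmentation branch (hypothetical modules)."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.det_head = nn.Conv2d(64, 4 + num_classes, 1)   # per-pixel box + class scores
        self.seg_head = nn.Conv2d(64, num_classes, 1)        # auxiliary per-pixel labels

    def forward(self, images, with_seg=True):
        feats = self.backbone(images)
        det_out = self.det_head(feats)
        seg_out = self.seg_head(feats) if with_seg else None  # dropped at inference
        return det_out, seg_out

model = DetWithAuxSeg()
images = torch.randn(2, 3, 64, 64)
det_out, seg_out = model(images, with_seg=True)
seg_target = torch.randint(0, 21, (2, 64, 64))
det_loss = det_out.abs().mean()                     # placeholder for a real detection loss
seg_loss = nn.CrossEntropyLoss()(seg_out, seg_target)
loss = det_loss + 0.5 * seg_loss                    # multi-task objective (seg. + det.)
loss.backward()
```

Because only the detection head is used at test time, the auxiliary supervision improves the shared features at no inference cost, at the price of needing pixel-level annotations during training.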
# F. Adversarial Training

The Generative Adversarial Network (GAN) [194], introduced by I. Goodfellow et al. in 2014, has received great attention in many tasks such as image generation [194, 195], image style transfer [196], and image super-resolution [197].

Fig. 13: A comparison of the detection accuracy of three detectors, Faster RCNN [19], R-FCN [49] and SSD [23], on the MS-COCO dataset with different detection backbones. Image from J. Huang et al., CVPR 2017 [186].

Recently, adversarial training has also been applied to object detection, especially for improving the detection of small and occluded objects. For small object detection, a GAN can be used to enhance the features of small objects by narrowing the gap between the representations of small and large ones [198, 199]. To improve the detection of occluded objects, one recent idea is to generate occlusion masks by adversarial training [200]: instead of generating examples in pixel space, the adversarial network directly modifies the features to mimic occlusion.

# G. Weakly Supervised Object Detection

Training a deep learning based object detector usually requires a large amount of manually labeled data. Weakly Supervised Object Detection (WSOD) aims at easing the reliance on data annotation by training a detector with only image-level annotations instead of bounding boxes [201]. Multi-instance learning is a group of supervised learning algorithms that has seen widespread application in WSOD [202–209]. Instead of learning with a set of individually labeled instances, a multi-instance learning model receives a set of labeled bags, each containing many instances. If we consider the object candidates in an image as a bag and the image-level annotation as its label, then WSOD can be formulated as a multi-instance learning process. Class activation mapping is another recent group of methods for WSOD [210, 211]. Research on CNN visualization has shown that the convolutional layers of a CNN behave as object detectors even though there is no supervision on the location of the objects. Class activation mapping sheds light on how to equip a CNN with localization capability despite being trained only on image-level labels [212]. In addition to the above approaches, some researchers have considered WSOD as a proposal ranking process, selecting the most informative regions and then training on these regions with image-level annotations [213]. Others have proposed to mask out different parts of the image: if the detection score drops sharply, then the masked region is likely to contain an object [214]. More recently, generative adversarial training has also been used for WSOD [215].

H. Detection with Domain Adaptation

The training process of most object detectors can essentially be viewed as a likelihood estimation process under the assumption of independent and identically distributed (i.i.d.) data. Object detection with non-i.i.d. data, especially in some real-world applications, still remains a challenge. Aside from collecting more data or applying proper data augmentation, domain adaptation offers the possibility of narrowing the gap between domains. To obtain domain-invariant feature representations, feature regularization and adversarial training based methods have been explored at the image, category, or object levels [216–221].
Cycle-consistent transformation [222] has also been applied to bridge the gap between source and target domain [223, 224]. Some other methods also incorporate both ideas [225] to acquire better performance. # V. CONCLUSION AND FUTURE DIRECTIONS Remarkable achievements have been made in object detec- tion over the past 20 years. This paper extensively reviews some milestone detectors, key technologies, speed-up meth- ods, datasets, and metrics in its 20 years of history. Some promising future directions may include but are not limited to the following aspects to help readers get more insights beyond the scheme mentioned above. Lightweight object detection: Lightweight object detection aims to speed up the detection inference to run on low- power edge devices. Some important applications include mobile augmented reality, automatic driving, smart city, smart cameras, face verification, etc. Although a great effort has been made in recent years, the speed gap between a machine and human eyes still remains large, especially for detecting some small objects or detecting with multi-source information [226, 227]. End-to-End object detection: Although some methods have been developed to detect objects in a fully end-to- end manner (image to box in a network) using one-to-one label assignment training, the majority still use a one-to-many label assignment method where the non-maximum suppression operation is separately designed. Future research on this topic may focus on designing end-to-end pipelines that maintain both high detection accuracy and efficiency [228]. Small object detection: Detecting small objects in large scenes has long been a challenge. Some potential application of this research direction includes counting the population of people in crowd or animals in the open air and detecting mili- tary targets from satellite images. Some further directions may include the integration of the visual attention mechanisms and the design of high resolution lightweight networks [229, 230]. 3D object detection: Despite recent advances in 2-D object detection, applications like autonomous driving rely on access to the objects’ location and pose in a 3D world. The future of object detection will receive more attention in the 3D world and the utilization of multi-source and multi-view data (e.g., RGB images and 3D lidar points from multiple sensors) [231, 232]. Detection in videos: Real-time object detection/tracking in HD videos is of great importance for video surveillance and autonomous driving. Traditional object detectors are usually designed under for image-wise detection, while simply ignores the correlations between videos frames. Improving detection by exploring the spatial and temporal correlation under the calculation limitation is an important research direction [233, 234]. Cross-modality detection: Object detection with multiple sources/modalities of data, e.g., RGB-D image, lidar, flow, sound, text, video, etc, is of great importance for a more accurate detection system which performs like human-being’s perception. Some open questions include: how to immigrate well-trained detectors to different modalities of data, how to make information fusion to improve detection, etc [235, 236]. Towards open-world detection: Out-of-domain general- ization, zero-shot detection, and incremental detection are emerging topics in object detection. The majority of them devised ways to reduce catastrophic forgetting or utilized supplemental information. 
Humans have an instinct to discover objects of unknown categories in the environment. When the corresponding knowledge (label) is given, humans will learn new knowledge from it, and get to keep the patterns. However, it is difficult for current object detection algorithms to grasp the detection ability of unknown classes of objects. Object detection in the open world aims at discovering unknown cat- egories of objects when supervision signals are not explicitly given or partially given, which holds great promise in appli- cations such as robotics and autonomous driving [237, 238]. Standing on the highway of technical evolutions, we believe this paper will help readers to build a complete road map of object detection and to find future directions of this fast- moving research field. # REFERENCES [1] B. Hariharan, P. Arbel´aez, R. Girshick, and J. Malik, “Simultaneous detection and segmentation,” in ECCV. Springer, 2014, pp. 297–312. [2] ——, “Hypercolumns for object segmentation and fine- grained localization,” in Proceedings of the IEEE con- ference on computer vision and pattern recognition, 2015, pp. 447–456. [3] J. Dai, K. He, and J. Sun, “Instance-aware semantic seg- mentation via multi-task network cascades,” in CVPR, 2016, pp. 3150–3158. [4] K. He, G. Gkioxari, P. Doll´ar, and R. Girshick, “Mask r-cnn,” in ICCV. IEEE, 2017, pp. 2980–2988. [5] A. Karpathy and L. Fei-Fei, “Deep visual-semantic for generating image descriptions,” in alignments CVPR, 2015, pp. 3128–3137. [6] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in ICML, 2015, pp. 2048–2057. [7] Q. Wu, C. Shen, P. Wang, A. Dick, and A. van den Hengel, “Image captioning and visual question an- 14 swering based on attributes and external knowledge,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 6, pp. 1367–1381, 2018. [8] K. Kang, H. Li, J. Yan, X. Zeng, B. Yang, T. Xiao, C. Zhang, Z. Wang, R. Wang, X. Wang et al., “T-cnn: Tubelets with convolutional neural networks for object detection from videos,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 10, pp. 2896–2907, 2018. [9] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, p. 436, 2015. [10] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in CVPR, vol. 1. IEEE, 2001, pp. I–I. [11] P. Viola and M. J. Jones, “Robust real-time face detec- tion,” International journal of computer vision, vol. 57, no. 2, pp. 137–154, 2004. [12] N. Dalal and B. Triggs, “Histograms of oriented gra- IEEE, dients for human detection,” in CVPR, vol. 1. 2005, pp. 886–893. [13] P. Felzenszwalb, D. McAllester, and D. Ramanan, “A discriminatively trained, multiscale, deformable part model,” in CVPR. [14] P. F. Felzenszwalb, R. B. Girshick, and D. McAllester, “Cascade object detection with deformable part mod- els,” in CVPR. [15] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object detection with discriminatively trained part-based models,” IEEE transactions on pat- tern analysis and machine intelligence, vol. 32, no. 9, pp. 1627–1645, 2010. [16] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in CVPR, 2014, pp. 580–587. [17] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial pyra- mid pooling in deep convolutional networks for visual recognition,” in ECCV. 
Springer, 2014, pp. 346–361. [18] R. Girshick, “Fast r-cnn,” in ICCV, 2015, pp. 1440– 1448. [19] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r- cnn: Towards real-time object detection with region proposal networks,” in Advances in neural information processing systems, 2015, pp. 91–99. [20] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detec- tion,” in CVPR, 2016, pp. 779–788. [21] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018. [22] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, “Yolov4: Optimal speed and accuracy of object detec- tion,” arXiv preprint arXiv:2004.10934, 2020. [23] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “Ssd: Single shot multibox detector,” in ECCV. Springer, 2016, pp. 21–37. [24] T.-Y. Lin, P. Doll´ar, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie, “Feature pyramid networks for object detection.” in CVPR, vol. 1, no. 2, 2017, p. 4. [25] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Doll´ar, “Focal loss for dense object detection,” IEEE trans- actions on pattern analysis and machine intelligence, 2018. [26] H. Law and J. Deng, “Cornernet: Detecting objects as paired keypoints,” in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 734– 750. [27] Z.-Q. Zhao, P. Zheng, S.-t. Xu, and X. Wu, “Object detection with deep learning: A review,” IEEE transac- tions on neural networks and learning systems, vol. 30, no. 11, pp. 3212–3232, 2019. [28] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kir- illov, and S. Zagoruyko, “End-to-end object detection with transformers,” in European Conference on Com- puter Vision. Springer, 2020, pp. 213–229. [29] D. G. Lowe, “Object recognition from local scale- Ieee, 1999, pp. invariant features,” in ICCV, vol. 2. 1150–1157. [30] ——, “Distinctive image features from scale-invariant keypoints,” International journal of computer vision, vol. 60, no. 2, pp. 91–110, 2004. [31] S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” CALI- FORNIA UNIV SAN DIEGO LA JOLLA DEPT OF COMPUTER SCIENCE AND ENGINEERING, Tech. Rep., 2002. [32] T. Malisiewicz, A. Gupta, and A. A. Efros, “Ensemble of exemplar-svms for object detection and beyond,” in ICCV. [33] R. B. Girshick, P. F. Felzenszwalb, and D. A. Mcallester, “Object detection with grammar models,” in Advances in Neural Information Processing Systems, 2011, pp. 442–450. [34] R. B. Girshick, From rigid templates to grammars: Citeseer, Object detection with structured models. 2012. [35] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Ima- genet classification with deep convolutional neural net- works,” in Advances in neural information processing systems, 2012, pp. 1097–1105. [36] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-based convolutional networks for accurate ob- ject detection and segmentation,” IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 1, pp. 142–158, 2016. [37] M. A. Sadeghi and D. Forsyth, “30hz object detection with dpm v5,” in ECCV. Springer, 2014, pp. 65–79. [38] S. Zhang, L. Wen, X. Bian, Z. Lei, and S. Z. Li, “Single- shot refinement neural network for object detection,” in CVPR, 2018. [39] Y. Li, Y. Chen, N. Wang, and Z. Zhang, “Scale-aware trident networks for object detection,” arXiv preprint arXiv:1901.01892, 2019. [40] X. Zhou, D. Wang, and P. 
Krähenbühl, “Objects as points,” arXiv preprint arXiv:1904.07850, 2019.
{ "id": "1711.07264" }
1905.03197
Unified Language Model Pre-training for Natural Language Understanding and Generation
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UniLM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UniLM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
http://arxiv.org/pdf/1905.03197
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon
cs.CL
Accepted by NeurIPS-19. Code and pre-trained models: https://github.com/microsoft/unilm
null
cs.CL
20190508
20191015
arXiv:1905.03197v3 [cs.CL] 15 Oct 2019

# Unified Language Model Pre-training for Natural Language Understanding and Generation

Li Dong∗ Nan Yang∗ Wenhui Wang∗ Furu Wei∗† Xiaodong Liu Yu Wang Jianfeng Gao Ming Zhou Hsiao-Wuen Hon
Microsoft Research
{lidong1,nanya,wenwan,fuwei}@microsoft.com {xiaodl,yuwan,jfgao,mingzhou,hon}@microsoft.com

∗ Equal contribution. † Contact person.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

# Abstract

This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.

# 1 Introduction

Language model (LM) pre-training has substantially advanced the state of the art across a variety of natural language processing tasks [8, 29, 19, 31, 9, 1]. Pre-trained LMs learn contextualized text representations by predicting words based on their context using large amounts of text data, and can be fine-tuned to adapt to downstream tasks. Different prediction tasks and training objectives have been used for pre-training LMs of different types, as shown in Table 1. ELMo [29] learns two unidirectional LMs: a forward LM reads the text from left to right, and a backward LM encodes the text from right to left. GPT [31] uses a left-to-right Transformer [43] to predict a text sequence word-by-word. In contrast, BERT [9] employs a bidirectional Transformer encoder to fuse both the left and right context to predict the masked words. Although BERT significantly improves the performance of a wide range of natural language understanding tasks [9], its bidirectional nature makes it difficult to apply to natural language generation tasks [44].

Table 1: Comparison between language model (LM) pre-training objectives.
Left-to-Right LM: ELMo, GPT, UNILM
Right-to-Left LM: ELMo, UNILM
Bidirectional LM: BERT, UNILM
Sequence-to-Sequence LM: UNILM

In this work we propose a new UNIfied pre-trained Language Model (UNILM) that can be applied to both natural language understanding (NLU) and natural language generation (NLG) tasks. UNILM is a multi-layer Transformer network, jointly pre-trained on large amounts of text, optimized for three types of unsupervised language modeling objectives as shown in Table 2. In particular, we design a
set of cloze tasks [42] where a masked word is predicted based on its context. These cloze tasks differ in how the context is defined. For a left-to-right unidirectional LM, the context of the masked word to be predicted consists of all the words on its left. For a right-to-left unidirectional LM, the context consists of all the words on the right. For a bidirectional LM, the context consists of the words on both the right and the left [9]. For a sequence-to-sequence LM, the context of the to-be-predicted word in the second (target) sequence consists of all the words in the first (source) sequence and the words on its left in the target sequence.

Table 2: The unified LM is jointly pre-trained by multiple language modeling objectives, sharing the same parameters. We fine-tune and evaluate the pre-trained unified LM on various datasets, including both language understanding and generation tasks.
Backbone network: Transformer with shared parameters for all LM objectives.
Bidirectional LM: learns bidirectional encoding; example downstream tasks: GLUE benchmark, extractive question answering.
Unidirectional LM: learns unidirectional decoding; example downstream tasks: long text generation.
Sequence-to-Sequence LM: learns unidirectional decoding conditioned on bidirectional encoding; example downstream tasks: abstractive summarization, question generation, generative question answering.

Similar to BERT, the pre-trained UNILM can be fine-tuned (with additional task-specific layers if necessary) to adapt to various downstream tasks. But unlike BERT, which is used mainly for NLU tasks, UNILM can be configured, using different self-attention masks (Section 2), to aggregate context for different types of language models, and thus can be used for both NLU and NLG tasks.

The proposed UNILM has three main advantages. First, the unified pre-training procedure leads to a single Transformer LM that uses the shared parameters and architecture for different types of LMs, alleviating the need to separately train and host multiple LMs. Second, the parameter sharing makes the learned text representations more general because they are jointly optimized for different language modeling objectives where context is utilized in different ways, mitigating overfitting to any single LM task. Third, in addition to its application to NLU tasks, the use of UNILM as a sequence-to-sequence LM (Section 2.3) makes it a natural choice for NLG, such as abstractive summarization and question generation.

Experimental results show that our model, used as a bidirectional encoder, compares favorably with BERT on the GLUE benchmark and two extractive question answering tasks (i.e., SQuAD 2.0 and CoQA). In addition, we demonstrate the effectiveness of UNILM on five NLG datasets, where it is used as a sequence-to-sequence model, creating new state-of-the-art results on CNN/DailyMail and Gigaword abstractive summarization, SQuAD question generation, CoQA generative question answering, and DSTC7 dialog response generation.

# 2 Unified Language Model Pre-training

Given an input sequence $x = x_1 \cdots x_{|x|}$, UNILM obtains a contextualized vector representation for each token. As shown in Figure 1, the pre-training optimizes the shared Transformer [43] network with respect to several unsupervised language modeling objectives, namely, unidirectional LM, bidirectional LM, and sequence-to-sequence LM. In order to control the access to the context of the word token to be predicted, we employ different masks for self-attention.
In other words, we use masking to control how much context the token should attend to when computing its contextualized representation. Once UNILM is pretrained, we can fine-tune it using task-specific data for downstream tasks.

Figure 1: Overview of unified LM pre-training. The model parameters are shared across the LM objectives (i.e., bidirectional LM, unidirectional LM, and sequence-to-sequence LM). We use different self-attention masks to control the access to context for each word token. The right-to-left LM is similar to the left-to-right one, which is omitted in the figure for brevity. (The diagram, not reproduced here, shows the input token, position, and segment embeddings feeding L shared Transformer blocks, together with the three self-attention mask patterns: bidirectional, where segments S1 and S2 attend to all tokens; left-to-right, where each token attends to its left context; and sequence-to-sequence, where S1 attends to S1 tokens and S2 attends to its left context.)

# 2.1 Input Representation

The input x is a word sequence, which is either a text segment for unidirectional LMs or a pair of segments packed together for bidirectional LM and sequence-to-sequence LM. We always add a special start-of-sequence ([SOS]) token at the beginning of input, and a special end-of-sequence ([EOS]) token at the end of each segment. [EOS] not only marks the sentence boundary in NLU tasks, but also is used for the model to learn when to terminate the decoding process in NLG tasks. The input representation follows that of BERT [9]. Texts are tokenized to subword units by WordPiece [48]. For each input token, its vector representation is computed by summing the corresponding token embedding, position embedding, and segment embedding. Since UNILM is trained using multiple LM tasks, segment embeddings also play the role of an LM identifier in that we use different segment embeddings for different LM objectives.

# 2.2 Backbone Network: Multi-Layer Transformer

The input vectors $\{x_i\}_{i=1}^{|x|}$ are first packed into $H^0 = [x_1, \cdots, x_{|x|}]$, and then encoded into contextual representations at different levels of abstraction $H^l = [h^l_1, \cdots, h^l_{|x|}]$ using an $L$-layer Transformer, $H^l = \mathrm{Transformer}_l(H^{l-1})$, $l \in [1, L]$. In each Transformer block, multiple self-attention heads are used to aggregate the output vectors of the previous layer. For the $l$-th Transformer layer, the output of a self-attention head $A_l$ is computed via:

$Q = H^{l-1} W_l^Q$, $K = H^{l-1} W_l^K$, $V = H^{l-1} W_l^V$ (1)

$M_{ij} = \begin{cases} 0, & \text{allow to attend} \\ -\infty, & \text{prevent from attending} \end{cases}$ (2)

$A_l = \mathrm{softmax}\left( \frac{Q K^{\top}}{\sqrt{d_k}} + M \right) V$ (3)

where the previous layer's output $H^{l-1} \in \mathbb{R}^{|x| \times d_h}$ is linearly projected to a triple of queries, keys and values using parameter matrices $W_l^Q, W_l^K, W_l^V \in \mathbb{R}^{d_h \times d_k}$, respectively, and the mask matrix $M \in \mathbb{R}^{|x| \times |x|}$ determines whether a pair of tokens can be attended to each other. We use different mask matrices $M$ to control what context a token can attend to when computing its contextualized representation, as illustrated in Figure 1. Take the bidirectional LM as an example: the elements of the mask matrix are all 0s, indicating that all the tokens have access to each other.

# 2.3 Pre-training Objectives

We pretrain UNILM using four cloze tasks designed for different language modeling objectives.
In a cloze task, we randomly choose some WordPiece tokens in the input, and replace them with special token [MASK]. Then, we feed their corresponding output vectors computed by the Transformer network into a softmax classifier to predict the masked token. The parameters of UNILM are learned to minimize the cross-entropy loss computed using the predicted tokens and the original tokens. It is worth noting that the use of cloze tasks makes it possible to use the same training procedure for all LMs, unidirectional and bidirectional alike. Unidirectional LM We use both left-to-right and right-to-left LM objectives. Take the left-to-right LM as an example. The representation of each token encodes only the leftward context tokens and itself. For instance, to predict the masked token of “x1x2 [MASK] x4”, only tokens x1, x2 and itself can be used. This is done by using a triangular matrix for the self-attention mask M (as in Equation (2)), where the upper triangular part of the self-attention mask is set to −∞, and the other elements to 0, as shown in Figure 1. Similarly, a right-to-left LM predicts a token conditioned on its future (right) context. Bidirectional LM Following [9], a bidirectional LM allows all tokens to attend to each other in prediction. It encodes contextual information from both directions, and can generate better contextual representations of text than its unidirectional counterpart. As indicated in Equation (2), the self- attention mask M is a zero matrix, so that every token is allowed to attend across all positions in the input sequence. Sequence-to-Sequence LM As shown in Figure 1, for prediction, the tokens in the first (source) segment can attend to each other from both directions within the segment, while the tokens of the second (target) segment can only attend to the leftward context in the target segment and itself, as well as all the tokens in the source segment. For example, given source segment t1t2 and its target segment t3t4t5, we feed input “[SOS] t1 t2 [EOS] t3 t4 t5 [EOS]” into the model. While both t1 and t2 have access to the first four tokens, including [SOS] and [EOS], t4 can only attend to the first six tokens. Figure 1 shows the self-attention mask M used for the sequence-to-sequence LM objective. The left part of M is set to 0 so that all tokens can attend to the first segment. The upper right part is set to −∞ to block attentions from the source segment to the target segment. Moreover, for the lower right part, we set its upper triangular part to −∞, and the other elements to 0, which prevents tokens in the target segment from attending their future (right) positions. During training, we randomly choose tokens in both segments, and replace them with the special token [MASK]. The model is learned to recover the masked tokens. Since the pair of source and target texts are packed as a contiguous input text sequence in training, we implicitly encourage the model to learn the relationship between the two segments. In order to better predict tokens in the target segment, UNILM learns to effectively encode the source segment. Thus, the cloze task designed for 4 the sequence-to-sequence LM, also known as the encoder-decoder model, simultaneously pre-trains a bidirectional encoder and an unidirectional decoder. The pre-trained model, used as an encoder- decoder model, can be easily adapted to a wide range of conditional text generation tasks, such as abstractive summarization. 
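To make Equations (1)-(3) and the three masking patterns described above concrete, here is a minimal PyTorch sketch of how the additive mask matrices M could be built. It is an illustration written for this text rather than the authors' released implementation, and the function names are our own.

```python
import torch

NEG_INF = float("-inf")

def bidirectional_mask(seq_len: int) -> torch.Tensor:
    # Bidirectional LM: every token may attend to every other token, so M is all zeros.
    return torch.zeros(seq_len, seq_len)

def left_to_right_mask(seq_len: int) -> torch.Tensor:
    # Left-to-right LM: position i may attend only to positions <= i,
    # so the strictly upper-triangular part of M is -inf.
    return torch.triu(torch.full((seq_len, seq_len), NEG_INF), diagonal=1)

def seq2seq_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    # Sequence-to-sequence LM: every position sees the full source segment,
    # the source never sees the target, and the target is causally masked.
    total = src_len + tgt_len
    mask = torch.full((total, total), NEG_INF)
    mask[:, :src_len] = 0.0                                   # left block: attend to the source
    mask[src_len:, src_len:] = left_to_right_mask(tgt_len)    # target-to-target block
    return mask

def masked_attention_head(H, W_q, W_k, W_v, M):
    # Equations (1)-(3): one self-attention head with an additive mask M.
    Q, K, V = H @ W_q, H @ W_k, H @ W_v
    scores = Q @ K.transpose(-2, -1) / (K.size(-1) ** 0.5)
    return torch.softmax(scores + M, dim=-1) @ V
```

For the example "[SOS] t1 t2 [EOS] t3 t4 t5 [EOS]" above, seq2seq_mask(4, 4) reproduces the stated behaviour: every position attends to the first four tokens, and t4 additionally attends only to t3 and itself, i.e., to the first six tokens in total.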
Next Sentence Prediction For the bidirectional LM, we also include the next sentence prediction task for pre-training, as in [9]. # 2.4 Pre-training Setup The overall training objective the sum of different types of LM objectives described above. Specif- ically, within one training batch, 1/3 of the time we use the bidirectional LM objective, 1/3 of the time we employ the sequence-to-sequence LM objective, and both left-to-right and right-to-left LM objectives are sampled with rate of 1/6. The model architecture of UNILM follows that of BERTLARGE [9] for a fair comparison. The gelu activation [18] is used as GPT [31]. Specifically, we use a 24-layer Transformer with 1, 024 hidden size, and 16 attention heads, which contains about 340M parameters. The weight matrix of the softmax classifier is tied with token embeddings. UNILM is initialized by BERTLARGE, and then pre-trained using English Wikipedia2 and BookCorpus [53], which have been processed in the same way as [9]. The vocabulary size is 28, 996. The maximum length of input sequence is 512. The token masking probability is 15%. Among masked positions, 80% of the time we replace the token with [MASK], 10% of the time with a random token, and keeping the original token for the rest. In addition, 80% of the time we randomly mask one token each time, and 20% of the time we mask a bigram or a trigram. Adam [22] with β1 = 0.9, β2 = 0.999 is used for optimization. The learning rate is 3e-5, with linear warmup over the first 40, 000 steps and linear decay. The dropout rate is 0.1. The weight decay is 0.01. The batch size is 330. The pre-training procedure runs for about 770, 000 steps. It takes about 7 hours for 10, 000 steps using 8 Nvidia Telsa V100 32GB GPU cards with mixed precision training. # 2.5 Fine-tuning on Downstream NLU and NLG Tasks For NLU tasks, we fine-tune UNILM as a bidirectional Transformer encoder, like BERT. Take text classification as an example. We use the encoding vector of [SOS] as the representation of input, denoted as hL 1 , and feed it to a randomly initialized softmax classifier (i.e., the task-specific output 1 WC), where WC ∈ Rdh×C is layer), where the class probabilities are computed as softmax(hL a parameter matrix, and C the number of categories. We maximize the likelihood of the labeled training data by updating the parameters of the pre-trained LM and the added softmax classifier. For NLG tasks, we take the sequence-to-sequence task as an example. The fine-tuning procedure is similar to pre-training using the self-attention masks as in Section 2.3. Let S1 and S2 denote source and target sequences, respectively. We pack them together with special tokens, to form the input “[SOS] S1 [EOS] S2 [EOS]”. The model is fine-tuned by masking some percentage of tokens in the target sequence at random, and learning to recover the masked words. The training objective is to maximize the likelihood of masked tokens given context. It is worth noting that [EOS], which marks the end of the target sequence, can also be masked during fine-tuning, thus when this happens, the model learns when to emit [EOS] to terminate the generation process of the target sequence. # 3 Experiments We have conducted experiments on both NLU (i.e., the GLUE benchmark, and extractive question answering) and NLG tasks (i.e., abstractive summarization, question generation, generative question answering, and dialog response generation). 
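Before turning to the individual tasks, the pre-training recipe of Section 2.4 can be summarized in a short sketch. The ratios below are taken from the text, but the sketch itself is not the authors' code; in particular, whether the 80%/10%/10% replacement rule is drawn per token or per masked n-gram is our assumption.

```python
import random

def sample_objective(rng: random.Random) -> str:
    # Within a training batch: 1/3 bidirectional, 1/3 sequence-to-sequence,
    # and 1/6 each for the left-to-right and right-to-left objectives.
    return rng.choices(
        ["bidirectional", "seq2seq", "left_to_right", "right_to_left"],
        weights=[2, 2, 1, 1],
    )[0]

def corrupt(tokens, vocab, rng: random.Random, mask_prob=0.15):
    """Cloze-style corruption: returns (corrupted tokens, {position: original token})."""
    tokens, targets = list(tokens), {}
    i = 0
    while i < len(tokens):
        if rng.random() >= mask_prob:
            i += 1
            continue
        # 80% of the time mask a single token, 20% of the time a bigram or trigram.
        span = 1 if rng.random() < 0.8 else rng.choice([2, 3])
        for j in range(i, min(i + span, len(tokens))):
            targets[j] = tokens[j]
            r = rng.random()
            if r < 0.8:
                tokens[j] = "[MASK]"            # replace with [MASK]
            elif r < 0.9:
                tokens[j] = rng.choice(vocab)   # replace with a random vocabulary token
            # otherwise keep the original token (it is still a prediction target)
        i += span
    return tokens, targets
```

Note that with n-gram masking the expected fraction of corrupted positions is slightly above mask_prob; the paper only states the 15% token masking probability, so this part of the sketch is an approximation.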
# 3.1 Abstractive Summarization Automatic text summarization produces a concise and fluent summary conveying the key information in the input (e.g., a news article). We focus on abstractive summarization, a generation task where # 2Wikipedia version: enwiki-20181101. 5 RG-1 RG-2 RG-L RG-1 RG-2 RG-L Extractive Summarization LEAD-3 Best Extractive [27] Abstractive Summarization 40.42 43.25 17.62 20.24 36.67 39.63 10K Training Examples Transformer [43] MASS [39] UNILM 10.97 25.03 32.96 2.23 9.48 14.68 10.42 23.48 30.56 PGNet [37] Bottom-Up [16] S2S-ELMo [13] UNILM 39.53 41.22 41.56 43.33 17.28 18.68 18.94 20.21 37.98 38.34 38.47 40.51 Full Training Set OpenNMT [23] Re3Sum [4] MASS [39] UNILM 36.73 37.04 37.66 38.45 17.86 19.03 18.53 19.45 33.68 34.46 34.89 35.75 Table 3: Evaluation results on CNN/DailyMail summarization. Models in the first block are ex- tractive systems listed here for reference, while the others are abstractive models. The results of the best reported extractive model are taken from [27]. RG is short for ROUGE. Table 4: Results on Gigaword abstractive summa- rization. Models in the first block only use 10K examples for training, while the others use 3.8M examples. Results of OpenNMT and Transformer are taken from [4, 39]. RG is short for ROUGE. the summary is not constrained to reusing the phrases or sentences in the input text. We use the non-anonymized version of the CNN/DailyMail dataset [37] and Gigaword [36] for model fine-tuning and evaluation. We fine-tune UNILM as a sequence-to-sequence model following the procedure described in Section 2.5 by concatenating document (the first segment) and summary (the second segment) as input which is truncated according to a pre-defined maximum length. We fine-tune our model on the training set for 30 epochs. We reuse most hyper-parameters from pre-training. The masking probability is 0.7. We also use label smoothing [40] with rate of 0.1. For CNN/DailyMail, we set batch size to 32, and maximum length to 768. For Gigaword, we set batch size to 64, and maximum length to 256. During decoding, we use beam search with beam size of 5. The input document is truncated to the first 640 and 192 tokens for CNN/DailyMail and Gigaword, respectively. We remove duplicated trigrams in beam search, and tweak the maximum summary length on the development set [28, 13]. We use the F1 version of ROUGE [25] as the evaluation metric for both datasets. In Table 3, we compare UNILM against the baseline and several state-of-the-art models on CNN/DailyMail. LEAD- 3 is a baseline model that extracts the first three sentences in a document as its summary. PGNet [37] is a sequence-to-sequence model based on the pointer-generator network. S2S-ELMo [13] uses a sequence-to-sequence model augmented with pre-trained ELMo representations, which is termed as SRC-ELMO+SHDEMB in [13]. Bottom-Up [16] is a sequence-to-sequence model augmented with a bottom-up content selector for selecting salient phrases. We also include in Table 3 the best reported extractive summarization result [27] on the dataset. As shown in Table 3, our model outperforms all previous abstractive systems, creating a new state-of-the-art abstractive summarization result on the dataset. Our model also outperforms the best extractive model [27] by 0.88 point in ROUGE-L. In Table 4, we evaluate the models on Gigaword with different scales (10K and 3.8M). Both Transformer [43] and OpenNMT [23] implement standard attentional sequence-to-sequence models. 
Re3Sum [4] retrieves summaries as candidate templates, and then use an extended sequence-to- sequence model to generate summaries. MASS [39] is a pre-trained sequence-to-sequence model based on Transformer networks. Experimental results show that UNILM achieves better performance than previous work. Besides, in the low-resource setting (i.e., only 10,000 examples are used as training data), our model outperforms MASS by 7.08 point in ROUGE-L. # 3.2 Question Answering (QA) The task is to answer a question given a passage [33, 34, 15]. There are two settings. The first is called extractive QA, where the answer is assumed to be a text span in the passage. The other is called generative QA, where the answer needs to be generated on the fly. Extractive QA This task can be formulated as a NLU task where we need to predict the start and end positions of the answer spans within the passage. We fine-tune the pre-trained UNILM as a 6 EM F1 F1 F1 RMR+ELMo [20] BERTLARGE UNILM 71.4 78.9 80.5 73.7 81.8 83.4 DrQA+ELMo [35] BERTLARGE UNILM 67.2 82.7 84.9 Seq2Seq [35] PGNet [35] UNILM 27.5 45.4 82.5 Table 5: Extractive QA results on the SQuAD development set. Table 6: Extractive QA results on the CoQA development set. Table 7: Generative QA results on the CoQA development set. bidirectional encoder for the task. We conduct experiments on the Stanford Question Answering Dataset (SQuAD) 2.0 [34], and Conversational Question Answering (CoQA) [35] datasets. The results on SQuAD 2.0 are reported in Table 5, where we compare two models in Exact Match (EM) and F1 score. RMR+ELMo [20] is an LSTM-based question answering model augmented with pre-trained language representation. BERTLARGE is a cased model, fine-tuned on the SQuAD training data for 3 epochs, with batch size 24, and maximum length 384. UNILM is fine-tuned in the same way as BERTLARGE. We see that UNILM outperforms BERTLARGE. CoQA is a conversational question answering dataset. Compared with SQuAD, CoQA has several unique characteristics. First, the examples in CoQA are conversational, so we need to answer the input question based on conversation histories. Second, the answers in CoQA can be free-form texts, including a large portion is of yes/no answers. We modify the model used for SQuAD as follows. Firstly, in addition to the asked question, we concatenate the question-answer histories to the first segment, so that the model can capture conversational information. Secondly, for yes/no questions, we use the final hidden vector of the [SOS] token to predict whether the input is a yes/no question, and whether the answer is yes or no. For other examples, we select a passage subspan with the highest F1 score for training. The results on CoQA are reported in Table 6, where we compare two models in F1 scores. DrQA+ELMo [35] is an LSTM-based question answering model augmented with pre-trained ELMo representation. BERTLARGE is a cased model, fine-tuned on the CoQA training data for 2 epochs, with batch size 16, and maximum length 512. UNILM is fine-tuned with the same hyper-parameters as BERTLARGE. We see that UNILM outperforms BERTLARGE. Generative QA Generative question answering generates free-form answers for the input question and passage, which is a NLG task. In contrast, extractive methods can only predict subspans of the input passage as answers. On the CoQA dataset (as described above), Reddy et al. [2019] show that vanilla sequence-to-sequence models still underperforms extractive methods by a wide margin. 
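As a side note on the extractive formulation described above, where UNILM acts as a bidirectional encoder and the task reduces to predicting start and end positions over the passage, the following is a minimal sketch of a span-prediction head. It reflects the standard recipe rather than code from the paper; the hidden size of 1024 follows the model configuration stated in Section 2.4.

```python
import torch
from torch import nn

class SpanHead(nn.Module):
    """Start/end span classifier on top of the encoder outputs (illustrative only)."""

    def __init__(self, hidden_size: int = 1024):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 2)  # one logit for "start", one for "end"

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size) from the bidirectional encoder
        start_logits, end_logits = self.proj(hidden_states).unbind(dim=-1)
        return start_logits, end_logits  # each of shape (batch, seq_len)
```

At inference time the answer is typically the valid (start, end) pair with the highest combined score; the yes/no classification for CoQA described above would instead use a separate classifier on the final hidden vector of the [SOS] token.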
We adapt UNILM to generative question answering as a sequence-to-sequence model. The first segment (i.e., the input sequence) is the concatenation of conversational histories, the input question and the passage. The second segment (i.e., the output sequence) is the answer. We fine-tune the pre-trained UNILM on the CoQA training set for 10 epochs. We set the batch size to 32, the mask probability to 0.5, and the maximum length to 512. We also use label smoothing with rate of 0.1. The other hyper-parameters are kept the same as pre-training. During decoding, we use beam search with beam size of 3. The maximum length of input question and passage is 470. For passages that are longer than the maximum length, we split the passage into several chunks with a sliding window approach, and select a chunk with the highest word overlap over the question. We compare our method with the generative question answering models Seq2Seq and PGNet as described in [35]. The Seq2Seq baseline is a sequence-to-sequence model with an attention mech- anism. The PGNet model augments Seq2Seq with a copy mechanism. As shown in Table 7, our generative question answering model outperforms previous generative methods by a wide margin, which significantly closes the gap between generative method and extractive method. # 3.3 Question Generation We conduct experiments for the answer-aware question generation task [52]. Given an input passage and an answer span, our goal is to generate a question that asks for the answer. The SQuAD 1.1 dataset [33] is used for evaluation. Following [12], we split the original training set into training and 7 BLEU-4 MTR RG-L CorefNQG [11] SemQG [50] UNILM 15.16 18.37 22.12 19.12 22.65 25.06 - 46.68 51.07 MP-GSN [51] SemQG [50] UNILM 20.25 24.20 25.61 44.48 48.91 52.04 16.38 20.76 23.75 Table 8: Question generation results on SQuAD. MTR is short for METEOR, and RG for ROUGE. Results in the groups use different data splits. # EM # F1 # UNILM QA Model (Section 3.2) + UNILM Generated Questions 80.5 84.7 83.4 87.6 Table 9: Question generation based on UNILM improves question answering results on the SQuAD development set. NIST-4 BLEU-4 METEOR Entropy-4 Div-1 Div-2 Avg len Best System in DSTC7 Shared Task 2.523 2.669 UNILM 1.83 4.39 8.07 8.27 9.030 9.195 0.109 0.120 0.325 0.391 15.133 14.807 Human Performance 2.650 3.13 8.31 10.445 0.167 0.670 18.76 Table 10: Response generation results. Div-1 and Div-2 indicate diversity of unigrams and bigrams, respectively. test sets, and keep the original development set. We also conduct experiments following the data split as in [51], which uses the reversed dev-test split. The question generation task is formulated as a sequence-to-sequence problem. The first segment is the concatenation of input passage and answer, while the second segment is the generated question. We fine-tune UNILM on the training set for 10 epochs. We set batch size to 32, masking probability to 0.7, and learning rate to 2e-5. The rate of label smoothing is 0.1. The other hyper-parameters are the same as pre-training. During decoding, we truncate the input to 464 tokens by selecting a passage chunk which contains the answer. The evaluation metrics BLEU-4, METEOR, and ROUGE-L are computed by the same scripts as in [12]. The results3 are presented in Table 8. CorefNQG [11] is based on a sequence-to-sequence model with attention and a feature-rich encoder. MP-GSN [51] uses an attention-based sequence-to-sequence model with a gated self-attention encoder. 
SemQG [50] uses two semantics-enhanced rewards to regularize the generation. UNILM outperforms previous models and achieves a new state-of-the-art for question generation.

3 Notice that if we directly use the tokenized references provided by Du et al. [2017], the results are (21.63 BLEU-4 / 25.04 METEOR / 51.09 ROUGE-L) on the original data split [12], and (23.08 BLEU-4 / 25.57 METEOR / 52.03 ROUGE-L) in the reversed dev-test setup [51].

Generated Questions Improve QA The question generation model can automatically harvest a large number of question-passage-answer examples from a text corpus. We show that the augmented data generated by question generation improves the question answering model. We generate five million answerable examples, and four million unanswerable examples by modifying the answerable ones. We fine-tune our question answering model on the generated data for one epoch. Then the model is fine-tuned on the SQuAD 2.0 data for two more epochs. As shown in Table 9, the augmented data generated by UNILM improves the question answering model introduced in Section 3.2.

Note that we use bidirectional masked language modeling as an auxiliary task for both the generated and SQuAD 2.0 datasets during fine-tuning, which brings a 2.3-point absolute improvement compared to directly using the automatically generated examples. A possible reason is that the auxiliary task alleviates catastrophic forgetting [49] when fine-tuning on augmented data.

# 3.4 Response Generation

We evaluate UNILM on the document-grounded dialog response generation task [30, 15]. Given a multi-turn conversation history and a web document as the knowledge source, the system needs to generate a natural language response that is both conversationally appropriate and reflective of the contents of the web document. We fine-tune UNILM for the task as a sequence-to-sequence model. The first segment (input sequence) is the concatenation of the web document and the conversation history. The second segment (output sequence) is the response. We fine-tune UNILM on the DSTC7 training data for 20 epochs, with batch size 64. The masking probability is set to 0.5. The maximum length is 512. During decoding, we use beam search with a beam size of 10. The maximum length of the generated response is set to 40. As shown in Table 10, UNILM outperforms the best system [41] in the DSTC7 shared task [14] across all evaluation metrics.

# 3.5 GLUE Benchmark

We evaluate UNILM on the General Language Understanding Evaluation (GLUE) benchmark [45]. GLUE is a collection of nine language understanding tasks, including question answering [33], linguistic acceptability [46], sentiment analysis [38], text similarity [5], paraphrase detection [10], and natural language inference (NLI) [7, 2, 17, 3, 24, 47].

Model      CoLA  SST-2  MRPC  STS-B   QQP   MNLI-m/mm  QNLI  RTE   WNLI  AX    Score
           MCC   Acc    F1    S Corr  F1    Acc        Acc   Acc   Acc   Acc
GPT        45.4  91.3   82.3  80.0    70.3  82.1/81.4  87.4  56.0  53.4  29.8  72.8
BERTLARGE  60.5  94.9   89.3  86.5    72.1  86.7/85.9  92.7  70.1  65.1  39.6  80.5
UNILM      61.1  94.5   90.0  87.7    71.7  87.0/85.9  92.7  70.9  65.1  38.4  80.8

Table 11: GLUE test set results scored using the GLUE evaluation server.

Our model is fine-tuned as a bidirectional LM. We use Adamax [21] as our optimizer, with a learning rate of 5e-5 and a batch size of 32. The maximum number of epochs is set to 5. A linear learning rate decay schedule with warmup of 0.1 is used. The dropout rate of the last linear projection for each task is set to 0.1, except 0.3 for MNLI and 0.05 for CoLA/SST-2.
To avoid the gradient explosion issue, the gradient norm was clipped within 1. We truncated the tokens no longer than 512. Table 11 presents the GLUE test results obtained from the benchmark evaluation server. The results show that UNILM obtains comparable performance on the GLUE tasks in comparison with BERTLARGE. # 4 Conclusion and Future Work We propose a unified pre-training model, UNILM, which is jointly optimized for several LM objectives with shared parameters. The unification of bidirectional, unidirectional, and sequence- to-sequence LMs enables us to straightforwardly fine-tune the pre-trained UNILM for both NLU and NLG tasks. Experimental results demonstrate that our model compares favorably with BERT on the GLUE benchmark and two question answering datasets. In addition, UNILM outperforms previous state-of-the-art models on five NLG datasets: CNN/DailyMail and Gigaword abstractive summarization, SQuAD question generation, CoQA generative question answering, and DSTC7 dialog response generation. The work can be advanced from the following perspectives: • We will push the limit of the current method by training more epochs and larger models on web- scale text corpora. At the same time, we will also conduct more experiments on end applications as well as ablation experiments to investigate the model capability and the benefits of pre-training multiple language modeling tasks with the same network. • We are focusing on monolingual NLP tasks in our current experiments. We are also interested in extending UNILM to support cross-lingual tasks [6]. • We will conduct multi-task fine-tuning on both NLU and NLG tasks, which is a natural extension of Multi-Task Deep Neural Network (MT-DNN) [26]. Acknowledgement We would like to acknowledge Shiyue Zhang for the helpful discussions about the question generation experiments. 9 # References [1] Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. Cloze-driven pretraining of self-attention networks. arXiv preprint arXiv:1903.07785, 2019. [2] Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, 01 2006. [3] Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. The fifth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TAC-09), 2009. [4] Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 152–161, Melbourne, Australia, July 2018. Association for Computational Linguistics. [5] Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055, 2017. [6] Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, and Heyan Huang. Cross- lingual natural language generation via pre-training. ArXiv, abs/1909.10481, 2019. [7] Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entail- ment challenge. In Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty Visual Object Classification, and Recognizing Textual Entailment, MLCW’05, pages 177–190, Berlin, Heidelberg, 2006. Springer-Verlag. 
[8] Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems 28, pages 3079–3087. Curran Associates, Inc., 2015. [9] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. [10] William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential para- phrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005. [11] Xinya Du and Claire Cardie. Harvesting paragraph-level question-answer pairs from Wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1907–1917, Melbourne, Australia, July 2018. Association for Computational Linguistics. [12] Xinya Du, Junru Shao, and Claire Cardie. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1342–1352, 2017. [13] Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. CoRR, abs/1903.09722, 2019. [14] Michel Galley, Chris Brockett, Xiang Gao, Jianfeng Gao, and Bill Dolan. Grounded response generation task at dstc7. In AAAI Dialog System Technology Challenges Workshop, 2019. [15] Jianfeng Gao, Michel Galley, Lihong Li, et al. Neural approaches to conversational ai. Founda- tions and Trends in Information Retrieval, 13(2-3):127–298, 2019. [16] Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. 10 [17] Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. The third PASCAL In Proceedings of the ACL-PASCAL Workshop recognizing textual entailment challenge. on Textual Entailment and Paraphrasing, pages 1–9, Prague, June 2007. Association for Computational Linguistics. [18] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016. [19] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classi- fication. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 328–339, Melbourne, Australia, July 2018. Association for Computational Linguistics. [20] Minghao Hu, Furu Wei, Yuxing Peng, Zhen Huang, Nan Yang, and Ming Zhou. Read + verify: Machine reading comprehension with unanswerable questions. CoRR, abs/1808.05759, 2018. [21] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [22] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, San Diego, CA, 2015. [23] Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. OpenNMT: In Proceedings of ACL 2017, System Open-source toolkit for neural machine translation. Demonstrations, pages 67–72, Vancouver, Canada, July 2017. Association for Computational Linguistics. [24] Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012. 
[25] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summariza- tion Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81, Barcelona, Spain, July 2004. Association for Computational Linguistics. [26] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. CoRR, abs/1901.11504, 2019. [27] Yang Liu. Fine-tune BERT for extractive summarization. CoRR, abs/1903.10318, 2019. [28] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304, 2018. [29] Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227–2237, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. [30] Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jianfeng Gao. Conversing by reading: Contentful neural conversation with on- In Proceedings of the 57th Annual Meeting of the Association demand machine reading. for Computational Linguistics, pages 5427–5436, Florence, Italy, July 2019. Association for Computational Linguistics. [31] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018. [32] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. [33] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ ques- tions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas, November 2016. Association for Computational Linguistics. 11 [34] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don’t know: Unanswerable In Proceedings of the 56th Annual Meeting of the Association for questions for SQuAD. Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 784–789, 2018. [35] Siva Reddy, Danqi Chen, and Christopher D. Manning. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249–266, March 2019. [36] Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379–389, Lisbon, Portugal, September 2015. Association for Computational Linguistics. [37] Abigail See, Peter J. Liu, and Christopher D. Manning. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083, Vancouver, Canada, July 2017. Association for Computational Linguistics. [38] Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642, 2013. [39] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 
Mass: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450, 2019. [40] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Re- thinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016. [41] Y Tam, Jiachen Ding, Cheng Niu, and Jie Zhou. Cluster-based beam search for pointer-generator chatbot grounded by knowledge. In AAAI Dialog System Technology Challenges Workshop, 2019. [42] Wilson L Taylor. Cloze procedure: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433, 1953. [43] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Informa- tion Processing Systems 30, pages 5998–6008. Curran Associates, Inc., 2017. [44] Alex Wang and Kyunghyun Cho. BERT has a mouth, and it must speak: BERT as a markov random field language model. CoRR, abs/1902.04094, 2019. [45] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations, 2019. [46] Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471, 2018. [47] Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1112–1122, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. [48] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. 12 [49] Dani Yogatama, Cyprien de Masson d’Autume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and arXiv preprint Phil Blunsom. Learning and evaluating general linguistic intelligence. arXiv:1901.11373, 2019. [50] Shiyue Zhang and Mohit Bansal. Addressing semantic drift in question generation for semi- supervised question answering. CoRR, abs/1909.06356, 2019. [51] Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. [52] Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. Neural question generation from text: A preliminary study. In Xuanjing Huang, Jing Jiang, Dongyan Zhao, Yansong Feng, and Yu Hong, editors, Natural Language Processing and Chinese Computing, pages 662–671. Springer International Publishing, 2018. 
[53] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19–27, 2015. # Appendix A Long Text Generation: A Case Study Our model can generate text samples using the left-to-right setting. We picked three text samples sampled from left to right using our model, as shown in Table 12. We use the top-40 truncating sampling strategy [32], and forbid duplicate 4-grams during generation. For each example, we sampled 10 times from the same input and we hand-picked the best one; as such, these samples should be considered to be better than the average model output. From the examples, we find that the model can produce fluent output with somewhat consistent contents which fits the inputs’ genres and topics. In the first example, given a modified excerpt from the novel “1984” as input, the model’s output is like a short paragraph in an fantasy novel; for the other two with input from Wikipedia, the model makes up “facts” with plausible names, time stamps and events written in the Wikipedia style. # Appendix B GLUE Benchmark As shown in Table 13, we summarize the data size and the evaluation metrics used for the General Language Understanding Evaluation (GLUE) benchmark. 13 Winston sat back. A complete sense of had helplessness descended upon him. Winston sat back. A sense of complete helplessness had descended upon him. He was now helpless and unmoving. What would that mean? To become a monk under her mother’s rule. What would it mean for him to be put at the center of the most dangerous of all of demons, the demon known as Azrael? He would never know exactly what that was, or even how it would end, unless he could sense the strength of Azrael’s mind and soul. It wasn’t only the existence of his mother, it was all he knew, and every moment. He was trapped within her embrace, a trapped, torn, battered, crippling presence he only knew it would mean was eternity. And in the instant that he was lost his life changed forever, when Azraelis’ spirit touched his mind, he was engulfed in this dark and mystical, so enchanting and beautiful force. (...) yellow Bert is Muppet character on the long running children’s television show, Sesame Street. a Bert is a yellow Muppet character on the long running children’s television show, Sesame Street. He would typically appear as a sidekick or as a villain in the show; he is usually portrayed as a cowardly and untrustworthy figure that tends to intrude on the normal life of the street, such as “The Story of the Mole” , “The Story from the Side”, and “Bump the Bush”. His voice was provided by Michael Combs. Bert was a regular cast member of Sesame Street, and his appearance was always played throughout the series by two of his regular cast members, Frank Laughner and Michael Combs. (...) Maurice Ravel (7 March 1875 - 28 December 1937) was a French composer, pianist and conductor. Maurice Ravel (7 March 1875 - 28 December 1937) was a French composer, pianist and conductor. He was born in Paris, and studied piano with Eugene Massenet. In the early 1900s Ravel became interested in the composition of music, and he composed much of his early music for the piano. Ravel’s early work is marked by a marked sense of lyricism, but in the later years of his career he developed a strong preference for orchestral forms. 
His first work, “le Grand Cours d’ Auvergne”, was composed in 1909 by Maurice Ravel, a student of Jules Massenet, and was published by A.F.A. in 1912. It was re-published in 1912 by the publisher J.S.D.M. de l’Etablissement Musicale de la Musique Francaise. Ravel wrote the piano concerto “la Tragedie et la Chanson Dans le Theatre des Champs Elysees” in 1916. (...) Table 12: Text samples generated by our model using left-to-right generation. Corpus #Train/#Dev/#Test Metrics Single-Sentence Classification CoLA (Acceptability) 8.5k/1k/1k SST-2 (Sentiment) 67k/872/1.8k Matthews corr Accuracy Pairwise Text Classification MNLI (NLI) RTE (NLI) QNLI (NLI) WNLI (NLI) QQP (Paraphrase) MRPC (Paraphrase) 393k/20k/20k 2.5k/276/3k 108k/5.7k/5.7k 634/71/146 364k/40k/391k 3.7k/408/1.7k Accuracy Accuracy Accuracy Accuracy F1 score F1 score Text Similarity STS-B (Similarity) Spearman corr # 7k/1.5k/1.4k Table 13: Summary of the GLUE benchmark. 14
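Returning to the decoding strategy used for the samples in Table 12 (Appendix A), the following is a rough, framework-agnostic sketch of top-k truncated sampling with duplicate 4-gram blocking. The scoring function, vocabulary, and all names here are illustrative stand-ins rather than the paper's implementation.

```python
import math
import random

def sample_with_ngram_block(score_fn, vocab, prefix, max_len=50, top_k=40, no_repeat=4):
    """Sample tokens left to right: keep only the top_k candidates at each step
    and forbid any candidate that would repeat an already generated 4-gram."""
    tokens = list(prefix)
    seen_ngrams = set()
    for _ in range(max_len):
        scored = sorted(((score_fn(tokens, w), w) for w in vocab), reverse=True)[:top_k]
        # Drop candidates that would duplicate an n-gram seen earlier in the sample.
        allowed = [(s, w) for s, w in scored
                   if tuple(tokens[-(no_repeat - 1):] + [w]) not in seen_ngrams]
        if not allowed:
            break
        weights = [math.exp(s) for s, _ in allowed]
        _, choice = random.choices(allowed, weights=weights, k=1)[0]
        tokens.append(choice)
        if len(tokens) >= no_repeat:
            seen_ngrams.add(tuple(tokens[-no_repeat:]))
    return tokens

# Toy usage with a stand-in "model" that simply prefers less frequent continuations.
vocab = ["the", "old", "clock", "ticked", "softly", "."]
dummy = lambda ctx, w: -ctx.count(w) + random.random()
print(" ".join(sample_with_ngram_block(dummy, vocab, ["the"], max_len=12)))
```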
{ "id": "1606.08415" }
1905.02450
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Pre-training and fine-tuning, e.g., BERT, have achieved great success in language understanding by transferring knowledge from rich-resource pre-training task to the low/zero-resource downstream tasks. Inspired by the success of BERT, we propose MAsked Sequence to Sequence pre-training (MASS) for the encoder-decoder based language generation tasks. MASS adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence: its encoder takes a sentence with randomly masked fragment (several consecutive tokens) as input, and its decoder tries to predict this masked fragment. In this way, MASS can jointly train the encoder and decoder to develop the capability of representation extraction and language modeling. By further fine-tuning on a variety of zero/low-resource language generation tasks, including neural machine translation, text summarization and conversational response generation (3 tasks and totally 8 datasets), MASS achieves significant improvements over the baselines without pre-training or with other pre-training methods. Specially, we achieve the state-of-the-art accuracy (37.5 in terms of BLEU score) on the unsupervised English-French translation, even beating the early attention-based supervised model.
http://arxiv.org/pdf/1905.02450
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
cs.CL, cs.AI, cs.LG
Accepted by ICML 2019
null
cs.CL
20190507
20190621
9 1 0 2 n u J 1 2 ] L C . s c [ 5 v 0 5 4 2 0 . 5 0 9 1 : v i X r a # MASS: Masked Sequence to Sequence Pre-training for Language Generation # Kaitao Song * 1 Xu Tan * 2 Tao Qin 2 Jianfeng Lu 1 Tie-Yan Liu 2 # Abstract Pre-training and fine-tuning, e.g., BERT (De- vlin et al., 2018), have achieved great success in language understanding by transferring knowl- edge from rich-resource pre-training task to the low/zero-resource downstream tasks. Inspired by the success of BERT, we propose MAsked Sequence to Sequence pre-training (MASS) for encoder-decoder based language generation. MASS adopts the encoder-decoder framework to reconstruct a sentence fragment given the re- maining part of the sentence: its encoder takes a sentence with randomly masked fragment (sev- eral consecutive tokens) as input, and its decoder tries to predict this masked fragment. In this way, MASS can jointly train the encoder and decoder to develop the capability of representation extraction and language modeling. By further fine-tuning on a variety of zero/low-resource language gen- eration tasks, including neural machine transla- tion, text summarization and conversational re- sponse generation (3 tasks and totally 8 datasets), MASS achieves significant improvements over baselines without pre-training or with other pre- training methods. Specially, we achieve state-of- the-art accuracy (37.5 in terms of BLEU score) on the unsupervised English-French translation, even beating the early attention-based supervised model (Bahdanau et al., 2015b)1. # 1. Introduction Pre-training and fine-tuning are widely used when target tasks are of low or zero resource in terms of training data, *Equal contribution 1Key Laboratory of Intelligent Percep- tion and Systems for High-Dimensional Information of Min- istry of Education, Nanjing University of Science and Technol- ogy 2Microsoft Research. Correspondence to: Tao Qin <tao- [email protected]>. Proceedings of the 36 th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s). 1We release the codes in https://github.com/ microsoft/MASS. while pre-training has plenty of data (Girshick et al., 2014; Szegedy et al., 2015; Ouyang et al., 2015; Dai & Le, 2015; Howard & Ruder, 2018; Radford et al., 2018; Devlin et al., 2018). For example, in computer vision, models are usually pre-trained on the large scale ImageNet dataset and then fine- tuned on downstream tasks like object detection (Szegedy et al., 2015; Ouyang et al., 2015) or image segmenta- tion (Girshick et al., 2014). Recently, pre-training methods such as ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018) and BERT (Devlin et al., 2018) have attracted a lot of attention in natural language processing, and achieved state-of-the-art accuracy in multiple language understanding tasks such as sentiment classification (Socher et al., 2013), natural language inference (Bowman et al., 2015), named entity recognition (Tjong Kim Sang & De Meulder, 2003) and SQuAD question answering (Rajpurkar et al., 2016), which usually have limited supervised data. Among the pre-training methods mentioned above, BERT is the most prominent one by pre-training the bidirectional encoder rep- resentations on a large monolingual corpus through masked language modeling and next sentence prediction. 
Different from language understanding, language generation aims to generate natural language sentences conditioned on some inputs, including tasks like neural machine translation (NMT) (Cho et al., 2014; Bahdanau et al., 2015a; Vaswani et al., 2017), text summarization (Ayana et al., 2016; Suzuki & Nagata, 2017; Gehring et al., 2017) and conversational re- sponse generation (Shang et al., 2015; Vinyals & Le, 2015). Language generation tasks are usually data-hungry, and many of them are low-resource or even zero-source in terms of training data. Directly applying a BERT like pre-training method on these natural language generation tasks is not fea- sible, since BERT is designed for language understanding, which are usually handled by just one encoder or decoder. Therefore, how to design pre-training methods for the lan- guage generation tasks (which usually adopt the encoder- decoder based sequence to sequence learning framework) is of great potential and importance. In this paper, inspired by BERT, we propose a novel ob- jective for pre-training: MAsked Sequence to Sequence learning (MASS) for language generation. MASS is based on the sequence to sequence learning framework: its en- coder takes a sentence with a masked fragment (several consecutive tokens) as input, and its decoder predicts this MASS: Masked Sequence to Sequence Pre-training for Language Generation masked fragment conditioned on the encoder representa- tions. Unlike BERT or a language model that pre-trains only the encoder or decoder, MASS is carefully designed to pre-train the encoder and decoder jointly in two steps: 1) By predicting the fragment of the sentence that is masked on the encoder side, MASS can force the encoder to understand the meaning of the unmasked tokens, in order to predict the masked tokens in the decoder side; 2) By masking the input tokens of the decoder that are unmasked in the source side, MASS can force the decoder rely more on the source representation other than the previous tokens in the target side for next token prediction, better facilitating the joint training between encoder and decoder. MASS just needs to pre-train one model and then fine-tune on a variety of downstream tasks. We use transformer as the basic sequence to sequence model and pre-train on the WMT monolingual corpus2, and then fine-tune on three different language generation tasks including NMT, text summariza- tion and conversational response generation. Considering the downstream tasks cover cross-lingual task like NMT, we pre-train one model on multiple languages. We explore the low-resource setting for all the three tasks, and also consider unsupervised NMT which is a purely zero-resource set- ting. For NMT, the experiments are conducted on WMT14 English-French, WMT16 English-German and WMT16 English-Romanian datasets. For unsupervised NMT, we directly fine-tune the pre-trained model on monolingual data with back-translation loss (Lample et al., 2018), in- stead of using additional denoising auto-encoder loss as in Lample et al. (2018). For low-resource NMT, we fine- tune our model on limited bilingual data. For the other two tasks, we conduct experiments on: 1) the Gigaword corpus for abstractive text summarization; 2) the Cornell Movie Dialog corpus for conversational response generation. Our method achieves improvements on all these tasks as well as both the zero- and low-resource settings, demonstrating our method is effective and applicable to a wide range of sequence generation tasks. 
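To make the two-step design sketched above more tangible, the toy snippet below builds one MASS-style training example for a tokenized sentence: a contiguous fragment is masked on the encoder side, and the decoder input keeps only the fragment tokens (shifted right by one), with [M] everywhere else. This is an illustrative reconstruction based on the description in this paper, not the released implementation; the 50% fragment length is the setting reported later in the paper, and the 80%/10%/10% token-replacement noise from the pre-training details is omitted for brevity.

```python
import random

MASK = "[M]"

def make_mass_example(tokens, frag_ratio=0.5, seed=None):
    """Build (encoder_input, decoder_input, decoder_target) for one sentence.

    The encoder sees the sentence with one contiguous fragment replaced by [M];
    the decoder is trained to emit that fragment, and its own input keeps only
    the fragment tokens shifted right by one position, with [M] elsewhere."""
    rng = random.Random(seed)
    m = len(tokens)
    k = max(1, int(round(frag_ratio * m)))   # fragment length (roughly 50% of m)
    u = rng.randint(0, m - k)                # fragment start, 0-based
    v = u + k                                # fragment end, exclusive

    encoder_input = tokens[:u] + [MASK] * k + tokens[v:]
    decoder_target = tokens[u:v]
    decoder_input = [MASK] * m
    for i in range(u + 1, v):                # shift-by-one teacher forcing
        decoder_input[i] = tokens[i - 1]
    return encoder_input, decoder_input, decoder_target

# Toy usage on an 8-token sentence x1 ... x8, as in Figure 1 of the paper.
sent = [f"x{i}" for i in range(1, 9)]
enc, dec_in, dec_out = make_mass_example(sent)
print(enc)      # the sentence with one contiguous block of [M]
print(dec_in)   # the fragment shifted right by one; [M] elsewhere
print(dec_out)  # the masked fragment itself
```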
The contributions of this work are listed as follows: 1) We propose MASS, a masked sequence to sequence pre-training method for language generation; 2) We apply MASS on a variety of language generation tasks including NMT, text summarization and conversational response generation, and achieve significant improvements, demonstrating the effec- tiveness of our proposed method. Specially, we achieve a state-of-the art BLEU score for unsupervised NMT on two language pairs: English-French and English-German, and outperform the previous unsupervised NMT method (Lam- ple & Conneau, 2019) by more than 4 points on English- French and 1 point on French-English in terms of BLEU 2The monolingual data for each language is downloaded from http://www.statmt.org/wmt16/translation-task.html. score, and even beating the early attention-based supervised model (Bahdanau et al., 2015b). # 2. Related Work There are a lot of works on sequence to sequence learning and the pre-training for natural language processing. We briefly review several popular approaches in this section. # 2.1. Sequence to Sequence Learning Sequence to sequence learning (Cho et al., 2014; Bahdanau et al., 2015a; Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) is a challenging task in artificial intelligence, and covers a variety of language generation applications such as NMT (Cho et al., 2014; Bahdanau et al., 2015a; Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; Tan et al., 2019; Artetxe et al., 2017; Lample et al., 2017; 2018; He et al., 2018; Hassan et al., 2018; Song et al., 2018; Shen et al., 2018), text summarization (Ayana et al., 2016; Suzuki & Nagata, 2017; Gehring et al., 2017), question answering (Yuan et al., 2017; Fedus et al., 2018) and con- versational response generation (Shang et al., 2015; Vinyals & Le, 2015). Sequence to sequence learning has attracted much attention in recent years due to the advance of deep learning. How- ever, many language generations tasks such as NMT lack paired data but have plenty of unpaired data. Therefore, the pre-training on unpaired data and fine-tuning with small- scale paired data will be helpful for these tasks, which is exactly the focus of this work. # 2.2. Pre-training for NLP tasks Pre-training has been widely used in NLP tasks to learn better language representation. Previous works mostly fo- cus on natural language understanding tasks, and can be classified into feature-based approaches and fine-tuning ap- proaches. Feature-based approaches mainly leverage pre- training to provide language representations and features to the downstream tasks, which includes word-level rep- resentations (Brown et al., 1992; Ando & Zhang, 2005; Blitzer et al., 2006; Collobert & Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014) and sentence-level rep- resentations (Kiros et al., 2015; Logeswaran & Lee, 2018; Le & Mikolov, 2014), as well as context sensitive features from the NMT model (McCann et al., 2017) and ELMo (Pe- ters et al., 2018). Fine-tuning approaches mainly pre-train a model on language modeling objective and then fine- tune the model on the downstream tasks with supervised data (Dai & Le, 2015; Howard & Ruder, 2018; Radford et al., 2018; Devlin et al., 2018). Specifically, Devlin et al. (2018) proposed BERT based on masked language modeling and next sentence prediction and achieved a state-of-the-art MASS: Masked Sequence to Sequence Pre-training for Language Generation | Encoder nHoeounm Attention Eevee Decoder | BOO mmm Figure 1. 
The encoder-decoder framework for our proposed MASS. The token “ ” represents the mask symbol [M].

accuracy on multiple language understanding tasks in the GLUE benchmark (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016).

There are also some works pre-training the encoder-decoder model for language generation. Dai & Le (2015); Ramachandran et al. (2016) leverage a language model or auto-encoder to pre-train the encoder and decoder. Their improvements, although observed, are limited and not as general and significant as the pre-training methods (e.g., BERT) for language understanding. Zhang & Zong (2016) designed a sentence reordering task for pre-training, but only for the encoder part of the encoder-decoder model. Zoph et al. (2016); Firat et al. (2016) pre-train the model on similar rich-resource language pairs and fine-tune on the target language pair, which relies on supervised data on other language pairs. Recently, XLM (Lample & Conneau, 2019) pre-trained BERT-like models both for the encoder and decoder, and achieved the previous state-of-the-art results on unsupervised machine translation. However, the encoder and decoder in XLM are pre-trained separately and the encoder-decoder attention mechanism cannot be pre-trained, which is sub-optimal for sequence to sequence based language generation tasks.

Different from previous works, our proposed MASS is carefully designed to pre-train both the encoder and decoder jointly using only unlabeled data, and can be applied to most language generation tasks.

# 3. MASS

In this section, we first introduce the basic framework of sequence to sequence learning, and then propose MASS (MAsked Sequence to Sequence pre-training). We then discuss the differences between MASS and previous pre-training methods, including the masked language modeling in BERT and standard language modeling.

# 3.1. Sequence to Sequence Learning

We denote (x, y) ∈ (X, Y) as a sentence pair, where x = (x1, x2, ..., xm) is the source sentence with m tokens, y = (y1, y2, ..., yn) is the target sentence with n tokens, and X and Y are the source and target domains. A sequence to sequence model learns the parameter θ to estimate the conditional probability P(y|x; θ), and usually uses log likelihood as the objective function: L(θ; (X, Y)) = ∑_{(x,y)∈(X,Y)} log P(y|x; θ). The conditional probability P(y|x; θ) can be further factorized according to the chain rule: P(y|x; θ) = ∏_{t=1}^{n} P(y_t | y_{<t}, x; θ), where y_{<t} denotes the tokens preceding position t.

A major approach to sequence to sequence learning is the encoder-decoder framework: the encoder reads the source sequence and generates a set of representations; the decoder estimates the conditional probability of each target token given the source representations and its preceding tokens. An attention mechanism (Bahdanau et al., 2015a) is further introduced between the encoder and decoder to find which source representation to focus on when predicting the current token.

# 3.2. Masked Sequence to Sequence Pre-training

We introduce a novel unsupervised prediction task in this section. Given an unpaired source sentence x ∈ X, we denote x\u:v as a modified version of x in which its fragment from position u to v is masked, where 0 < u < v < m and m is the number of tokens of sentence x. We denote k = v − u + 1 as the number of tokens being masked from position u to v. We replace each masked token by a special symbol [M], and the length of the masked sentence is not changed. xu:v denotes the sentence fragment of x from u to v.

MASS pre-trains a sequence to sequence model by predicting the sentence fragment xu:v taking the masked sequence x\u:v as input. We also use the log likelihood as the objective function:

L(θ; X) = 1/|X| ∑_{x∈X} log P(x^{u:v} | x^{\u:v}; θ)
        = 1/|X| ∑_{x∈X} log ∏_{t=u}^{v} P(x^{u:v}_t | x^{u:v}_{<t}, x^{\u:v}; θ).    (1)

We show an example in Figure 1, where the input sequence has 8 tokens with the fragment x3x4x5x6 being masked. Note that the model only predicts the masked fragment x3x4x5x6, given x3x4x5 as the decoder input for position 4 − 6, and the decoder takes the special mask symbol [M] as inputs for the other positions (e.g., position 1 − 3 and

(a) Masked language modeling in BERT (k = 1)    (b) Standard language modeling (k = m)

Figure 2.
While achieving promising results on language understand- ing tasks, they are not suitable for language genera- tion tasks which typically leverage an encoder-decoder framework for conditional sequence generation. When k = m where m is the number of tokens in sen- tence x, all the tokens on the encoder side are masked and the decoder needs to predict all tokens given previous to- kens, as shown in Figure 2b. The conditional probability is P (x1:m|x\1:m; θ), and it becomes the standard language modeling in GPT, conditioned on null information from the encoder as all the tokens in the encoder side are masked. # 3.3. Discussions MASS is a pre-training method for language generation. While its special cases are related to the previous methods including the standard language modeling in GPT and the masked language modeling in BERT, it is different from 3One may argue that the masked language modeling in BERT randomly masks multiple tokens rather than just one token at a time. However, the key idea behind masking language modeling in BERT is to leverage bidirectional context information. Masking multiple tokens at a time is mainly for training speedup. • MASS is designed to jointly pre-train the encoder and decoder for language generation tasks. First, by only predicting the masked tokens through a sequence to sequence framework, MASS forces the encoder to un- derstand the meaning of the unmasked tokens, and also encourages the decoder to extract useful infor- mation from the encoder side. Second, by predicting consecutive tokens in the decoder side, the decoder can build better language modeling capability than just predicting discrete tokens. Third, by further masking the input tokens of the decoder which are not masked in the encoder side (e.g., when predicting fragment x3x4x5x6, only the tokens x3x4x5 are taken as the in- put and other tokens are masked with [M]), the decoder is encouraged to extract more useful information from the encoder side, rather than leveraging the abundant information from the previous tokens. MASS: Masked Sequence to Sequence Pre-training for Language Generation # 4. Experiments and Results In this section, we describe the experimental details about MASS pre-training and fine-tuning on a variety of language generation tasks, including NMT, text summarization, con- versational response generation. position for the third token is still 2 but not 0). In this way, we can get similar accuracy and reduce 50% computation in the decoder. We use Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10−4 for the pre-training. The model are trained on 8 NVIDIA V100 GPU cards and each mini-batch contains 3000 tokens for pre-training. # 4.1. MASS Pre-training Model Configuration We choose Transformer (Vaswani et al., 2017) as the basic model structure, which consists of 6-layer encoder and 6-layer decoder with 1024 embed- ding/hidden size and 4096 feed-forward filter size. For neural machine translation task, we pre-train our model on the monolingual data of the source and target languages. We respectively conduct experiments on three language pairs: English-French, English-German, and English-Romanian. For other language generation tasks, including text summa- rization and conversational response generation, we pre- train the model with only English monolingual data re- spectively. 
To distinguish between the source and target languages in neural machine translation task, we add a lan- guage embedding to each token of the input sentence for the encoder and decoder, which is also learnt end-to-end. We implement our method based on codebase of XLM 4. Datasets We use all of the monolingual data from WMT News Crawl datasets5, which covers 190M, 62M and 270M sentences from year 2007 to 2017 for English, French, Ger- man respectively. We also include a low-resource language, Romanian, in the pre-training stage, to verify the effective- ness of MASS pre-trained with low-resource monolingual data. We use all of the available Romanian sentences from News Crawl dataset and augment it with WMT16 data, which results in 2.9M sentences. We remove the sentences with length over 175. For each task, we jointly learn a 60,000 sub-word units with Byte-Pair Encoding (Sennrich et al., 2016) between source and target languages. Pre-Training Details We mask the fragment by replac- ing the consecutive tokens with special symbols [M], with random start position u. Following Devlin et al. (2018), the masked tokens in the encoder will be a [M] token 80% of the time, a random token 10% of the time and a unchanged token 10% of the time. We set the fragment length k as roughly 50% of the total number of tokens in the sentence and also study different k to compare their accuracy changes. To reduce the memory and computation cost, we removed the padding in the decoder (the masked tokens) but keep the positional embedding of the unmasked tokens unchanged (e.g., if the first two tokens are masked and removed, the To verify the effectiveness of MASS, we fine-tune the pre- trained model on three language generation tasks: NMT, text summarization and conversational response generation. We explore the low-resource setting on these tasks where we just leverage few training data for fine-tuning to simulate the low-resource scenario. For NMT, we mainly investigate the zero-resource (unsupervised) setting, as unsupervised NMT has become a challenging task in recent years (Artetxe et al., 2017; Lample et al., 2017; 2018). # 4.2. Fine-Tuning on NMT In this section, we first describe the experiments on the unsupervised NMT, and then introduce the experiments on low-resource NMT. Experimental Setting For unsupervised NMT, there is no bilingual data to fine-tune the pre-trained model. Therefore, we leverage the monolingual data that is also used in the pre-training stage. Different from Artetxe et al. (2017); Lample et al. (2017; 2018); Leng et al. (2019), we just use back-translation to generate pseudo bilingual data for training, without using denoising auto-encoder6. During fine-tuning, we use Adam optimizer (Kingma & Ba, 2015) with initial learning rate 10−4, and the batch size is set as 2000 tokens for each GPU. During evaluation, we calculate the BLEU score with multi-bleu.pl7 on newstest2014 for English-French, and newstest2016 for English-German and English-Romanian. Results on Unsupervised NMT Our results are shown in Table 2. On all the 6 translation directions, our method outperforms all of the previous results, including the meth- ods without pre-training (Lample et al., 2018) and with pre-training (Lample & Conneau, 2019). XLM (Lample & Conneau, 2019) is the previous state-of-the-art method which leverage BERT like pre-training in encoder and de- coder, which covers several pre-training methods: masked language model (MLM) and causal language model (CLM). 
Our method still outperforms XLM by 4.1 BLEU points on en-fr. Compared with Other Pre-training Methods We also compare MASS with the previous pre-training methods for 4https://github.com/facebookresearch/XLM 5While we choose the WMT monolingual data in the current setting, pre-training on Wikipedia data is also feasible. 6MASS is better than denoising auto-encoder as we will show in Table 3. 7https://github.com/moses-smt/mosesdecoder/blob/master/ scripts/generic/multi-bleu.perl MASS: Masked Sequence to Sequence Pre-training for Language Generation Method Setting en - fr fr - en en - de de - en en - ro ro - en Artetxe et al. (2017) Lample et al. (2017) Yang et al. (2018) Lample et al. (2018) XLM (Lample & Conneau, 2019) 2-layer RNN 3-layer RNN 4-layer Transformer 4-layer Transformer 6-layer Transformer 15.13 15.05 16.97 25.14 33.40 15.56 14.31 15.58 24.18 33.30 6.89 9.75 10.86 17.16 27.00 10.16 13.33 14.62 21.00 34.30 - - - 21.18 33.30 - - - 19.44 31.80 MASS 6-layer Transformer 37.50 34.90 28.30 35.20 35.20 33.10 Table 2. The BLEU score comparisons between MASS and the previous works on unsupervised NMT. Results on en-fr and fr-en pairs are reported on newstest2014 and the others are on newstest2016. Since XLM uses different combinations of MLM and CLM in the encoder and decoder, we report the highest BLEU score for XLM on each language pair. language generation tasks. The first baseline is BERT+LM, which use masked language modeling in BERT to pre-train the encoder and the standard language modeling to pre-train the decoder. The second baseline is DAE, which simply uses denoising auto-encoder (Vincent et al., 2008) to pre- train the encoder and decoder. We pre-train the model with BERT+LM and DAE, and fine-tune on the unsupervised translation pairs with same fine-tuning strategy of XLM (i.e., DAE loss + back-translation). These methods are also configured with the 6-layer Transformer setting. Method en-fr fr-en en-de de-en en-ro ro-en BERT+LM 33.4 32.3 DAE 30.1 28.3 24.9 32.9 20.9 27.5 31.7 30.4 28.8 27.6 MASS 37.5 34.9 28.3 35.2 35.2 33.1 Table 3. The BLEU score comparisons between MASS and other pre-training methods. The results for BERT+LM are directly taken from the MLM+CLM setting in XLM (Lample & Conneau, 2019) as they use the same pre-training methods. As shown in Table 3, BERT+LM achieves higher BLEU score than DAE, and MASS outperforms both BERT+LM and DAE on all the unsupervised translation pairs. While DAE usually leverages some denoising methods like ran- domly masking tokens or swapping adjacent tokens, the decoder can still easily learn to copy the unmasked tokens through encoder-decoder attention8. On the other hand, the decoder in DAE takes the full sentence as the input, which is enough to predict the next token like the language model, and is not forced to extract additional useful representation from the encoder. Experiments on Low-Resource NMT In the low- resource NMT setting, we respectively sample 10K, 100K, 1M paired sentence from the bilingual training data of WMT14 English-French, WMT16 English-German and WMT16 English-Romanian, to explore the performance of our method in different low-resource scenarios. We use the same BPE codes learned in the pre-trained stage to tokenize the training sentence pairs. We fine-tune the pre-trained model on the paired data for 20,000 steps with Adam op- timizer and the learning rate is set as 10−4. We choose the best model according to the accuracy on development set. 
We report the BLEU scores on the same testsets used in the unsupervised setting. As shown in Figure 3, MASS outperforms the baseline models that are trained only on the bilingual data without any pre-training on all the six translation directions, demonstrating the effectiveness of our method in the low-resource scenarios. # 4.3. Fine-Tuning on Text Summarization Experiment Setting Text summarization is the task of creating a short and fluent summary of a long text document, which is a typical sequence generation task. We fine-tune the pre-trained model on text summarization task with different scales (10K, 100K, 1M and 3.8M) of training data from the Gigaword corpus (Graff et al., 2003)9, which consists of a total of 3.8M article-title pairs in English. We take the article as the encoder input and title as the decoder input for fine-tuning. We report the F1 score of ROUGE-1, ROUGE- 2 and ROUGE-L on the Gigaword testset during evaluation. We use beam search with a beam size of 5 for inference. Results Our results are illustrated in Figure 4. We com- pare MASS with the model that is trained only on the paired data without any pre-training. MASS consistently outper- forms the baseline on different scales of fine-tuning data (more than 10 ROUGE points gain on 10K data and 5 ROUGE points gain on 100K data), which demonstrates that MASS is effective in low-resource scenarios with dif- ferent scale of training data on this task. 8The popular encoder-decoder based model structures (Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017) all adopt residual connection (He et al., 2016). Therefore, the token genera- tion in the top layer of the decoder side can directly depend on the token embedding in the encoder side through residual connection and attention. Compared with Other Pre-Training Methods We fur- ther compare MASS with the pre-training methods of BERT+LM and DAE described in Section 4.2, with 3.8M 9https://github.com/harvardnlp/sent-summary MASS: Masked Sequence to Sequence Pre-training for Language Generation (a) en-fr (b) fr-en (c) en-de (d) de-en (e) en-ro (f) ro-en mm Basoline mm Mass. 30 BLEU 1 S ° J 100K Number of parallel data 30) mum Baseline mm vas __ BEEU, 1M J 100K Number of parallel data mm Baseline 20) gam mass : a ]J BLEU _ 1M 100K Number of parallel data BLEU mm Baseline mm Mass 10K 100K 1M Number of parallel data mm Baseline 20) mm Mass a 210 | | 1M Number of parallel data 30° mam Baseline mmm ass cs a J 100K 1M Number of parallel data Figure 3. The BLEU score comparisons between MASS and the baseline on low-resource NMT with different scales of paired data. (a) RG-1 (F) (b) RG-2 (F) (c) RG-L (F) mm Baseline mm MASS f 10K 100K 1M 3.8M ‘Number of parallel data mm Mass mm Baseline 11 1 10K 100K 1M 3.8M Number of parallel data ROUGE 35| mm Baseline mm Mass 30 S25 3 =20 15 10 10K 100K 1M 3.8M Number of parallel data Figure 4. The comparisons between MASS and the baseline on text summarization task with different scales of paired data. The results are reported in ROUGE-1 (RG-1), ROUGE-2 (RG-2) and ROUGE-L (RG-L) respectively. F stands for F1-score. on the 10K pairs (randomly chosen) and the whole 110K pairs, and show the results in Table 5. MASS achieves lower PPL than the baseline on both the 10K and 110K data. Method Data = 10K Data = 110K Baseline BERT+LM 82.39 80.11 26.38 24.84 MASS 74.32 23.52 Table 5. The comparisons between MASS and other baseline meth- ods in terms of PPL on Cornell Movie Dialog corpus. 
Method RG-1 (F) RG-2 (F) RG-L (F) BERT+LM DAE 37.75 35.97 18.45 17.17 34.85 33.14 MASS 38.73 19.71 35.96 Table 4. The comparisons between MASS and two other pre- training methods in terms of ROUGE score on the text summariza- tion task with 3.8M training data. Compared with Other Pre-Training Methods We also compare MASS with the pre-training methods of BERT+LM and DAE on conversational response generation. As shown in Table 5, MASS consistently outperforms the two pre- training methods with lower PPL on 10K and 110K training data respectively. # 4.5. Analysis of MASS data on the text summarization task. As shown in Table 4, MASS consistently outperforms the two pre-training meth- ods on the three ROUGE scores. # 4.4. Fine-Tuning on Conversational Response Generation Experimental Setting Conversational response gener- ation generates a flexible response for the conversa- tion (Shang et al., 2015; Vinyals & Le, 2015). We conduct experiments on the Cornell movie dialog corpus (Danescu- Niculescu-Mizil & Lee, 2011)10 that contains 140K conver- sation pairs. We randomly sample 10K/20K pairs as the validation/test set and the remaining data is used for training. We adopt the same optimization hyperparameters from the pre-training stage for fine-tuning. We report the results with perplexity (PPL) following Vinyals & Le (2015). Results We compare MASS with the baseline that is trained on the available data pairs. We conduct experiments 10https://github.com/suriyadeepan/datasets/tree/master/seq2seq/ cornell movie corpus Study of Different k The length of the masked fragment k is an important hyperparameter of MASS and we have varied k in Section 3.2 to cover the special cases of masked language modeling in BERT and standard language mod- eling. In this section, we study the performance of MASS with different k, where we choose k from 10% to 90% per- centage of the sentence length m with a step size of 10%, plus with k = 1 and k = m. We observe both the performance of MASS after pre- training, as well as the performance after fine-tuning on several language generation tasks, including unsupervised English-French translation, text summarization and conver- sational response generation. We first show the perplexity (PPL) of the pre-training model on the English and French languages with different k. We choose the English and French sentences from newstest2013 of WMT En-Fr as the validation set, and plot the PPL in Figure 5a (English) and 5b (French). It can be seen that the pre-trained model achieves the best validation PPL when k is between 50% and 70% of the sentence length m. We then observe the performance on fine-tuning tasks. We show the curve of the validation BLEU scores on unsupervised En-Fr trans- MASS: Masked Sequence to Sequence Pre-training for Language Generation (a) (b) (c) (d) (e) ‘ : \ 1.24) \ B12) 4 Bly ‘ A K 1.16 ave / a“ | Menene 1127 20% 40% 60% 80% m Mask length k 1.38 ‘ \ 1.34) \ i #13} ‘| BM) Ae \ / , A 1.26, —h- F a. ; . Naeem! 7 20% 40% 60% 80% m Mask length k A ce EN rie ¢ ‘, 23.0, £ ry poe } ‘ & H | H 22.6, } \| i , a f ; k 1 20% 40% 60% 80% m Mask length k 13.0 “2 pea ea, / “ ‘ 12.5) 7 A 8 H Si20l f mB: = i ! 15) i‘ i 11 36% 40% 60% 80% m Mask length K ‘ 34.04 : 1 33.5) 1 “4330 \ 4 hee I y, Bo 4 “ 32.5| 4 * i / 32.0, < . f 31.5 hennt 1 20% 40% 60% 80% m Mask length k Figure 5. 
# 4.5. Analysis of MASS

Study of Different k The length of the masked fragment k is an important hyperparameter of MASS, and we varied k in Section 3.2 to cover the special cases of masked language modeling in BERT and standard language modeling. In this section, we study the performance of MASS with different k, where we choose k from 10% to 90% of the sentence length m with a step size of 10%, plus k = 1 and k = m.

We observe both the performance of MASS after pre-training and the performance after fine-tuning on several language generation tasks, including unsupervised English-French translation, text summarization and conversational response generation. We first show the perplexity (PPL) of the pre-trained model on the English and French languages with different k. We choose the English and French sentences from newstest2013 of WMT En-Fr as the validation set, and plot the PPL in Figure 5a (English) and 5b (French). It can be seen that the pre-trained model achieves the best validation PPL when k is between 50% and 70% of the sentence length m. We then observe the performance on fine-tuning tasks. We show the curve of the validation BLEU scores on unsupervised En-Fr translation in Figure 5c, the validation ROUGE scores on text summarization in Figure 5d, and the validation PPL on conversational response generation in Figure 5e. It can be seen that MASS achieves the best performance on these downstream tasks when k is nearly 50% of the sentence length m. Therefore, we set k = 50% of m for MASS in our experiments.

In fact, k = 50% of m is a good balance between the encoder and decoder. Too few valid tokens on either the encoder side or the decoder side biases the model to concentrate more on the other side, which is not suitable for language generation tasks that typically leverage the encoder-decoder framework to extract the sentence representation in the encoder, as well as to model and generate the sentence in the decoder. The extreme cases are k = 1 (masked language modeling in BERT) and k = m (standard language modeling), as illustrated in Figure 2. Neither k = 1 nor k = m achieves good performance on the downstream language generation tasks, as shown in Figure 5.

[Figure 5 omitted: five panels, (a)-(e), plotting validation PPL, BLEU and ROUGE against the masked length k, from 20% of m to 80% of m and k = m.]

Figure 5. The performances of MASS with different masked lengths k, in both pre-training and fine-tuning stages, which include: the PPL of the pre-trained model on English (Figure a) and French (Figure b) sentences from WMT newstest2013 on English-French translation; the BLEU score of unsupervised English-French translation on WMT newstest2013 (Figure c); the ROUGE score (F1 score in RG-2) on the validation set of text summarization (Figure d); the PPL on the validation set of conversational response generation (Figure e).

Ablation Study of MASS In our masked sequence to sequence pre-training, we have two careful designs: (1) We mask consecutive tokens on the encoder side, and thus predict consecutive tokens on the decoder side, which builds better language modeling capability than just predicting discrete tokens. (2) We mask the input tokens of the decoder which are not masked on the encoder side (e.g., when predicting the fragment x3x4x5x6 in Figure 1, only the tokens x3x4x5 are taken as the input and the other tokens are masked with [M]), to encourage the decoder to extract more useful information from the encoder side, rather than leveraging the abundant information from the previous tokens. In this section, we conduct two ablation studies to verify the effectiveness of these two designs in MASS. The first study is to randomly mask discrete tokens instead of consecutive tokens, denoted as Discrete. The second study is to feed all the tokens to the decoder instead of masking the input tokens of the decoder that are not masked on the encoder side, denoted as Feed. We compare MASS with the two ablation methods on unsupervised English-French translation, as shown in Table 6. It can be seen that both Discrete and Feed perform worse than MASS, demonstrating the effectiveness of the two designs in MASS.

Method     BLEU
Discrete   36.9
Feed       35.3
MASS       37.5

Table 6. The comparison between MASS and the ablation methods in terms of BLEU score on the unsupervised en-fr translation.
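To make the two designs concrete, the following is a minimal illustrative sketch (not the authors' released implementation) of how the encoder and decoder inputs could be constructed for one sentence, with the consecutive-fragment masking of MASS and the Feed ablation as a variant; the fragment position is chosen uniformly at random here as a simplifying assumption.

```python
import random

MASK = "[M]"

def mass_inputs(tokens, ratio=0.5, feed_all=False):
    """Build (encoder_input, decoder_input, decoder_target) for one sentence.

    A consecutive fragment of length k = ratio * m is masked on the encoder
    side and the decoder is asked to predict that fragment; the decoder input
    places the fragment shifted right by one position and masks everything
    else.  feed_all=True corresponds to the `Feed` ablation, which feeds the
    whole (shifted) sentence to the decoder instead.
    """
    m = len(tokens)
    k = max(1, int(round(ratio * m)))
    start = random.randint(0, m - k)                  # fragment spans positions start..start+k-1
    fragment = tokens[start:start + k]

    encoder_input = tokens[:start] + [MASK] * k + tokens[start + k:]
    decoder_target = fragment
    if feed_all:
        decoder_input = [MASK] + tokens[:-1]          # whole sentence, shifted right
    else:
        # Only the fragment (shifted right) is visible; all other positions are masked.
        decoder_input = [MASK] * start + [MASK] + fragment[:-1] + [MASK] * (m - start - k)
    return encoder_input, decoder_input, decoder_target

enc, dec_in, dec_out = mass_inputs("x1 x2 x3 x4 x5 x6 x7 x8".split())
print(enc, dec_in, dec_out, sep="\n")
```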
# 5. Conclusion

In this work, we have proposed MASS: masked sequence to sequence pre-training for language generation tasks, which reconstructs a sentence fragment given the remaining part of the sentence in the encoder-decoder framework. MASS only needs to pre-train one model, which is then fine-tuned on multiple language generation tasks such as neural machine translation, text summarization and conversational response generation. Through experiments on the three above tasks and a total of eight datasets, MASS achieved significant improvements over the baselines without pre-training or with other pre-training methods. More specifically, MASS achieved state-of-the-art BLEU scores for unsupervised NMT on three language pairs, outperforming the previous state of the art by more than 4 BLEU points on English-French.

For future work, we will apply MASS to more language generation tasks such as sentence paraphrasing, text style transfer and post editing, as well as other sequence generation tasks (Ren et al., 2019). We will also investigate further theoretical and empirical analysis of our masked sequence to sequence pre-training method.

# Acknowledgements

This work was partially supported by the National Key Research and Development Program of China under Grant 2018YFB1004904. We thank Yichong Leng, Weicong Chen, Yi Zhuang, Hao Sun and Yi Ren for the further development on the work of MASS. We also thank the anonymous reviewers for their valuable comments on our paper.

# References

Ando, R. K. and Zhang, T. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6(Nov):1817–1853, 2005.

Artetxe, M., Labaka, G., Agirre, E., and Cho, K. Unsupervised neural machine translation. CoRR, 2017.

Ayana, Shen, S., Liu, Z., and Sun, M. Neural headline generation with minimum risk training. ArXiv, 2016.

Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. ICLR 2015, 2015a.

Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. 2015b.

Bengio, Y., Ducharme, R., Vincent, P., and Jauvin, C. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003.

Blitzer, J., McDonald, R., and Pereira, F. Domain adaptation with structural correspondence learning. In EMNLP, pp. 120–128. Association for Computational Linguistics, 2006.

Bowman, S. R., Angeli, G., Potts, C., and Manning, C. D. A large annotated corpus for learning natural language inference. In EMNLP, 2015.

Brown, P. F., Desouza, P. V., Mercer, R. L., Pietra, V. J. D., and Lai, J. C. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479, 1992.

Cho, K., van Merrienboer, B., Gülçehre, Ç., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, 2014.

Collobert, R. and Weston, J. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, pp. 160–167. ACM, 2008.

Dai, A. M. and Le, Q. V. Semi-supervised sequence learning. In NIPS, pp. 3079–3087, 2015.

Danescu-Niculescu-Mizil, C. and Lee, L. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In ACL Workshop, 2011.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, 2018.

Fedus, W., Goodfellow, I., and Dai, A. Maskgan: Better text generation via filling in the ______. In ICLR, 2018.

Firat, O., Sankaran, B., Al-Onaizan, Y., Vural, F. T. Y., and Cho, K. Zero-resource translation with multi-lingual neural machine translation. In EMNLP, pp. 268–277, 2016.

Gehring, J., Auli, M., Grangier, D., Yarats, D., and Dauphin, Y. N. Convolutional sequence to sequence learning. In ICML, volume 70, pp. 1243–1252, 2017.

Girshick, R., Donahue, J., Darrell, T., and Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pp. 580–587, 2014.

Graff, D., Kong, J., Chen, K., and Maeda, K. English gigaword. In Linguistic Data Consortium, 2003.

He, K., Zhang, X., Ren, S., and Sun, J.
Deep residual learning for image recognition. In CVPR, pp. 770–778, 2016.

Hassan, H., Aue, A., Chen, C., Chowdhary, V., Clark, J., Federmann, C., Huang, X., Junczys-Dowmunt, M., Lewis, W., Li, M., et al. Achieving human parity on automatic chinese to english news translation. arXiv preprint arXiv:1803.05567, 2018.

He, T., Tan, X., Xia, Y., He, D., Qin, T., Chen, Z., and Liu, T.-Y. Layer-wise coordination between encoder and decoder for neural machine translation. In Advances in Neural Information Processing Systems, pp. 7944–7954, 2018.

Howard, J. and Ruder, S. Universal language model fine-tuning for text classification. In ACL, volume 1, pp. 328–339, 2018.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In ICLR, 2015.

Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urtasun, R., Torralba, A., and Fidler, S. Skip-thought vectors. In NIPS, pp. 3294–3302, 2015.

Lample, G. and Conneau, A. Cross-lingual language model pretraining. CoRR, abs/1901.07291, 2019.

Lample, G., Conneau, A., Denoyer, L., and Ranzato, M. Unsupervised machine translation using monolingual corpora only. CoRR, 2017.

Lample, G., Ott, M., Conneau, A., Denoyer, L., and Ranzato, M. Phrase-based & neural unsupervised machine translation. In EMNLP, pp. 5039–5049, 2018.

Le, Q. and Mikolov, T. Distributed representations of sentences and documents. In ICML, pp. 1188–1196, 2014.

Leng, Y., Tan, X., Qin, T., Li, X.-Y., and Liu, T.-Y. Unsupervised pivot translation for distant languages. In ACL, 2019.

Logeswaran, L. and Lee, H. An efficient framework for learning sentence representations. CoRR, 2018.

McCann, B., Bradbury, J., Xiong, C., and Socher, R. Learned in translation: Contextualized word vectors. In NIPS, pp. 6294–6305, 2017.

Mikolov, T., Karafiát, M., Burget, L., Černocký, J., and Khudanpur, S. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association, 2010.

Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. In NIPS, pp. 3111–3119, 2013.

Ouyang, W., Li, H., Zeng, X., and Wang, X. Learning deep representation with large-scale attributes. In CVPR, pp. 1895–1903, 2015.

Pennington, J., Socher, R., and Manning, C. Glove: Global vectors for word representation. In EMNLP, pp. 1532–1543, 2014.

Peters, M., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. Deep contextualized word representations. In NAACL, volume 1, pp. 2227–2237, 2018.

Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. Improving language understanding by generative pre-training. 2018.
Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. Squad: 100,000+ questions for machine comprehension of text. CoRR, 2016.

Ramachandran, P., Liu, P. J., and Le, Q. V. Unsupervised pretraining for sequence to sequence learning. CoRR, abs/1611.02683, 2016.

Ren, Y., Tan, X., Qin, T., Zhao, S., Zhao, Z., and Liu, T.-Y. Almost unsupervised text to speech and automatic speech recognition. In ICML, 2019.

Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword units. In ACL, volume 1, pp. 1715–1725, 2016.

Shang, L., Lu, Z., and Li, H. Neural responding machine for short-text conversation. In ACL, volume 1, pp. 1577–1586, 2015.

Shen, Y., Tan, X., He, D., Qin, T., and Liu, T.-Y. Dense information flow for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1294–1303, June 2018.

Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A., and Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, pp. 1631–1642, 2013.

Song, K., Tan, X., He, D., Lu, J., Qin, T., and Liu, T.-Y. Double path networks for sequence to sequence learning. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 3064–3074, 2018.

Suzuki, J. and Nagata, M. Cutting-off redundant repeating generations for neural abstractive summarization. In ACL, pp. 291–297, 2017.

Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. In CVPR, pp. 1–9, 2015.

Tan, X., Ren, Y., He, D., Qin, T., and Liu, T.-Y. Multilingual neural machine translation with knowledge distillation. In ICLR, 2019.

Tjong Kim Sang, E. F. and De Meulder, F. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In NAACL, pp. 142–147. Association for Computational Linguistics, 2003.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In NIPS, pp. 6000–6010, 2017.

Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. Extracting and composing robust features with denoising autoencoders. In ICML, pp. 1096–1103. ACM, 2008.

Vinyals, O. and Le, Q. V. A neural conversational model. CoRR, abs/1506.05869, 2015.

Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Glue: A multi-task benchmark and analysis platform for natural language understanding. CoRR, abs/1804.07461, 2018.

Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Kaiser, L., Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and Dean, J. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016.

Yang, Z., Chen, W., Wang, F., and Xu, B. Unsupervised neural machine translation with weight sharing. In ACL, pp. 46–55, 2018.

Yuan, X., Wang, T., Gulcehre, C., Sordoni, A., Bachman, P., Zhang, S., Subramanian, S., and Trischler, A. Machine comprehension by text-to-text neural question generation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pp. 15–25, 2017.

Zhang, J. and Zong, C. Exploiting source-side monolingual data in neural machine translation. In EMNLP, pp. 1535–1545, 2016.

Zoph, B., Yuret, D., May, J., and Knight, K.
Transfer learning for low-resource neural machine translation. In EMNLP, pp. 1568–1575, 2016.
{ "id": "1803.05567" }
1905.02331
Taming Pretrained Transformers for Extreme Multi-label Text Classification
We consider the extreme multi-label text classification (XMC) problem: given an input text, return the most relevant labels from a large label collection. For example, the input text could be a product description on Amazon.com and the labels could be product categories. XMC is an important yet challenging problem in the NLP community. Recently, deep pretrained transformer models have achieved state-of-the-art performance on many NLP tasks including sentence classification, albeit with small label sets. However, naively applying deep transformer models to the XMC problem leads to sub-optimal performance due to the large output space and the label sparsity issue. In this paper, we propose X-Transformer, the first scalable approach to fine-tuning deep transformer models for the XMC problem. The proposed method achieves new state-of-the-art results on four XMC benchmark datasets. In particular, on a Wiki dataset with around 0.5 million labels, the prec@1 of X-Transformer is 77.28%, a substantial improvement over state-of-the-art XMC approaches Parabel (linear) and AttentionXML (neural), which achieve 68.70% and 76.95% precision@1, respectively. We further apply X-Transformer to a product2query dataset from Amazon and gained 10.7% relative improvement on prec@1 over Parabel.
http://arxiv.org/pdf/1905.02331
Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, Inderjit Dhillon
cs.LG, cs.AI, cs.IR, stat.ML
KDD 2020 Applied Data Track
null
cs.LG
20190507
20200623
# Taming Pretrained Transformers for Extreme Multi-label Text Classification

Wei-Cheng Chang (Carnegie Mellon University), Hsiang-Fu Yu (Amazon), Kai Zhong (Amazon), Yiming Yang (Carnegie Mellon University), Inderjit S. Dhillon (Amazon & UT Austin)

ABSTRACT We consider the extreme multi-label text classification (XMC) problem: given an input text, return the most relevant labels from a large label collection. For example, the input text could be a product description on Amazon.com and the labels could be product categories. XMC is an important yet challenging problem in the NLP community. Recently, deep pretrained transformer models have achieved state-of-the-art performance on many NLP tasks including sentence classification, albeit with small label sets. However, naively applying deep transformer models to the XMC problem leads to sub-optimal performance due to the large output space and the label sparsity issue. In this paper, we propose X-Transformer, the first scalable approach to fine-tuning deep transformer models for the XMC problem. The proposed method achieves new state-of-the-art results on four XMC benchmark datasets. In particular, on a Wiki dataset with around 0.5 million labels, the prec@1 of X-Transformer is 77.28%, a substantial improvement over state-of-the-art XMC approaches Parabel (linear) and AttentionXML (neural), which achieve 68.70% and 76.95% precision@1, respectively. We further apply X-Transformer to a product2query dataset from Amazon and gained 10.7% relative improvement on prec@1 over Parabel.

CCS CONCEPTS • Computing methodologies → Machine learning; Natural language processing; • Information systems → Information retrieval.

# KEYWORDS Transformer models, eXtreme Multi-label text classification

1 INTRODUCTION We are interested in the Extreme multi-label text classification (XMC) problem: given an input text instance, return the most relevant labels from an enormous label collection, where the number of labels could be in the millions or more. One can view the XMC problem as learning a score function f : X × Y → R that maps an (instance, label) pair (x, y) to a score f(x, y). The function f should be optimized such that highly relevant (x, y) pairs have high scores, whereas the irrelevant pairs have low scores. Many real-world applications are in this form. For example, in E-commerce dynamic search advertising, x represents an item and y represents a bid query on the market [20, 21]. In open-domain question answering, x represents a question and y represents an evidence passage containing the answer [4, 11]. In the PASCAL Large-Scale Hierarchical Text Classification (LSHTC) challenge, x represents an article and y represents a category of the Wikipedia hierarchical taxonomy [17].

XMC is essentially a text classification problem on an industrial scale, which is one of the most important and fundamental topics in the machine learning and natural language processing (NLP) communities. Recently, deep pretrained Transformers, e.g., BERT [5], along with its many successors such as XLNet [30] and RoBERTa [13], have led to state-of-the-art performance on many tasks, such as question answering, part-of-speech tagging, information retrieval, and sentence classification with very few labels. Deep pretrained Transformer models induce powerful token-level and sentence-level embeddings that can be rapidly fine-tuned on many downstream NLP problems by adding a task-specific lightweight linear layer on top of the transformer models.
ACM Reference Format: Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, and Inderjit S. Dhillon. 2020. Taming Pretrained Transformers for Extreme Multi-label Text Classification. In Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '20), August 23–27, 2020, Virtual Event, CA, USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3394486.3403368

However, how to successfully apply Transformer models to XMC problems remains an open challenge, primarily due to the extremely large output space and severe label sparsity issues. As a concrete example, Table 1 compares the model size (in terms of the number of model parameters) and the GPU memory usage when applying a 24-layer XLNet model to a binary classification problem (e.g., the MNLI dataset of GLUE [27]) versus its application to an XMC problem with 1 million labels. Note that the classifiers for the MNLI problem and the XMC problem have model sizes of 2K and 1,025M parameters, respectively. This means that the latter is a much harder problem than the former from the model optimization point of view. Additionally, in attempting to solve the XMC problem, we run out of GPU memory even for a single-example mini-batch update. Table 1 gives the details of the GPU memory usage in the training stages of one forward pass, one backward pass and one optimization step, respectively.

XLNet-large model size (# params), encoder / classifier / total: MNLI: 361 M / 2 K / 361 M; XMC (1M): 361 M / 1,025 M / 1,386 M.
GPU memory at (batch size, sequence length) = (1, 128), load model / +forward / +backward / +optimizer step: MNLI: 2169 MB / 2609 MB / 3809 MB / 6571 MB; XMC (1M): 6077 MB / 6537 MB / OOM / OOM.

Table 1: On the left are the model sizes (numbers of parameters) when applying the XLNet-large model to the MNLI problem vs. the XMC (1M) problem; on the right is the GPU memory usage (in megabytes) in solving the two problems, respectively. The results were obtained on a recent Nvidia 2080Ti GPU with 12GB memory. OOM stands for out-of-memory.

In addition to the computational challenges, the large output space in XMC is exacerbated by a severe label sparsity issue. The left part of Figure 1 illustrates the "long-tailed" label distribution problem in the Wiki-500K data set [25]. Only 2% of the labels have more than 100 training instances, while the remaining 98% are long-tail labels with much fewer training instances. How to successfully fine-tune Transformer models with such sparsely labeled data is a tough question that has not been well-studied so far, to the best of our knowledge.

[Figure 1 omitted: two log-scale panels plotting the number of training instances per label (label ID sorted by frequency) and per cluster (cluster ID sorted by frequency) for no label cluster, pifa-tfidf, pifa-neural and text-emb.]

Figure 1: On the left, Wiki-500K shows a long-tail distribution of labels. Only 2.1% of the labels have more than 100 training instances, as indicated by the cyan blue regime. On the right is the cluster distribution after our semantic label indexing based on different label representations; 99.4% of the clusters have more than 100 training instances, which mitigates the data sparsity issue for fine-tuning of Transformer models.
Instead of fine-tuning deep Transformer models and dealing with the bottleneck classifier layer, an alternative is to use a more economical transfer learning paradigm, as studied in the context of word2vec [15], ELMo [19], and GPT [22]. For instance, ELMo uses a bi-directional LSTM model pretrained on large unlabeled text data to obtain contextualized word embeddings. When applying ELMo to a downstream task, these word embeddings can be used as input without adaptation. This is equivalent to freezing the ELMo encoder and fine-tuning the downstream task-specific model on top of ELMo, which is much more efficient in terms of memory as well as computation. However, such a benefit comes at the price of limiting the model capacity from adapting the encoder, as we will see in the experimental results in Section 4.

In this paper, we propose X-Transformer, a new approach that overcomes the aforementioned issues, with successful fine-tuning of deep Transformer models for the XMC problem. X-Transformer consists of a Semantic Label Indexing component, a Deep Neural Matching component, and an Ensemble Ranking component. First, Semantic Label Indexing (SLI) decomposes the original intractable XMC problem into a set of feasible sub-problems with much smaller output spaces via label clustering, which mitigates the label sparsity issue as shown in the right part of Figure 1. Second, the Deep Neural Matching component fine-tunes a Transformer model for each of the SLI-induced XMC sub-problems, resulting in a better mapping from the input text to the set of label clusters. Finally, the Ensemble Ranking component is trained conditionally on the instance-to-cluster assignment and the neural embedding from the Transformer, and is used to assemble scores derived from various SLI-induced sub-problems for further performance improvement.

In our experiments, the proposed X-Transformer achieves new state-of-the-art results on four XMC benchmarks and leads to improvements on two real-world XMC applications. On a Wiki dataset with half a million labels, the precision@1 of X-Transformer reaches 77.28%, a substantial improvement over the well-established hierarchical label tree approach Parabel [20] (i.e., 68.70%) and the competing deep learning method AttentionXML [32] (i.e., 76.95%). Furthermore, X-Transformer also demonstrates great impact on the scalability of deep Transformer models in real-world large applications. In our application of X-Transformer to the Amazon Product2Query problem, which can be formulated as XMC, X-Transformer significantly outperforms Parabel as well. The dataset, experiment code, and models are available at https://github.com/OctoberChang/X-Transformer.

# 2 RELATED WORK AND BACKGROUND

2.1 Extreme Multi-label Classification

Sparse Linear Models. To overcome computational issues, most existing XMC algorithms use sparse TF-IDF features (or slight variants), and leverage different partitioning techniques on the label space to reduce complexity.
For example, sparse linear one-vs-all (OVA) methods such as DiSMEC [1], ProXML [2] and PPDSparse [31] explore parallelism to speed up the algorithm and reduce the model size by truncating model weights to encourage sparsity. OVA approaches are also widely used as building blocks for many other approaches; for example, in Parabel [20] and SLICE [7], linear OVA classifiers with smaller output domains are used.

The efficiency and scalability of sparse linear models can be further improved by incorporating different partitioning techniques on the label spaces. For instance, Parabel [20] partitions the labels through a balanced 2-means label tree using label features constructed from the instances. Recently, several approaches have been proposed to improve Parabel. Bonsai [9] relaxes two main constraints in Parabel: 1) allowing multi-way instead of binary partitionings of the label set at each intermediate node, and 2) removing strict balancing constraints on the partitions. SLICE [7] considers building an approximate nearest neighbor (ANN) graph as an indexing structure over the labels. For a given instance, the relevant labels can be found quickly from the nearest neighbors of the instance via the ANN graph.

Deep Learning Approaches. Instead of using handcrafted TF-IDF features, which are hard to optimize for different downstream XMC problems, deep learning approaches employ various neural network architectures to extract semantic embeddings of the input text. XML-CNN [12] employs one-dimensional convolutional neural networks along both the sequence length and the word embedding dimension for representing text input. As a follow-up, SLICE considers dense embeddings from the supervised pre-trained XML-CNN models as the input to its hierarchical linear models. More recently, AttentionXML [32] uses BiLSTMs and label-aware attention as the scoring function, and performs warm-up training of the models with hierarchical label trees. In addition, AttentionXML considers various negative sampling strategies on the label space to avoid back-propagating through the entire bottleneck classifier layer.

2.2 Transfer Learning Approaches in NLP

Recently, the NLP community has witnessed a dramatic paradigm shift towards the "pre-training then fine-tuning" framework. One of the pioneering works is BERT [5], whose pre-training objectives are masked token prediction and next sentence prediction tasks. After pre-training on large-scale unsupervised corpora such as Wikipedia and BookCorpus, the Transformer model demonstrates vast improvement over the existing state of the art when fine-tuned on many NLP tasks such as the GLUE benchmark [27], named entity recognition, and question answering. More advanced variants of the pre-trained Transformer models include XLNet [30] and RoBERTa [13]. XLNet considers permutation language modeling as the pre-training objective and two-stream self-attention for target-aware token prediction. It is worth noting that the contextualized token embeddings extracted from XLNet also demonstrate competitive performance when fed into a task-specific downstream model on large-scale retrieval problems. RoBERTa improves upon BERT by using more robust optimization with large-batch-size updates, and by pre-training the model for longer until it truly converges.

However, transferring the success of these pre-trained Transformer models on GLUE text classification to the XMC problem is non-trivial, as we illustrated in Table 1.
Before the emergence of BERT-type end-to-end fine-tuning, the canonical way of transfer learning in NLP perhaps comes from the well-known Word2Vec [15] or GloVe [18] papers. Word2vec is a shallow two-layer neural network that is trained to reconstruct the linguistic context of words. GloVe considers a matrix factorization objective to reconstruct the global word-to-word co-occurrence in the corpus. A critical downside of Word2vec and GloVe is that the pre-trained word embeddings are not contextualized depending on the local surrounding words. ELMo [19] and GPT2 [22] instead present contextualized word embeddings by using large BiLSTM or Transformer models. After the models are pre-trained, transfer learning can be easily carried out by feeding these extracted word embeddings as input to the downstream task-specific models. This is more efficient compared to the BERT-like end-to-end additional fine-tuning of the encoder, but comes at the expense of losing model expressiveness. In the experimental results section, we show that using fixed word embeddings from universal pre-trained models such as BERT is not powerful enough for XMC problems.

2.3 Amazon Applications

Many challenging problems at Amazon amount to finding relevant results from an enormous output space of potential candidates: for example, suggesting keywords to advertisers starting new campaigns on Amazon, or predicting the next queries a customer will type based on the previous queries he/she typed. Here we discuss the keyword recommendation system for Amazon Sponsored Products, as illustrated in Fig. 2, and how it can be formulated as an XMC problem.

Keyword recommendation system. Keyword Recommendation Systems provide keyword suggestions for advertisers to create campaigns. In order to maximize the return on investment for the advertisers, the suggested keywords should be highly relevant to their products so that the suggestions can lead to conversion. An XMC model, when trained on a product-to-query dataset such as product-query customer purchase records, can suggest queries that are relevant to any given product by utilizing product information, like title, description, brand, etc.

[Figure 2 omitted: screenshot of the Amazon Sponsored Products interface showing suggested keywords with match types and keyword bids.]

Figure 2: Keyword recommendation system.

# 3 PROPOSED METHOD: X-TRANSFORMER

3.1 Problem Formulation

Motivations. Given a training set D = {(x_i, y_i) | x_i ∈ X, y_i ∈ {0, 1}^L, i = 1, ..., N}, extreme multi-label classification aims to learn a scoring function f that maps an input (or instance) x_i and a label l to a score f(x_i, l) ∈ R. The function f should be optimized such that the score is high when y_il = 1 (i.e., label l is relevant to instance x_i) and the score is low when y_il = 0. A simple one-versus-all approach realizes the scoring function f as f(x, l) = w_l^T φ(x), where φ(x) represents an encoding and W = [w_1, ..., w_L]^T ∈ R^{L×d} is the classifier bottleneck layer. For convenience, we further define the top-b prediction operator as f_b(x) = Top-b([f(x, 1), ..., f(x, L)]) ⊂ {1, ..., L}, where f_b(x) is an index set containing the top-b predicted labels.
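As a concrete illustration of this notation, the snippet below sketches a dense one-versus-all scorer and the top-b operator with NumPy. It is a toy illustration only: in practice W is far too large to instantiate and score densely, which is exactly the bottleneck X-Transformer is designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, b = 10_000, 256, 5          # toy sizes: labels, embedding dim, top-b

W = rng.normal(size=(L, d))       # classifier bottleneck layer, one row w_l per label
phi_x = rng.normal(size=d)        # encoding phi(x) of one instance

scores = W @ phi_x                # f(x, l) = w_l^T phi(x) for every label l
top_b = np.argpartition(-scores, b)[:b]          # index set f_b(x): top-b labels
top_b = top_b[np.argsort(-scores[top_b])]        # sorted by score, highest first
print(top_b, scores[top_b])
```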
As we pointed out in Table 1, it is not only very difficult to fine-tune the Transformer encoders φ_T(x; θ) together with the intractable classifier layer W, but also extremely slow to compute the top-K predicted labels.

[Figure 3 omitted: schematic of the X-Transformer pipeline, in which an instance is matched to label clusters and the labels inside the predicted clusters are then re-ranked.]

Figure 3: The proposed X-Transformer framework. First, Semantic Label Indexing reduces the large output space. Transformers are then fine-tuned on the XMC sub-problem that maps instances to label clusters. Finally, linear rankers are trained conditionally on the clusters and the Transformer's output in order to re-rank the labels within the predicted clusters.

High-level Sketch. To this end, we propose X-Transformer as a practical solution to fine-tune deep Transformer models on XMC problems. Figure 3 summarizes our proposed framework. In a nutshell, X-Transformer decomposes the intractable XMC problem into a feasible sub-problem with a smaller output space, which is induced from semantic label indexing, which clusters the labels. We refer to this sub-problem as the neural matcher, of the following form:

g(x, k) = w_k^T φ_T(x),  k = 1, ..., K,   (1)

where K is the number of clusters, which is significantly smaller than the size O(L) of the original intractable XMC problem. Finally, X-Transformer currently uses a linear ranker that conditionally depends on the embedding of the transformer models and its top-b predicted clusters g_b(x):

f(x, l) = σ(g(x, c_l), h(x, l)) if c_l ∈ g_b(x), and −∞ otherwise.   (2)

Here c_l ∈ {1, ..., K} represents the cluster index of label l, g(x, c_l) is the neural matcher realized by deep pre-trained Transformers, h(x, l) is the linear ranker, and σ(·) is a non-linear activation function to combine the final scores from g and h. We now further introduce each of these three components in detail.

3.2 Semantic Label Indexing

Inducing latent clusters with semantic meaning brings several advantages to our framework. We can perform a clustering of labels that can be represented by a label-to-cluster assignment matrix C ∈ {0, 1}^{L×K}, where c_lk = 1 means label l belongs to cluster k. The number of clusters K is typically set to be much smaller than the original label space L. Deep Transformer models are fine-tuned on the induced XMC sub-problem where the output space is of size K, which significantly reduces the computational cost and avoids the label sparsity issue in Figure 1. Furthermore, the label clustering also plays a crucial role in the linear ranker h(x, l). For example, only labels within a cluster are used to construct negative instances for training the ranker. In prediction, ranking is only performed for labels within a few clusters predicted by our deep Transformer models.

Given a label representation, we cluster the L labels hierarchically to form a hierarchical label tree with K leaf nodes [7, 9, 20, 32]. For simplicity, we consider binary balanced hierarchical trees [14, 20] as the default setting. Due to the lack of a direct and informative representation of the labels, the indexing system for XMC may be noisy. Fortunately, the instances in XMC are typically very informative. Therefore, we can utilize the rich information of the instances to build a strong matching system as well as a strong ranker to compensate for the indexing system.

Label embedding via label text. Given text information about labels, such as a short description of categories in the Wikipedia dataset or search queries on the Amazon shopping website, we can use this short text to represent the labels. In this work, we use a pretrained XLNet [19] to represent the words in the label. The label embedding is the mean pooling of all XLNet word embeddings in the label text. Specifically, the label embedding of label l is (1 / |text(l)|) Σ_{w ∈ text(l)} φ_xlnet(w), where φ_xlnet(w) is the hidden embedding of token w in label l.

Label embedding via embedding of positive instances. The short text of labels may not contain sufficient information and is often ambiguous and noisy for some XMC datasets. Therefore, we can derive a label representation from the embeddings of its positive instances. Specifically, the label embedding of label l is

V_pifa-tfidf(l) = v_l / ||v_l||,  with v_l = Σ_{i: y_il = 1} φ_tfidf(x_i),  l = 1, ..., L,
V_pifa-neural(l) = v_l / ||v_l||,  with v_l = Σ_{i: y_il = 1} φ_xlnet(x_i),  l = 1, ..., L.

We refer to this type of label embedding as Positive Instance Feature Aggregation (PIFA), which is used in recent state-of-the-art XMC methods [7, 9, 20, 32]. Note that X-Transformer is not limited to the above-mentioned label representations; indeed, in applications where labels encode richer meta information such as a graph, we can use label representations derived from graph clustering and graph convolution.
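To make the indexing step concrete, here is a compact illustrative sketch (not the released X-Transformer code) that builds PIFA-style label embeddings from an instance-feature matrix and then forms label clusters by recursive 2-means; the strict balanced-partition details of the actual hierarchical label tree are omitted and left as an assumption.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def pifa_label_embeddings(Y, X):
    """Y: (N, L) sparse binary instance-to-label matrix; X: (N, d) instance features.
    Returns L x d label embeddings: sum of positive-instance features, L2-normalized."""
    V = Y.T @ X                                  # v_l = sum_{i: y_il = 1} phi(x_i)
    V = np.asarray(V.todense()) if sp.issparse(V) else np.asarray(V)
    return normalize(V)                          # row-wise L2 normalization

def recursive_2means(V, max_leaf_size, ids=None, clusters=None):
    """Split the label set with 2-means until each leaf holds <= max_leaf_size labels."""
    if ids is None:
        ids, clusters = np.arange(V.shape[0]), []
    if len(ids) <= max_leaf_size:
        clusters.append(ids)
        return clusters
    part = KMeans(n_clusters=2, n_init=4, random_state=0).fit_predict(V[ids])
    for side in (0, 1):
        recursive_2means(V, max_leaf_size, ids[part == side], clusters)
    return clusters

def cluster_assignment_matrix(clusters, L):
    """Label-to-cluster indicator C in {0,1}^{L x K}."""
    rows = np.concatenate(clusters)
    cols = np.concatenate([np.full(len(c), k) for k, c in enumerate(clusters)])
    return sp.csr_matrix((np.ones_like(rows), (rows, cols)), shape=(L, len(clusters)))
```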
3.3 Deep Transformer as Neural Matcher

After Semantic Label Indexing (SLI), the original intractable XMC problem morphs into a feasible XMC sub-problem with a much smaller output space of size K. See Table 2 for the exact K that we used for each XMC data set. Specifically, the deep Transformer model now aims to map each text instance to its assigned relevant clusters. The induced instance-to-cluster assignment matrix is M = YC = [m_1, ..., m_i, ..., m_N]^T ∈ {0, 1}^{N×K}, where Y ∈ R^{N×L} is the original instance-to-label assignment matrix and C ∈ R^{L×K} is the label-to-cluster assignment matrix provided by the SLI stage. The goal now becomes fine-tuning deep Transformer models g(x, k; W, θ) on {(x_i, m_i) | i = 1, ..., N} such that

min_{W, θ} (1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} max(0, 1 − M̃_ik g(x_i, k; W, θ))²,   (3)
s.t. g(x, k; W, θ) = w_k^T φ_transformer(x),

where M̃_ik = 2 M_ik − 1 ∈ {−1, 1}, W = [w_1, ..., w_K]^T ∈ R^{K×d}, and φ_transformer(x) ∈ R^d is the embedding from the Transformer. We use the squared hinge loss in the matching objective (3) as it has shown better ranking performance [31]. Next, we discuss engineering optimizations and implementation details that considerably improve training efficiency and model performance.

Pretrained Transformers. We consider three state-of-the-art pre-trained Transformer-large-cased models (i.e., 24 layers with case-sensitive vocabulary) to fine-tune, namely BERT [5], XLNet [30], and RoBERTa [13]. The instance embedding φ(x) is the "[CLS]"-like hidden state from the last layer of BERT, RoBERTa or XLNet. Computationally speaking, BERT and RoBERTa are similar, while XLNet is nearly 1.8 times slower. In terms of performance on XMC tasks, we found RoBERTa and XLNet to be slightly better than BERT, but the gap is not as significant as in the GLUE benchmark. More concrete analysis is available in Section 4.

It is possible to use Automatic Mixed Precision (AMP) between Float32 and Float16 for model fine-tuning, which can considerably reduce the model's GPU memory usage and training time. However, we used Float32 for all the experiments, as our initial trials of training Transformers in AMP mode often led to unstable numerical results on the large-scale XMC dataset Wiki-500K.
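The cluster-level squared hinge objective in Eq. (3) is simple to express in code. Below is a minimal hedged sketch in PyTorch, with W modelled as a linear head on top of the instance embedding; it is illustrative only and not the authors' training code, and the averaging constant is treated as a detail that does not change the optimum.

```python
import torch

def squared_hinge_matching_loss(scores, M):
    """scores: (batch, K) cluster scores g(x_i, k); M: (batch, K) binary
    instance-to-cluster assignments. Computes max(0, 1 - M~_ik * score)^2, averaged."""
    M_tilde = 2.0 * M - 1.0                      # {0, 1} -> {-1, +1}
    margins = torch.clamp(1.0 - M_tilde * scores, min=0.0)
    return (margins ** 2).mean()

# Toy usage: a linear matching head on top of a d-dimensional instance embedding.
d, K, batch = 1024, 64, 8
head = torch.nn.Linear(d, K, bias=False)         # rows of W, i.e., the vectors w_k
phi = torch.randn(batch, d)                      # phi_transformer(x) for a mini-batch
M = torch.randint(0, 2, (batch, K)).float()      # toy induced assignments M = YC
loss = squared_hinge_matching_loss(head(phi), M)
loss.backward()
```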
Input Sequence Length. The time and space complexity of the Transformer scales quadratically with the input sequence length, i.e., O(T²) [26], where T = len(x) is the number of tokenized sub-words in the instance x. Using a smaller T not only reduces the GPU memory usage, which supports using a larger batch size, but also increases the training speed. For example, BERT first pre-trains on inputs of sequence length 128 for 90% of the optimization, and the remaining 10% of optimization steps on inputs of sequence length 512 [5]. Interestingly, we observe that models fine-tuned with sequence length 128 vs. sequence length 512 do not differ significantly in downstream XMC ranking performance. Thus, we fix the input sequence length to T = 128 for model fine-tuning, which significantly speeds up training. It would be interesting to see if we can bootstrap training the Transformer models from shorter sequence lengths and ramp up to larger sequence lengths (e.g., 32, 64, 128, 256), but we leave that as future work.

[Figure 4 omitted: toy illustration of the matrices Y, C and M = YC for N = 6 instances, L = 20 labels and K = 4 clusters.]

Figure 4: Training rankers with the Teacher Forcing Negatives (TFN) strategy. For illustration, we have N = 6 instances, L = 20 labels, K = 4 label clusters, and M ∈ {0, 1}^{6×4} denotes the instance-to-cluster assignment matrix. For example, Cluster 1 with the orange color contains the first 5 labels. The nonzeros of the first column of M correspond to {x1, x2, x6}, which are instances with at least one positive label contained in Cluster 1. For each label in the first cluster, the ranker using Teacher Forcing Negatives (TFN) only considers these three instances. The Matcher-aware Negatives (MAN) strategy is introduced in Section 3.4 to further add improved hard negatives to enhance the TFN strategy.

Bootstrapping Label Clustering and Ranking. After fine-tuning a deep Transformer model, we have a powerful instance representation φ_transformer(x) that can be used to bootstrap semantic label clustering and ranking. For label clustering, the embedding of label l can be constructed by aggregating the embeddings of its positive instances. For ranking, the fine-tuned Transformer embedding can be concatenated with the sparse TF-IDF vector for better modeling power. See details in the ablation study, Table 5.

3.4 Ranking

After the matching step, a small subset of label clusters is retrieved. The goal of the ranker is to model the relevance between the instance and the labels from the retrieved clusters. Formally, given a label l and an instance x, we use a linear one-vs-all (OVA) classifier to parameterize the ranker h(x, l) = w_l^T φ(x) and train it with a binary loss. For each label, naively estimating the weights w_l on all instances {(x_i, Y_i,l)}_{i=1}^N takes O(N), which is too expensive. Instead, we consider two sampling strategies that only include hard negative instances to reduce the computational complexity: Teacher Forcing Negatives (TFN) and Matcher-aware Negatives (MAN).

Teacher Forcing Negatives (TFN). For each label l, we only include a subset of instances induced by the instance-to-cluster assignment matrix M = YC. In particular, in addition to the positive instances corresponding to the l-th label, we only include instances whose labels belong to the same cluster as the l-th label, i.e., the instances i with M_{i,c_l} = 1. In Figure 4, we illustrate the TFN strategy with a toy example.
As the first five labels belong to Cluster 1, and only {x1, x2, x6} contain a positive label within this cluster, we only consider this subset of instances to train a binary classifier for each of the first five labels. Matcher-aware Negatives (MAN). The Teacher Forcing strat- egy only includes negative instances which are hard from the “teacher”, i.e., the ground truth instance-to-clustering assignment matrix M used to train our neural matcher. However, M is indepen- dent from the performance of our neural matcher. Thus, training ranker with the TFN strategy alone might introduce an exposure bias issue, i.e., training-inference discrepancy. Instead, we also con- sider including matcher-aware hard negatives for each label. In particular, we can use the instance-to-cluster prediction matrix ˆM ∈ {0, 1}N ×K from our neural matcher, where the nonzeros of the i-th row of ˆM correspond to the top-b predicted clusters from дb (xi ). In practice, we observe that a combination of TFN and MAN yields the best performance, i.e., using M′ = YC + ˆM to include hard negatives for each label. See Table 5 for a detailed Ablation study. For the ranker input representation, we not only leverage the TF-IDF features ϕtf-idf(x), but also exploit the neural embeddings ϕneural(x) from either the pre-trained or fine-tuned Transformer model. After the ranker is trained, the final ranking scores are computed via (2). We can further ensemble the scores from different X-Transformer models, which are trained on different semantic- aware label clusters or different pre-trained Transformer models such as BERT, RoBERTa and XLNet. 4 EMPIRICAL RESULTS The experiment code, including datasets and fine-tuned models are publicly available. 1 # 4.1 Datasets and Preprocessing XMC Benchmark Data. We consider four multi-label text clas- sification data sets used in AttentionXML [32] for which we had access to the raw text representation, namely Eurlex-4K, Wiki10- 31K, AmazonCat-13K and Wiki-500K. Summary statistics of the data sets are given in Table 2. We follow the training and test split of [32] and set aside 10% of the training instances as the validation set for hyperparameter tuning. Amazon Applications. We consider an internal Amazon data set, namely Prod2Query-1M, which consists of 14 million instances (products) and 1 million labels (queries) where the label is positive if a product is clicked at least once as a result of a search query. We divide the data set into 12.5 million training samples, 0.8 million validation samples and 0.7 million testing samples. # 4.2 Algorithms and Hyperparameters Comparing Methods. We compare our proposed X-Transformer method to the most representative and state-of-the-art XMC meth- ods including the embedding-based AnnexML [24]; one-versus-all DiSMEC [1]; instance tree based PfastreXML [8]; label tree based Parabel [20], eXtremeText [29], Bonsai [9]; and deep learning based XML-CNN [12], AttentionXML [32] methods. The results of all these baseline methods are obtained from [32, Table 3]. For evaluation with other XMC approaches that have not released their code or are difficult to reproduce, we have a detailed comparison in Table 6. Evaluation Metrics. We evaluate all methods with example- based ranking measures including Precision@k (k = 1, 3, 5) and Recall@k (k = 1, 3, 5), which are widely used in the XMC litera- ture [3, 8, 20, 21, 23]. Hyperparameters. For X-Transformer, all hyperparameters are chosen from the held-out validation set. 
The number of clusters 1https://github.com/OctoberChang/X-Transformer are listed in Table 2, which are consistent with the Parabel setting for fair comparison. We consider the 24 layers cased models of BERT [5], RoBERTa [13], and XLNet [30] using the Pytorch imple- mentation from HuggingFace Transformers [28]2. For fine-tuning the Transformer models, we set the input sequence length to be 128 for efficiency, and the batch size per GPU to be 16 along with gradient accumulation step of 4, and use 4 GPUs per model. This together amounts to a batch size of 256 in total. We use Adam [10] with linear warmup scheduling as the optimizer where the learn- ing rate is chosen from {4, 5, 6, 8} × 10−5. Models are trained until convergence, which takes 1k, 1.4k, 20k, 50k optimization steps for Eurlex-4K, Wiki10-31K, AmazonCat-13K, Wiki-500K, respectively. 4.3 Results on Public XMC Benchmark Data Table 3 compares the proposed X-Transformer with the most repre- sentative SOTA XMC methods on four benchmark datasets. Follow- ing previous XMC works, we focus on top predictions by presenting Precision@k, where k = 1, 3, 5. The proposed X-Transformer outperforms all XMC methods, ex- cept being slightly worse than AttentionXML in terms of P@3 and P@5 on the Wiki-500K dataset. We also compare X-Transformer against linear baselines using Parabel model with three different input representations: (1) ϕpre-xlnet denotes pretrained XLNet em- beddings (2) ϕtfidf denotes TF-IDF embeddings (3) ϕfnt-xlnet ⊕ ϕtfidf denotes finetuned XLNet embeddings concatenated with TF-IDF embeeddings. We clearly see that the performance of baseline (1) is significantly worse. This suggests that the ELMo-style transfer learning, though efficient, is not powerful to achieve good perfor- mance for XMC problems. The performance of baseline (2) is similar to that of Parabel, while baseline (3) further improves performance due to the use of fine-tuned XLNet embeddings. AttentionXML [32] is a very recent deep learning method that uses BiLSTM and label-aware attention layer to model the scoring function. They also leverage hierarchical label trees to recursively warm-start the models and use hard negative sampling techniques to avoid using the entire classifier bottleneck layer. Some of the techniques in AttentionXML are complementary to our proposed X- Transformer, and it would be interesting to see how X-Transformer can be improved from those techniques. 4.4 Results on Amazon Applications. Recall that the Amazon data consists of 12 million products and 1 million queries along with product-query relevance. We treat queries as output labels and product title as input. We use the default Parabel method (using TFIDF features) as the baseline method and show X-Transformer’s relative improvement of precision and recall over the baseline in Table 4. 4.5 Ablation Study We carefully conduct an ablation study of X-Transformer as shown in Table 5. We analyze the X-Transformer framework in terms of its four components: indexing, matching, ranker input representation, and training negative-sampling training algorithm. 
The configu- ration Index 9 represents the final best configuration as reported 2https://github.com/huggingface/transformers Dataset nt r n 15,449 14,146 AmazonCat-13K 1,186,239 1,779,881 Eurlex-4K Wiki10-31K Wiki-500K nt st 3,865 6,616 306,782 769,421 |Dtrn| 19,166,707 29,603,208 250,940,894 1,463,197,965 |Dtrn| 4,741,799 13,513,133 64,755,034 632,463,513 L 3,956 30,938 13,330 501,070 ¯L 5.30 18.64 5.04 4.75 ¯n 20.79 8.52 448.57 16.86 K 64 512 256 8192 Table 2: Data Statistics. nt r n, nt st refer to the number of instances in the training and test sets, respectively. |Dtrn|, |Dtst| refer to the number of word tokens in the training and test corpus, respectively. L is the number of labels, ¯L the average number of labels per instance, ¯n the average number of instances per label, and K is the number of clusters. The four benchmark datasets are the same as AttentionXML [32] for fair comparison. Methods Prec@1 Prec@3 Prec@5 Methods Prec@1 Prec@3 Prec@5 Eurlex-4K Wiki10-31K AnnexML [24] DiSMEC [1] PfastreXML [8] Parabel [20] eXtremeText [29] Bonsai [9] MLC2seq [16] XML-CNN [12] AttentionXML [32] ϕpre-xlnet + Parabel ϕtfidf + Parabel ϕfnt-xlnet ⊕ ϕtfidf + Parabel X-Transformer 79.66 83.21 73.14 82.12 79.17 82.30 62.77 75.32 87.12 33.53 81.71 84.09 87.22 64.94 70.39 60.16 68.91 66.80 69.55 59.06 60.14 73.99 26.71 69.15 71.50 75.12 53.52 58.73 50.54 57.89 56.09 58.35 51.32 49.21 61.92 22.15 58.11 60.12 62.90 AnnexML [24] DiSMEC [1] PfastreXML [8] Parabel [20] eXtremeText [29] Bonsai [9] MLC2seq [16] XML-CNN [12] AttentionXML [32] ϕpre-xlnet + Parabel ϕtfidf + Parabel ϕfnt-xlnet ⊕ ϕtfidf + Parabel X-Transformer 86.46 84.13 83.57 84.19 83.66 84.52 80.79 81.41 87.47 81.77 84.27 87.35 88.51 74.28 74.72 68.61 72.46 73.28 73.76 58.59 66.23 78.48 64.86 73.20 78.24 78.71 64.20 65.94 59.10 63.37 64.51 64.69 54.66 56.11 69.37 54.49 63.66 68.62 69.62 AmazonCat-13K Wiki-500K AnnexML [24] DiSMEC [1] PfastreXML [8] Parabel [20] eXtremeText [29] Bonsai [9] MLC2seq [16] XML-CNN [12] AttentionXML [32] ϕpre-xlnet + Parabel ϕtfidf + Parabel ϕfnt-xlnet ⊕ ϕtfidf + Parabel X-Transformer 93.54 93.81 91.75 93.02 92.50 92.98 94.26 93.26 95.92 80.96 92.81 95.33 96.70 78.36 79.08 77.97 79.14 78.12 79.13 69.45 77.06 82.41 63.92 78.99 82.77 83.85 63.30 64.06 63.68 64.51 63.51 64.46 57.55 61.40 67.31 50.72 64.31 67.66 68.58 AnnexML [24] DiSMEC [1] PfastreXML [8] Parabel [20] eXtremeText [29] Bonsai [9] MLC2seq [16] XML-CNN [12] AttentionXML [32] ϕpre-xlnet + Parabel ϕtfidf + Parabel ϕfnt-xlnet ⊕ ϕtfidf + Parabel X-Transformer 64.22 70.21 56.25 68.70 65.17 69.26 - - 76.95 31.83 68.75 75.57 77.28 43.15 50.57 37.32 49.57 46.32 49.80 - - 58.42 20.24 49.54 55.12 57.47 32.79 39.68 28.16 38.64 36.15 38.83 - - 46.14 15.76 38.92 43.31 45.31 Table 3: Comparing X-Transformer against state-of-the-art XMC methods on Eurlex-4K, Wiki10-31K, AmazonCat-13K, and Wiki-500K. The baselines’ results are from [32, Table 3]. Note that MLC2seq and XML-CNN are not scalable on Wiki-500K. We also present linear baselines (Parabel) with three input representations. Specifically, ϕpre-xlnet denotes pre-trained XLNet embeddings, ϕtfidf denotes TF-IDF embeddings, ϕfnt-xlnet ⊕ ϕtfidf denotes fine-tuned XLNet embeddings concatenate with TF-IDF embeddings. Methods Precision @1 @5 @10 Recall @1 @5 @10 X-Transformer 10.7% 7.4% 6.6% 12.0% 4.9% 2.8% Table 4: Relative improvement over Parabel on the Prod2Query data set. in Table 3. There are four takeaway messages from this ablation study, and we describe them in the following four paragraphs. 
Ranker Representation and Training. Config. ID 0, 1, 2 shows the effect of input representation and training strategy for the rank- ing. The benefit of using instance embedding from fine-tuned trans- formers can be seen from config. ID 0 to 1. In addition, from ID 1 to 2, we observe that using Teacher Forcing Negatives (TFN) is not enough for training the ranker, as it could suffer from the exposure Dataset Config. ID indexing X-Transformer Ablation Configuration matching ranker input negative-sampling P@1 P@3 Evaluation Metric P@5 R@1 R@3 R@5 Eurlex-4K Wiki-500K 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 pifa-tfidf pifa-tfidf pifa-tfidf pifa-tfidf pifa-tfidf pifa-neural text-emb all pifa-neural all pifa-tfidf pifa-tfidf pifa-tfidf pifa-tfidf pifa-tfidf pifa-neural text-emb all pifa-neural all BERT BERT BERT RoBERTa XLNet XLNet XLNet XLNet all all BERT BERT BERT RoBERTa XLNet XLNet XLNet XLNet all all ϕtfidf(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) ϕtfidf(x) ⊕ ϕneural(x) TFN TFN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN TFN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN TFN + MAN 83.93 85.02 85.51 85.33 85.07 84.81 85.25 86.55 85.92 87.22 69.52 71.90 74.68 75.40 75.45 76.34 74.12 75.85 77.44 77.28 70.59 71.83 72.95 72.89 72.75 72.39 72.76 74.24 73.43 75.12 49.87 51.58 53.64 54.32 54.50 55.50 52.85 56.08 56.84 57.47 58.69 59.87 60.83 60.79 60.69 60.38 60.20 61.96 61.53 62.90 38.71 40.10 41.50 42.06 42.24 43.04 40.53 44.24 44.37 45.31 17.05 17.21 17.32 17.32 17.25 17.19 17.29 17.54 17.40 17.69 22.30 23.27 24.56 24.85 24.81 25.15 24.18 24.80 25.61 25.48 42.08 42.79 43.45 43.39 43.29 42.98 43.25 44.16 43.69 44.73 40.62 42.14 44.26 44.93 45.00 45.88 43.30 46.36 47.18 47.82 57.14 58.30 59.21 59.16 59.01 58.70 58.54 60.24 59.86 61.17 48.65 50.42 52.50 53.30 53.44 54.53 50.98 56.35 56.55 57.95 Table 5: Ablation study of X-Transformer on Eurlex-4K and Wiki-500K data sets. We outline four take away messages: (1) Config. ID= {0, 1, 2} demonstrates better performance by using Matcher-aware Negatives (MAN) and Neural embedding for training the rankers; (2) Config. ID= {2, 3, 4} suggests that, performance-wise, XLNet is similar to RoBERTa, and slightly better than BERT; (3) Config. ID={4, 5, 6} manifests the importance of label clusters induced from different label representations. (4) Config. ID={7, 8, 9} indicates the effect of ensembling various configuration of the models. bias of only using the ground truth clustering assignment, but ig- nores the hard negatives mistakenly produced by the Transformer models. Note that techniques such as adding Matcher-aware neg- atives (MAN) from previous model’s prediction to bootstrap the next level’s model training is also used in AttentionXML [32]. Different Transformer Models. Next, we analyze how the three different Transformer models (i.e., BERT, RoBERTa, XLNet) affect the performance, as shown in Config. ID 2, 3, 4. For Wiki- 500K, we observe that the XLNet and RoBERTa are generally more powerful than the BERT models. On the other hand, such an ad- vantage is not clear for Eurlex-4K, possibly due to the nature of the data set. 
Label Representation for Clustering. The importance of dif- ferent label representation for clustering is demonstrated in Config. ID 4, 5, 6. For Eurlex-4K, we see that using label text embedding as representation (i.e. text-emb) leads to the strong performance compared to pifa-tfidf (id 4) and pifa-neural (id 5). In contrast, pifa- tfidf becomes the best performing representation on the Wiki-500K dataset. This phenomenon could be due to the label text of Wiki- 500K being more noisy compared to Eurlex-4K, which deteriorates the label clustering results on Wiki-500K. Ensemble Ranking. Finally, we show the advantage of ensem- bing prediction from different models as shown in Config. ID 7, 8, 9. For Eurlex-4K, combining predictions from different label represen- tations (ID 7) is better than from different Transformer models (ID 8). Combining all (ID 9) leads to our final model, X-Transformer. 4.6 Cross-Paper Comparisons Many XMC approaches have been proposed recently. However, it is sometimes difficult to compare metrics directly from different pa- pers. For example, the P@1 of Parabel on Wiki-500K is 59.34% in [7, Table 2] and 68.52% in [20, Table 2], but we see 68.70% in Table 3. The inconsistency may be due to differences in data processing, input representation, or other reasons. We propose an approach to calibrate these numbers so that various methods can be compared in a more principled way. In particular, for each metric m(·), we use the relative improvement over a common anchor method, which is set to be Parabel as it is widely used in the literature. For a compet- ing method X with a metric m(X) on a data set reported in a paper, we can compute the relative improvement over Parabel as follows: m(X)−m(Parabel) × 100%, where m(Parabel) is the metric obtained m(Parabel) by Parabel on the same data set in the same paper. Following the above approach, we include a variety of XMC approaches in our comparison. We report the relative improvement of various meth- ods on two commonly used data sets, Eurlex-4K and Wiki-500K, in Table 6. We can clearly observe that X-Transformer brings the most significant improvement over Parabel and SLICE. 5 CONCLUSIONS In this paper, we propose X-Transformer, the first scalable frame- work to fine-tune Deep Transformer models that improves state-of- the-art XMC methods on four XMC benchmark data sets. We fur- ther applied X-Transformer to a real-life application, product2query prediction, showing significant improvement over the competitive linear models, Parabel. Eurlex-4K Wiki-500K Method Source Relative Improvement over Parabel (%) Prec@3 Prec@1 Prec@5 Method Source Relative Improvement over Parabel (%) Prec@3 Prec@1 Prec@5 X-Transformer SLICE GLaS ProXML PPD-Sparse SLEEC Table 3 [7, Table 2] [6, Table 3] [2, Table 5] [20, Table 2] [9, Table 2] +6.27% +4.27% -5.18% +3.86% +1.92% -3.53% +9.08% +3.34% -5.48% +2.90% +2.93% -6.40% +8.55% +3.11% -5.34% +2.43% +2.92% -9.04% X-Transformer SLICE GLaS ProXML PPD-Sparse SLEEC Table 3 [7, Table 2] [6, Table 3] [2, Table 5] [20, Table 2] [9, Table 2] +12.49% +15.94% +17.26% +7.56% +7.02% +4.27% +3.37% + 2.92% +0.82% + 2.88% +2.33% -45.08% -40.73% +5.53% +4.77% +2.22% +2.39% -29.84% Table 6: Comparison of Relative Improvement over Parabel. The relative improvement for each state-of-the-art (SOTA) method is computed based on the metrics reported from its original paper as denoted in the Source column. REFERENCES [1] Rohit Babbar and Bernhard Schölkopf. 2017. 
DiSMEC: distributed sparse ma- chines for extreme multi-label classification. In WSDM. [2] Rohit Babbar and Bernhard Schölkopf. 2019. Data scarcity, robustness and extreme multi-label classification. Machine Learning (2019), 1–23. [3] Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. 2015. Sparse local embeddings for extreme multi-label classification. In NIPS. [4] Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Ku- mar. 2020. Pre-training Tasks for Embedding-based Large-scale Retrieval. In International Conference on Learning Representations. [5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). [6] Chuan Guo, Ali Mousavi, Xiang Wu, Daniel N Holtmann-Rice, Satyen Kale, Sashank Reddi, and Sanjiv Kumar. 2019. Breaking the Glass Ceiling for Embedding-Based Classifiers for Large Output Spaces. In Advances in Neural Information Processing Systems. 4944–4954. [7] Himanshu Jain, Venkatesh Balasubramanian, Bhanu Chunduri, and Manik Varma. 2019. Slice: Scalable Linear Extreme Classifiers Trained on 100 Million Labels for Related Searches. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. ACM, 528–536. [8] Himanshu Jain, Yashoteja Prabhu, and Manik Varma. 2016. Extreme multi- label loss functions for recommendation, tagging, ranking & other missing label applications. In KDD. [9] Sujay Khandagale, Han Xiao, and Rohit Babbar. 2019. Bonsai-Diverse and Shallow Trees for Extreme Multi-label Classification. arXiv preprint arXiv:1904.08249 (2019). [10] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimiza- tion. In Proceedings of the International Conference on Learning Representations. [11] Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). [12] Jingzhou Liu, Wei-Cheng Chang, Yuexin Wu, and Yiming Yang. 2017. Deep learning for extreme multi-label text classification. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 115–124. [13] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692 (2019). [14] Mikko I Malinen and Pasi Fränti. 2014. Balanced k-means for clustering. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). Springer, 32–41. [15] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. 3111–3119. [18] Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. 1532–1543. [19] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). [20] Yashoteja Prabhu, Anil Kag, Shrutendra Harsola, Rahul Agrawal, and Manik Varma. 2018. Parabel: Partitioned label trees for extreme classification with application to dynamic search advertising. In WWW. [21] Yashoteja Prabhu and Manik Varma. 2014. Fastxml: A fast, accurate and stable tree-classifier for extreme multi-label learning. In KDD. [22] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Im- proving language understanding by generative pre-training. (2018). [23] Sashank J Reddi, Satyen Kale, Felix Yu, Dan Holtmann-Rice, Jiecao Chen, and Sanjiv Kumar. 2019. Stochastic Negative Mining for Learning with Large Output Spaces. In AISTATS. [24] Yukihiro Tagami. 2017. AnnexML: Approximate nearest neighbor search for extreme multi-label classification. In Proceedings of the 23rd ACM SIGKDD inter- national conference on knowledge discovery and data mining. 455–464. [25] Manik Varma. 2019. The Extreme Classification Repository: Multi-label Datasets & Code. http://manikvarma.org/downloads/XC/XMLRepository.html. [26] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. [27] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461 (2018). [28] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. ArXiv abs/1910.03771 (2019). [29] Marek Wydmuch, Kalina Jasinska, Mikhail Kuznetsov, Róbert Busa-Fekete, and Krzysztof Dembczynski. 2018. A no-regret generalization of hierarchical softmax to extreme multi-label classification. In NIPS. [30] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Lan- guage Understanding. In NIPS. [31] Ian EH Yen, Xiangru Huang, Wei Dai, Pradeep Ravikumar, Inderjit Dhillon, and Eric Xing. 2017. PPDsparse: A parallel primal-dual sparse method for extreme classification. In KDD. ACM. [32] Ronghui You, Zihan Zhang, Ziye Wang, Suyang Dai, Hiroshi Mamitsuka, and Shanfeng Zhu. 2019. AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification. In Ad- vances in Neural Information Processing Systems. 5812–5822. [16] Jinseok Nam, Eneldo Loza Mencía, Hyunwoo J Kim, and Johannes Fürnkranz. 2017. Maximizing Subset Accuracy with Recurrent Neural Networks in Multi- label Classification. In NIPS. [17] Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, Thierry Artieres, George Paliouras, Eric Gaussier, Ion Androutsopoulos, Massih-Reza Amini, and Patrick Galinari. 2015. LSHTC: A benchmark for large-scale text classification. arXiv preprint arXiv:1503.08581 (2015).
{ "id": "1804.07461" }
1905.01758
Investigating the Successes and Failures of BERT for Passage Re-Ranking
The bidirectional encoder representations from transformers (BERT) model has recently advanced the state-of-the-art in passage re-ranking. In this paper, we analyze the results produced by a fine-tuned BERT model to better understand the reasons behind such substantial improvements. To this aim, we focus on the MS MARCO passage re-ranking dataset and provide potential reasons for the successes and failures of BERT for retrieval. In more detail, we empirically study a set of hypotheses and provide additional analysis to explain the successful performance of BERT.
http://arxiv.org/pdf/1905.01758
Harshith Padigela, Hamed Zamani, W. Bruce Croft
cs.IR, cs.CL
null
null
cs.IR
20190505
20190505
9 1 0 2 y a M 5 ] R I . s c [ 1 v 8 5 7 1 0 . 5 0 9 1 : v i X r a # Investigating the Successes and Failures of BERT for Passage Re-Ranking Harshith Padigela, Hamed Zamani, and W. Bruce Croft College of Information and Computer Sciences University of Massachusetts Amherst Amherst, MA 01003 {hpadigela,zamani,croft}@cs.umass.edu ABSTRACT The bidirectional encoder representations from transformers (BERT) model has recently advanced the state-of-the-art in passage re- ranking. In this paper, we analyze the results produced by a fine- tuned BERT model to better understand the reasons behind such substantial improvements. To this aim, we focus on the MS MARCO passage re-ranking dataset and provide potential reasons for the successes and failures of BERT for retrieval. In more detail, we em- pirically study a set of hypotheses and provide additional analysis to explain the successful performance of BERT. guidelines for the IR researchers for further development of neural IR models. Given this motivation, this paper mainly analyzes the results obtained by BERT for passage re-ranking and studies the rea- sons behind its success. To do so, we compare the results obtained by both BM25 and BERT, and highlight their differences. We choose BM25 as our basis for comparison, due to its effectiveness and more importantly its simplicity and explainable behavior, which makes the analysis easier. In more detail, this paper studies the following hypotheses: 1 INTRODUCTION Recent developments in deep learning and the availability of large- scale datasets have led to significant improvements in various com- puter vision and natural language processing tasks. In information retrieval (IR), the lack of publicly available large-scale datasets for many tasks, such as ad-hoc retrieval, has restricted observing sub- stantial improvements over traditional methods [6, 13]. A number of approaches, such as weak supervision [2, 12], have been recently proposed to enable deep neural models to learn from limited train- ing data. More recently, Microsoft has released MS MARCO v2 [8], a large dataset for the passage re-ranking task, to foster the neural information retrieval research. • H1: BM25 is more biased towards higher query term frequency compared to BERT. • H2: Bias towards higher query term frequency hurts the BM25 performance. H3: BERT retrieves documents with more novel words. • H4: BERT’s improvement over BM25 is higher for longer queries. In addition we also identify the query types for which BERT does and does not perform well. Our experiments provide interesting insights into the performance of this model. In this paper, we first show that a simple neural model that uses bidirectional encoder representations from Transformers (BERT) [3] for question and passage representations performs surprisingly well compared to state-of-the-art retrieval models, including traditional term-matching models, conventional feature-based learning to rank models, and recent neural ranking models. This has been also dis- covered by other researchers, such as [9] in parallel with this study. Looking at the leaderboard of the MS MARCO passage re-ranking task shows the effectiveness of the BERT representations for re- trieval.1 We believe that understanding the performance of effective neu- ral IR models, e.g., BERT, is important. It could potentially provide 2 BERT Representations learned using language modelling [7, 10] have shown to be useful in many downstream natural language tasks [1]. 
There exist two primary approaches for using these pre-trained representations: (1) feature-based models and (2) fine-tuning [3]. In the feature-based approach, task-specific architectures are de- signed on top of the pre-trained feature representations. While in the fine-tuning approach, minimal task specific parameters are added, which will be fine-tuned in addition to the pre-trained repre- sentations for the downstream task. BERT [3], which falls into the latter category, is a multi-layer bidirectional transformer encoder utilizing the transformer units described in [11]. The BERT model uses bidirectional self-attention to capture interaction between the input tokens and is pre-trained on the masked language modelling task [3]. 1The leaderboard is available at http://www.msmarco.org/leaders.aspx. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. Conference’17, July 2017, Washington, DC, USA © 2021 Association for Computing Machinery. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn Pre-trained BERT models, fine-tuned using a single additional layer have been shown to achieve state-of-the-art results in a wide range of natural language tasks, including machine reading com- prehension (MRC) and natural language inference (NLI) [3]. In this paper, we also use the same setting, by adding a single layer on top of the BERT’s large pre-trained model, and fine-tuning it using a pointwise setting with a maximum likelihood objective. This is also similar to the setting used in [9]. 3 EMPIRICAL ANALYSIS 3.1 Data We consider the MS MARCO dataset for passage re-ranking [8] in our analysis. The MS MARCO dataset is generated using queries sampled from the Bing’s query logs and corresponding relevant passages marked by human assessors. It is notable that the rele- vance judgments provided by the MS MARCO dataset are different from the traditional TREC-style relevance judgments. They utilized the information that the human assessors provided for the machine reading comprehension data. This means that a marked passage is a true positive/relevant, however an unmarked passage may not be a true negative. For every query, a set of top 1000 candidate passages are extracted using BM25 for re-ranking. Since the original rele- vant documents were picked from a set of 10 candidate documents chosen by the Bing’s ranking stack, the relevant passages might not be present in the 1000 passages chosen by BM25. The training set consists of approximately 400 million tuples of query, relevant passage, and non-relevant passage. The devel- opment set contains 6,980 queries with their corresponding set of 1,000 candidate passages. On average, each query has one relevant passage. Around 1242 queries have no marked relevant passages. We primarily focus our analysis only on the 5738 queries in the development set that have at least one relevant passage. 3.2 Experimental Setup We use BERT and BM25 for our analysis and comparison. We used the BERT large model trained on MS MARCO. 
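Concretely, the pointwise fine-tuning setup described in Section 2 can be sketched as a cross-encoder: the query and a candidate passage are packed into a single BERT input, and a single classification layer on top of the pooled representation is trained with a maximum-likelihood (cross-entropy) objective over relevant/non-relevant labels. The snippet below is only an illustration using the Hugging Face transformers library; the model checkpoint, sequence length, learning rate, and batching are our assumptions, not the authors' exact training code.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained("bert-large-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)

def training_step(query, passage, label):
    """One pointwise update: label is 1 for a relevant passage, 0 otherwise."""
    enc = tokenizer(query, passage, truncation=True, max_length=256,
                    padding="max_length", return_tensors="pt")
    out = model(**enc, labels=torch.tensor([label]))
    out.loss.backward()          # cross-entropy over the relevant / non-relevant classes
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

def score(query, passage):
    """Relevance score used to re-rank a BM25 candidate list at inference time."""
    enc = tokenizer(query, passage, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```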
The training setup is similar to the one described in [9]. For BM25 relevance matching we indexed all the passages using Elasticsearch [4] with the default analyzer and default parameters of b = 0.75 and k1 = 1.2. Since most queries have only 1 relevant document, we use mean recipro- cal rank of the top 10 retrieved passages (MRR@10) as our main retrieval metric, which is also suggested by MS MARCO [8].2 3.3 Results and Discussion The performance of various models on the entire development and evaluation (test) sets of MS MARCO are shown in Table 1. We can see that the BERT model which was originally trained on the masked language modelling (MLM) task [3] and further fine-tuned using a pointwise training on the MS MARCO data, outperforms the existing traditional retrieval models and recent neural ranking models by a large margin. In order to understand these improve- ments, we look into the BERT’s and the BM25’s performances on the development set.3 In the following, we study a set of hypotheses and provide em- pirical evidence to either validate or invalidate them. Hypothesis I: BM25 is more biased towards higher query term frequency compared to BERT. We hypothesize that in many queries the top results from BM25 were just the passages that contain multiple repetitions of query words without actually conveying any useful information, which is not the case for BERT. 2Due to the incomplete judgments, recall-oriented metrics such as mean average precision (MAP), are not suitable for this dataset. 3Note that the evaluation set is not publicly accessible. Table 1: MRR@10 percentage from the MS MARCO leaderboard. Model Eval Dev BM25 BM25 (ours) Feature-based LeToR: with RankSVM Neural Kernel Match IR (KNRM) Neural Kernel Match IR (Conv-KNRM) IRNet (Deep CNN/IR Hybrid Network) BERT + Small Training 16.49 - 19.05 19.82 27.12 28.06 35.87 16.70 17.67 19.47 21.84 29.02 27.80 36.53 Table 2: Average MRR and # of queries (in parenthesis) for ranges of FQT. FQT [0, 0.1) [0.1, 0.15) [0.15, 0.2) [0.2, 0.25) [0.25, 1] 0.29 (349) BERT 0.48 BM25 (1240) 0.23 (1163) 0.47 (2061) 0.22 (1565) 0.42 (1441) 0.20 (1316) 0.38 (652) 0.19 (1345) 0.40 (344) To validate this hypothesis, we calculate the fraction of query to- kens (FQT) as follows: For each query, we take the top k results, remove stopwords and punctuations, and calculate the fraction of query tokens in the remaining tokens. If d1, d2, · · · , dk are the set of results for a query q without stopwords and punctuation, then, k N(di, Fora) = 2) ae? ) i=l t where N (di , q) denotes the number of occurrences of query tokens q in the document di . We limit k to a maximum of 10. We find that the FQT average across queries is 0.2 for BM25 and 0.147 for BERT. In 95.96% of the queries BM25 has a higher FQT value than BERT. These results validate our first hypothesis, saying that BM25 has a higher bias towards query term frequency in document matches, compared to BERT. An example can be seen in Table 5 Query 1 Hypothesis II: Bias towards higher query term frequency hurts the BM25 performance. We hypothesize that the bias towards query term frequency affects the BM25 performance sig- nificantly, compared to BERT. To investigate this, we see how MRR changes across different ranges of FQT. The FQT range of [0,1] is split into 5 buckets and the average MRR value and the number of queries in each bucket is shown in Table 2. As FQT value increases, we can see that the MRR value decreases in both BERT and BM25. 
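The FQT statistic defined above and the MRR@10 metric used throughout this analysis can be computed as in the sketch below. It assumes the query and passages are already tokenized with stopwords and punctuation removed, mirroring the preprocessing described in the text; the per-document averaging over the top k results is our reading of the definition, and the helper names are ours.

```python
def fqt(query_tokens, ranked_docs, k=10):
    """Fraction of query tokens in the top-k results for one query.

    ranked_docs: list of token lists (stopwords/punctuation already removed).
    """
    query = set(query_tokens)
    fractions = []
    for doc in ranked_docs[:k]:
        if not doc:
            continue
        n_query = sum(1 for tok in doc if tok in query)  # N(d_i, q): query-token occurrences
        fractions.append(n_query / len(doc))
    return sum(fractions) / len(fractions) if fractions else 0.0

def mrr_at_10(ranked_passage_ids, relevant_ids):
    """Reciprocal rank of the first relevant passage within the top 10, else 0."""
    for rank, pid in enumerate(ranked_passage_ids[:10], start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0
```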
Because of BM25's bias towards high FQT (validated by Hypothesis I and also evident from the number of queries per bucket), we can see that the decrease in MRR (as we go from the lowest to the highest FQT bucket) is more prominent for BM25 (34.5%) than for BERT (16.7%). The signed t-test for measuring the difference between two pairs of data, applied to the difference between the FQT values of BM25 and BERT, yields a p-value of 0.0, indicating a statistically significant difference between the FQT values.

Hypothesis III: BERT retrieves documents with more novel words. Since recent neural models trained on the language modeling task have been shown to capture semantic similarities, we hypothesize that BERT can retrieve results with more novel words, compared to BM25. To validate this, we calculate the fraction of novel terms (FNT) as follows. Let d1, d2, · · · , dk be the results for a query q, which are stripped of stopwords and punctuation. Then

FNT(q) = \frac{1}{k} \sum_{i=1}^{k} \frac{N'(d_i, q)}{U(d_i)}    (2)

where U(d_i) gives the number of unique terms in document d_i and N'(d_i, q) gives the number of unique terms in document d_i which are not present in the query q. We limit k to a maximum of 10. We find that the FNT average across queries is 0.88 for BM25 and 0.9 for BERT. In 85.85% of queries BERT has a higher FNT value than BM25. The signed t-test on the difference between FNT values of BERT and BM25 yields a p-value of 0.0, indicating a statistically significant difference between the FNT values. This validates our hypothesis that BERT retrieves documents with more novel words than BM25.

Hypothesis IV: BERT's improvement over BM25 is higher for longer queries. Since the BERT model is designed to learn context-aware word representations, we hypothesize that its improvements for longer queries, which generally provide richer context, are more significant. To validate this hypothesis, we calculate the average MRR per query length for both BERT and BM25, shown in Table 3. We can see that BERT performs significantly better than BM25 across all query lengths. But as the query length increases from 2 to 10, the performance of both BM25 and BERT generally decreases, and this decrease is more prominent for BERT (39%) compared to BM25 (33%), indicating its higher sensitivity to query length than BM25. The MRR difference between BERT and BM25 also decreases from 0.29 to 0.16 as query length increases from 2 to 10, which indicates that our fourth hypothesis is incorrect and that BERT's improvement is lower for longer queries. Interestingly, BERT performs surprisingly well for very short queries compared to longer ones. The reason might be that BERT is not successful at capturing the query context properly for long queries. This can also be observed from examples such as Queries 3 and 4 in Table 5.

Table 3: Average MRR with respect to query length (L).
L      2     3     4     5     6     7     8     9     10
BM25   0.27  0.23  0.22  0.22  0.23  0.19  0.21  0.17  0.18
BERT   0.56  0.46  0.48  0.45  0.46  0.42  0.40  0.38  0.34

Table 4: Average MUR with respect to the different cut-off values (i).
i         1     2     3     4     5     6     7     8     9     10
Avg. MUR  0.17  0.45  0.77  1.1   1.44  1.78  2.13  2.46  2.8   3.12

3.4 Result Analysis

We conduct various analyses to understand the similarities and differences between BM25 and BERT. We discuss them below.

Per Query Analysis. We analyze the per-query performance of BERT compared to BM25. Figure 1 plots ∆MRR per query (i.e., MRR_BERT − MRR_BM25), sorted in descending order.
As depicted in the figure, in 3257 questions (57% out of 5738) BERT has MRR diff, Positive - BERT, Negative - BM25 MRR ditt 2 100 2000 + 3000 +000 -=«s00 «e000 Number of queries Figure 1: MRRBERT − MRRBM25 on MSMarco Dev set. a better performance compared to BM25 and in 690 questions (12%) BM25 performs better than BERT. For 525 (9%) queries ∆MRR is equal to 1, meaning that a relevant answer is retrieved by BERT as the first ranked passage, however, no relevant answer is retrieved by BM25 in the top 10 result list. For 1791 queries, both BERT and BM25 perform similarly. This experiments show that BERT not only performs better than BM25 (on average), but it also performs more accurately for substantially more queries. Similarity between the BERT’s and the BM25’s result lists. To measure the similarity between the results of BERT and BM25, we calculate the following metric, MUR - matches upto result. MUR(i, q) for a query q measures the number of matches in the top i results of BERT and BM25. We can see the average MUR for each i ∈ [1, 10] in Table 4 which indicates the low extent of similarity between BERT and BM25. The number of matches increases lin- early with i with slope of about 0.33 and intercept around -0.21, indicating a consistent linear relationship between BERT and BM25. Comparison by answer type. In order to understand the per- formance of these models across different types of questions, we classify questions based on the lexical answer type. We use the rule-based answer type classifier4 inspired by [5] to extract an- swer types. We classify questions based on 6 answer types, namely abbreviation, location, description, human, numerical and entity. The average MRR across these 6 types for 4105 queries (having a valid answer type) is shown in Table 6. We can see that while BERT has highest MRR on abbreviation type questions, BM25 has its lowest MRR on them. Note that BERT seems to have its lowest performance on numerical and entity type questions. Comparison using query starting ngrams: Here we look at the most frequent bigrams with which the queries start. The idea is that looking at the starting ngrams can help us understand the type of queries. We extract the most frequent 15 bigrams and compute the average MRR using BERT for each of them. The result is shown in Figure 2. We can see that the bigrams corresponding to numeric type questions, such as “how much” and “how long”, as well as location type questions like “what county / where is” and entity type questions, such as “what type” have a low MRR. This is consistent with our observations in the previous experiment (see Table 6). Semantic similarity: Being trained on a language modeling task, we expect BERT to capture various semantic relationships. While in some cases these help in arriving at the right answer sometimes 4https://github.com/superscriptjs/qtypes Table 5: Sample queries for comparison. (W) - incorrect and (C) - correct result. ID Query 1 what is the nationality 2 confident man definition 3 where can a plasma mem- brane be found in a cell 4 telephone number for amazon fire stick customer service 5 another name for reaper BM25/BERT/Relevant passage BM25: Users found this page by searching for 1 is african american a nationality 2 african american nationality 3 is black a nationality nationality african .... BERT : Nationality is the legal relationship between a person and a nation state .... BM25: definition of suave is someone smooth confident .... 
usually describing a man BERT : definition of a confidence man is someone who gets a victim to trust them before taking their money or property, a con man .... BERT : The Plasma membrane is found in both the animal cell and plant cell Rel: The plasma membrane is the border between the interior and exterior of a cell .... BERT : Customer Service 1 866 216 1072. .... 1 Thank you for calling Amazon.com customer service for quality assurance and training .... Rel: .... for more information contact amazon fire stick support number 1 8447451521 BERT : Reaper orginally known as Gabriel Reyes is a mercenary.... is antagonist .... in videogame Overwatch. He is voiced by Keith Ferguson who also played Lord Hater Rel: .... similar words for the term reaper. harvester reaper .... BM25(W) vs BERT(C) BM25(W) vs BERT(C) BERT(W) vs Relevant BERT(W) vs Relevant BERT(W) vs Relevant Table 6: Average MRR values for answer types. Sorted by ∆MRR. Type # queries BM25 BERT ABBR LOC DESC HUM NUM ENTY 9 0.17 0.59 493 0.25 0.50 1887 0.19 0.43 455 0.23 0.46 933 0.19 0.40 328 0.21 0.41 [cts] confident man definition [SEP] [es] man man definition confidence man 5 3 confident definition confidence 06 05 04 = ra 3 03 g 2 02 ol 00 ESFSTTSPSFILS TSE FS aSse 3 SSeFesFea¢§& Sere e FP PF S3 8 FSFE Sees FES EEC EEG E: 2 Fg BS é € $$ $3 BS 2 2 Bigrams and their frquencies Figure 3: Attention map of head 14 from BERT layer 16. [cLs] another name for reaper [SEP] “8 ~ & & Figure 4: Attention map of head 4 from BERT layer 23. Figure 2: Average MRR of Frequent Bigrams. they can also lead to incorrect answers. We will discuss two such examples below. In question 2 of Table 5, BERT captures similarity between the word “confident” in query and “confidence” in the passage, which helps it arrive at the right answer. This can be seen by visualizing the attention values between query and document words as shown in Figure 3. Similarly in Example 5 of Table 5, the question asks for another name for word “reaper”, which in this context means synonyms for the word “reaper”. However, BERT relates name to a character name reaper (see attention map 4). This leads to an incorrect answer. 4 CONCLUSIONS AND FUTURE WORK BERT performs surprisingly well for a passage re-ranking task. In this paper, we provide empirical analysis to understand the per- formance of BERT and how its results are different from a typical retrieval model, e.g., BM25. We showed that BM25 is more biased towards high query term frequency and this bias hurts its perfor- mance. We demonstrated that, as expected, BERT retrieves passages with more novel words. Surprisingly, we found out that BERT is failing at capturing the query context for long queries. Our analysis also suggested that BERT is relatively successful in answering ab- breviation answer type questions and relatively poor at numerical and entity type questions. Although BERT substantially outperforms state-of-the-art mod- els for passage retrieval, it is still far away from a perfect retrieval performance. We believe that future work investigating the rele- vance preferences captured by BERT across various query types and a better encoding of query context for longer queries could help in developing even better models. 5 ACKNOWLEDGEMENTS This work was supported in part by the Center for Intelligent In- formation Retrieval and in part by NSF IIS-1715095. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. 
REFERENCES [1] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR 12, Aug (2011), 2493–2537. [2] Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W Bruce Croft. 2017. Neural ranking models with weak supervision. SIGIR (2017), 65–74. [3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs/1810.04805 (2018). arXiv:1810.04805 [4] Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The Definitive Guide: A Distributed Real-Time Search and Analytics Engine. " O’Reilly Media, Inc.". [5] Xin Li and Dan Roth. 2002. Learning question classifiers. COLING (2002), 1–7. [6] Jimmy Lin. 2019. The Neural Hype and Comparisons Against Weak Baselines. SIGIR Forum 52, 2 (Jan. 2019), 40–51. [7] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. NIPS (2013), 3111–3119. [8] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. CoRR abs/1611.09268 (2016). arXiv:1611.09268 [9] Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. CoRR abs/1901.04085 (2019). arXiv:1901.04085 [10] Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. EMNLP (2014), 1532–1543. [11] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. 5998–6008. [12] Hamed Zamani and W Bruce Croft. 2018. On the theory of weak supervision for information retrieval. ICTIR (2018), 147–154. [13] Hamed Zamani, Mostafa Dehghani, Fernando Diaz, Hang Li, and Nick Craswell. 2018. SIGIR 2018 workshop on learning from limited or noisy data for information retrieval. The 41st International ACM SIGIR Conference (2018), 1439–1440.
{ "id": "1611.09268" }
1904.13015
Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators
Encoder-decoder based neural architectures serve as the basis of state-of-the-art approaches in end-to-end open domain dialog systems. Since most of such systems are trained with a maximum likelihood~(MLE) objective they suffer from issues such as lack of generalizability and the generic response problem, i.e., a system response that can be an answer to a large number of user utterances, e.g., "Maybe, I don't know." Having explicit feedback on the relevance and interestingness of a system response at each turn can be a useful signal for mitigating such issues and improving system quality by selecting responses from different approaches. Towards this goal, we present a system that evaluates chatbot responses at each dialog turn for coherence and engagement. Our system provides explicit turn-level dialog quality feedback, which we show to be highly correlated with human evaluation. To show that incorporating this feedback in the neural response generation models improves dialog quality, we present two different and complementary mechanisms to incorporate explicit feedback into a neural response generation model: reranking and direct modification of the loss function during training. Our studies show that a response generation model that incorporates these combined feedback mechanisms produce more engaging and coherent responses in an open-domain spoken dialog setting, significantly improving the response quality using both automatic and human evaluation.
http://arxiv.org/pdf/1904.13015
Sanghyun Yi, Rahul Goel, Chandra Khatri, Alessandra Cervone, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-Tur
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20190430
20191122
9 1 0 2 v o N 2 2 ] L C . s c [ 4 v 5 1 0 3 1 . 4 0 9 1 : v i X r a Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators Sanghyun Yi1, Rahul Goel2, Chandra Khatri3, Alessandra Cervone4, Tagyoung Chung5, Behnam Hedayatnia5, Anu Venkatesh5, Raefer Gabriel5, Dilek Hakkani-Tur5 1Division of Humanities and Social Sciences, California Institute of Technology, 2Google, 3Uber AI, 5Alexa AI, Amazon 4Signals and Interactive Systems Lab, University of Trento [email protected], [email protected], {tagyoung,behnam,anuvenk,raeferg,hakkanit}@amazon.com, [email protected], [email protected] # Abstract Encoder-decoder based neural architectures serve as the basis of state-of-the-art ap- proaches in end-to-end open domain dialog systems. Since most of such systems are trained with a maximum likelihood (MLE) ob- jective they suffer from issues such as lack of generalizability and the generic response prob- lem, i.e., a system response that can be an an- swer to a large number of user utterances, e.g., “Maybe, I don’t know.” Having explicit feed- back on the relevance and interestingness of a system response at each turn can be a use- ful signal for mitigating such issues and im- proving system quality by selecting responses from different approaches. Towards this goal, we present a system that evaluates chatbot re- sponses at each dialog turn for coherence and engagement. Our system provides explicit turn-level dialog quality feedback, which we show to be highly correlated with human eval- uation. To show that incorporating this feed- back in the neural response generation models improves dialog quality, we present two differ- ent and complementary mechanisms to incor- porate explicit feedback into a neural response generation model: reranking and direct mod- ification of the loss function during training. Our studies show that a response generation model that incorporates these combined feed- back mechanisms produce more engaging and coherent responses in an open-domain spoken dialog setting, significantly improving the re- sponse quality using both automatic and hu- man evaluation. # Introduction Due to recent advances in spoken language under- standing and automatic speech recognition, conver- sational interfaces such as Alexa, Cortana, and Siri have become increasingly common. While these interfaces are task oriented, there is an increasing interest in building conversational systems that can engage in more social conversations. Building sys- tems that can have a general conversation in an open domain setting is a challenging problem, but it is an important step towards more natural human- machine interactions. Recently, there has been significant interest in building chatbots (Sordoni et al., 2015; Wen et al., 2015) fueled by the availability of dialog data sets such as Ubuntu, Twitter, and Movie dialogs (Lowe et al., 2015; Ritter et al., 2011; Danescu-Niculescu- Mizil and Lee, 2011). However, as most chatbots are text-based, work on human-machine spoken di- alog is relatively under-explored, partly due to lack of such dialog corpora. Spoken dialog poses addi- tional challenges such as automatic speech recog- nition errors and divergence between spoken and written language. (seq2seq) models (Sutskever et al., 2014) and their extensions (Lu- ong et al., 2015; Sordoni et al., 2015; Li et al., 2015), which are used for neural machine translation (MT), have been widely adopted for dialog generation systems. 
In MT, given a source sentence, the correctness of the target sentence can be measured by semantic similarity to the source sentence. However, in open-domain conversations, a generic utterance such as “sounds good” could be a valid response to a large variety of statements. These seq2seq models are commonly trained on a maximum likelihood objective, which leads the models to place uniform importance on all user utterance and system response pairs. Thus, these models usually choose “safe” responses as they frequently appear in the dialog training data. This phenomenon is known as the generic response problem. These responses, while arguably correct, are bland and convey little information leading to short conversations and low user satisfaction. Since response generation systems are trained by maximizing the average likelihood of the training data, they do not have a clear signal on how well the current conversation is going. We hypothesize that having a way to measure conversational suc- cess at every turn could be valuable information that can guide system response generation and help improving system quality. Such a measurement may also be useful for combining responses from various competing systems. To this end, we build a supervised conversational evaluator to assess two aspects of responses: engagement and coherence. The input to our evaluators are encoded conversa- tions represented as fixed-length vectors as well as hand-crafted dialog and turn level features. The system outputs explicit scores on coherence and engagement of the system response. We experiment with two ways to incorporate these explicit signals in response generation sys- tems. First, we use the evaluator outputs as input to a reranking model, which are used to rescore the n-best outputs obtained after beam search decod- ing. Second, we propose a technique to incorporate the evaluator loss directly into the conversational model as an additional discriminatory loss term. Using both human and automatic evaluations, we show that both of these methods significantly im- prove the system response quality. The combined model utilizing re-ranking and the composite loss outperforms models using either mechanism alone. The contributions of this work are two-fold. First, we experiment with various hand-crafted fea- tures and conversational encoding schemes to build a conversational evaluation system that can provide explicit turn-level feedback to a response gener- ation system on the highly subjective task. This system can be used independently to compare vari- ous response generation systems or as a signal to improve response generation. Second, we experi- ment with two complementary ways to incorporate explicit feedback to the response generation sys- tems and show improvement in dialog quality using automatic metrics as well as human evaluation. # 2 Related Works There are two major themes in this work. The first is building evaluators that allow us to estimate human perceptions of coherence, topicality, and interestingness of responses in a conversational context. The second is the use of evaluators to guide the generation process. As a result, this work is related to two distinct bodies of work. Automatic Evaluation of Conversations: Learning automatic evaluation of conversation quality has a long history (Walker et al., 1997). However, we still do not have widely accepted solutions. 
Due to the similarity between conver- sational response generation and MT, automatic MT metrics such as BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) are widely adopted for evaluating dialog generation. ROUGE (Lin and Hovy, 2003), which is also used for chatbot evaluation, is a popular metric for text summarization. These metrics primarily rely on token-level overlap over a corpus (also synonymy in the case of METEOR), and therefore are not well-suited for dialog generation since a valid conversational response may not have any token-level or even semantic-level overlap with the ground truths. While the shortcomings of these metrics are well known for MT (Graham, 2015; Es- pinosa et al., 2010), the problem is aggravated for dialog generation evaluation because of the much larger output space (Liu et al., 2016; Novikova et al., 2017). However, due to the lack of clear alternatives, these metrics are still widely used for evaluating response generation (Ritter et al., 2011; Lowe et al., 2017). To ensure comparability with other approaches, we report results on these metrics for our models. To tackle the shortcomings of automatic metrics, there have been efforts to build models to score conversations. Lowe et al. (2017) train a model to predict the score of a system response given a dialog context. However, they work with tiny data sets (around 4000 sentences) in a non-spoken setting. Tao et al. (2017) address the expensive annotation process by adding in unsupervised data. However, their metric is not interpretable, and the results are also not shown on a spoken setting. Our work differs from the aforementioned works as the output of our system is interpretable at each dialog turn. There has also been work on building evaluation systems that focus on specific aspects of dialog. Li et al. (2016c) use features for information flow, Yu et al. (2016) use features for turn-level appropriate- ness. However, these metrics are based on a narrow aspect of the conversation and fail to capture broad ranges of phenomena that lead to a good dialog. Improving System Response Generation: Seq2Seq models have allowed researchers to train dialog models without relying on handcrafted dia- log acts and slot values. Using maximum mutual information (MMI) (Li et al., 2015) was one of the earlier attempts to make conversational responses more diverse (Serban et al., 2016b,a). Shao et al. (2017) use a segment ranking beam search to pro- duce more diverse responses. Our method extends the strategy employed by Shao et al. (2017) utiliz- ing a trained model as the reranking function and is similar to Holtzman et al. (2018) but with different kind of trained model. More recently, there have been works which aim to alleviate this problem by incorporating conversation-specific rewards in the learning pro- cess. Yao et al. (2016) use the IDF value of gen- erated sentences as a reward signal. Xing et al. (2017) use topics as an additional input while de- coding to produce more specific responses. Li et al. (2016b) add personal information to make system responses more user specific.Li et al. (2017) use distillation to train different models at different lev- els of specificity and use reinforcement learning to pick the appropriate system response. Zhou et al. (2017) and Zhang et al. (2018) introduce latent fac- tors in the seq2seq models that control specificity in neural response generation. 
There has been re- cent work which combines responses from multi- ple sub-systems (Serban et al., 2017; Papaioannou et al., 2017) and ranks them to output the final sys- tem response. Our method complements these ap- proaches by introducing a novel learned-estimator model as the additional reward signal. # 3 Data The data used in this study was collected during the Alexa Prize (Ram et al., 2017) competition and shared with the teams who were participating in the competition. Upon initiating the conversation, users were paired with a randomly selected chatbot built by the participants. At the end of the conver- sation, the users were prompted to rate the chatbot quality, from 1–5, with 5 being the highest. We randomly sampled more than 15K conversa- tions (approximately 160K turns) collected during the competition. These were annotated for coher- ence and engagement (See Section 3.1) and used to train the conversation evaluators. For training the response generators, we selected highly-rated user conversations, which resulted in around 370K con- versations containing 4M user utterances and their corresponding system response. One notable statis- tic is that user utterances are typically very short (mean: 3.6 tokens) while the system responses gen- erally are much longer (mean: 23.2 tokens). # 3.1 Annotations Asking annotators to measure coherence and en- gagement directly is a time-consuming task. We observed that we could collect data much faster if we asked direct “yes” or “no” questions to our annotators. Hence, upon reviewing a user-chatbot interaction along with the entire conversation to the current turn, annotators1 rated each chatbot re- sponse as “yes” or “no” on the following criteria: • The system response is comprehensible: The information provided by the chatbot made sense with respect to the user utterance and is syntactially correct. The system response is on topic: The chat- bot response was on the same topic as the user utterance or was relevant to the user utterance. For example, if a user asks about a baseball player on the LA Dodgers, then the chatbot mentions something about the baseball team. • The system response is interesting: The chatbot response contains information which is novel and relevant. For example, the chat- bot would provide an answer about a baseball player and give some additional information to create a fleshed-out response. • I want to continue the conversation: Given the current state of the conversation and the system response, there is a natural way to continue the conversation. For example, this could be due to the system asking a question about the current conversation subject. We use these questions as proxies for measur- ing coherence and engagement of responses. The answers to the first two questions (“comprehensi- ble” and “on topic”) are used as a proxy for coher- ence. Similarly, the answer to the last two questions (“interesting” and “continue the conversation”) are used as a proxy for engagement. # 4 Conversation Evaluators We train conversational response evaluators to as- sess the state of a given conversation. Our models are trained on a combination of utterance and re- sponse pairs combined with context (past turn user utterances and system responses) along with other 1The data was collected through mechanical turk. Annota- tors were presented with the full context of the dialog up to the current turn. Model Average Emb. 
Transformer BiLSTM TREC SUBJ STS 0.45 0.90 0.80 0.48 0.91 0.83 0.45 0.90 0.84 Table 1: Sentence embedding performance. features, e.g., dialog acts and topics as described in Section 4.3. We experiment with different ways to encode the responses (Section 4.1) as well as with different feature combinations (Figure 1). # 4.1 Sentence Embeddings We pretrained models that produce sentence em- beddings using the ParlAI chitchat data set (Miller et al., 2017). We use the Quick-Thought (QT) loss (Logeswaran and Lee, 2018) to train the em- beddings. Our word embeddings are initialized with FastText (Bojanowski et al., 2016) to capture the sub-word features and then fine-tuned. We encode sentences into embeddings using the fol- lowing methods: a) Average of word embeddings (300 dim) b) The Transformer Network (1 layer, 600 dim) (Vaswani et al., 2017) c) Concatenated last states of a BiLSTM (1 layer, 600 dim) The selected dimensions and network structures followed the original paper (Vaswani et al., 2017). All models were trained with a batch size of 400 using Adam optimizer with learning rate of 5e-4. To measure the sentence embedding quality, we evaluate our models on a few standard classifica- tion tasks. The models are used to get sentence representation, which are passed through feedfor- ward networks that are trained for the following classification tasks: (i) Semantic Textual Similar- ity (STS) (Marelli et al., 2014), (ii) Question Type Classification (TREC) (Voorhees and Dang, 2003), (iii) Subjectivity Classification (SUBJ) (Pang and Lee, 2004). Table 1 shows the different models’ performances on these tasks. Based on this, we choose the Transformer as our sentence encoder as it was overall the best performing while being fast. # 4.2 Context Given the contextual nature of the problem we ex- tracted the sentence embeddings of user utterances and responses for the past 5 turns and used a 1 layer LSTM with 256 hidden units to encode conversa- tional context. The last state of LSTM is used to obtain the encoded representation, which is then concatenated with other features (Section 4.3) in a fully-connected neural network. # 4.3 Features Apart from sentence embeddings and context, the following features are also used: • Dialog Act: Serban et al. (2017) show that dialog act (DA) features could be useful for response selection rankers. Following this, we use model (Khatri et al., 2018)-predicted DAs (Stolcke et al., 1998) of user utterances and system responses as an indicator feature. • Entity Grid: Cervone et al. (2018); Barzilay and Lapata (2008) show that entities and DA transitions across turns can be strong features for assessing dialog coherence. Starting from a grid representation of the turns of the con- versation as a matrix (DAs × entities), these features are designed to capture the patterns of topic and intent shift distribution of a dialog. We employ the same strategy for our models. • Named Entity (NE) Overlap: We use named entity overlap between user utterances and their corresponding system responses as a feature. Our named entities are obtained using SpaCy2. Papaioannou et al. (2017) have also used similar NE features in their ranker. • Topic: We use a one-hot representation of a dialog turn topic predicted by a conversational topic model (Guo et al., 2017) that classifies a given dialog turn into one of 26 pre-defined classes like Sports and Movies. • Response Similarity: Cosine similarity be- tween user utterance embedding and system response embedding is used as a feature. 
• Length: We use the token-level length of the user utterance and the response as a feature. The above features were selected from a large pool of features through significance testing on our de- velopment set. The effect of adding these features can be seen in Table 2. Some of the features such as Topic lack previous dialog context, which could be updated to include the context. We leave this extension for future work. 2https://spacy.io/ Evaluator Comprehensible On-topic Interesting Cont. Conversation ‘Yes’ Class Distr. Accuracy 0.84 (+3%) 0.64 (+9%) 0.83 (-1%) 0.75 (+4%) 0.80 0.45 0.16 0.71 Precision 0.83 (+1%) 0.65 (+10%) 0.77 (+10%) 0.73 (+5%) Recall 0.85 (+15%) 0.64 (+18%) 0.80 (-5%) 0.72 (+31%) F-score 0.84 (+8%) 0.64 (+13%) 0.78 (+2%) 0.72 (+17%) MCC 0.37 (+107%) 0.29 (+81%) 0.12 (+inf%) 0.32(+179%) Table 2: Conversation Evaluators Performance. Numbers in parentheses denote relative changes when using our best model (all features) with respect to the baseline (no handcrafted features, only sentence embeddings). Second column shows the class imbalance in our annotations. Note that the baseline model had 0 MCC for Interesting Past Utterance, Response Context (STM Encoder) Utterance (Transformer Sentence > Comprehensible Embedding) [> On-topic FENN | = Response (Transformer Sentence |-—> Interesting Embedding) [| Continue Conv. Features (Dialog Act, Entity Grid, Topic, NE, ...) Figure 1: Conversation Evaluators # 5.1 Base Model (S2S) We extended the approach of Yao et al. (2016) where the authors used Luong’s dot attention (Lu- ong et al., 2015). In our experiments, the decoder uses the same attention (Figure 2a). As we want to observe the full impact of conversational eval- uators, we do not incorporate inverse document frequency (IDF) or conversation topics into our ob- jective. Extending the objective to include these terms can be a good direction for future work. # 4.4 Models Given the large number of features and their non- sequential nature, we train four binary classifiers using feedforward neural networks (FFNN). The input to these models is a dialog turn. Each out- put layer is a softmax function corresponding to a binary decision for each evaluation metric form- ing a four-dimensional vector. Each vector dimen- sion corresponds to an evaluation metric (See Sec- tion 3.1). For example, one possible reference out- put would be [0,1,1,0], which corresponds to “not comprehensible,” “on topic,” “interesting,” and “I don’t want to continue.” To make the response generation system more robust, we added user utterances and system re- sponses from the previous turn as context. The input to the response generation model is previous- turn user utterance, previous-turn system response, and current-turn user utterance concatenated se- quentially. We insert a special transition token (Ser- ban et al., 2016c) between turns. We then use a single RNN to encode these sentences. Our word embeddings are randomly initialized and then fine- tuned during training. We used a 1-layer Gated Recurrent Neural network with 512 hidden units for both encoder and decoder to train the seq2seq model and MLE as our training objective. We experimented with training the evaluators jointly and separately and found that training them jointly led to better performance. We suspect this is due to the objectives of all evaluators being closely related. We concatenate the aforementioned fea- tures as an input to a 3-layer FFNN with 256 hid- den units. Figure 1 depicts the architecture of the conversation evaluators. 
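For reference, the jointly trained evaluator whose four softmax outputs feed both of these techniques can be sketched as below: the encoded context, the two sentence embeddings, and the handcrafted features are concatenated and passed through a 3-layer feedforward trunk with four 2-way softmax heads (Section 4.4). This is a minimal PyTorch illustration; the dimensions and names are ours, not the authors' code.

```python
import torch
import torch.nn as nn

class ConversationEvaluator(nn.Module):
    """Joint evaluator: four binary heads (comprehensible, on-topic, interesting, continue)."""

    def __init__(self, context_dim=256, sent_dim=600, feat_dim=64, hidden=256):
        super().__init__()
        in_dim = context_dim + 2 * sent_dim + feat_dim  # context + utterance + response + features
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One 2-way output per evaluation metric.
        self.heads = nn.ModuleList([nn.Linear(hidden, 2) for _ in range(4)])

    def forward(self, context, utterance, response, features):
        h = self.trunk(torch.cat([context, utterance, response, features], dim=-1))
        return [head(h) for head in self.heads]  # list of 4 logit tensors, one per metric

def joint_loss(logits, labels):
    """Sum of cross-entropy losses over the four binary metrics.

    labels: LongTensor of shape (batch, 4), e.g. a row [0, 1, 1, 0].
    """
    ce = nn.CrossEntropyLoss()
    return sum(ce(l, labels[:, i]) for i, l in enumerate(logits))
```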
# 5.2 Reranking (S2S RR) In this approach, we do not update the underlying encoder-decoder model. We maintain a beam to get 15-best candidates from the decoder. The top candidate out of the 15 candidates is equivalent to the output of the baseline model. Here, instead of selecting the top output, the final output response is chosen using a reranking model. # 5 Response Generation System To incorporate the explicit turn level feedback pro- vided by the conversation evaluators, we augment our baseline response generation system with the softmax scores provided by the conversation eval- uators. Our baseline response generation system is described in Section 5.1. We then incorporate evaluators outputs using two techniques: reranking and fine-tuning. For our reranking model, we calculate BLEU scores for each of the 15 candidate responses against the ground truth response from the chat- bot. We then sample two responses from the k-best list and train a pairwise response reranker. The response with the higher BLEU is placed in the positive class (+1) and the one with lower BLEU is placed in the negative class (-1). We do this for all possible candidate combinations from the 15-best Minimize , Tr 1 | ‘ (Cross Entropy b/w POnI%n) nae Decoder Encoder (a) Baseline Response Generator (Seq2Seq with Attention) J max(ranker) Ranker Minimize auras Evaluators af (x,y € beam_size_k, features, context Decoder (b) Reranking Using Evaluators. Top 15 candidates from beam search are passed to the evaluators. The candidate that max- imizes the reranker score is chosen as the output. Encoder- decoder remain unchanged. Oy;Oy;OT ITO 0.1 /0.1]0.1 | 0.3 | 0.1 O;}1/O0/;0)}0 0.2 |0.5]0.1 | 0.2 | 0.2 12,1,3,0,5]=|1 | 0/0] 0] 0 |ivjxten| 4 ]4]O4 | 02] 04 |W) ie, Maximize Loss 4 Minimize ac 2 Lee word embedding } x, features, context (Cross Entropy biw POMnl%n) | Encoder Xo Vor vrs (c) Fine-tuning Using Evaluators. We minimize cross entropy loss and maximize discriminator loss. The output of softmax, i.e., likelihood over vocabulary for the length of output is passed to the evaluator along with the input (x and context). Evalua- tor generates the discriminative score over |V |×len generator output, which is subtracted from the loss. The updated loss is back-propagated to update encoder-decoder. Figure 2: Response Model Configurations. The base- line is shown at the top. The terms xn and yn corre- spond to nth utterance and response respectively. responses. We use the max-margin ranking loss to train the model. The model is a three-layered FFNN with 16 hidden units. The input to the pairwise reranker is the soft- max output of the 4 evaluators as shown in Fig- ure 1. The input to the evaluators are described in Section 4. The output of the reranker is a scalar, which, if trained right, would give a higher value for responses with higher BLEU scores. Figure 2b depicts the architecture of this model. # 5.3 Fine-tuning (S2S FT) In this approach, we use evaluators as a discrimina- tory loss to fine-tune the baseline encoder-decoder response generation system. We first train the base- line model and then, it is fine-tuned using the eval- uator outputs in the hope of generating more coher- ent and engaging responses. One issue with MLE is that the learned models are not optimized for the final metric (e.g., BLEU). To combat this problem, we add a discriminatory loss in addition to the gen- erative loss to the overall loss term as shown in Equation 1. 
# 5.3 Fine-tuning (S2S FT)

In this approach, we use the evaluators as a discriminatory loss to fine-tune the baseline encoder-decoder response generation system. We first train the baseline model and then fine-tune it using the evaluator outputs in the hope of generating more coherent and engaging responses. One issue with MLE is that the learned models are not optimized for the final metric (e.g., BLEU). To combat this problem, we add a discriminatory loss in addition to the generative loss in the overall loss term, as shown in Equation 1.

$$\text{loss} = -\sum_{i=1}^{len} p(y_{ni} \mid z_n)\,\log q(\hat{y}_{ni} \mid z_n) \;-\; \lambda\,\big\lVert \mathrm{Eval}\big(x_n,\, q(\cdot \mid z_n)\big) \big\rVert_1 \qquad (1)$$

where zn = xn, yn−1, . . . , x0, y0 is the conversational context and n is the context length. The term q ∈ R^{|V|×len} in the first term corresponds to the softmax output generated by the response generation model, and ŷni refers to the ith word of the decoder response at the nth conversation turn. In the second term, the function Eval refers to the evaluator score produced for a user utterance, xn, and the decoder softmax output, q.

In Equation 1, the first term corresponds to the cross-entropy loss from the encoder-decoder, while the second term corresponds to the discriminative loss from the evaluator. In a standalone evaluation setting, the evaluator takes a one-hot representation of the user utterance as input, i.e., the input is len tokens long and is passed through an embedding lookup layer, which makes it an R^{D×len} input to the rest of the network, where D is the size of the word embeddings. To make the loss differentiable, instead of performing argmax to get a decoded token, we use the output of the softmax layer (the distribution of likelihood across the entire vocabulary for the output length, i.e., R^{|V|×len}) and use it to do a weighted embedding lookup across the entire vocabulary, yielding the same R^{D×len} matrix as input to the rest of the evaluator network. Our updated evaluator input becomes the following:

R^{D×len} = R^{D×|V|} × R^{|V|×len}   (2)

The evaluator score is defined as the sum of the softmax outputs of all 4 models. We keep the rest of the input (context and features) for the evaluator as is. We weight the discriminator score by λ, which is a hyperparameter. We selected λ to be 10 using grid search to optimize for final BLEU on our development set. Figure 2c depicts the architecture of this approach. The decoder is fine-tuned to maximize evaluator scores while minimizing the cross-entropy loss. The evaluator model is trained on the original annotated corpus and its parameters are frozen.

# 5.4 Reranking + Fine-tuning (S2S RR FT)

We also combined fine-tuning with reranking, where we obtained the 15 candidates from the fine-tuned response generator and then selected the best response using the reranker, which is trained to maximize the BLEU score.
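A minimal sketch of the fine-tuning objective in Equations 1 and 2, i.e., cross-entropy plus a differentiable evaluator term computed from a softmax-weighted ("soft") embedding lookup. The tensor names, the frozen stub evaluator, and the generator interface are illustrative assumptions standing in for the actual model pieces.

```python
# Illustrative sketch of the evaluator-augmented fine-tuning loss (Eqs. 1-2).
# generator_logits: [B, len, |V|], targets: [B, len], emb: [|V|, D]; the
# evaluator is assumed frozen so gradients flow only to the generator.
import torch
import torch.nn.functional as F

def finetune_loss(generator_logits, targets, emb, evaluator, context_feats, lam=10.0):
    B, L, V = generator_logits.shape
    # Generative term: standard cross-entropy over the decoded sequence.
    ce = F.cross_entropy(generator_logits.reshape(B * L, V), targets.reshape(B * L))

    # Differentiable evaluator input (Eq. 2): softmax over the vocabulary acts
    # as weights for an embedding lookup, giving a [B, len, D] soft sequence
    # instead of a hard argmax decode.
    probs = F.softmax(generator_logits, dim=-1)           # [B, len, |V|]
    soft_embeddings = probs @ emb                         # [B, len, D]

    # Discriminative term: sum of the (assumed 4-dim) evaluator scores.
    eval_scores = evaluator(soft_embeddings, context_feats)   # [B, 4]
    disc = eval_scores.sum(dim=-1).mean()

    # Minimize cross-entropy while maximizing the evaluator score.
    return ce - lam * disc

# Toy usage with a stub evaluator that maps soft embeddings to 4 scores.
B, L, V, D = 2, 5, 100, 16
logits = torch.randn(B, L, V, requires_grad=True)
tgt = torch.randint(0, V, (B, L))
emb = torch.randn(V, D)
stub_eval = lambda seq, feats: torch.sigmoid(seq.mean(dim=(1, 2))).unsqueeze(-1).repeat(1, 4)
loss = finetune_loss(logits, tgt, emb, stub_eval, context_feats=None)
loss.backward()
```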
# 6 Experiments and Results

# 6.1 Conversation Evaluators

The conversation evaluators were trained using cross-entropy loss. We used a batch size of 128, dropout of 0.3, and the Adam optimizer with a learning rate of 5e-5 for our conversational evaluators. Sentence embeddings for user utterances and system responses are obtained using the fastText embeddings and the Transformer network.

Table 2 shows the evaluator performance compared with a baseline with no handcrafted features. We present precision, recall, and F-score measures along with accuracy. Furthermore, since the class distribution of the dataset is highly imbalanced, we also calculate the Matthews correlation coefficient (MCC) (Matthews, 1975), which takes into account true and false positives and negatives. It is a balanced measure which can be used even if the class sizes are very different. With the proposed features we observe significant improvement across all metrics.

We also performed a correlation study between the model predicted scores and human annotated scores (1 to 5) on 2000 utterances. The annotators3 were asked to answer a single question: "On a scale of 1–5, how coherent and engaging is this response given the previous conversation?" From Table 3, it can be observed that the evaluator predicted scores have a significant correlation (moderate to high) with the overall human evaluation score on this subjective task (0.2 – 0.4 Pearson correlation with turn-level ratings). Considering the substantial individual differences in evaluating open-domain conversations, we observe that our evaluators, with a moderate level of correlation, can be used to provide turn-level feedback for a human-chatbot conversation.

3 Same setup as previously described

| Metric | Pearson Corr | p-value |
| --- | --- | --- |
| Comprehensible | 0.2 | << 0.001 |
| On-topic | 0.4 | << 0.001 |
| Interesting | 0.25 | << 0.001 |
| Cont. Conversation | 0.3 | << 0.001 |

Table 3: Evaluators Correlation with Turn-level Ratings

# 6.2 Response Generation

We first trained the baseline model (S2S) on the conversational data set (4M utterance-response pairs from the competition; see Section 3). The data were split into 80% training, 10% development, and 10% test sets. The baseline model was trained using Adam with a learning rate of 1e-4 and a batch size of 256 until the development loss converged. A vocabulary of the 30K most frequent words was used. The reranker was trained using 20K beam outputs from the baseline model on the development set. Adam with a learning rate of 1e-4 and a batch size of 16 was used for the fine-tuning (S2S FT).

Table 5 shows the performance comparison of the different generation models (Section 5) on the Alexa Prize conversational data set. We observed that reranking n-best responses using the evaluator-based reranker (S2S RR) provides nearly 100% improvement in BLEU-4 scores.

Fine-tuning the generator by adding the evaluator loss (S2S FT) does improve the performance, but the gains are smaller compared to reranking. We suspect that this is due to the reranker directly optimizing for BLEU. However, using a fine-tuned model and then reranking (S2S RR FT) complements each other and gives the best performance overall. Furthermore, we observe that even though the reranker is trained to maximize the BLEU scores, reranking shows significant gains in ROUGE scores as well. We also measured the performance of the different systems using Distinct-2 (Li et al., 2016a), which is the number of unique length-normalized bigrams in the responses. The metric can be a surrogate for measuring diverse outputs.
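A minimal sketch of the Distinct-2 computation used above (whitespace tokenization and corpus-level normalization are assumptions about details not spelled out in the text):

```python
# Illustrative Distinct-2: number of unique bigrams normalized by the total
# number of bigrams across a set of responses. Tokenization is an assumption.
def distinct_2(responses):
    bigrams, total = set(), 0
    for response in responses:
        tokens = response.split()
        for pair in zip(tokens, tokens[1:]):
            bigrams.add(pair)
            total += 1
    return len(bigrams) / max(total, 1)

print(distinct_2(["i heard about this news", "i heard about this movie"]))
```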
We see that our generators using reranking approaches improve on this metric as well. Table 4 also shows two sampled responses from different models.

Conversation 1
User: yes
Chatbot: is there something specific you want to discuss
User: fox news
Ground Truth: my developers are working hard to get you recent news you can say topics to hear some topics
S2S (Base): I found
S2S RR: I heard about this
S2S FT: I heard of a man it may be interested in this news mentioned
S2S RR FT: I heard about this news talked about this

Conversation 2
User: cool
Chatbot: back at you ok we could talk about movies or music
User: movies
Ground Truth: shall we chat about something else I love talking about music and movies
S2S (Base): so do you
S2S RR: who would you want to talk
S2S FT: what actor love most
S2S RR FT: what actor

Table 4: Two randomly selected qualitative examples of responses

| Metric | S2S (Base) | S2S RR | S2S FT | S2S RR FT |
| --- | --- | --- | --- | --- |
| BLEU-4 | 5.9 | 11.6 (+97%) | 6.2 (+5%) | 12.2 (+107%) |
| ROUGE-2 | 5.1 | 6.3 (+24%) | 5.3 (+4%) | 6.8 (+33%) |
| Distinct-2 | 0.011 | 0.017 (+54%) | 0.011 (-1%) | 0.017 (+53%) |

Table 5: Generator performance on automatic metrics.

To further analyze the impact of a reranker trained to optimize the BLEU score, we trained a baseline response generation system on a Reddit data set4, which comprises 9 million comments and corresponding response comments. All the hyperparameter settings followed the setting of training on the Alexa Prize conversational dataset.

4 We use a publicly available dataset (Baumgartner, 2015).

We trained a new reranker for the Reddit data using the evaluator scores obtained from the models proposed in Section 4. We show in Table 6 that even though the evaluators are trained on a different data set, the reranker learns to select better responses, nearly doubling the BLEU scores as well as improving on the Distinct-2 score. Thus, the evaluator generalizes in selecting more coherent and engaging responses in human-human interactions as well as human-computer interactions. As fine-tuning the evaluator is computationally expensive, we did not fine-tune it on the Reddit dataset.

| Metric | S2S (Base) | S2S RR |
| --- | --- | --- |
| BLEU-4 | 3.9 | 7.9 (+103%) |
| ROUGE-2 | 0.6 | 0.8 (+33%) |
| Distinct-2 | 0.0047 | 0.0086 (+82%) |

Table 6: Response Generator on Reddit Conversations. Due to the size of the dataset we could not fine-tune these models.

The closest baseline that used BLEU scores for evaluation in an open-domain setting is from Li et al. (2015), where they trained the models on Twitter data using Maximum Mutual Information (MMI) as the objective function. They obtained a BLEU score of 5.2 in their best setting on Twitter data (average length 23 chars), which is relatively less complex than Reddit (average length 75 chars).

| Metric | S2S (Base) | S2S RR | S2S FT | S2S RR FT |
| --- | --- | --- | --- | --- |
|  | 2.34 | 2.42 | 2.36 | 2.55 |
|  | 1.80 | 2.16 | 1.87 | 2.31 |

Table 7: Mean ratings for Qualitative and Human Evaluation of Response Generators

# 6.3 Human Evaluation

As noted earlier, automatic evaluation metrics may not be the best way to measure chatbot response generation performance. Therefore, we performed a human evaluation of our models. We asked annotators to provide ratings on the system responses from the models we evaluated, i.e., the baseline model, S2S RR, S2S FT, and S2S RR FT. A rating was obtained on two metrics: coherence and engagement. Coherence measures how much the response is comprehensible and relevant to a user's request, and engagement shows the interestingness of the response (Venkatesh et al., 2018). We asked the annotators to provide the rating on a scale of 1–5, with 5 being the best. We had four annotators rate 250 interactions.
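A minimal sketch of how inter-annotator agreement of the kind reported next can be computed; the rating arrays below are placeholders, and sklearn's `cohen_kappa_score` is used directly.

```python
# Illustrative inter-annotator agreement check between two annotators' ratings.
from sklearn.metrics import cohen_kappa_score

annotator_a = [5, 3, 4, 2, 5, 1, 3, 4]
annotator_b = [4, 3, 4, 2, 5, 2, 3, 5]
print(cohen_kappa_score(annotator_a, annotator_b))
```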
Table 7 shows the performance of the models on the proposed metrics. Our inter-annotator agreement is 0.42 on Cohen's Kappa Coefficient, which implies moderate agreement. We believe this is because the task is relatively subjective and the conversations were performed in the challenging open-domain setting. The S2S RR FT model provides the best performance across all the metrics, followed by S2S RR, followed by S2S FT.

# 7 Conclusion

Human annotations for conversations show significant variance, but it is still possible to train models which can extract a meaningful signal from the human assessment of the conversations. We show that these models can provide useful turn-level guidance to response generation models. We design a system using various features and context encoders to provide turn-level feedback in a conversational dialog. Our feedback is interpretable on 2 major axes of conversational quality: engagement and coherence. We also plan to provide similar evaluators to the university teams participating in the Alexa Prize competition. To show that such feedback is useful in building better conversational response systems, we propose 2 ways to incorporate this feedback, both of which help improve on the baselines. Combining both techniques results in the best performance. We view this work as complementary to other recent work in improving dialog systems such as Li et al. (2015) and Shao et al. (2017). While such open-domain systems are still in their infancy, we view the framework presented in this paper to be an important step towards building end-to-end coherent and engaging chatbots.

# References

Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, volume 29, pages 65–72.

Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1–34.

Baumgartner. 2015. Reddit. https://archive.org/details/2015_reddit_comments_corpus. [Accessed: 2018-07-01].

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv:1607.04606.

Alessandra Cervone, Evgeny Stepanov, and Giuseppe Riccardi. 2018. Coherence models for dialogue. Proc. Interspeech 2018, pages 1011–1015.

Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011.

Dominic Espinosa, Rajakrishnan Rajkumar, Michael White, and Shoshana Berleant. 2010. Further meta-evaluation of broad-coverage surface realization. In EMNLP, pages 564–574.

Yvette Graham. 2015. Accurate evaluation of segment-level machine translation metrics.

Fenfei Guo, Angeliki Metallinou, Chandra Khatri, Anirudh Raju, Anu Venkatesh, and Ashwin Ram. 2017. Topic-based evaluation for conversational bots. arXiv:1801.03622.

Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. arXiv preprint arXiv:1805.06087.

Chandra Khatri, Rahul Goel, Behnam Hedayatnia, Angeliki Metallinou, Anushree Venkatesh, Raefer Gabriel, and Arindam Mandal. 2018.
Contextual topic modeling for dialog systems. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 892–899. IEEE.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv:1510.03055.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of NAACL-HLT, pages 110–119.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. arXiv:1603.06155.

Jiwei Li, Will Monroe, and Dan Jurafsky. 2017. Data distillation for controlling specificity in dialogue generation. arXiv:1702.06703.

Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016c. Deep reinforcement learning for dialogue generation. In EMNLP, pages 1192–1202.

Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In NAACL-HLT, pages 71–78.

Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv:1603.08023.

Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. arXiv:1803.02893.

Ryan Lowe, Michael Noseworthy, Iulian V Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. arXiv:1708.07149.

Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv:1506.08909.

Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv:1508.04025.

Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of compositional distributional semantic models. In LREC, pages 216–223.

Brian W Matthews. 1975. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA)-Protein Structure, 405(2):442–451.

Alexander H Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Jason Weston. 2017. Parlai: A dialog research software platform. arXiv:1705.06476.

Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. arXiv preprint arXiv:1707.06875.

Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In ACL, page 271.

Ioannis Papaioannou, Amanda Cercas Curry, Jose L Part, Igor Shalyminov, Xinnuo Xu, Yanchao Yu, Ondřej Dušek, Verena Rieser, and Oliver Lemon. 2017. Alana: Social dialogue using an ensemble model and a ranker trained on user feedback. Alexa Prize Proceedings.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318.

Ashwin Ram, Rohit Prasad, Chandra Khatri, Anu Venkatesh, Raefer Gabriel, Qing Liu, Jeff Nunn, Behnam Hedayatnia, Ming Cheng, Ashish Nagar, Eric King, Kate Bland, Amanda Wartick, Yi Pan, Han Song, Sk Jayadevan, Gene Hwang, and Art Pettigrue. 2017.
Conversational ai: The science behind the alexa prize. In 1st Proceedings of Alexa Prize.

Alan Ritter, Colin Cherry, and William B Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 583–593. ACL.

Iulian Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016a. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv:1605.06069.

Iulian V Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, et al. 2017. A deep reinforcement learning chatbot. arXiv:1709.02349.

Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016b. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI.

Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016c. Building end-to-end dialogue systems using generative hierarchical neural network models. arXiv:1507.04808.

Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating high-quality and informative conversation responses with sequence-to-sequence models. arXiv:1701.03185.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv:1506.06714.

Andreas Stolcke, Elizabeth Shriberg, Rebecca Bates, Noah Coccaro, Daniel Jurafsky, Rachel Martin, Marie Meteer, Klaus Ries, Paul Taylor, Carol Van Ess-Dykema, et al. 1998. Dialog act modelling for conversational speech.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112.

Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2017. Ruber: An unsupervised method for automatic evaluation of open-domain dialog systems. arXiv:1701.03079.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000–6010.

Anu Venkatesh, Chandra Khatri, Ashwin Ram, Fenfei Guo, Raefer Gabriel, Ashish Nagar, Rohit Prasad, Ming Cheng, Behnam Hedayatnia, Angeliki Metallinou, et al. 2018. On evaluating and comparing open domain dialog systems. arXiv preprint arXiv:1801.03625.

Ellen M Voorhees and Hoa Trang Dang. 2003. Overview of the trec 2003 question answering track. In TREC, volume 2003, pages 54–68.

Marilyn A Walker, Diane J Litman, Candace A Kamm, and Alicia Abella. 1997. Paradise: A framework for evaluating spoken dialogue agents. In EACL, pages 271–280.

TH Wen, M Gašić, N Mrkšić, PH Su, D Vandyke, and S Young. 2015. Semantically conditioned lstm-based natural language generation for spoken dialogue systems. In EMNLP, pages 1711–1721.

Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI, volume 17, pages 3351–3357.

Kaisheng Yao, Baolin Peng, Geoffrey Zweig, and Kam-Fai Wong. 2016. An attentional neural conversation model with improved specificity. arXiv:1606.01292.

Zhou Yu, Ziyu Xu, Alan W Black, and Alexander Rudnicky. 2016. Strategy and policy learning for non-task-oriented conversational systems. In SIGDIAL, pages 404–412.
Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018. Learning to control the specificity in neural response generation. In ACL, volume 1, pages 1108–1117.

Ganbin Zhou, Ping Luo, Rongyu Cao, Fen Lin, Bo Chen, and Qing He. 2017. Mechanism-aware neural machine for dialogue response generation. In AAAI, pages 3400–3407.
{ "id": "1605.06069" }
1904.12774
Routing Networks and the Challenges of Modular and Compositional Computation
Compositionality is a key strategy for addressing combinatorial complexity and the curse of dimensionality. Recent work has shown that compositional solutions can be learned and offer substantial gains across a variety of domains, including multi-task learning, language modeling, visual question answering, machine comprehension, and others. However, such models present unique challenges during training when both the module parameters and their composition must be learned jointly. In this paper, we identify several of these issues and analyze their underlying causes. Our discussion focuses on routing networks, a general approach to this problem, and examines empirically the interplay of these challenges and a variety of design decisions. In particular, we consider the effect of how the algorithm decides on module composition, how the algorithm updates the modules, and if the algorithm uses regularization.
http://arxiv.org/pdf/1904.12774
Clemens Rosenbaum, Ignacio Cases, Matthew Riemer, Tim Klinger
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20190429
20190429
9 1 0 2 r p A 9 2 ] G L . s c [ 1 v 4 7 7 2 1 . 4 0 9 1 : v i X r a # ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION # A PREPRINT Clemens Rosenbaum College of Information and Computer Sciences University of Massachusetts Amherst [email protected] # Ignacio Cases Linguistics and Computer Science Departments Stanford University [email protected] # Matthew Riemer, Tim Klinger IBM Research {mdriemer,tklinger}@us.ibm.com January 10, 2022 # ABSTRACT Compositionality is a key strategy for addressing combinatorial complexity and the curse of dimensionality. Recent work has shown that compositional solutions can be learned and offer substantial gains across a variety of domains, including multi-task learning, language modeling, visual question answering, machine comprehension, and others. However, such models present unique challenges during training when both the module parameters and their composition must be learned jointly. In this paper, we identify several of these issues and analyze their underlying causes. Our discussion focuses on routing networks, a general approach to this problem, and examines empirically the interplay of these challenges and a variety of design decisions. In particular, we consider the effect of how the algorithm decides on module composition, how the algorithm updates the modules, and if the algorithm uses regularization. Keywords Compositionality · Modularity · Meta Learning · Deep Learning · Decision Making # Introduction In machine learning, and in deep learning in particular, modularity is becoming a key approach to reducing complexity and improving generalization by encouraging a decomposition of complex systems into special- ized sub-systems. Such observations have similarly motivated studies in human cognition (compare [Bechtel and Richardson, 2010] for an overview). In theory, models without special affordances for modularity can learn to specialize groups of neurons for specific subtasks, but in practice this appears not to be the case. To address this issue, a number of models have been introduced to enforce specialization [Jacobs et al., 1991b, Miikkulainen, 1993, Jordan and Jacobs, 1994, Bengio et al., 2013, Davis and Arel, 2013, Andreas et al., 2015, Bengio et al., 2015, Shazeer et al., 2017, Fernando et al., 2017]. Much of this earlier work has either fixed the composition strategy and learned the modules (neurons or whole networks) or fixed the modules and ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION learned the composition strategy. But in its most general form, the compositionality problem is to jointly learn both the parameters of the modules and a strategy for their composition with the goal of solving new tasks. Recently we proposed a new paradigm called, routing [Rosenbaum et al., 2017], which is to our knowledge the first approach to jointly optimize both modules and their compositional strategy in the general setting. A routing network (depicted in Figure 1) consists of a set of modules (parameterized functions) from which a router (the composition strategy) can choose a composition. In a neural network setting, the modules are sub-networks and the router assembles them into a model that processes the input. In making these decisions the routing network can be viewed as selecting a model (path) from the set of combinatorially many such models, one for each possible sequence of modules. 
This connects routing to a body of related work in conditional computation, meta-learning, architecture search and other areas. Because routing networks jointly train their modules and the module composition strategy, they face a set of challenges which non-routed networks do not. In particular, training a routing network is non-stationary from both the perspective of the router, and from the perspective of the modules, because the optimal composition strategy depends on the module parameters and vice versa. output y = t3(t2(t1(x))) h4 t3 t1 t2 h3 π(h) t3 t1 t2 h2 router t1 t3 t2 h1 # input x Figure 1: A routing network. The router con- sists of a parameterized decision maker that iteratively selects modules (i.e. trainable func- tions). Each selected function is then applied to the latest activation, resulting in a new acti- vation, which can then again be transformed. The training of a routing network happens on- line, i.e., the output of the model is used to train the transformations using backpropaga- tion and Stochastic Gradient Descent (SGD), and is simultaneously used to provide feedback to the decision maker. _ There is relatively little research on the unique challenges faced when training these networks, which are highly dynamic. In this paper, we conduct an extensive empirical investigation of these challenges with a focus on routing networks. In Sections 2 we review notable related ideas that share a similar motivation to routing networks. In Section 3, we identify five main challenges for routing. One such challenge is to stabilize the interaction of the module and composition strategy training. Another is module collapse [Kirsch et al., 2018], which occurs when the choices made by the composition strategy lack diversity. Overfitting is also a problem which can be severe in a routed network because of the added flexibility of composition order. We also discuss the difficulty extrapolating the learning behavior of heterogeneous modules to better select those with greater potential. Adding to these difficulties is the lack of a compelling mathematical framework for models which perform modular and compositional learning. This paper is the first to consider all of these challenges jointly. One benefit of such a holistic view of the challenges has been that we were able to identify a clear relationship between collapse and overfitting that holds for any form of modular learning. We discuss this in more detail in Section 3. In Section 4, we present a detailed overview of routing, analyzing two strategies for training the router: reinforcement learning and recently introduced reparameterization strategies. There we also identify important design options, such as the choice of optimization algorithm and the router’s architecture. In Section 5, we empirically compare these different choices and how they influence the main challenges we have identified. We conclude with thoughts on promising directions for further investigation. # 2 Background and Related Work Routing networks are clearly related to task-decomposition modular networks [Jacobs et al., 1991a], and to mixtures of experts architectures [Jacobs et al., 1991b, Jordan and Jacobs, 1994] as well as their modern attention based [Riemer et al., 2016] and sparse [Shazeer et al., 2017] variants. The gating network in a typical mixtures of experts model takes in the input and chooses an appropriate weighting for the output of each expert network. 
This is generally implemented as a soft mixture decision as opposed to a hard routing decision, allowing the choice to be differentiable. Although the sparse and layer-wise variant presented in 2 ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION [Shazeer et al., 2017] does save some computational burden, the proposed end-to-end differentiable model is only an approximation and does not model important effects such as exploration vs. exploitation trade-offs, despite their impact on the system. Mixtures of experts models, or more generally soft parameter sharing models, have been considered in the multi-task and lifelong learning setting [Stollenga et al., 2014, Aljundi et al., 2017, Misra et al., 2016, Ruder et al., 2017, Rajendran et al., 2017], but do not allow for nearly the level of specialization as those based on routing. This is because they do not eliminate weight sharing across modules and instead only gate the sharing. In practice, this still leads to significant interference across tasks as a result of a limited ability to navigate the transfer-interference trade-off [Riemer et al., 2019] in comparison to models that make hard routing decisions. # 2.1 Existing Approaches to Routing Networks Routing networks as a general paradigm of composing trainable transformations were first introduced in [Rosenbaum et al., 2017]. Since then, there have been several approaches extending them. Chang et al. [2019] show that a vanilla routing network can learn to generalize over patterns and thus to classes of unseen samples if it is trained with curriculum learning. Kirsch et al. [2018] identify one of the challenges of compositional computation, collapse, and develop a new policy gradient based routing algorithm that utilizes EM techniques to alternatingly group samples to transformations, and then applies them. Ramachandran and Le [2019] investigate the problem of architectural diversity over transformations, but use a top-k routing approach. Cases et al. [2019] show how routing can be paired with high quality linguistic annotations to learn compositional structures that maximize utilization of task-specific information. Alet et al. [2018] combine a routing-like approach with other meta-learning approaches, to allow for quick adaptation to new tasks. However, this approach relies on pre-trained composable transformations. # 2.2 Generalized Architecture based Meta-Learning Routing networks extend a popular line of recent research focused on automated architecture search, or more generally, architecture-based meta-learning. In this work, the goal is to reduce the burden on the algorithm designers by automatically learning black box algorithms that search for optimal architectures and hyperparameters. These approaches have been learned using reinforcement learning [Zoph and Le, 2017, Baker et al., 2017], evolutionary algorithms [Miikkulainen et al., 2017, Fernando et al., 2017], approximate random simulations [Brock et al., 2017], and adaptive growth [Cortes et al., 2016]. Liang et al. [2018] introduced an evolutionary algorithm approach targeting multi-task learning that comes very close to the original formulation in Rosenbaum et al. [2017]. However, routing networks are a generalization of these approaches [Ramachandran and Le, 2019] and are highly related in particular to the concept of one-shot neural architecture search [Brock et al., 2017, Pham et al., 2018, Bender et al., 2018]. 
The main distinction we are making here is that the benefit of routing as not solely related to parameter sharing across examples, but as also related to architectural biases inherent to specific network structures that may make them helpful for specific problems. # 2.3 Biological Plausibility The high-level idea of task specific “routing" is well founded in biological studies and theories of the human brain [Gurney et al., 2001, Buschman and Miller, 2010, Stocco et al., 2010]. The idea is that regions of the brain collaborate in a complex manner by altering the synchrony in neural activity between different areas and thus changing their effective connectivity so that signals are routed in a task specific coordination. It has been found that coincidence of spikes from multiple neurons converging on a post-synaptic neuron have a super-additive effect [Aertsen et al., 1989, Usrey and Reid, 1999, Engel et al., 2001, Salinas and Sejnowski, 3 ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION 2001, Fries, 2005]. As a result, if neurons tuned to the same stimulus synchronize their firing, that stimulus will be more strongly represented in downstream areas. It is thought that local synchrony may help the brain to improve its signal to noise ratio while, at the same time, reducing the number of spikes needed to represent a stimulus [Aertsen et al., 1989, Tiesinga et al., 2002, Siegel and König, 2003]. The routing of modules can also be loosely linked to two fundamental aspects of the organization of the brain: functional segregation of neural populations and anatomic brain regions as specialized and independent cortical areas, and functional integration, the complementary aspect that accounts for coordinated interaction [Tononi et al., 1994]. Recently, Kell et al. [2018] have been able to replicate human cortical organization in the auditory cortex using a neural model with two distinct, task-dependent pathways corresponding to music and speech processing. When evaluated in real world tasks, their model performs as well as humans and makes human-like errors. This model is also able to predict with good accuracy fMRI voxel responses throughout the auditory cortex, which suggest certain level of convergence with brain-like representational transformations. Kell et al. [2018] use these promising results to suggest that the tripartite hierarchical organization of the auditory cortex commonly found in other species may also become evident in humans once more data is available and more realistic training mechanisms are employed. In particular, their results suggest that the architectural separation of the processing of the streams is compatible with functional segregation observed in the non-primary auditory cortex [Kell et al., 2018, and references therein]. Finally, as we will discuss briefly later, routing networks can be seen as a special case of coagent networks [Thomas and Barto, 2011], which are believed to be more biologically plausible than full end to end backpropogation networks. # 2.4 Other Similar Forms of Modular Learning Modular organization is central in research strategies in cognitive sciences, particularly in neuropsychology, where decomposition and localization emerge as crucial scientific research strategies [Bechtel and Richardson, 2010]. 
In neuropsychology, deficits in high-level cognitive functions are regularly associated with impairments of particular regions of the brain that are regarded as modules, typically under the assumptions that these modules operate independently and, to variying degrees, that their effects are local [Farah, 1994, Shallice, 1988, Bechtel and Abrahamsen, 2002]. These observations have a long tradition in cognitive science, where the nature of cognitive task-specific modules and the implementation details of compositional computation have been highly debated [Fodor, 1975, 1983, Fodor and Pylyshyn, 1988, Touretzky and Hinton, 1988, Smolensky, 1988, i.a.].1 Among others, there are two notable early attempts to build modular systems with a cognitive focus: Miikkulainen [1993] developed a modular system for Natural Language Understanding with task-specific components, some of them with correlates to cognitive functions; and Jacobs et al. [1991a]’s modular networks, a modular and conditional computation model that performed task decomposition based on the input and is, to some extent, a direct ancestor of routing networks. Another important framework for compositional learning that is also influenced by the challenges of routing discussed in this paper is the options framework [Sutton et al., 1999] for hierarchical reinforcement learning. In particular, challenges of routing are related to those experienced by end to end architectures for learning sequencing between policies over time with neural networks [Bacon et al., 2017] and extensions to hierarchical policies [Riemer et al., 2018]. Option models, similar to routing networks, are known to experience "option collapse" where either only one option is actually used or all options take on the same meaning. This common difficulty has motivated recent research showing improvement in learned options by imposing information theoretic objectives on learning that incentivize increased option diversity and increased entropy in option selection [Gregor et al., 2016, Florensa et al., 2017, Hausman et al., 2018, Eysenbach et al., 2018, 1A review of this long and at times acrimonious debate is outside of the scope of this paper. For particularly interesting accounts we refer to Bechtel and Abrahamsen [2002] and Marcus [2001]. 4 ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION Harutyunyan et al., 2019]. In contrast to routing networks, option models are also concerned with deciding the duration over which its decisions will last. In this paper we focus just on empirical analysis of the simpler case of routing at every time step so that we can directly address diversity of the chosen modules without introducing the additional complicating factor of sequencing selections over time. # 3 Challenges to Compositional Computation We have experienced several challenges particular to training compositional architectures, including training stability, module collapse, and overfitting as well as difficulties with performance extrapolation and formaliz- ing the setting. Instability in training may occur because of a complex dynamic interplay between the router and module training. Module collapse may occur if module selection collapses to a policy which makes the same decision for all inputs. Overfitting may be severely exacerbated in modular architectures because of their added flexibility to learn specialized modules for a very narrow subset of samples. 
Successfully extrapolating the performance of specific modules out over the course of training would potentially allow a more successful selection strategy, but achieving this is a difficult problem itself. Finally, we lack a good formalization of popular training methodologies, unifying reinforcement learning for the module selection training with supervised training for the modules. There has been some progress on each of these problems (for example, module collapse was discussed in [Kirsch et al., 2018]). However, we believe this is the first time they have been collected and given a systematic treatment jointly. One benefit of such a holistic view of the challenges has been that we were able to identify a clear relationship between collapse and overfitting that holds for any form of modular learning. One may argue that core problems of a more general nature underlie these issues. In fact, we can see that learning to compose is difficult because we must simultaneously balance two challenges of learning: the transfer-interference trade-off [Riemer et al., 2019], and the exploration-exploitation dilemma [Sutton and Barto, 1998].

The “transfer-interference trade-off” refers to the problem of choosing which parameters in the model are shared across different input samples or distributions. When there is a large amount of sharing of model parameters during training we may see better performance as a consequence of transfer, since each parameter is trained on more data. But it may also lead to worse performance if the training on different samples produces updates that ‘cancel’ each other out, or cause interference. Compositional architectures can offer an interesting balance between the two, as different modules may be active for different samples.

(Unnumbered figure: the gradient pairs ∇θLi and ∇θLk, illustrating interference versus transfer.)

The “exploration-exploitation dilemma”, while mostly associated with reinforcement learning, exists for any kind of hard stochastic update. For modular learning, this means that the composer has to strike a balance between ‘exploiting’, i.e. selecting and training already known modules which may perform well at some point in time during training, and ‘exploring’, or selecting and training different modules that may not perform well yet, but which may become the globally optimal choice, given sufficient additional training. Unfortunately, high exploration increases the likelihood of both beneficial transfer and unwanted interference. But low exploration may bias the model towards selections which limit its ability to achieve an optimal balance between transfer and interference. Striking the right balance between exploration and exploitation can therefore help with interference but is not sufficient to mitigate it entirely. This entanglement complicates the learning and means we cannot treat these tradeoffs in isolation.

# 3.1 Training Stability

One of the most challenging problems for routing networks is overcoming the early learning phase when both modules and the router have just been initialized. In this phase, the router needs to learn the value of the modules, while these are effectively randomly trained and have not yet taken on any real meaning, since pathways and gradient-flows are not yet stable. In our experience, this “chicken-and-egg” problem of stabilizing the early training of a routing network, before paths have had the opportunity to specialize, can result in the routing dynamics never stabilizing, leaving the network at effectively random performance. This problem roughly correlates with the complexity of the decision making problem.
If the router only needs to make decisions on a very local distribution of inputs, then stability is less of a problem. If, however, the router has to consider very complex distributions, it consistently struggles.

A mitigating strategy can be to slow down the learning behavior of either the modules or the router, by either fixing their parameters for a short initial period, or by reducing their learning rate. In practice, we have found that simply reducing the learning rate of the router is an effective strategy to stabilize early training. Another strategy that can be applied is curriculum learning [Bengio et al., 2009], as is done by Chang et al. [2019]. This can be very effective, but can only be applied if the training data can naturally be ordered by complexity, which is not the case in many important domains. Neither of these strategies offers a general solution to the problem. They may work in some settings and fail in others.

# 3.2 Module Collapse

Another common training difficulty is module collapse [Kirsch et al., 2018]. This occurs when the router falls into a local optimum, choosing one or two modules exclusively. Module collapse occurs when the router overestimates the expected reward of a selection. This may happen due to random initialization or because one module has a higher expected return during early training. Either way, the module will be chosen more often by the router, and therefore receive more training. This in turn improves the module so that it will be selected yet more often and receive yet more training until the module is dominant and no others seem promising.

Figure 4: An example of how a 1-dimensional linear routing problem can collapse. (a) A dataset with two linear modes. (b) The desired routing solution; the model captures both modes. (c) The routing modules at initialization. (d) Module collapse: the router reaches a local optimum using the green module only.

As an example, consider Figure 4, which depicts a routing problem of one-dimensional linear regression (since it is easily illustrated). In plot (a), we depict a dataset with two noisy linear modes. Plot (b) shows the perfect, and desired, routing solution, where the routing model correctly assigns a routing path to each mode. Plot (c) depicts the regression curves produced by the available modules at initialization. Since the red and yellow approximations have been initialized producing too great an initial loss, the router will only update these during exploration phases, and will instead choose the green approximation. This may result in a
Existing solutions to this problem include: adding regularization rewards that incentivize diversity [Cases et al., 2019]; separating the training of the router from the training of the modules with an EM-like approach that learns over mini-batches, explicitly grouping samples first [Kirsch et al., 2018]; and conditioning the router entirely on discrete meta-data associated with the sample, e.g., task labels [Rosenbaum et al., 2017, Cases et al., 2019]. This has the additional benefit of reducing the dimensionality of the information on which the decisions are made, which, in turn, makes the routing problem easier. # 3.3 Overfitting (a) A dataset (b) A linear approximation (c) A routed approximation Figure 5: Illustration of how a routed model may overfit. The learned parameter values of the three linear transformations are a = 3, b = 0.1, c = 0.8. Previous work [Kirsch et al., 2018, Shazeer et al., 2017] observes that models of conditional computation can overfit more than their “base”-models, but do examine this issue in depth. We have also encountered this problem multiple times, and believe that it stems from the flexibility of routing models to compose highly local approximations. Consider the example in Figure 5, where we again route a linear scalar approximator. Suppose we have a training dataset consisting of observations of a noisy linear process, such as the one in Figure 5(a). It can be clearly seen that a linear model gives good (and desired) approximations, as in Figure 5(b), and that it does not overfit to the noise present in the data. However, imagine that we wanted to learn such an approximation with a routing model that consists of three parameterized functions, f1 = a · x, f2 = b · x, f3 = c · x, which the router can combine to a maximum depth of three. If the router now routes purely based on input information, i.e., if the router can route each sample through a different route, then it may choose a different path for different subsets of the training data, resulting in different (local) approximations as illustrated in Figure 5(c). For neural architectures in particular, this implies that routing breaks an implicit smoothness assumption which, arguably, allows them to generalize well in spite of high dimensionality (see Zhang et al. [2016] for a further discussion). Since the space of routes or paths grows exponentially with routing depth, deep routing networks can potentially learn highly local approximations with only few samples covered per approximation. This perspective also explains why Rosenbaum et al. [2017] and Cases et al. [2019] do not experience overfitting: if the path of a sample is independent of its input value or its intermediate activations, the resulting 7 ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION paths and approximations cannot learn to match individual areas in activation-space only, but are constrained by the meta-information instead. Within the transfer-interference trade-off, overfitting is a natural consequence of avoiding training interference. In models of uniform, i.e., non-compositional, computation, the training algorithm always enforces some amount of interference over the model’s parameters. While often harmful, it can also steer training away from local suboptimal solutions that cause the model to overfit. Since this problem is not well investigated for machine learning models, we can only speculate on possible solutions. 
The most obvious is to apply regularization of some kind that prevents the router from fitting samples to highly expressive local approximators. A very successful solution with a different set of problems is the ‘routing-by-meta-information’ architecture of [Rosenbaum et al., 2017, Cases et al., 2019]. Another possible solution might be module diversity: if only different kinds of approximators can be fit the “locality” of routing does not have the same effect. This would explain why Ramachandran and Le [2019] were able to achieve very high performance with a modular architecture even when learning with few examples, although we need to take their particular top-k architecture into account. Another uninvestigated area which seems promising for exploration is the tradeoff in capacity between the router and the modules. Is it better to have a ‘smart’ router and ‘dumb’ modules or vice versa? Either will probably have an effect on overfitting. # The Flexibility Dilemma for Modular Learning Comparing the two challenges discussed this far, it becomes clear that they are not unrelated, but are an effect of too little – or too much – flexibility on the router’s part, where flexibility is a routing network’s ability to localize its routing decisions. If the router is not flexible enough to realize how truly different modes in a distribution should be treated differently, it will cause underfitting, oftentimes through collapse. If, on the other hand, the router is too flexible in mapping different inputs to different routing paths, it creates hyperlocal approximations that can overfit badly. n o i t a z i l a r e n e G t s e T Underfitting Overfitting Flexibility This goes to the core of what routing does, and why it can be such a powerful machine learning model. Routing always has an enormous degree of expressivity, as we have a combinatorial number of implicitly defined models, in the form of paths. If the model finds a good ‘locality’ of approximations, routing can allow a model to achieve truly impressive levels of general- ization (compare [Chang et al., 2019], and in particular the section on nested implicatives in [Cases et al., 2019]), as it can adapt to unseen samples by mapping these to known solutions in a highly variable way. However, on both sides of this ‘locality’ of approximation lie the above mentioned pitfalls of collapse and overfitting. While we illustrated and explained this problem on routing (neural models), it is by no means limited to it. Any modular approach that can treat different samples differently will be exposed to this dilemma between under- and overfitting, as it comes with any form of local approximation approach. # 3.4 Module Diversity While much of the prior work has focused on the case where each module is of the same kind but with different parameters, routing modules that have different architectures has the potential to be even more 8 > ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION powerful and sample efficient. This allows for a larger coverage of functional capabilities, thereby increasing the expressive power of a routing network. Interestingly, this is a specific instance of the “Algorithm Selection Problem” [Rice, 1976]. For learning problems, however, the problem is complicated by the different learning dynamics that different architectures might exhibit. Consider Figure 7. 
For any learning algorithm selection problem, deciding that Module 2 will yield superior performance requires the meta-learning algorithm to sufficiently explore – and thus train – both selections. However, for routing networks, exploration correlates with interference. This means that even if the router were to explore Module 1 suffi- ciently, the sampling noise of exploration could potentially interfere with the training procedure of selections taken before or after. While some progress on this problem has been made [Ramachandran and Le, 2019], it focuses on selections with similar learning characteristics. The full problem, for selections with arbitrary learning characteristics, was only explored in an offline setting, where selections are trained until completion and not varied while training [Zoph and Le, 2017, Liu et al., 2018]. Finding optimal selections online remains an open problem due to the difficulty of anticipating future rewards. # 3.5 A uniform formal framework All of the previous challenges are complicated by the lack of a principled mathematical formalization for compositional learning. In particular, it is not clear how the training of the router relates to and interacts with the training of the modules. Although existing approaches have found very successful solutions, even without such a formalization, a principled framework may provide additional insight, convergence guarantees and directions for the development of better algorithms. Specifically, it is not clear how training of the modules is impacted by the training of the routers, and vice versa. For how compositionality may interfere with the training of the modules, compare section 5.3. For the other direction, consider the case of a reinforcement learning strategy for training the routers (arguably the most relevant update strategy). Reinforcement learning relies fundamentally on the assumption that the environment is governed by an underlying Markov Decision Process (MDP) [Sutton and Barto, 1998]. A naive though intuitive characterization of routing in this context might interpret individual modules as actions. But an MDP requires a static, non-changing set of actions and here the modules are themselves trained and change over time. Although in practice this strategy has been shown to work in any case [Rosenbaum et al., 2017, Chang et al., 2019, Cases et al., 2019], it lacks theoretical justification, and more principled approaches may yield superior performance. This critique extends to the two special cases of MDPs introduced in the literature: Meta-MDPs and stochastic games. Meta-MDPs were introduced by Hay et al. [2014] and applied to routing by Chang et al. [2019], and specifically model the computational steps required to make a decision in another underlying MDP. That is, their states are partial results, and their actions are computations. Once they terminate, the resulting computation produces an action in the underlying decision problem. This approach emphasizes that the routing problem consists of computation steps, and not steps taken in an environment. A stochastic game, related to routing by Rosenbaum et al. [2017] is the multi-agent extension to MDPs and allows for an arbitrary number of agents to exist, which collaborate or compete in the same environment. As other agents interact with the environment, the environment becomes non-stationary from the perspective of any single agent. 
While this approach consequently models the non-stationarity of a routing problem better than a Meta-MDP, it still does not solve the more fundamental problem of possible interference between update strategies.

There are two scenarios which have more principled solutions to this criticism, although they, too, may benefit from a more targeted solution as they suffer from many of the same problems in practice. In the first, for neural networks, we use some form of reparameterized sampling function to update the composition (compare section 4.2). This addresses the core of the problem since the reparameterized decision making algorithms explicitly use the same update strategy as the (neural) models. In the second scenario, the problem solved with a routing network is optimizing a policy for a given MDP. In this case, the routing network is the policy, and can consequently be modeled as a coagent network [Thomas and Barto, 2011]. A coagent network is a theoretical framework that, like Meta-MDPs, models the decision making in an underlying MDP but explicitly allows for non-stationary interactions between parts of the policy. These parts, called coagents, can update locally on stochastic information. For routing, we can model the policy solving the underlying MDP as a routing network with routing coagents and transformation coagents (i.e., the module parameters are part of the policy). Then the policy gradient theorems in [Thomas, 2011] for the acyclic and [Kostas et al., 2019] for the cyclic case hold, as the modules will also be updated with REINFORCE. Extending this work on coagent networks to also cover non-reinforcement learning problems, such as supervised learning, might offer a principled way of modeling routing and similar RL-integrated approaches for conditional computation.

# 4 Routing

Routing describes a general framework of repeatedly selecting trainable function modules with a trainable router. As such, arbitrary components of a machine learning model can be routed, as long as the model is composed by a sequential application of functions in a set of (compatible) functions. We will focus on routing networks, where the learnable function modules are neural networks and parts thereof. The router receives the state of the computation, the current activation – the input x at step 0 – and evaluates it to select the best function, which will produce a new activation, which then triggers a new computational loop. If the router decides that the result has been processed sufficiently, the last activation will be handed to other computations or interpreted as the output of the model. This result will then be used to jointly train the function modules and the router. As routing relies on hard decisions to select the modules (as opposed to ‘soft’ decisions, where several modules are activated and combined in different ways), training algorithms for the router are limited. While other approaches are conceivable (in particular genetic algorithms and other stochastic training techniques), we focus on Reinforcement Learning (RL) and Stochastic Reparameterization in this work.

Figure 8: Routing (forward) Example. Starting from input x, the router is queried in turn with router(x, m), router(f3(x), m), and router(f1(f3(x)), m), choosing among the modules f1, f2, f3 and the termination action ⊥, and producing ŷ = f1(f3(x)).

Algorithm 1: Routing Forward
input: x ∈ R^d, with d the representation dimension; m, possible meta-information
output: h, the tensor f_{a_1} ∘ f_{a_2} ∘ . . . ∘ f_{a_k}(x)
1: h ← x
2: while True do
3:     a ← router(h, m)
4:     store tuple (h, m, a)
5:     if a = ⊥ then
6:         return h
7:     else
8:         h ← module_a(h)
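A minimal sketch of this forward pass in the spirit of Algorithm 1; the module and router architectures, the depth cap, and the greedy batch-level decision rule are illustrative assumptions rather than the paper's exact implementation.

```python
# Illustrative routing forward pass: a router repeatedly picks a module (or
# the termination action) given the current activation and records the
# trajectory for later training. Shapes and the greedy choice are assumptions.
import torch
import torch.nn as nn

class RoutingNetwork(nn.Module):
    def __init__(self, dim: int = 16, num_modules: int = 3, max_depth: int = 10):
        super().__init__()
        self.modules_list = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_modules)]
        )
        # The router scores each module plus one extra logit for termination (⊥).
        self.router = nn.Linear(dim, num_modules + 1)
        self.terminate = num_modules
        self.max_depth = max_depth            # guard against never terminating

    def forward(self, h):
        trajectory = []                       # stored (h, a) tuples
        for _ in range(self.max_depth):
            logits = self.router(h)
            a = int(logits.mean(dim=0).argmax())   # greedy; could sample instead
            trajectory.append((h, a))
            if a == self.terminate:
                break
            h = self.modules_list[a](h)
        return h, trajectory

net = RoutingNetwork()
x = torch.randn(4, 16)
out, path = net(x)
print([a for _, a in path])
```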
# 4.1 Reinforcement Learning

Reinforcement learning describes a general machine learning paradigm in which an agent that makes a series of hard decisions is trained by providing simple scalar feedback, the reward. Applied to routing, the general outline is that the output of the model ŷ has to be translated into a reward r, which is then used to improve the router's policy π.

# 4.1.1 Routing MDPs

Markov Decision Processes (MDPs) are a common model for formally describing sequential decision making problems. While we have argued in section 3.5 that existing attempts to model compositional computation are problematic, we will still adopt a common formalism for now so that we can discuss some relevant concepts. Given a routing MDP (S, A, R, P, γ), a set of applicable transformations F, the space of samples X, the space of transformations H (i.e., the space of applications of members of F to X), and the space of possibly available meta-information M, we can define:

the states S = (X ∪ H) × M
the actions A = F ∪ {⊥}, where ⊥ is a termination action
the transition probability P(s, a, s′) = 1 if s = h, a = f, and s′ = f(h) for some h ∈ S and f ∈ F, and P(s, a, s′) = 0 otherwise
the discount factor γ = 1

The reward function R can freely be designed by the model designer and is not defined any further for now. Solving an MDP generally consists of finding a policy π*(θ) (parameterized by parameters θ) that maximizes the objective, or score function, J(π(θ)) := E(∑_i r_i | π(θ)).

# 4.1.2 Reward Design

One of the most important questions when modeling a problem as an MDP is how to design the reward function R. For a routing network there are two types of rewards to be considered. The first is the final reward that reflects the model's performance on the main problem, i.e. on predicting a sample's label. The second are different forms of regularization rewards that can either be computed for entire trajectories or as immediate responses to individual actions.

Final Rewards The final reward rf models the overall performance of the model for a sequence of routing decisions. For classification problems, the outcome is binary – correct or incorrect – and an obvious choice is a simple binary reward of ±1 corresponding to the prediction's agreement with the target label. As the objective for the training of the modules is a loss function Lm(y, ŷ), a different reward design for the same objective can be derived as the negative of the module loss: rf = −Lm(y, ŷ). The reward has to be the negative of the module loss as – by convention – we minimize model losses but maximize RL rewards. In addition, this allows for a natural application of routing networks to regression problems, as these cannot be translated into a simple binary ±1 reward. While the second reward design seems like the better choice, as it is both richer in information and a more principled meta-learning objective that coincides with the main model's objective, we found that it does not necessarily perform better in practice, depending on the problem at hand.
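As a small illustration of the two final-reward designs just described, the snippet below computes both variants for a classification batch; the function name, the `mode` switch, and the batch setup are purely illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def final_reward(logits, target, mode="binary"):
    """Final reward r_f per sample: +/-1 for correct/incorrect, or the negative module loss."""
    if mode == "binary":
        correct = logits.argmax(dim=-1) == target
        return torch.where(correct, torch.tensor(1.0), torch.tensor(-1.0))
    # Negative of the module loss: richer signal, and also applicable to regression-style losses.
    return -F.cross_entropy(logits, target, reduction="none")

logits = torch.randn(4, 5)               # a batch of 4 samples, 5 classes
target = torch.randint(0, 5, (4,))
r_binary = final_reward(logits, target, mode="binary")
r_neg_loss = final_reward(logits, target, mode="neg_loss")
```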
Regularization Rewards As the reward signal is sparse for complex compositions, it can also be quite helpful to incorporate additional rewards that can act as an intrinsic reward or regularizer of the choices made by the router. One option is to model problem-specific information in this reward – e.g. to reflect the cost of computation of choosing a specific function, or to incorporate domain knowledge, such as intuitions as to which module should be preferable. The second option, as employed in Rosenbaum et al. [2017], is to use this regularizer to incentivize transfer between different kinds of samples. There, the reward is defined to correlate with how often a particular function a has been chosen within a window of the last w samples, denoted C(a) – thereby motivating the router to choose, and thereby share, that function for a wider set of samples. This reward is defined as R(a) = (α/t) · C(a), with a ratio α, normalized by the trajectory length t – such that even for long trajectories this reward may be limited to be smaller than the final reward rf. Rosenbaum et al. [2017] only investigate values of α ∈ [0, 1]. However, this can lead to a lack of variety in decision making, as the selection collapses – as discussed in section 3.4 in detail. To compensate, Cases et al. [2019] investigate a different set of values for α, α ∈ [−1, 0], with the goal of incentivizing diversity of decision making.

# 4.1.3 Algorithms

Several RL algorithms have been successfully used to train routing policies. Rosenbaum et al. [2017] compare a variety of algorithms, but settle on a multi-agent algorithm, fitting their multi-agent approach to multi-task learning. Cases et al. [2019] find that plain Q-Learning performs best for their domain. Kirsch et al. [2018] extend a REINFORCE based algorithm with an alternating, EM-like training for modules and router that also optimizes for module diversity and performs lookaheads. Meanwhile, Chang et al. [2019] use Proximal Policy Optimization. While this might seem to suggest that there is not necessarily a "best" reinforcement learning algorithm, there are some constraints to be considered. The first is that the non-stationarity of the parameters of F prevents the use of replay buffers, as these would be invalidated after each transformation training step. Additionally, we argue that RL algorithms that can accommodate the change in the modules' respective values over training will perform best.

Consider the following example: given a set of module choices M and a value estimator ˆv(s, m), m ∈ M, s ∈ S, over a set of states S. After some training, the module parameters change, and with them the expected value of each module in a specific state sk; using some value-based RL algorithm, we also update ˆv. As, for routing, we generally want to exploit more than explore, the module estimates for state sk are of different quality: some modules have been sampled less from state sk than others. However, these modules may have been trained a lot from a different state sl. Given that there can be transfer between states sk and sl, this implies that the value of the less sampled modules in state sk will probably be underestimated, as the estimator ˆv does not account for the additional training from state sl. Consequently, this may "lock" the router in its selection early on (see also section 3.2 for a discussion of this problem). A possible solution to this problem – at least for RL algorithms with a value-based component – might be to use the Advantage instead of (Q-)values: Aπ(s, a) = Qπ(s, a) − Vπ(s), as the value component compensates for the inductive bias built up by training all transformations. Put differently, the increase in the on-policy average return, even when effectively only training one transformation, is not captured by the Q-component, but instead by the value component of the router. This may keep the difference in Q-values between different transformations low enough to avoid locking the router permanently into a particular transformation, thereby helping to overcome selection collapse.
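The sketch below illustrates the advantage-based selection for a tabular router over a discrete state such as a task label. The class and its one-step update rule are a simplified illustration of the idea, not the exact algorithm used in the cited papers.

```python
import numpy as np

class TabularAdvantageRouter:
    """Tabular router that tracks Q(s, a) and V(s) and selects modules by their advantage."""
    def __init__(self, num_states, num_actions, lr=0.1, eps=0.1):
        self.q = np.zeros((num_states, num_actions))
        self.v = np.zeros(num_states)
        self.lr, self.eps = lr, eps

    def select(self, s):
        if np.random.random() < self.eps:          # epsilon-greedy exploration
            return np.random.randint(self.q.shape[1])
        advantage = self.q[s] - self.v[s]          # A(s, a) = Q(s, a) - V(s)
        return int(np.argmax(advantage))

    def update(self, s, a, r):
        # One-step (bandit-style) updates; a full routing trajectory would bootstrap instead.
        self.q[s, a] += self.lr * (r - self.q[s, a])
        self.v[s] += self.lr * (r - self.v[s])     # V(s) absorbs the improvement shared by all modules

router = TabularAdvantageRouter(num_states=20, num_actions=3)
s = 4
a = router.select(s)
router.update(s, a, r=1.0)
```

Because V(s) rises as all modules improve, comparing advantages rather than raw Q-values is less sensitive to the shared increase in value that the argument above identifies as a source of premature locking.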
# 4.2 Stochastic Reparameterization

When it comes to hard decision making in differentiable architectures, stochastic reparameterization was recently introduced as an important alternative to Reinforcement Learning. While earlier work on stochastic reparameterization focused on low-dimensional stochastic distributions, Maddison et al. [2016] and Jang et al. [2016] discovered that the concrete – or Gumbel Softmax – distribution allows one to reparameterize k-dimensional distributions (we will refer to the corresponding decision making algorithm as "Gumbel"). Considering that these can be implemented as differentiable 'layers' in deep architectures, they appear to be the better choice for routing networks. Maddison et al. [2016] already note that the concrete distribution and its reparameterization do not yield an unbiased estimate of the gradient. The REINFORCE estimator [Williams, 1992] for the gradients of the score function, on the other hand, is known to be unbiased (but has very high variance). Therefore two more techniques were introduced to estimate the gradient of discrete random variables. The first, REBAR [Tucker et al., 2017], compensates for REINFORCE's variance by introducing a control variate based on the concrete distribution. The second, RELAX [Grathwohl et al., 2018], generalizes this to non-discrete and non-differentiable stochastic functions, and allows a neural network to model the control variate.

To better understand the relative performance of reparameterized approaches when compared to a reinforcement learning algorithm, we performed the following analysis. Given a policy π, a known distribution parameterized by θ, and a known reward function r, we do not sample, but instead analytically compute the value of the score function J(θ) = E(r | π(θ)) = r · π(θ), where · denotes the inner product. We can continue by analytically computing the gradient for all parameters θ. These gradients are the ground truth that policy gradient and reparameterization approaches only approximate. Relying on this ground-truth information, we can then approximate the same gradients δJ/δθ using REINFORCE, Gumbel and RELAX, and finally compute the average difference between the ground-truth values and the approximations.

[Figure 9: Average mean squared error between the ground-truth and estimated gradients (top) and average variance of the estimates (bottom), plotted over the number of actions, for REINFORCE, actor-critic, Gumbel, and RELAX.]

For Figure 9, we use this approach to compute both the average gradient differences and the average variances over all parameters. More specifically, we initialize a random reward function (where each reward is uniformly sampled from [0, 1]) of dimensionality k. We also randomly initialize a policy of dimensionality k, parameterized by θ. After computing the ground-truth gradient for a given pair of reward function and policy, we use the same reward function and policy to sample the gradients for θ 22 ∗ k times (so that on average each action will be sampled equally often, even with increasing dimensionality). For each k, we compute and approximate the gradients for 22 reward and 22 policy values, totalling over 10000 ∗ k datapoints for each point in Figure 9.
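The snippet below reproduces a single-sample version of this comparison, assuming a softmax-parameterized policy; the ground-truth gradient is computed analytically with autograd and compared against one REINFORCE sample and one Gumbel-softmax (concrete) relaxation. All names are illustrative.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
k = 8                                        # number of actions
r = torch.rand(k)                            # reward per action, uniform in [0, 1]
theta = torch.zeros(k, requires_grad=True)   # policy parameters (softmax logits)

# Ground truth: J(theta) = E[r | pi(theta)] = r . pi(theta), differentiated analytically.
pi = F.softmax(theta, dim=0)
J = torch.dot(r, pi)
true_grad, = torch.autograd.grad(J, theta)

# REINFORCE estimate from a single sample: r(a) * grad log pi(a); unbiased but high variance.
a = int(torch.multinomial(pi.detach(), 1))
log_prob = F.log_softmax(theta, dim=0)[a]
reinforce_grad, = torch.autograd.grad(r[a] * log_prob, theta)

# Gumbel-softmax (concrete) estimate: relax the one-hot sample and differentiate r . y; biased.
y = F.gumbel_softmax(theta, tau=1.0, hard=False)
gumbel_grad, = torch.autograd.grad(torch.dot(r, y), theta)

print(true_grad, reinforce_grad, gumbel_grad)
```

Averaging the squared error of many such estimates over random rewards and policies yields the kind of bias/variance comparison plotted in Figure 9.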
Figure 9 shows the mean squared error between the ground-truth gradient values and the sampled values using reparameterization and different RL policy gradient algorithms on top, and the corresponding variances on the bottom. While the gradients estimated by the Gumbel softmax trick are of much lower variance, they are also biased, with an average MSE larger than the MSE of the policy gradient algorithms. This is consistent with the analysis in [Maddison et al., 2016]. As for RELAX, we found very low variance for high dimensionalities, but – interestingly enough – a much higher MSE than REINFORCE and even than Gumbel, suggesting a higher bias. However, this may stem from the problem that RELAX relies on a trained surrogate network, which is difficult to train for these single-sample experiments. While we try to accommodate this requirement by updating this network over multiple samples first, this might not suffice to fully initialize RELAX. However, as we will show in Section 5, where RELAX and its surrogate network are trained over millions of iterations, it is still very difficult to draw clear conclusions about how reparameterization techniques relate to REINFORCE based approaches in practice.

# 4.3 Simultaneously Selecting Multiple Function Blocks

While in principle a routing network that only chooses a single function block at each step is capable of learning any function, it could be useful for sample efficiency to combine the output of multiple parallel routing paths. One popular approach for implementing this idea is to use a sparse version of the mixtures of experts architecture, as in [Shazeer et al., 2017, Ramachandran and Le, 2019]. This approach leverages a typical mixtures of experts gating network that combines the output of multiple experts using a weighted averaging procedure. However, in general we would like to have an even more expressive mechanism for combining the output from multiple parallel routing paths that can represent more complex relationships between the outputs of each path. As sparse mixtures of experts architectures only use the top K experts at each step, there is also an inherent exploitation and exploration dilemma that impacts the system, despite the fact that it is not modeled explicitly. As a result, sparse mixtures of experts architectures must include special procedures for adding noise to the system and load balancing to achieve strong performance [Shazeer et al., 2017]. In order to arrive at a general purpose solution, it makes more sense to model the selection of a subset of experts as an explicit reinforcement learning problem. For example, in [Bengio et al., 2015] each "expert" is gated by a Boolean routing decision that is modeled as a reinforcement learning problem.
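For contrast with the hard single-module decision used elsewhere in this paper, the following is a minimal sketch of a sparse top-k mixture-of-experts gate as discussed in this subsection. The noise injection and load-balancing terms of Shazeer et al. [2017] are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

class SparseTopKMoE(nn.Module):
    """Combine the outputs of the top-k experts with softmax-renormalized gate weights."""
    def __init__(self, dim, num_experts=4, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_experts)]
        )
        self.gate = nn.Linear(dim, num_experts)
        self.k = k

    def forward(self, x):
        scores = self.gate(x)                            # (batch, num_experts)
        top_vals, top_idx = scores.topk(self.k, dim=-1)  # keep only the k highest-scoring experts
        weights = torch.softmax(top_vals, dim=-1)        # renormalize over the selected experts
        outputs = []
        for b in range(x.shape[0]):                      # per-sample loop kept simple for clarity
            combined = sum(
                weights[b, j] * self.experts[int(top_idx[b, j])](x[b]) for j in range(self.k)
            )
            outputs.append(combined)
        return torch.stack(outputs)

moe = SparseTopKMoE(dim=16)
y = moe(torch.randn(8, 16))
```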
# 4.4 Training

[Figure 10: Routing (backward) example. The model loss L(ŷ, y) is backpropagated into the parameters θ(f2), θ(f3) of the selected modules (∂L/∂f2, ∂L/∂f3), while the final reward rf(ŷ, y) and the per-action rewards r(a1), r(a2) are used to train the routing decisions a1, a2, a3.]

Algorithm 2: Backward step
input: the network's output ŷ, the ground-truth target y, and the decision making trajectory T
1  compute the model loss L(ŷ, y)
2  compute the final reward rf(ŷ, y)
3  for each tuple (sk, ak, r(ak), sk+1, ak+1) in T do
4      compute the Bellman error (or other corresponding loss) for the current tuple, and add it to LRL
5  backprop on L + LRL and update using SGD

Training a routing network is illustrated in Figure 10, and the corresponding algorithm is Algorithm 2. The core idea is that the training of the router and of the modules happens simultaneously, after completing an episode, i.e., the forward pass. After the network has been assembled in the forward pass, the output of the network is translated into a loss for the module parameters and a final reward for a reinforcement learner. That reward, in combination with any accumulated per-action rewards, can be used to define a training loss for the router – either a Bellman error, a negative log probability loss, or some other loss function used to train a decision maker. The resulting losses can be added and then backpropagated along the decision making parameters to define a gradient for each parameter in the model. Backpropagating along the sum of the losses allows for higher mini-batch parallelization in the update process of the network.

SGD Finally, a gradient descent style algorithm can use these gradients to derive a new set of parameter values. However, it is worth noting that routing networks can be highly sensitive to the choice of optimization algorithm. We will discuss this further in Section 5.

Exploratory Actions Another useful change to the training procedure is to limit the updates of the transformations if they were chosen by an exploratory action. The intuition is that we do not want to add interference to the training of modules if they were just evaluated and found not to fit a particular sample. As, however, any exploratory action has an effect on the return of the entire trajectory, we simply squash the optimization step size of the modules for the particular trajectory using the following formula:

α = (1 − #exploratory actions / trajectory length)^κ   (1)

where α is a factor applied to the learning rate, and κ is a hyperparameter. For κ = 0, exploratory actions are treated no differently, while for a very large κ, the transformations are effectively not trained on the entire trajectory if even only one action was taken non-greedily.

Splitting training data One of the challenges of training a routing network is overfitting (see Section 3.3 for a thorough explanation). To prevent this, we investigate splitting the training data into a part for training the transformations, and a part for training the router.
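The sketch below shows one way the joint update of Algorithm 2 can be realized for a REINFORCE-style router loss, repeating a compact version of the forward pass so that the example is self-contained. The specific loss combination and all names are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim, num_classes, num_modules = 16, 5, 3
modules = nn.ModuleList([nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_modules)])
router = nn.Linear(dim, num_modules + 1)          # one extra logit for the termination action
classifier = nn.Linear(dim, num_classes)
optimizer = torch.optim.SGD(
    list(modules.parameters()) + list(router.parameters()) + list(classifier.parameters()), lr=0.01
)

x, target = torch.randn(dim), torch.tensor(2)

# Forward pass (Algorithm 1), storing the log-probabilities of the sampled decisions.
h, log_probs = x, []
for _ in range(4):
    dist = torch.distributions.Categorical(logits=router(h))
    a = dist.sample()
    log_probs.append(dist.log_prob(a))
    if a.item() == num_modules:                   # termination action chosen
        break
    h = modules[a.item()](h)

# Backward step (Algorithm 2): module loss plus a negative log-probability loss for the router.
logits = classifier(h)
model_loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
final_reward = 1.0 if logits.argmax().item() == target.item() else -1.0
rl_loss = -final_reward * torch.stack(log_probs).sum()

optimizer.zero_grad()
(model_loss + rl_loss).backward()                 # a single backward pass over the summed losses
optimizer.step()
```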
# 4.5 Architectures

Model Architectures In general, any machine learning model can be routed. However, for non-layered architectures, routing collapses to the better-investigated model selection problem. For layered architectures, any single layer can be routed by creating parallel copies of the respective layer, each with different (hyper)parameters. Existing routing architectures have included routing several fully connected layers with identical hyperparameters but different parameters [Rosenbaum et al., 2017, Kirsch et al., 2018, Chang et al., 2019, Cases et al., 2019], routing entire convolutional networks [Kirsch et al., 2018], routing the hidden-to-hidden transformation of recurrent neural architectures [Kirsch et al., 2018, Cases et al., 2019], routing the input-to-hidden transformation [Cases et al., 2019], and routing word representations [Cases et al., 2019]. However, only Ramachandran and Le [2019] have studied the effect of routing among modules that have different architectures.

[Figure 11: Routing architectures – (a) a single router that makes all decisions, (b) per-decision subrouters π1, …, πk, one for each routed layer, and (c) a dispatched architecture in which a dispatching subrouter first assigns each sample to one of several parallel subrouters.]

Router Architectures In the conceptually simplest version, used by Chang et al. [2019], Kirsch et al. [2018], Ramachandran and Le [2019] and unsuccessfully tried by Rosenbaum et al. [2017], there is only one router that learns to make all required decisions (compare Figure 11(a)). This router's state space contains the possible space of all activations, including the input activation. However, it requires additional work to stabilize – curriculum learning for Chang et al. [2019], separate grouping for Kirsch et al. [2018], and soft routing for Ramachandran and Le [2019]. Rosenbaum et al. [2017] introduce two other interesting router architectures that consist of multiple subrouters. The first assigns one subrouter to each decision problem (see Figure 11(b)). Consequently, each subrouter has a state space constrained to the activations possible at the respective depth. In general, this approach has the disadvantage that it only allows a recursion depth no deeper than the number of subrouters defined. However, it is an effective way to implement routing architectures where the modules available at different timesteps have to be different.2 The second option discussed in [Rosenbaum et al., 2017] is what they call a "dispatched" routing network (compare Figure 11(c)). This hierarchical configuration has an extra preceding subrouter whose task is to assign – or cluster – samples in input space first, before handing the sample to be routed to one of a set of parallel subrouters, each of which works exclusively in activation space.

Another routing architecture design choice is the available action space. One option, already discussed and particularly relevant for fully recursive routing models, is to include a "termination" action to stop routing and forward the last activation. This termination action is not required for per-decision router designs (Figure 11(b)), but could be implemented if model constraints permit. Another one, introduced by Rosenbaum et al. [2017] for per-decision routing networks, is to include a "skip" action that does not terminate routing, but instead simply skips one subrouter. These two actions can result in identical behavior for limited-depth routing networks.

In practice, dividing the decision problem over multiple subrouters, each with only a subset of actions and states to learn, can make training the router considerably easier. In particular in early stages of training, the problem of training a policy on modules that are similarly untrained can make fully recursive, single-subrouter routing models hard to train. We will discuss this – and related problems – in more detail in the following section.

2 This can result as a consequence of, e.g., dimensionality constraints; other approaches to solve this problem may include solutions where the agent, if it chooses an incompatible action, is heavily penalized and the episode ends immediately. We assume that several existing approaches also use separate approximations to deal with these dimensionality problems, though they do not mention this.
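As an illustration of the dispatched design in Figure 11(c), the sketch below first assigns a sample to one of several subrouters based on the input, after which each subrouter decides over modules and termination using activations only. The class and its details are a simplified illustration of the idea, not any specific published implementation.

```python
import torch
import torch.nn as nn

class DispatchedRouter(nn.Module):
    """A dispatcher assigns each sample to a subrouter; subrouters decide over modules + termination."""
    def __init__(self, dim, num_modules, num_subrouters):
        super().__init__()
        self.dispatcher = nn.Linear(dim, num_subrouters)             # clusters samples in input space
        self.subrouters = nn.ModuleList(
            [nn.Linear(dim, num_modules + 1) for _ in range(num_subrouters)]
        )

    def dispatch(self, x):
        return int(torch.argmax(self.dispatcher(x)))                 # chosen once per sample

    def route(self, subrouter_idx, h):
        return int(torch.argmax(self.subrouters[subrouter_idx](h)))  # repeated on activations

router = DispatchedRouter(dim=16, num_modules=3, num_subrouters=4)
x = torch.randn(16)
i = router.dispatch(x)    # decided from the input sample
a = router.route(i, x)    # subsequent routing decisions use the current activation
```

A per-decision design in the spirit of Figure 11(b) would instead instantiate one subrouter per routed layer and drop the dispatcher.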
# 5 Evaluation

For evaluation, we consider two different domains: image classification and natural language inference. For image classification, we follow the architecture of Rosenbaum et al. [2017], i.e., we route the fully connected layers of a simple convolutional network with three 3 × 3 convolutional layers. For natural language inference, we follow the successful architecture of Cases et al. [2019], i.e., we route the word projections of a standard sentence-to-sentence comparison architecture. Following the general architecture choices described in Section 4.5, we try to do each experiment twice. One experiment relies on meta-information, i.e., task labels, using a task-wise dispatched architecture, where each task is assigned to a separate subrouter which routes based on meta-information only and stores its policy in a table. The other experiment covers architectures without a dispatching subagent, which route without any meta-information, based on input sample information and consecutive activations.

For the image classification experiments, we use CIFAR 100, as it has predefined 'tasks' in the form of its 'coarse' label structure, allowing us to naturally implement routing networks relying on meta-information. All experiments try to predict the 'fine' label in the context of the coarse label, i.e., the problem is five-way classification. All language inference experiments are on the Stanford Corpus of Implicatives [Cases et al., 2019], which is a three-way classification task. For experiments relying on meta-information, we use the provided 'signatures' as task labels.

All results shown in this section are results on test datasets. The entropy plots show the entropy of the selection distribution over the entire dataset, also at test time. As most of the results are meant to be qualitative in nature, we did not do extensive hyperparameter searches, and we expect that better results can be found. As we will show in the following section, Q-Learning is consistently among the best performing decision making algorithms. Thus, unless otherwise noted, all plots utilize Q-Learning.

# 5.1 Decision Making Strategies

In this section, we evaluate different decision making strategies. The evaluated strategies include reparameterization strategies and several reinforcement learning algorithms.

# 5.1.1 Learning with meta-information
[Figure 12: Multi-task results for different decision making strategies (AAC, Q-Learning, Advantage Learning, REINFORCE, WPL, Gumbel, RELAX): test accuracy and selection entropy over epochs for (a) CIFAR 100 MTL with depth 1, (b) CIFAR 100 MTL with depth 2, and (c) SCI MTL with depth 3.]

Figure 12 shows results for different decision making algorithms when relying on meta-information. The starkest result is that the policy gradient based approaches, including the related reparameterization algorithms Gumbel and RELAX, are consistently outperformed by value-based reinforcement learning approaches. As can be seen when comparing Figures 12(a) and (b), this difference grows with routing depth, i.e., the number of routed layers. We conjecture that this difference results from the effect exploration has on interference, as discussed in section 3.2. In contrast to policy gradient strategies, ε-greedy strategies are guaranteed to interfere with another transformation, even one of near-identical value, only a fraction of the time, purely determined by ε. This also explains the best performing policy gradient based approach, the weighted policy learner (WPL) [Abdallah and Lesser, 2006], which was already successfully tested in the routing multi-agent setting of Rosenbaum et al. [2017]. WPL is a non-standard policy gradient algorithm (in that it does not use the REINFORCE log-probability trick) that was explicitly designed for stochastic games, and that has specific properties to emphasize exploitation even for similar-value actions. This effectively lowers its exploration and thereby its interference.

In Figure 12(c), the language inference experiments, it is apparent how different algorithms stabilize (or fail to) over time. As for CIFAR, the value-based approaches Q-Learning and Advantage learning have a clear edge over the policy gradient based approaches. It is particularly interesting to see how both algorithms have nearly the same learning behavior for the first 20 epochs, until Advantage learning, and 20 epochs later Q-Learning, apparently stabilize and then quickly leave the other algorithms behind. We speculate that the benefit of Advantage learning stems from the argument in Section 4.1.3, as it is able to offset the general increase in value of the different transformations.

# 5.1.2 Learning without meta-information

Routing decisions that do not rely on meta-information rely instead on the activation at the input of the routed layer. That is, they only consider the activation produced by the previous layer, which may or may not be routed, and which may even be the input. This allows the most fine-grained control over the routing path, as – possibly – each sample may be routed through a different path. The most straightforward implementation is a 'single router' architecture, depicted in Figure 11(a), where the router consists of only one subrouter – one parameterization that gets passed activations from any layer. This also allows arbitrary-depth routing networks, as the router may decide to apply an arbitrary number of transformations before stopping. Unfortunately, this complicates the routing process in at least two ways. The first is that the subrouter does not only need to learn where to route based on an activation alone, but needs to do so for activations that may have been produced at completely different steps in the routing network.
This makes the distribution of activations that the router has to decide on dramatically more difficult to interpret, as it is likely that each 'depth' of activations adds another mode to the distribution. The second, related, reason is that the activations are highly volatile, as any change to the subrouter's policy may dramatically transform the distribution of activations, making the modes non-stationary and even more difficult to disentangle. For illustration, consider a simple single-router model where the router can select from {t1, t2, t3}. At a given training time, the router only selects t1 or t2. When an update changes the router to use t3 instead of t2, the space of activations will now have been created by {t1, t3}^k instead of by {t1, t2}^k. It is obvious how this can 'confuse' a router, in particular when this non-stationarity is combined with the already existing non-stationarity produced by updating the transformations.

[Figure 13: Single router results – (a) CIFAR, single router; (b) SCI, single router.]

Consequently, performance on both CIFAR and SCI, as depicted in Figure 13, is basically the same as random. For the results on language inference in Figure 13(b), the routing problem becomes so complex that the router even picks a fourth class for language inference (which generally only has three) that only exists for implementation reasons, thereby consistently achieving 0% accuracy instead of the 33% of the random baseline.

To investigate the problem posed by these results, we additionally experiment with two more architectures that do not rely on meta-information. In the first, we fix the depth of the single router architecture, and in the second, we design a per-layer subrouter architecture, as depicted in Figure 11(b). Unlike the single router architecture used for the experiments in Figure 13, this architecture does not have one approximation network over all routing layers, of which there may be an arbitrary number, but instead makes only (at most) k decisions, with a separate subrouter for each layer.

As shown in Figure 14(a), fixing the routing depth does help to stabilize the training for some algorithms. As with other experiments, the (Q-)value based approaches start learning, but, more surprisingly, RELAX is able to stabilize, although at the cost of complete collapse. We assume that in this highly volatile environment the rich gradient information provided by RELAX can help the most. In Figure 14, we also show results for a separate subrouter for each layer, thereby limiting the maximum number of applied transformations to 3 (each subrouter is able to terminate earlier). This makes the decision making problem easier, so that some algorithms are able to stabilize. However, most policy gradient approaches, including RELAX, collapse to achieve this stability, while the remaining policy gradient algorithms, WPL and Gumbel, do not achieve noteworthy performance. Only Q-Learning and Advantage learning learn while maintaining selection entropy. Interestingly, the router apparently does not learn to completely minimize interference, as it does not achieve non-routed performance, not even when it collapses. It should be noted, though, that even for algorithms that successfully stabilize, the solutions found are local optima of very bad performance.
For both of the results in Figure 14, the final performance stays about 30% below the performance achieved by models relying on meta-information. These results strongly suggest that routing networks – and quite probably any other kind of approach to compositional computation – have difficulty overcoming the initial instability of the training process if they are not provided with a strong external signal. One such signal, as discussed in the previous section, is task labels or other pieces of meta-information that allow the routing network to find paths at the dimensionality of the meta-information, which tends to be much smaller than the full space of activations. Another approach, as introduced by Chang et al. [2019], is to carefully curate the order in which samples are provided to the network. This also drastically limits the complexity of the decision problem, as only 'similar' samples are shown at any given time during training, but it relies on such an order existing for a given dataset.

# 5.1.3 Reparameterization Techniques and Exploration Strategies

In general, the results for reparameterized routing, i.e., using Gumbel and RELAX, are very similar to the results for policy gradient approaches. We even found that for deeper architectures, the reparameterization techniques suffer from the same decrease in performance as other PG algorithms, as shown in Figure 12(b). As mentioned above, we assume that this stems from inferior exploration behavior for on-policy stochastic sampling. To verify, we designed a version of REINFORCE that samples actions using an ε-greedy policy, i.e., that takes the best action ε of the time, and that samples from the actual policy 1 − ε of the time. When training, we compensate for the difference in sampling and training policies using importance sampling [Precup et al., 2000]. As shown in Figure 15, this indeed changes the behavior of REINFORCE to act more like a value-based approach.3

[Figure 15: A greedy version of REINFORCE on CIFAR 100 MTL, compared with standard REINFORCE and Q-Learning (test accuracy and selection entropy over epochs).]

These results suggest that routing may greatly benefit from a targeted decision making algorithm. In particular exploration, with its added complexity of causing interference, seems to be a promising direction for future research. For example, it may be useful to consider algorithms that have separate exploitation and exploration policies, along the lines of Garcia and Thomas [2019], or algorithms that can incorporate more complex exploration mechanics into reparameterization techniques.

3 We should also note that exploration for the Q-Learning experiments is very high, with over 0.4 for the first 5 epochs, which suggests that the amount of exploration may play a role, but that the exploitation strategy dominates the results.
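One plausible single-decision implementation of the ε-greedy REINFORCE variant described above is sketched below; the form of the behavior policy and the per-decision importance weight follow our reading of the description, not the authors' code, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def eps_greedy_reinforce_grad(theta, reward, eps=0.3):
    """One-sample REINFORCE gradient with an epsilon-greedy behavior policy and importance weighting."""
    pi = F.softmax(theta, dim=0)                       # target (trained) policy
    greedy = int(torch.argmax(pi))

    # Behavior policy: take the greedy action with probability eps, otherwise sample from pi.
    if torch.rand(()) < eps:
        a = greedy
    else:
        a = int(torch.multinomial(pi.detach(), 1))
    b_a = eps * float(a == greedy) + (1 - eps) * pi[a].detach()   # behavior probability of a
    w = pi[a].detach() / b_a                                      # importance weight pi(a) / b(a)

    log_prob = F.log_softmax(theta, dim=0)[a]
    loss = -w * reward[a] * log_prob                   # importance-weighted REINFORCE loss
    grad, = torch.autograd.grad(loss, theta)
    return a, grad

theta = torch.zeros(4, requires_grad=True)
reward = torch.tensor([0.1, 0.9, 0.3, 0.5])
a, grad = eps_greedy_reinforce_grad(theta, reward)
```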
# 5.2 Reward Design

Figure 16 shows results for different final reward functions. In general, the correct/incorrect final reward strategy, ±1 for correct and incorrect classification, appears clearly superior to the negative classification (cross-entropy) loss reward rf = −L(ŷ, y). While this seems initially surprising, as the negative loss reward contains more information, we believe that this is exactly what makes learning more difficult, for two reasons. First, a negative classification loss may overemphasize outliers, as, relative to other samples, an outlier may contribute a much larger part of the total loss. While this may be useful for supervised learning, a reinforcement learning router may update to accommodate these samples at the cost of other, non-outlier samples. Second, as suggested by the high fluctuation in the learning on SCI, the finer granularity of the negative classification loss reward may reduce the difference between the values of different transformations, resulting in even small updates changing the greedy strategy. As this reduces stability, it will also decrease overall performance.

[Figure 16: Multi-task results for different reward functions – (a) final reward on CIFAR 100 MTL, (b) final reward on SCI MTL.]

Figure 17 shows plots for different intermediate reward values. These rewards are computed as a fraction of the overall probability of choosing a particular transformation. Positive values are expected to increase transfer and lower entropy, while negative values are expected to decrease interference and increase entropy. Comparing the results for CIFAR in Figure 17(a) with the results for SCI in Figure 17(b), it becomes apparent that the domain – or the architecture – plays a major role in the effect of this regularization reward. While the reward has no discernible effect on CIFAR, it dramatically changes the convergence behavior – if not the final performance – on SCI. Interestingly, negative reward values, meant to increase diversity, appear to stabilize learning, as the model learns much faster, but have no effect on the routing entropy. While a complete interpretation of this result is very difficult, we speculate that it results from the higher potential for interference on SCI, and maybe even on language domains in general. The potential for interference is higher when training on SCI because the effective input dimensionality is nearly an order of magnitude lower than it is for CIFAR, which leads to a larger average overlap between examples in the input space. Additionally, as is also the case for us here, models used for NLP tend to use smaller hidden representations than computer vision models, resulting in a larger average overlap between examples in the activation space as well. This would not only explain why 'pushing' the router to diversify stabilizes learning, but also why the results with the negative loss reward function have such a high variance, as any change in routing decisions may end up causing a much higher amount of interference.

These results suggest that future research should consider new reward functions. In particular, it could be worthwhile to find a final reward function that can rely on the information contained in the negative loss, but that is less susceptible to its problems. Additionally, it would be interesting to investigate an adaptive intermediate reward that incentivizes diversity as long as it is needed for stability, but that eventually anneals to zero, so that the router only optimizes for the overall model performance.

# 5.3 Other Design Choices

Figure 18 shows the effect of lowering the training of transformations chosen non-greedily (Equation 1). As one can see, κ has very little effect on the overall performance and only a mild effect on the entropy.
[Figure 18: Squashing training for exploratory trajectories on CIFAR100 MTL, for different values of κ.]
[Figure 19: Stochastic router-transformation training on CIFAR100 MTL, for different fractions of the training data used to train the transformations.]
[Figure 20: The effect of the optimizer choice (SGD, ASGD, Adagrad, Adadelta, Adam, Adamax, RMSprop, Rprop) on CIFAR100 MTL.]

Splitting the training set into samples to train the transformations and samples to train the router, on the other hand, has a clear effect on performance, as shown in Figure 19. Here, the curves show the percentage of the data used to train the transformations. As one can see, using the full data benefits both performance and entropy.

More interesting is Figure 20. As already reported by Rosenbaum et al. [2017] and Cases et al. [2019], routing becomes unstable for many optimizers, and consistently yields the best performance for 'plain' SGD. We conjecture that similar problems can be observed for any architecture with an adaptive computation graph. Consider a simple example, where we route a sample x through a two-layer routing network. The first routing decision can choose from transformations {t1,1, t1,2} and the second from {t2,1, t2,2}. Now consider the gradients for t1,2 over two different routing paths, t1,2(t2,1(x)) and t1,2(t2,2(x)). It is obvious that for those two paths, t2,1 and t2,2 may yield vastly different activations, and thereby vastly different inputs to t1,2. If an optimizer relies on parameter-specific approximations, such as, e.g., momentum, an approximation accumulated for t1,2 along the path through t2,1 may be completely incompatible with the gradients produced along the path through t2,2, thereby making training difficult, or even impossible. This explains why plain optimization strategies that do not compute parameter-specific information generally do better in the context of dynamic computation graphs.

# 5.4 Stability

As we argued before, instability is a consequence of the routing 'chicken-and-egg' problem: upon initialization of a routing network, both the modules and the router do not yet have any information on the problem, and act randomly. The router cannot stabilize, as it cannot discern which selected module caused any increase in performance, and the modules cannot stabilize, as they are trained with samples and activations so different that interference can destroy any learning that may happen. This, in turn, may destroy any progress the router may have made with the credit assignment. This problem is never worse than for the 'single router' architecture (compare Figure 11), for the reasons described in Section 5.1.2. Consequently, a routing network may fail to learn anything, as depicted in Figure 13.

# 5.5 Module Collapse

Figure 14 already showed how different router training algorithms can lead to collapse in different domains. Additionally, consider Figure 21, where the experiment is on CIFAR with a dispatched architecture.4 For this architecture, collapse is not a consequence of the routing algorithm, but of the general architecture, as all algorithms lead to collapse. However, in contrast to the results in Figure 14, the results achieved are good, reaching standard non-routed performance.

4 The same fc layers are routed as in the other experiments. However, the dispatching action is based on the activation after the convolutional layers. The dispatched subrouters are tabular, and do not consider the intermediate activations.
[Figure 21: Collapse on CIFAR with a dispatched architecture (test accuracy and selection entropy over epochs).]

This is not a surprising result, as collapse will stabilize the router, and as there are no later activations to consider. This is reflected by the test-time selection entropy of nearly all algorithms going to zero within ten epochs. Surprisingly, this extends to the stochastic reparameterization algorithms Gumbel and RELAX, which do not rely purely on rewards, further establishing that in the context of routing, Gumbel and RELAX behave 'just as' any other PG algorithm.

The only algorithms that do not collapse completely are WPL and Advantage learning. We assume that WPL does not collapse as much because it was specifically designed for non-stationary returns, and that Advantage learning performs well as a consequence of the argument put forth in 4.1.3.

# 5.6 Overfitting

As the main reference architectures for this section – routing the classifier of a convolutional network, and routing the word representations of a natural language inference model – were chosen because of their performance, they overfit only little.

[Figure 22: Test accuracy (top), train accuracy (middle), and test-time selection entropy (bottom) on SCI for a basic (non-routed) model and a routed model.]

However, if we consider a sequence-to-sequence architecture for language inference with a dispatched routed classifier, we can show how routing can lead a model to overfit. Consider Figure 22, which shows, from top to bottom, the test accuracy, train accuracy, and test entropy on SCI. While both a basic, non-routed network and the routed network easily reach perfect train accuracy (> 99.5%), the routing architecture starts flattening out even earlier than the basic architecture, reaching accuracy around 10% lower than the basic model, and over 20% lower than the routed model. As discussed in section 3.3, overfitting is less of a problem in the presence of meta-information, as the router can ignore spurious activations in these cases. However, as the goal of modular architectures is to adaptively modularize without any kind of external information guiding the composition, future work should investigate how routing architectures can be regularized to achieve good generalization properties.

# 6 Conclusion

Summary of Results: Considering the results presented in this paper, it becomes clear how much of the performance of a routing network still depends on design and hyperparameter decisions. However, many other results are also clear:

• In the context of routing, reparameterization techniques behave similarly to RL policy gradient approaches, and oftentimes even have identical performance and selection behavior.

• All policy gradient approaches, including reparameterization techniques, fall short compared to value-based learning. We established that the problem of PG approaches is their exploitation strategy.

• Routing networks often need externally provided structure to properly stabilize and to achieve maximal performance given typical dataset sizes today. This structure may come in the form of meta-information, or as a guided curriculum of training samples.
• We evaluated the role that the reward design has on training reinforcement learning routers, observing that simple rewards tend to perform better than high information rewards that may be more difficult to interpret. • The choice of optimization algorithm has a huge impact on the overall stability of routing networks, and presumably on any kind of compositional model of learning. Here, modern, more sophisticated algorithms can fail completely, with a simple SGD update performing the best. • We investigated each of the challenges introduced and discussed in Section 3, and illustrated how different design decisions can lead to instability, to collapse, and how the high expressivity of a routing network can lead to overfitting. • We additionally identified a dilemma of flexibility for modular learning. This arises out of a need to balance the flexibility (or locality) of modular approaches to avoid both collapse and overfitting. Open Questions: Research into compositional architectures has accelerated in recent years. In this area, routing, as a general formulation of functional composition relying on trainable modules, can provide good insights into the challenges ahead. Stability, selection collapse, overfitting, mathematical justifications and learning over different architectures may only be some of the challenges that need to be solved before a fully compositional and modular architecture that is able to quickly adapt to any new problem, task, or context can be designed. With this paper we hope to contribute an overview that is useful for establishing these problems within the research community and lay out some future steps. Moreover, our analysis in this paper reveals that decision making strategies for routing, different routing architectures, and new reward design strategies seem to be very promising directions for future research that have the potential to lead to significant improvements in general purpose models with compositional architectures. 23 ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION # References Abdallah, S. and V. Lesser 2006. Learning the task allocation game. In Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, Pp. 850–857. ACM. Aertsen, A., G. Gerstein, M. Habib, and G. Palm 1989. Dynamics of neuronal firing correlation: modulation of" effective connectivity". Journal of neurophysiology, 61(5):900–917. Alet, F., T. Lozano-Pérez, and L. P. Kaelbling 2018. Modular meta-learning. CoRR, abs/1806.10166. Aljundi, R., J. Chakravarty, and T. Tuytelaars 2017. Expert gate: Lifelong learning with a network of experts. In Proceedings CVPR 2017, Pp. 3366– 3375. Andreas, J., M. Rohrbach, T. Darrell, and D. Klein 2015. Deep compositional question answering with neural module networks. CoRR, abs/1511.02799. Bacon, P.-L., J. Harb, and D. Precup 2017. The option-critic architecture. AAAI. Baker, B., O. Gupta, N. Naik, and R. Raskar 2017. Designing neural network architectures using reinforcement learning. ICLR. Bechtel, W. and A. Abrahamsen 2002. Connectionism and the mind: Parallel processing, dynamics, and evolution in networks. Blackwell Publishing. Bechtel, W. and R. C. Richardson 2010. Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research. Mit Press. Bender, G., P.-J. Kindermans, B. Zoph, V. Vasudevan, and Q. Le 2018. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, Pp. 549–558. Bengio, E., P. 
Bacon, J. Pineau, and D. Precup 2015. Conditional computation in neural networks for faster models. CoRR, abs/1511.06297. Bengio, Y., N. Léonard, and A. C. Courville 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. CoRR, abs/1308.3432. Bengio, Y., J. Louradour, R. Collobert, and J. Weston 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, Pp. 41–48. ACM. Brock, A., T. Lim, J. M. Ritchie, and N. Weston 2017. SMASH: one-shot model architecture search through hypernetworks. CoRR, abs/1708.05344. Buschman, T. J. and E. K. Miller 2010. Shifting the spotlight of attention: evidence for discrete computations in cognition. Frontiers in human neuroscience, 4. Cases, I., C. Rosenbaum, M. Riemer, A. Geiger, T. Klinger, A. Tamkin, O. Li, S. Agarwal, J. D. Greene, D. Jurafsky, C. Potts, and L. Karttunen 2019. Recursive routing networks: Learning to compose modules for language understanding. 24 # In ROUTING NETWORKS AND THE CHALLENGES OF MODULAR AND COMPOSITIONAL COMPUTATION Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Chang, M., A. Gupta, S. Levine, and T. L. Griffiths 2019. Automatically composing representation transformations as a means for generalization. In Interna- tional Conference on Learning Representations. Cortes, C., X. Gonzalvo, V. Kuznetsov, M. Mohri, and S. Yang 2016. Adanet: Adaptive structural learning of artificial neural networks. arXiv preprint arXiv:1607.01097. Davis, A. and I. Arel 2013. Low-rank approximations for conditional feedforward computation in deep neural networks. arXiv preprint arXiv:1312.4461. Engel, A. K., P. Fries, and W. Singer 2001. Dynamic predictions: oscillations and synchrony in top–down processing. Nature Reviews Neuro- science, 2(10):704. Eysenbach, B., A. Gupta, J. Ibarz, and S. Levine 2018. Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070. Farah, M. J. 1994. Neuropsychological inference with an interactive brain: A critique of the "locality" assumption. Behavioral and Brain Sciences, 17(1), 43-104. Fernando, C., D. Banarse, C. Blundell, Y. Zwols, D. Ha, A. A. Rusu, A. Pritzel, and D. Wierstra 2017. arXiv:1701.08734. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint Florensa, C., Y. Duan, and P. Abbeel 2017. Stochastic neural networks for hierarchical reinforcement learning. arXiv preprint arXiv:1704.03012. Fodor, J. A. 1975. The Language of Thought. Harvard University Press. Fodor, J. A. 1983. The Modularity of Mind. MIT Press. Fodor, J. A. and Z. W. Pylyshyn 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3–71. Fries, P. 2005. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in cognitive sciences, 9(10):474–480. Garcia, F. M. and P. S. Thomas 2019. A meta-mdp approach to exploration for lifelong reinforcement learning. CoRR, abs/1902.00843. Grathwohl, W., D. Choi, Y. Wu, G. Roeder, and D. Duvenaud 2018. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. In International Conference on Learning Representations. Gregor, K., D. J. Rezende, and D. Wierstra 2016. Variational intrinsic control. arXiv preprint arXiv:1611.07507. Gurney, K., T. J. Prescott, and P. Redgrave 2001. 
{ "id": "1611.01144" }
1904.10509
Generating Long Sequences with Sparse Transformers
Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
http://arxiv.org/pdf/1904.10509
Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever
cs.LG, stat.ML
null
null
cs.LG
20190423
20190423
9 1 0 2 r p A 3 2 ] G L . s c [ 1 v 9 0 5 0 1 . 4 0 9 1 : v i X r a # Generating Long Sequences with Sparse Transformers # Rewon Child 1 Scott Gray 1 Alec Radford 1 Ilya Sutskever 1 Abstract Transformers are powerful sequence models, but require time and memory that grows quadrati- cally with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to O(n n). We also introduce a) a variation on architecture and initial- ization to train deeper networks, b) the recompu- tation of attention matrices to save memory, and c) fast attention kernels for training. We call net- works with these changes Sparse Transformers, and show they can model sequences tens of thou- sands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR- 10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more. # 1. Introduction Estimating complex, high-dimensional data distributions is a central problem in unsupervised learning, as many down- stream applications of interest involve generation of text, images, audio, and other data. Additionally, it is believed to be a key component of unsupervised representation learning. Recently, neural autoregressive models have achieved im- pressive results in this domain, achieving state-of-the-art in modeling natural language (Jozefowicz et al., 2016) (Rad- ford et al., 2018) (Dai et al., 2018), raw audio (Van Den Oord et al., 2016) (Mehri et al., 2016), and images (Oord et al., 2016) (Menick & Kalchbrenner, 2018) (Salimans et al., 2017) (Reed et al., 2017) (Chen et al., 2017). These methods decompose a joint probability distribution into a product of conditional ones. Modeling these condi- tional distributions is extremely challenging, however, as they contain many complex, long-range dependencies and require a suitably expressive model architecture to learn them. Architectures based off CNNs (Oord et al., 2016) have made Figure 1. Unconditional samples from our neural autoregressive model on ImageNet 64 and a classical music dataset. We used the same self-attention based architecture for audio, images, and text. The samples above were generated with softmax temperature 1.0, and had lengths 12,288 and 65,536. Audio samples be listened to at https://openai.com/blog/sparse-transformer great progress in this direction, but require significant depth to expand their receptive field. To address this, WaveNet (Van Den Oord et al., 2016) introduced dilated convolutions, which allowed the network to model long-range dependen- cies in a logarithmic number of layers. Separately, the Transformer (Vaswani et al., 2017) has been shown to excel on many natural language tasks, which may be in part due to its ability to model arbitrary dependencies in a constant number of layers. As each self-attention layer has a global receptive field, the network can allocate rep- resentational capacity to the input regions for which it is Generating Long Sequences with Sparse Transformers most useful. Thus the architecture may be more flexible at generating diverse data types than networks with fixed connectivity patterns. 
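As a point of reference, a dense causal self-attention layer forms a full n by n matrix of attention weights over the sequence. The sketch below is illustrative only; the tensor and function names are assumptions for the example and are not taken from the paper.

```python
# Minimal sketch of dense causal self-attention (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def dense_causal_attention(x, wq, wk, wv):
    """x: (n, d) token embeddings; wq, wk, wv: (d, d) projection matrices."""
    n, d = x.shape
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = (q @ k.T) / d ** 0.5                        # n x n score matrix
    mask = torch.tril(torch.ones(n, n, dtype=torch.bool))
    scores = scores.masked_fill(~mask, float("-inf"))    # autoregressive mask
    return F.softmax(scores, dim=-1) @ v                 # weighted sum of values

x = torch.randn(128, 64)
out = dense_causal_attention(x, *(torch.randn(64, 64) for _ in range(3)))
```

At the Enwik8 context length of 12,288 used later in the paper, this score matrix alone holds roughly 151 million entries per head per layer, which is the cost the factorizations introduced below are designed to avoid.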
However, the memory and computational requirements of such networks grows quadratically with sequence length, which excludes their use on long sequences. Outside of generative modeling, there are several works relevant to improving the efficiency of attention based off chunking (Chiu & Raffel, 2017) or using fixed length repre- sentations (Britz et al., 2017). Other works have investigated attention with multiple ”hops”, such as (Sukhbaatar et al., 2015) and (Gehring et al., 2017). The main contribution of this work is to introduce several sparse factorizations of the attention matrix, which scale √ n) with the sequence length without sacrificing as O(n p performance. These work by separating the full attention computation into several faster attention operations which, when combined, can approximate the dense attention oper- ation. We use this to apply self-attention to sequences of unprecedented length. Additionally, we introduce several other changes to the Transformer, including: It is worth noting that the Gated Pixel CNN (Oord et al., 2016) and WaveNet (Van Den Oord et al., 2016) use multi- plicative interactions in their networks, which are related to self-attention. # 3. Background We consider the task of autoregressive sequence gener- ation, where the joint probability of a sequence x = {x1, x2, ..., xn} is modeled as the product of conditional probability distributions and parameterized by a network θ. • A restructured residual block and weight initialization to improve training of very deep networks • A set of sparse attention kernels which efficiently com- pute subsets of the attention matrix P(x) =| [Giles ..., 21-159) dd) i=l • Recomputation of attention weights during the back- wards pass to reduce memory usage We empirically validate that models augmented in this man- ner can achieve state-of-the-art compression and generation of natural language, raw audio, and natural images. The simplicity of the architecture leads us to believe it may be useful for many problems of interest. We treat images, text, and audio as a sequence of discrete tokens, typically raw bytes. The network θ takes in the se- quence of tokens and outputs a categorical distribution over the v possible values of the next token using the softmax function, where v is the size of the vocabulary. The training objective is to maximize the log-probability of the data with respect to θ. # 2. Related Work The most related work involves other techniques for scaling up autoregressive generative models. For images, (Reed et al., 2017) models conditional independence between the pixels in order to generate many locations in parallel, and (Menick & Kalchbrenner, 2018) imposes an ordering and multi-scale upsampling procedure to generate high fidelity samples. (Parmar et al., 2018) uses blocks of local attention to apply Transformers to images. For text, (Dai et al., 2018) introduces a state reuse ”memory” for modeling long-term dependencies. And for audio, in addition to (Van Den Oord et al., 2016), (Mehri et al., 2016) used a hierarchical struc- ture and RNNs of varying clock-rates to use long contexts during inference, similar to (Koutnik et al., 2014). (Huang et al., 2018) apply Transformers to MIDI generation with an efficient relative attention. A simple and powerful choice for model θ is a Transformer (Vaswani et al., 2017) in decoder-only mode, as demon- strated by (Radford et al., 2018) and (Liu et al., 2018). 
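Written out, the factorization in Eq. (1), together with the log-likelihood objective it implies, is:

```latex
p(x) = \prod_{i=1}^{n} p\bigl(x_i \mid x_1, \ldots, x_{i-1}; \theta\bigr),
\qquad
\theta^{*} = \arg\max_{\theta} \sum_{i=1}^{n} \log p\bigl(x_i \mid x_{<i}; \theta\bigr).
```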
These models transform the input sequence with blocks of mul- tihead self-attention over the entire sequence, followed by dense transformations over each sequence element. The self- attention portion of the network must compute n weightings for each of n elements, however, which can quickly become intractable as the sequence length grows. In the following sections, we describe our modifications to the Transformer architecture which make it more suitable for modeling long sequences. # 4. Factorized Self-Attention Our work is simpler than many of the techniques above and can be applied equally across images, text, and audio. Many of the above techniques are orthogonal to ours, moreover, and could be used in conjunction with ours. Sparse Transformers separate the full self-attention opera- tion across several steps of attention, as visualized in Figure 3(b) and 3(c). To motivate our approach, we first perform a qualitative assessment of attention patterns learned by a standard Transformer on an image dataset. Generating Long Sequences with Sparse Transformers SSE SEsp BSES BNE Figure 2. Learned attention patterns from a 128-layer network on CIFAR-10 trained with full attention. White highlights denote attention weights for a head while generating a given pixel, and black denotes the autoregressive mask. Layers are able to learn a variety of specialized sparse structures, which may explain their ability to adapt to different domains. a) Many early layers in the network learn locally connected patterns, which resemble convolution. b) In layers 19 and 20, the network learned to split the attention across a row attention and column attention, effectively factorizing the global attention calculation. c) Several attention layers showed global, data-dependent access patterns. d) Typical layers in layers 64-128 exhibited high sparsity, with positions activating rarely and only for specific input patterns. (a) Transformer (b) Sparse Transformer (strided) (c) Sparse Transformer (fixed) Figure 3. Two 2d factorized attention schemes we evaluated in comparison to the full attention of a standard Transformer (a). The top row indicates, for an example 6x6 image, which positions two attention heads receive as input when computing a given output. The bottom row shows the connectivity matrix (not to scale) between all such outputs (rows) and inputs (columns). Sparsity in the connectivity matrix can lead to significantly faster computation. In (b) and (c), full connectivity between elements is preserved when the two heads are computed sequentially. We tested whether such factorizations could match in performance the rich connectivity patterns of Figure 2. Generating Long Sequences with Sparse Transformers # 4.1. Qualitative assessment of learned attention patterns Additionally, for the time being we consider valid choices of A, where all input positions are connected to all future output positions across the p steps of attention. We visualized the attention patterns learned by a 128-layer self-attention network on CIFAR-10, and present several examples in Figure 2. Visual inspection showed that most layers had sparse attention patterns across most data points, suggesting that some form of sparsity could be introduced without significantly affecting performance. Several layers (Figure 2c) clearly exhibited global patterns, however, and others exhibited data-dependent sparsity (Figure 2d), both of which would be impacted by introducing a predetermined sparsity pattern into all of the attention matrices. 
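To make the connectivity patterns of Figure 3(b) and 3(c) concrete, the following sketch builds boolean masks for the two heads of each scheme over a one-dimensional sequence, as formalized in Section 4.3 below. Here mask[i, j] = True means output position i may attend to input position j; the function names, the exact boundary handling, and the default value of c are assumptions for the example.

```python
# Illustrative NumPy sketch of the strided and fixed connectivity patterns.
import numpy as np

def strided_masks(n, l):
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    causal = j <= i
    head1 = causal & (i - j < l)               # attend to the previous l positions
    head2 = causal & ((i - j) % l == 0)        # attend to every l-th position
    return head1, head2

def fixed_masks(n, l, c=8):
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    causal = j <= i
    head1 = causal & (j // l == i // l)        # attend within the current block
    head2 = causal & (j % l >= l - c)          # attend to the last c "summary" cells of each block
    return head1, head2

# e.g. strided_masks(36, 6) roughly corresponds to the 6x6-image example of Figure 3(b).
```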
In this paper, we restricted our investigation to a class of sparse attention patterns that have connectivity between all positions over several steps of attention. These methods can be more efficient than full attention while still providing global context to any given position. We aimed to empiri- cally validate the performance of these factorized patterns on a range of tasks, given that they are unable to learn the exact same mappings as those in Figure 2. We present the formulation of factorized attention below. For every j ≤ i pair, we set every A such that i can attend to j through a path of locations with maximum length p + 1. Specifically, if (j, a, b, c, ..., i) is the path of indices, then j ∈ A(1) # b These two criteria allow us keep the ability of Transformers to propagate signals from arbitrary input positions to arbi- trary output positions in a constant number of steps, while √ reducing the total effective computation to O(n p n). We also note that softening the validity criterion (for instance, having a series of only locally connected layers) may be a useful inductive bias for certain domains. In this work, we explore two factorizations for p = 2, which we describe in the following section, though we note that the same techniques can be easily extended to higher dimen- sions. # 4.3. Two-dimensional factorized attention # 4.2. Factorized self-attention A self-attention layer maps a matrix of input embeddings X to an output matrix and is parameterized by a connectiv- ity pattern S = {S1, ..., Sn}, where Si denotes the set of indices of the input vectors to which the ith output vector attends. The output vector is a weighted sum of transforma- tions of the input vectors: A natural approach to defining a factorized attention pattern in two dimensions is to have one head attend to the previous l locations, and the other head attend to every lth location, where l is the stride and chosen to be close to n, a method we call strided attention. Formally, A(1) and A(2) visualized in Figure 3(b). Attend(X, S'}) = (a0.5)) 2, i n} (2) 2, i n} KE a(xi, Si) = softmax (Wqxi)K T Si √ d VSi (3) Ks, = (Wis) Vs. = (mx) @ This formulation is convenient if the data naturally has a structure that aligns with the stride, like images or some types of music. For data without a periodic structure, like text, however, we find that the network can fail to properly route information with the strided pattern, as spatial coor- dinates for an element do not necessarily correlate with the positions where the element may be most relevant in the future. Here Wq, Wk, and Wv represent the weight matrices which transform a given xi into a query, key, or value, and d is the inner dimension of the queries and keys. The output at each position is a sum of the values weighted by the scaled dot-product similarity of the keys and queries. Full self-attention for autoregressive models defines Si = {j : j ≤ i}, allowing every element to attend to all previous positions and its own position. Factorized self-attention instead has p separate attention heads, where the mth head defines a subset of the indices A(m) . We are i chiefly interested in efficient choices for the subset A, where |A(m) i In those cases, we instead use a fixed attention pattern (Fig- ure 3(c)), where specific cells summarize previous locations and propagate that information to all future cells. Formally, Al) = {7 : ({7/l| = |i/1|)}, where the brackets denote the floor operation, and Ae?) 
= {j : jmodl € {t,t +1,...,]}, where t = 1 — cand c is a hyperparameter. Concretely, if the stride is 128 and c = 8, then all future positions greater than 128 can attend to positions 120-128, all positions greater than 256 can attend to 248-256, and so forth. A fixed-attention pattern with c = 1 limits the expressivity of the network significantly, as many representations in Generating Long Sequences with Sparse Transformers the network are only used for one block whereas a small number of locations are used by all blocks. We instead found choosing c ∈ {8, 16, 32} for typical values of l ∈ {128, 256} to perform well, although it should be noted that this increases the computational cost of this method by c in comparison to the strided attention. Additionally, we found that when using multiple heads, having them attend to distinct subblocks of length c within the block of size l was preferable to having them attend to the same subblock. In the subsequent section, we describe how to incorporate factorized attention into the Sparse Transformer architec- ture. # 5. Sparse Transformer Here we fully describe the Sparse Transformer architecture, which is a modified version of the Transformer (Vaswani et al., 2017). # 5.1. Factorized attention heads Standard dense attention simply performs a linear transfor- mation of the attend function defined in Equation 2: attention(X) = Wp · attend(X, S) (5) (21, 2,.-.,n) embed | norm dropout dropout where Wp denotes the post-attention weight matrix. The simplest technique for integrating factorized self-attention is to use one attention type per residual block, and interleave them sequentially or at a ratio determined as a hyperparam- eter: attention(X) = Wp · attend(X, A(r mod p)) Here r is the index of the current residual block and p is the number of factorized attention heads. Figure 4. Diagram depicting one residual block of the Sparse Trans- former. The shaded background indicates tensors which are check- pointed (Chen et al., 2016) and stored in GPU memory. The other tensors, including the attention weights and feedforward network activations, are recomputed during the calculation of gradients, reducing memory usage substantially. A second approach is to have a single head attend to the locations of the pixels that both factorized heads would attend to, which we call a merged head: P attention(X) = W, - attend(X, U A™) (7) m=1 This is slightly more computationally intensive, but only by a constant factor. A third approach is to use multi-head attention (Vaswani et al., 2017), where nh attention products are computed in parallel, then concatenated along the feature dimension: Here, the A can be the separate attention patterns, the merged patterns, or interleaved as in Eq. 2. Also, the di- mensions of the weight matrices inside the attend function are reduced by a factor of 1/nh, such that the number of parameters are invariant across values of nh. We typically find multiple heads to work well, though for extremely long sequences where the attention dominates the computation time, it is more worthwhile to perform them one at a time and sequentially. # 5.2. Scaling to hundreds of layers attention(X) = W,, (attend (x, A)) (8) We found that Transformers were difficult to train with many layers, as noted by (Al-Rfou et al., 2018). Instead of incorporating auxillary losses, we adopted the following Generating Long Sequences with Sparse Transformers architectural changes. 
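For reference, the factorized attention primitive of Section 4.2 (a restatement of Eqs. 2 to 4 above), whose notation reappears in the residual block described next, reads:

```latex
\operatorname{Attend}(X, S) = \bigl(a(\mathbf{x}_i, S_i)\bigr)_{i \in \{1,\ldots,n\}},
\qquad
a(\mathbf{x}_i, S_i) = \operatorname{softmax}\!\Bigl(\tfrac{(W_q \mathbf{x}_i)\, K_{S_i}^{\top}}{\sqrt{d}}\Bigr) V_{S_i},
\qquad
K_{S_i} = \bigl(W_k \mathbf{x}_j\bigr)_{j \in S_i},\;
V_{S_i} = \bigl(W_v \mathbf{x}_j\bigr)_{j \in S_i}.
```

When a sparse pattern is used, S_i is replaced by the factorized index sets A^(m)_i of the chosen scheme.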
First, we use the pre-activation residual block of (He et al., 2016), defining a network of N layers in the following way: H0 = embed(X, We) (9) For images, we used data embeddings, where ddata = 3 for the row, column, and channel location of each input byte. For text and audio, we used two-dimensional attention embeddings, where dattn = 2 and the index corresponds to each position’s row and column index in a matrix of width equal to the stride. Hk = Hk−1 + resblock(Hk−1) (10) y = softmax(norm(HN )Wout) (11) # 5.4. Saving memory by recomputing attention weights where embed is a function we describe in the next section, Wout is a weight matrix, and resblock(h) normalizes the input to the attention block and a positionwise feedforward network in the following way: a(H) = dropout(attention(norm(H))) Gradient checkpointing has been shown to be effective in reducing the memory requirements of training deep neural networks (Chen et al., 2016), (Gruslys et al., 2016). It is worth noting, however, that this technique is particularly effective for self-attention layers when long sequences are processed, as memory usage is high for these layers relative to the cost of computing them. b(H) = dropout(ff(norm(H + a(H)))) resblock(H) = a(H) + b(H) (14) The norm function denotes Layer Normalization (Ba et al., 2016), and f(z) = W, f(W 2 + b,) + bg. Our choice of f is the Gaussian Error Linear Unit (Hendrycks & Gimpel, 2016), f(X) = X © sigmoid(1.702 - X), as used in (Rad- ford et al., 2018). The output dimension of W is 4.0 times the input dimension, unless otherwise noted. Observe that HN is the sum of N applications of functions a and b, and thus each function block receives a gradient directly from the output layer . We scale the initialization 1√ of W2 and Wp in Eq. 5 by to keep the ratio of input embedding scale to residual block scale invariant across values of N . # 5.3. Modeling diverse data types In addition to the embedding of input symbols, positional embeddings are typically used in Transformers and other location-agnostic architectures to encode the spatial relation- ships of data (Gehring et al., 2017), (Parmar et al., 2018). We found using learned embeddings which either encoded the structure of the data or the factorized attention patterns were important for performance of our models. We added either nemb = ddata or nemb = dattn embed- dings to each input location, where ddata refers to the num- ber of dimensions of the data, and dattn is the number of dimensions of the factorized attention. If xi is the one-hot encoded ith element in the sequence, and o(j) represents the one-hot encoded position of xi in the jth dimension (1 ≤ j ≤ nemb), then: Nemb xiWet >> of W; (15) j=l embed(X, W.) = Using recomputation alone, we are able to train dense atten- tion networks with hundreds of layers on sequence lengths of 16,384, which would be infeasible on modern hardware otherwise. In our experiments, we recompute the attention and feed- forward blocks during the backwards pass. To simplify our implementation, we do not apply dropout within the attention blocks, as in (Vaswani et al., 2017), and instead only apply it at the end of each residual addition, as seen in Figure 4. # 5.5. Efficient block-sparse attention kernels The sparse attention masks in 3(b) and 3(c) can be efficiently computed by slicing out sub-blocks from the query, key, and value matrices and computing the product in blocks. 
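A rough sketch of this block-wise slicing is given below for the local-window and strided cases described next; it is illustrative only (the causal mask within each block and the fused kernels are omitted), and the function name and the assumption that n is divisible by the stride are not from the paper.

```python
# Illustrative block-wise evaluation of the sparse attention heads.
import torch
import torch.nn.functional as F

def blockwise_attention(q, k, v, stride, transpose=False):
    """q, k, v: (n, d) with n divisible by `stride`.
    transpose=False -> local-window head; transpose=True -> strided head."""
    n, d = q.shape
    def to_blocks(t):
        t = t.view(n // stride, stride, d)
        return t.transpose(0, 1) if transpose else t    # group same-offset positions for the strided head
    qb, kb, vb = map(to_blocks, (q, k, v))
    scores = qb @ kb.transpose(-2, -1) / d ** 0.5       # attention only within each block
    out = F.softmax(scores, dim=-1) @ vb
    out = out.transpose(0, 1) if transpose else out
    return out.reshape(n, d)

q = k = v = torch.randn(1024, 64)
local = blockwise_attention(q, k, v, stride=128)                      # window head
strided = blockwise_attention(q, k, v, stride=128, transpose=True)    # strided head
```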
Atten- tion over a local window can be computed as-is, whereas attention with a stride of k can be computed by transposing the matrix and computing a local window. Fixed attention positions can be aggregated and computed in blocks. In order to ease experimentation, we implemented a set of GPU kernels which efficiently perform these operations. The softmax operation is fused into a single kernel and also uses registers to eliminate loading the input data more than once, allowing it to run at the same speed as a simple nonlinearity. The upper triangle of the attention matrix is never computed, moreover, removing the need for the negative bias term of (Vaswani et al., 2017) and halving the number of operations to be performed. # 5.6. Mixed-precision training We store network weights in single-precision floating-point, but otherwise compute network activations and gradients in half-precision, as in (Micikevicius et al., 2017). This acceler- ates our training due to the usage of Tensor Core operations on the V100 GPU. During the gradient calculation, we use Generating Long Sequences with Sparse Transformers Figure 5. Unconditional samples from ImageNet 64x64, generated with an unmodified softmax temperature of 1.0. We are able to learn long-range dependencies directly from pixels without using a multi-scale architecture. dynamic loss scaling to reduce numerical underflow, and we communicate half-precision gradients when averaging across multiple GPUs. When sampling, we cast the queries and keys to single-precision, as the query-key product can sometimes overflow the max value of half-precision. ized to 0 and all weights are initialized from N (0, 0.125√ ) din where din is the fan-in dimension. The weight matrix for the output logits was initialized to 0. # 7. Experiments # 6. Training We use the Adam optimizer with a linear warmup of 5000 iterations and a gradient clipping of 1.0, both of which we found important for model stability. We use a weight decay penalty of 0.01. We annealed the learning rate according to a cosine decay as in (Radford et al., 2018). We train on 8 V100 GPUs unless otherwise noted. We empirically test our architecture on density modeling tasks including natural images, text, and raw audio. A summary of the results is available in Table 1. We found that, in addition to running significantly faster than full attention, sparse patterns also converged to lower error, as shown in Table 2. This may point to a useful inductive bias from the sparsity patterns we introduced, or an underlying optimization issue with full attention. All embeddings are of a constant dimension d, usually one of {256, 512, 1024}. By default, all linear transforms are to the same dimension, with the exception of the feed-forward network, which projects the input to 4d, unless we use “half-size” transformations, where it is 2d. Additionally, sometimes we halve the size of the query and key transfor- mations. We initialize the token embedding We from N (0, 0.125√ ) and d the position embeddings from N (0, ). Within the attention and feedforward components, all biases are initial- # 7.1. CIFAR-10 We train strided Sparse Transformers on CIFAR-10 images represented as sequences of 3072 bytes. Models have 2 heads, 128 layers, d = 256, half-size feedforward network and query-key projections, and are trained for 120 epochs with a learning rate of 0.00035 and a dropout rate of 0.25 until validation error stops decreasing. 
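The optimization recipe of Section 6 used for these runs can be sketched as follows; the composition of warmup and cosine decay into a single schedule is an assumption consistent with the description above, not the authors' code.

```python
# Sketch of the training setup: Adam, 5000-step linear warmup, cosine decay,
# weight decay 0.01, and gradient clipping at 1.0 (values from the text).
import math
import torch

def make_optimizer(model, base_lr, total_steps, warmup_steps=5000):
    opt = torch.optim.Adam(model.parameters(), lr=base_lr, weight_decay=0.01)
    def lr_lambda(step):
        if step < warmup_steps:                                    # linear warmup
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))          # cosine decay
    return opt, torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)

# Per step: loss.backward();
#           torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
#           opt.step(); sched.step(); opt.zero_grad()
```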
We use 48000 examples for training and 2000 examples for validation, evaluating the performance of our best models on Generating Long Sequences with Sparse Transformers Table 1. Summary of our findings for density modeling tasks. Re- sults are reported in bits per byte, which is equivalent to bits per dim for image tasks. M refers to millions of parameters. Table 2. Sparse patterns showed increased speed and also better loss on the datasets where we could compare both, which may point to a useful inductive bias in the patterns we learned or an underlying optimization issue with full attention. Model Bits per byte CIFAR-10 PixelCNN (Oord et al., 2016) PixelCNN++ (Salimans et al., 2017) Image Transformer (Parmar et al., 2018) PixelSNAIL (Chen et al., 2017) Sparse Transformer 59M (strided) 3.03 2.92 2.90 2.85 2.80 Enwik8 Deeper Self-Attention (Al-Rfou et al., 2018) Transformer-XL 88M (Dai et al., 2018) Transformer-XL 277M (Dai et al., 2018) Sparse Transformer 95M (fixed) 1.06 1.03 0.99 0.99 ImageNet 64x64 PixelCNN (Oord et al., 2016) Parallel Multiscale (Reed et al., 2017) Glow (Kingma & Dhariwal, 2018) SPN 150M (Menick & Kalchbrenner, 2018) Sparse Transformer 152M (strided) Classical music, 5 seconds at 12 kHz Sparse Transformer 152M (strided) 3.57 3.7 3.81 3.52 3.44 1.97 Model Bits per byte Time/Iter Enwik8 (12,288 context) Dense Attention Sparse Transformer (Fixed) Sparse Transformer (Strided) 1.00 0.99 1.13 1.31 0.55 0.35 CIFAR-10 (3,072 context) Dense Attention Sparse Transformer (Fixed) Sparse Transformer (Strided) 2.82 2.85 2.80 0.54 0.47 0.38 Table 3. We observe increased compression of Enwik8 with longer contexts, suggesting the Sparse Transformer can effectively incor- porate long-term dependencies. Minimum context length during evaluation Bits per byte 6,144 tokens 9,216 tokens 10,752 tokens 11,904 tokens 12,096 tokens 12,160 tokens 0.9952 0.9936 0.9932 0.9930 0.9922 0.9908 the test set. The model achieves 2.80 bits per dim (2.798 ± 0.004 over seeds 1, 2, 3) versus the previous 2.85 state of the art (Chen et al., 2017). We also compare performance of different attention patterns in Table 2. The strided attention reaches the lowest error in the shortest amount of time, surpassing the error of dense attention at 2.82 bits per dim. # 7.2. Text In order to assess Sparse Transformers on datasets without a strong two-dimensional structure, we trained models on the EnWik8 dataset, which represents the first 108 bytes of Wikipedia and contains a great degree of variability in periodic structure. We trained with a context length of 12,288, which is longer than previous approaches. the number of parameters. Strided attention failed to do well on this dataset, whereas fixed patterns were able to recover and surpass the performance of dense attention, as listed in Table 2. Additionally, during evaluation of the test set, we modified the minimum context length the network could use by evalu- ating fewer tokens in parallel. We saw monotonic increases in performance with more tokens used, up to 12,160 out of the 12,288 tokens used for training (see Table 3), which suggests the network is effectively incorporating long-term dependencies. # 7.3. ImageNet 64x64 We trained on the first 90 million tokens and reserved the last 10 million for validation and test. We used 30-layer fixed Sparse Transformers with 8 heads, d = 512, and a dropout rate of 0.40. We trained for 80 epochs until validation loss stopped decreasing. We used a stride of 128, c = 32, and merged the factorized attention heads. 
In order to test the ability of the model to learn long range dependencies and scale to a large dataset, we train on the version of downsampled ImageNet released by (Oord et al., 2016) and evaluate on the validation set. We used a 48 layer strided Sparse Transformer with 16 attention heads and d = 512, totaling 152 million parameters. We used a stride of 128, a dropout of 0.01, and trained for 70 epochs, which took 7 days on 64 V100 GPUs. Our best model reached 0.99 bits per dim (0.992 ± 0.001 over seeds 1, 2, 3), surpassing the 1.03 state-of-the-art for a similarly-sized Transformer-XL (Dai et al., 2018) and matching the 0.99 of a model trained with more than double Our model achieves a loss of 3.44 bits per dim (3.437 across 1 run), in comparison to the previous 3.52 (Menick & Kalch- brenner, 2018). Generating Long Sequences with Sparse Transformers Additionally, we generate unconditional samples (Figure 5) at an unmodified softmax temperature of 1.0, from the model and from one trained with twice the layers (300M parameters total). We include here samples from the 300M parameter model. On visual assessment we find no artifacts from the sparsity patterns and see evidence of long-term structure in most images. # 9. Acknowledgements We would like to thank Ashish Vaswani for insightful dis- cussions during the genesis of the project. We also thank Joshua Meier and Mark Chen for helpful discussions, and Johannes Otterbach, Prafulla Dhariwal, and David Luan for feedback on drafts of this paper. # 7.4. Classical music from raw audio # References To test the extent to which Sparse Transformers are able to scale to very long contexts, we trained models on the classical music dataset released by (Dieleman et al., 2018). As details of the dataset processing are unavailable, we omit any direct comparison to other work and instead study what size of Sparse Transformer we can train with increasing context size. For each sequence length, we attempted to train the largest model which could entirely fit into 16GB V100 accelerators without model parallelism. Al-Rfou, R., Choe, D., Constant, N., Guo, M., and Jones, L. Character-level language modeling with deeper self- attention. arXiv preprint arXiv:1808.04444, 2018. Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. Britz, D., Guan, M. Y., and Luong, M.-T. Efficient attention using a fixed-size memory representation. arXiv preprint arXiv:1707.00110, 2017. Overall, we found that increasing the sequence length by a factor of 4 requires a reduction in model capacity of approx- 4 = 8. Thus we found we could use factorized imately 4 self-attention on sequences over 1 million timesteps long, albeit with extremely few parameters (3 million). Chen, T., Xu, B., Zhang, C., and Guestrin, C. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016. Samples are available for sequences of length 65,536, which correspond to around 5 seconds of generated audio at 12kHz. The samples clearly demonstrate global coherence over the sampled period, and exhibit a variety of play styles and tones, swapping from rhythmic playing to forceful. To listen to samples, visit https://openai.com/blog/ sparse-transformer. Sample quality quickly de- grades for greater sequence lengths due to reduced model capacity. Chen, X., Mishra, N., Rohaninejad, M., and Abbeel, P. Pixelsnail: An improved autoregressive generative model. arXiv preprint arXiv:1712.09763, 2017. Chiu, C.-C. and Raffel, C. 
Monotonic chunkwise attention. arXiv preprint arXiv:1712.05382, 2017. Dai, Z., Yang, Z., Yang, Y., Cohen, W. W., Carbonell, J., Le, Q. V., and Salakhutdinov, R. Transformer-xl: Language modeling with longer-term dependency. 2018. Table 4. Performance of a strided Sparse Transformer on a classical audio dataset (µ-law encoded at 12 kHz) as a function of sequence length and model size. Dieleman, S., van den Oord, A., and Simonyan, K. The chal- lenge of realistic music generation: modelling raw audio at scale. In Advances in Neural Information Processing Systems, pp. 8000–8010, 2018. Sequence length 65,536 262,144 1,048,576 Parameters Bits per byte 152M 25M 3M 1.97 2.17 2.99 # 8. Conclusion Gehring, J., Auli, M., Grangier, D., Yarats, D., and Dauphin, Y. N. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017. Gruslys, A., Munos, R., Danihelka, I., Lanctot, M., and Graves, A. Memory-efficient backpropagation through In Advances in Neural Information Processing time. Systems, pp. 4125–4133, 2016. We introduced Sparse Transformers and showed they attain equivalent or better performance on density modeling of long sequences than standard Transformers while requiring significantly fewer operations. This performance is state- of-the-art in images and text and is easily adaptable to raw audio. The model demonstrates usage of long-term context and generates globally coherent samples. He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. Hendrycks, D. and Gimpel, K. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. arXiv preprint arXiv:1606.08415, 2016. Generating Long Sequences with Sparse Transformers Huang, C.-Z. A., Vaswani, A., Uszkoreit, J., Shazeer, N., Hawthorne, C., Dai, A. M., Hoffman, M. D., and Eck, D. An improved relative self-attention mechanism for transformer with application to music generation. arXiv preprint arXiv:1809.04281, 2018. Jozefowicz, R., Vinyals, O., Schuster, M., Shazeer, N., and Wu, Y. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016. Sukhbaatar, S., Weston, J., Fergus, R., et al. End-to-end memory networks. In Advances in neural information processing systems, pp. 2440–2448, 2015. Van Den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. Wavenet: A generative model for raw audio. CoRR abs/1609.03499, 2016. Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10236–10245, 2018. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Atten- tion is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017. Koutnik, J., Greff, K., Gomez, F., and Schmidhuber, J. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014. Liu, P. J., Saleh, M., Pot, E., Goodrich, B., Sepa- ssi, R., Kaiser, L., and Shazeer, N. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198, 2018. Mehri, S., Kumar, K., Gulrajani, I., Kumar, R., Jain, S., Sotelo, J., Courville, A., and Bengio, Y. Samplernn: An unconditional end-to-end neural audio generation model. arXiv preprint arXiv:1612.07837, 2016. Menick, J. and Kalchbrenner, N. Generating high fidelity im- ages with subscale pixel networks and multidimensional upscaling. arXiv preprint arXiv:1812.01608, 2018. 
Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaev, O., Venkatesh, G., et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017. Oord, A. v. d., Kalchbrenner, N., and Kavukcuoglu, K. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, Ł., Shazeer, Image transformer. arXiv preprint N., and Ku, A. arXiv:1802.05751, 2018. Radford, A., Narasimhan, K., Salimans, T., and Sutskever, Improving language understanding by genera- I. URL https://s3-us-west-2. ama- tive pre-training. zonaws. com/openai-assets/research-covers/language- unsupervised/language understanding paper. pdf, 2018. Reed, S., Oord, A. v. d., Kalchbrenner, N., Colmenarejo, S. G., Wang, Z., Belov, D., and de Freitas, N. Paral- lel multiscale autoregressive density estimation. arXiv preprint arXiv:1703.03664, 2017. Salimans, T., Karpathy, A., Chen, X., and Kingma, D. P. Pixelcnn++: Improving the pixelcnn with discretized lo- gistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
{ "id": "1603.05027" }
1904.09708
Compositional generalization in a deep seq2seq model by separating syntax and semantics
Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words. Inspired by work in neuroscience suggesting separate brain systems for syntactic and semantic processing, we implement a modification to standard approaches in neural machine translation, imposing an analogous separation. The novel model, which we call Syntactic Attention, substantially outperforms standard methods in deep learning on the SCAN dataset, a compositional generalization task, without any hand-engineered features or additional supervision. Our work suggests that separating syntactic from semantic learning may be a useful heuristic for capturing compositional structure.
http://arxiv.org/pdf/1904.09708
Jake Russin, Jason Jo, Randall C. O'Reilly, Yoshua Bengio
cs.LG, cs.CL, stat.ML
18 pages, 15 figures, preprint version of submission to NeurIPS 2019, under review
null
cs.LG
20190422
20190523
9 1 0 2 y a M 3 2 ] G L . s c [ 3 v 8 0 7 9 0 . 4 0 9 1 : v i X r a # Compositional generalization in a deep seq2seq model by separating syntax and semantics # Jake Russin Department of Psychology and Neuroscience University of Colorado Boulder [email protected] Jason Jo MILA Université de Montréal # Randall C. O’Reilly Department of Psychology and Neuroscience University of Colorado Boulder Yoshua Bengio MILA, Université de Montréal CIFAR Senior Fellow # Abstract Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic gener- alization outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words. Inspired by work in neuroscience suggesting separate brain systems for syntactic and semantic processing, we implement a modification to standard approaches in neural machine translation, imposing an analogous separation. The novel model, which we call Syntactic Attention, substantially outperforms standard methods in deep learning on the SCAN dataset, a compositional generalization task, without any hand-engineered features or additional supervision. Our work suggests that separating syntactic from semantic learning may be a useful heuristic for capturing compositional structure. # Introduction A crucial property underlying the expressive power of human language is its systematicity [16; 9]: syntactic or grammatical rules allow arbitrary elements to be combined in novel ways, making the number of sentences possible in a language to be exponential in the number of its basic elements. Recent work has shown that standard deep learning methods in natural language processing fail to capture this important property: when tested on unseen combinations of known elements, state-of- the-art models fail to generalize [15; 17; 4]. It has been suggested that this failure represents a major deficiency of current deep learning models, especially when they are compared to human learners [19; 16]. A recently published dataset called SCAN [15] (Simplified version of the CommAI Navigation tasks), tests compositional generalization in a sequence-to-sequence (seq2seq) setting by systematically holding out of the training set all inputs containing a basic primitive verb ("jump"), and testing on sequences containing that verb. Success on this difficult problem requires models to generalize knowledge gained about the other primitive verbs ("walk", "run" and "look") to the novel verb "jump," without having seen "jump" in any but the most basic context ("jump" → JUMP). It is trivial for human learners to generalize in this way (e.g. if I tell you that "dax" is a verb, you can generalize its usage to all kinds of constructions, like "dax twice and then dax again", without even knowing what the word means) [15]. However, standard recurrent seq2seq models fail miserably on this task, with the best-reported model (a gated recurrent unit augmented with an attention mechanism) achieving Preprint. Under review. ¢ Train © Test Figure 1: Simplified illustration of out-of-domain (o.o.d.) extrapolation required by SCAN compo- sitional generalization task. Shapes represent the distribution of all possible command sequences. In a simple split, train and test data are independent and identically distributed (i.i.d.), but in the add-primitive splits, models are required to extrapolate out-of-domain from a single example. 
only 12.5% accuracy on the test set [15; 4]. Recently, convolutional neural networks (CNN) were shown to perform better on this test, but still only achieved 69.2% accuracy on the test set. From a statistical-learning perspective, this failure is quite natural. The neural networks trained on the SCAN task fail to generalize because they have memorized biases that do indeed exist in the training set. Because "jump" has never been seen with any adverb, it would not be irrational to assume that "jump twice" is an invalid sentence in this language. The SCAN task requires networks to make an inferential leap about the entire structure of part of the distribution that they have not seen - that is, it requires them to make an out-of-domain (o.o.d.) extrapolation [19], rather than merely interpolate according to the assumption that train and test data are independent and identically distributed (i.i.d.) (see Figure 1). Seen another way, the SCAN task and its analogues in human learning (e.g. "dax"), require models not to learn some of the correlations that are actually present in the training data [14]. Given that humans can perform well on certain kinds of o.o.d. extrapolation tasks, the human brain must be implementing principles that allow humans to generalize systematically, but which are lacking in current deep learning models. One prominent idea from neuroscience research on language processing that may offer such a principle is that the brain contains partially separate systems for processing syntax and semantics. In this paper, we motivate such a separation from a machine- learning perspective, and test a simple implementation on the SCAN dataset. Our novel model, which we call Syntactic Attention, encodes syntactic and semantic information in separate streams before producing output sequences. Our experiments show that our novel architecture achieves substantially improved compositional generalization performance over other recurrent networks on the SCAN dataset. # 1.1 Syntax and prefrontal cortex Syntax is the aspect of language underlying its systematicity [9]. When given a novel verb like "dax," humans can generalize its usage to many different constructions that they have never seen before, by applying known syntactic or grammatical rules about verbs (e.g. rules about how to conjugate to a different tense or about how adverbs modify verbs). It has long been thought that humans possess specialized cognitive machinery for learning the syntactic or grammatical structure of language [7]. A part of the prefrontal cortex called Broca’s area, originally thought only to be involved in language production, was later found to be important for comprehending syntactically complex sentences, leading some to conclude that it is important for syntactic processing in general [6; 26]. For example, patients with lesions to this area showed poor comprehension on sentences such as "The girl that the boy is chasing is tall". Sentences such as this one require listeners to process syntactic information because semantics is not enough to understand their meanings - e.g. either the boy or the girl could be doing the chasing, and either could be tall. A more nuanced view situates the functioning of Broca’s area within the context of prefrontal cortex in general, noting that it may simply be a part of prefrontal cortex specialized for language [26]. 
The prefrontal cortex is known to be important for cognitive control, or the active maintenance of top-down attentional signals that bias processing in other areas of the brain [21] (see diagram on the 2 Commands Actions Encoder Decoder ‘turn’ “LTURN’ il Attention ‘JUMP’ ‘jump’ Semantics ‘twice’ Commands Actions DA reward ~ Encoder Decoder ‘turn’ “LTURN’ il Attention ‘JUMP’ ‘jump’ Semantics ‘twice’ DA reward ~ Figure 2: (left) Syntactic Attention architecture. Syntactic and semantic information are maintained in separate streams. The semantic stream processes words with a simple linear transformation, so that sequential information is not maintained. This information is used to directly produce actions. The syntactic stream processes inputs with a recurrent neural network, allowing it to capture temporal dependencies between words. This stream determines the attention over semantic representations at each time step during decoding. (right) Diagram of an influential computational model of prefrontal cortex (PFC) [21]. Prefrontal cortex dynamically modulates processes in other parts of the brain through top-down selective attention signals. A part of the prefrontal cortex, Broca’s area, is thought to be important for syntactic processing [26]. Figure reproduced from [20]. right of Figure 2). In this framework, Broca’s area can be thought of as a part of prefrontal cortex specialized for language, and responsible for selectively attending to linguistic representations housed in other areas of the brain [26]. The prefrontal cortex has received much attention from computational neuroscientists [21; 22], and one model even showed a capacity for compositional generalization [14]. However, these ideas have not been taken up in deep learning research. Here, we emphasize the idea that the brain contains two separate systems for processing syntax and semantics, where the semantic system learns and stores representations of the meanings of words, and the syntactic system, housed in Broca’s area of the prefrontal cortex, learns how to selectively attend to these semantic representations according to grammatical rules. # 2 Syntactic Attention The Syntactic Attention model improves the compositional generalization capability of an existing attention mechanism [2] by implementing two separate streams of information processing for syntax and semantics (see Figure 2). Here, by "semantics" we mean the information in each word in the input that determines its meaning (in terms of target outputs), and by "syntax" we mean the information contained in the input sequence that should determine the alignment of input to target words. We describe the mechanisms of this separation and the other details of the model below, following the notation of [2], where possible. # 2.1 Separation assumption In the seq2seq problem, models must learn a mapping from arbitrary-length sequences of inputs x = {x1, x2, ..., xTx } to arbitrary-length sequences of outputs y = {y1, y2, ..., yTy }: p(y|x). The attention mehcanism of [2] models the conditional probability of each target word given the input sequence and previous targets: p(yi|y1, y2, ..., yi−1, x). This is accomplished by processing the input sequence with a recurrent neural network (RNN) in the encoder. The outputs of this RNN are used both for encoding individual words in the input for later translation, and for determining their alignment to targets during decoding. 
The underlying assumption made by the Syntactic Attention architecture is that the dependence of target words on the input sequence can be separated into two independent factors. One factor, p(yi|xj), which we refer to as "semantics," models the conditional distribution from individual words in the input to individual words in the target. Note that, unlike in the model of Bahdanau et al. [2], these xj do not contain any information about the other words in the input sequence because they are not processed with an RNN. They are "semantic" in the sense that they contain the information relevant to translating into the target language. The other factor, p(j → i|x), which we refer to as 3 "syntax," models the conditional probability that word j in the input is relevant to word i in the target sequence, given the entire input sequence. This alignment is accomplished from encodings of the inputs produced by an RNN. This factor is "syntactic" in the sense that it must capture all of the temporal information in the input that is relevant to determining the serial order of outputs. The crucial architectural assumption, then, is that any temporal dependency between individual words in the input that can be captured by an RNN should only be relevant to their alignment to words in the target sequence, and not to the translation of individual words. This assumption will be made clearer in the model description below. # 2.2 Encoder The encoder produces two separate vector representations for each word in the input sequence. Unlike the previous attention model [2]), we separately extract the semantic information from each word with a linear transformation: mj = Wmxj, mj = Wmxj, (1) where Wm is a learned weight matrix that multiplies the one-hot encodings {x1, ..., xTx }. Note that the semantic representation of each word does not contain any information about the other words in the sentence. As in the previous attention mechanism [2], we use a bidirectional RNN (biRNN) to extract what we now interpret as the syntactic information from each word in the input sequence. −−→ −→ The biRNN produces a vector for each word on the forward pass, ( hTx ), and a vector for each h1, ..., ←−− ←− word on the backward pass, ( hTx ). The syntactic information (or "annotations" [2]) of each h1, ..., word xj is determined by the two vectors ←−− hj+1 corresponding to the words surrounding it: −−→ hj−1; ←−− hj+1] hj = [ (2) In all experiments, we used a bidirectional Long Short-Term Memory (LSTM) for this purpose. Note that because there is no sequence information in the semantic representations, all of the information required to parse (i.e. align) the input sequence correctly (e.g. phrase structure, modifying relationships, etc.) must be encoded by the biRNN. # 2.3 Decoder The decoder models the conditional probability of each target word given the input and the previous targets: p(yi|y1, y2, ..., yi−1, x), where yi is the target translation and x is the whole input sequence. As in the previous model, we use an RNN to determine an attention distribution over the inputs at each time step (i.e. to align words in the input to the current target). However, our decoder diverges from this model in that the mapping from inputs to outputs is performed from a weighted average of the semantic representations of the input words: yi-1,*) = f(di) (3) Ty d= SPaiymy — vlyilyr, ye, j=l where f is parameterized by a linear function with a softmax nonlinearity, and the αij are the weights determined by the attention model. 
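A minimal PyTorch sketch of the two-stream encoder (Eqs. 1 and 2) is given below. The module and variable names are illustrative rather than the authors' released code, and the boundary handling for Eq. (2) is simplified with zero padding.

```python
# Two-stream encoder sketch: linear "semantic" embeddings and a biLSTM "syntactic" stream.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamEncoder(nn.Module):
    def __init__(self, vocab_size, sem_dim=120, syn_dim=200):
        super().__init__()
        self.semantic = nn.Linear(vocab_size, sem_dim, bias=False)   # m_j = W_m x_j
        self.syntactic = nn.LSTM(vocab_size, syn_dim, bidirectional=True, batch_first=True)

    def forward(self, x_onehot):                     # x_onehot: (T, vocab_size)
        m = self.semantic(x_onehot)                  # semantics: no cross-word information
        h_bi, _ = self.syntactic(x_onehot.unsqueeze(0))
        h_fwd, h_bwd = h_bi.squeeze(0).chunk(2, dim=-1)
        # Eq. (2): pair each word with its neighbours' states, h_j = [h_fwd[j-1]; h_bwd[j+1]]
        h_fwd_prev = F.pad(h_fwd, (0, 0, 1, 0))[:-1]
        h_bwd_next = F.pad(h_bwd, (0, 0, 0, 1))[1:]
        h = torch.cat([h_fwd_prev, h_bwd_next], dim=-1)
        return m, h                                  # semantic and syntactic streams
```

The key design point is visible in the code: the semantic stream is a per-word linear map with no access to the rest of the sentence, while all sequential information flows through the biLSTM.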
We note again that the mj are produced directly from corresponding xj, and do not depend on the other inputs. The attention weights are computed by a function measuring how well the syntactic information of a given word in the input sequence aligns with the current hidden state of the decoder RNN, si: exp(e;;) Wika exp (ein) ay = ei; = a(s;,h;) (4) where eij can be thought of as measuring the importance of a given input word xj to the current target word yi, and si is the current hidden state of the decoder RNN. Bahdanau et al. [2] model the function a with a feedforward network, but following [11], we choose to use a simple dot product: a(si, hj) = si · hj, (5) 4 relying on the end-to-end backpropagation during training to allow the model to learn to make appropriate use of this function. Finally, the hidden state of the RNN is updated with the same weighted combination of the syntactic representations of the inputs: $i = 9(Si-1, Ci) q= augh; (6) S IL » where g is the decoder RNN, si is the current hidden state, and ci can be thought of as the information in the attended words that can be used to determine what to attend to on the next time step. Again, in all experiments an LSTM was used. # 3 Experiments # 3.1 SCAN dataset JUMP LTURN JUMP RTURN JUMP RTURN JUMP RTURN JUMP RTURN JUMP LTURN LTURN JUMP JUMP JUMP LTURN LTURN JUMP WALK WALK WALK LTURN WALK LTURN WALK LTURN WALK LTURN WALK LTURN LTURN JUMP jump jump left jump around right tum left twice jump thrice jump opposite left and walk thrice jump opposite left after walk around left VYHHYND Figure 3: Examples from SCAN dataset. Figure reproduced from [15]. The SCAN1 dataset is composed of sequences of commands that must be mapped to sequences of actions [15] (see Figure 3 and supplementary materials for further details). The dataset is generated from a simple finite phrase-structure grammar that includes things like adverbs and conjunctions. There are 20,910 total examples in the dataset that can be split systematically into training and testing sets in different ways. These splits include the following: Simple split: training and testing data are split randomly • Length split: training includes only shorter sequences • Add primitive split: a primitive command (e.g. "turn left" or "jump") is held out of the training set, except in its most basic form (e.g. "jump" → JUMP) Here we focus on the most difficult problem in the SCAN dataset, the add-jump split, where "jump" is held out of the training set. The best test accuracy reported in the original paper [15], using standard seq2seq models, was 1.2%. More recent work has tested other kinds of seq2seq models, including Gated Recurrent Units (GRU) augmented with attention [4] and convolutional neural networks (CNNs) [8]. Here, we compare the Syntactic Attention model to the best previously reported results. # Implementation details Experimental procedure is described in detail in the supplementary materials. Train and test sets were kept as they were in the original dataset, but following [4], we used early stopping by validating on a 20% held out sample of the training set. All reported results are from runs of 200,000 iterations with a batch size of 1. Unless stated otherwise, each architecture was trained 5 times with different random seeds for initialization, to measure variability in results. All experiments were implemented in PyTorch. Details of the hyperparameter search are given in supplementary materials. 
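A rough sketch of one decoder step, covering the alignment and state update of Eqs. (4) to (6) together with the semantic readout of Eq. (3), is given below. The names are illustrative and an LSTMCell whose hidden size equals the concatenated biLSTM width stands in for the decoder RNN g; this is not the authors' released code.

```python
# One decoder step: dot-product alignment over the syntactic stream,
# action prediction from the attended *semantic* vectors only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntacticAttentionDecoderStep(nn.Module):
    def __init__(self, sem_dim, syn_dim, n_actions):
        super().__init__()
        self.rnn = nn.LSTMCell(2 * syn_dim, 2 * syn_dim)   # g in Eq. (6)
        self.out = nn.Linear(sem_dim, n_actions)           # f in Eq. (3)

    def forward(self, state, m, h):
        s, c_mem = state                                   # decoder hidden / cell state
        e = h @ s                                          # Eq. (5): e_ij = s_i . h_j
        alpha = F.softmax(e, dim=0)                        # Eq. (4): alignment weights
        d = alpha @ m                                      # Eq. (3): attended semantics
        logits = self.out(d)                               # distribution over actions
        c = alpha @ h                                      # attended syntax for Eq. (6)
        hx, cx = self.rnn(c.unsqueeze(0), (s.unsqueeze(0), c_mem.unsqueeze(0)))
        return logits, (hx.squeeze(0), cx.squeeze(0))
```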
Our best model used LSTMs, with 2 layers and 200 hidden units in the encoder, and 1 layer and 400 hidden units in the decoder, and 120-dimensional semantic vectors. The model included a dropout rate of 0.5, and was optimized using an Adam optimizer [13] with a learning rate of 0.001. 1The SCAN dataset can be downloaded at https://github.com/brendenlake/SCAN 5 # 3.3 Results The Syntactic Attention model achieves state-of-the-art performance on the key compositional generalization task of the SCAN dataset (see table 1). The table shows results (mean test accuracy (%) ± standard deviation) on the test splits of the dataset. Syntactic Attention is compared to the previous best models, which were a CNN [8], and GRUs augmented with an attention mechanism ("+ attn"), which either included or did not include a dependency ("- dep") in the decoder on the previous action [4]. The best model from the hyperparameter search showed strong compositional generalization performance, attaining a mean accuracy of 91.1% (median = 98.5%) on the test set of the add-jump split. However, as in Dessì and Baroni [8], we found that our model showed variance across initialization seeds. We suggest that this may be due to the nature of the add-jump split: since "jump" has only been encountered in the simplest context, it may be that slight changes to the way that this verb is encoded can make big differences when models are tested on more complicated constructions. For this reason, we ran the best model 25 times on the add-jump split to get a more accurate assessment of performance. These results were highly skewed, with a mean accuracy of 78.4 % but a median of 91.0 % (see supplementary materials for detailed results). Overall, this represents an improvement over the best previously reported results on this task [4; 8], and does so without any hand-engineered features or additional supervision. Table 1: Compositional generalization results. The Syntactic Attention model achieves an improve- ment on the compositional generalization tasks of the SCAN dataset, compared to the best previously reported models [4; 8]. Star* indicates median of 25 runs. Model GRU + attn [4] GRU + attn - dep [4] CNN [8] Syntactic Attention Simple 100.0 ± 0.0 100.0 ± 0.0 100.0 ± 0.0 100.0 ± 0.0 Length 18.1 ± 1.1 17.8 ± 1.7 - 15.2 ± 0.7 Add turn left 59.1 ± 16.8 90.8 ± 3.6 - 99.9 ± 0.16 Add jump 12.5 ± 6.6 0.7 ± 0.4 69.2 ± 8.2 91.0* ± 27.4 # 3.4 Additional experiments To test our hypothesis that compositional generalization requires a separation between syntax (i.e. sequential information used for alignment), and semantics (i.e. the mapping from individual source words to individual targets), we conducted two more experiments: Sequential semantics. An additional biLSTM was used to process the semantics of the sentence: mj = [−→mj; ←−mj], where −→mj and ←−mj are the vectors produced for the source word xj by a biLSTM on the forward and backward passes, respectively. These mj replace those generated by the simple linear layer in the Syntactic Attention model (in equation (1)). • Syntax-action. Syntactic information was allowed to directly influence the output at each time step in the decoder: p(yi|y1, y2, ..., yi−1, x) = f ([di; ci]), where again f is parameter- ized with a linear function and a softmax output nonlinearity. The results of the additional experiments (mean test accuracy (%) ± standard deviations) are shown in table 2. 
These results partially confirmed our hypothesis: performance on the jump-split test set was worse when the strict separation between syntax and semantics was violated by allowing sequential information to be processed in the semantic stream. However, "syntax-action," which included sequential information produced by a biLSTM (in the syntactic stream) in the final production of actions, maintained good compositional generalization performance. We hypothesize that this was because in this setup, it was easier for the model to learn to use the semantic information to directly translate actions, so it largely ignored the syntactic information. This experiment suggests that the separation between syntax and semantics does not have to be perfectly strict, as long as non-sequential semantic representations are available for direct translation.

Table 2: Results of additional experiments. Star* indicates median of 25 runs.

Model                 | Simple       | Length      | Add turn left | Add jump
Sequential semantics  | 99.3 ± 0.7   | 13.1 ± 2.5  | 99.4 ± 1.1    | 42.3 ± 32.7
Syntax-action         | 99.3 ± 0.85  | 15.2 ± 1.9  | 98.2 ± 2.2    | 88.7 ± 14.2
Syntactic Attention   | 100.0 ± 0.0  | 15.2 ± 0.7  | 99.9 ± 0.16   | 91.0* ± 27.4

# 4 Discussion

The Syntactic Attention model was designed to incorporate a key principle that has been hypothesized to describe the organization of the linguistic brain: mechanisms for learning rule-like or syntactic information are separated from mechanisms for learning semantic information. Our experiments confirm that this simple organizational principle encourages systematicity in recurrent neural networks in the seq2seq setting, as shown by the substantial improvement in the model's performance on the compositional generalization tasks in the SCAN dataset. The model makes the assumption that the translation of individual words in the input should be independent of their alignment to words in the target sequence. To this end, two separate encodings are produced for the words in the input: semantic representations in which each word is not influenced by other words in the sentence, and syntactic representations which are produced by an RNN that can capture temporal dependencies in the input sequence (e.g. modifying relationships, binding to grammatical roles). Just as Broca's area of the prefrontal cortex is thought to play a role in syntactic processing through a dynamic selective-attention mechanism that biases processing in other areas of the brain, the syntactic system in our model encodes serial information and is constrained to influence outputs through an attention mechanism alone. Patients with lesions to Broca's area are able to comprehend sentences like "The girl is kicking a green ball", where semantics can be used to infer the grammatical roles of the words (e.g. that the girl, not the ball, is doing the kicking) [6]. However, these patients struggle with sentences such as "The girl that the boy is chasing is tall", where the sequential order of the words, rather than semantics, must be used to infer grammatical roles (e.g. either the boy or the girl could be doing the chasing). In our model, the syntactic stream can be seen as analogous to Broca's area, because without it the model would not be able to learn about the temporal dependencies that determine the grammatical roles of words in the input. The separation of semantics and syntax, which is in the end a constraint, forces the model to learn, in a relatively independent fashion, 1) the individual meanings of words and 2) how the words are being used in a sentence (e.g.
how they can modify one another, what grammatical role each is playing, etc.). This encourages systematic generalization because, even if a word has only been encountered in a single context (e.g. "jump" in the add-jump split), as long as its syntactic role is known (e.g. that it is a verb that can be modified by adverbs such as "twice"), it can be used in many other constructions that follow the rules for that syntactic role (see supplementary materials for visualizations). Additional experiments confirmed this intuition, showing that when sequential information is allowed to be processed by the semantic system ("sequential semantics"), systematic generalization performance is substantially reduced. The Syntactic Attention model bears some resemblance to a symbolic system - the paradigm example of systematicity - in the following sense: in symbolic systems, representational content (e.g. the value of a variable stored in memory) is maintained separately from the computations that are performed on that content. This separation ensures that the manipulation of the content stored in variables is fairly independent of the content itself, and will therefore generalize to arbitrary elements. Our model implements an analogous separation, but in a purely neural architecture that does not rely on hand-coded rules or additional supervision. In this way, it can be seen as transforming a difficult out-of-domain (o.o.d.) generalization problem into two separate i.i.d. generalization problems - one where the individual meanings of words are learned, and one where how words are used (e.g. how adverbs modify verbs) is learned (see Figure 4). It is unlikely that the human brain has such a strict separation between semantic and syntactic processing, and in the end, there must be more of an interaction between the two streams. We expect that the separation between syntax and semantics in the brain is only a relative one, but we have shown here that this kind of separation can be useful for encouraging systematicity and allowing for compositional generalization. 7 jump” “jump twice” Figure 4: Illustration of the transformation of an out-of-domain (o.o.d.) generalization problem into two independent, identically distributed (i.i.d.) generalization problems. This transformation is accomplished by the Syntactic Attention model without hand-coding grammatical rules or supervising with additional information such as parts-of-speech tags. # 5 Other related work Our model integrates ideas from computational and cognitive neuroscience [26; 22; 14; 21], into the neural machine translation framework. Much of the work in neural machine translation uses an encoder-decoder framework, where one RNN is used to encode the source sentence, and then a decoder neural network decodes the representations given by the encoder to produce the words in the target sentence [25]. Earlier work attempted to encode the source sentence into a single fixed-length vector (the final hidden state of the encoder RNN), but it was subsequently shown that better performance could be achieved by encoding each word in the source, and using an attention mechanism to align these encodings with each target word during the decoding process [2]. The current work builds directly on this attention model, while incorporating a separation between syntactic and semantic information streams. The principle of compositionality has recently regained the attention of deep learning researchers [1; 3; 16; 15; 5; 12] . 
In particular, the issue has been explored in the visual-question answering (VQA) setting [1; 11; 12; 23; 10; 24; 27]. Many of the successful models in this setting learn hand-coded operations [1; 10], use highly specialized components [11; 24], or use additional supervision [10; 27]. In contrast, our model uses standard recurrent networks and simply imposes the additional constraint that syntactic and semantic information are processed in separate streams. Some of the recent research on compositionality in machine learning has had a special focus on the use of attention. For example, in the Compositional Attention Network, built for VQA, a strict separation is maintained between the representations used to encode images and the representations used to encode questions [11]. This separation is enforced by restricting them to interact only through attention distributions. Our model utilizes a similar restriction, reinforcing the idea that compositionality is enhanced when information from different modalities (in our case syntax and semantics) are only allowed to interact through discrete probability distributions. Previous research on compositionality in machine learning has also focused on the incorporation of symbol-like processing into deep learning models [1; 10; 27]. These methods generally rely on hand-coding or additional supervision for the symbolic representations or algorithmic processes to emerge. For example, in neural module networks [1], a neural network is constructed out of composable neural modules that each learn a specific operation. These networks have shown an impressive capacity for systematic generalization on VQA tasks [3]. These models can be seen as accomplishing a similar transformation as depicted in Figure 4, because the learning in each module is somewhat independent of the mechanism that composes them. However, Bahdanau et al. [3] find that when these networks are trained end-to-end (i.e. without hand-coded parameterizations and layouts) their systematicity is significantly degraded. In contrast, our model learns in an end-to-end way to generalize systematically without any explicit symbolic processes built in. This offers an alternative way in which symbol-like processing can be achieved with neural networks - by enforcing a separation between mechanisms for learning representational content (semantics) and mechanisms for learning how to dynamically attend to or manipulate that content (syntax) in the context of a cognitive operation or reasoning problem. 8 # 6 Conclusion The Syntactic Attention model incorporates ideas from cognitive and computational neuroscience into the neural machine translation framework, and produces the kind of systematic generalization thought to be a key component of human language-learning and intelligence. The key feature of the architecture is the separation of sequential information used for alignment (syntax) from information used for mapping individual inputs to outputs (semantics). This separation allows the model to generalize the usage of a word with known syntax to many of its valid grammatical constructions. This principle may be a useful heuristic in other natural language processing tasks, and in other systematic or compositional generalization tasks. 
The success of our approach suggests a conceptual link between dynamic selective-attention mechanisms in the prefrontal cortex and the systematicity of human cognition, and points to the untapped potential of incorporating ideas from cognitive science and neuroscience into modern approaches in deep learning and artificial intelligence [18]. # References [1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural Module Networks. arXiv:1511.02799 [cs], Nov. 2015. [2] D. Bahdanau, K. Cho, and Y. Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv:1409.0473 [cs, stat], Sept. 2014. [3] D. Bahdanau, S. Murty, M. Noukhovitch, T. H. Nguyen, H. de Vries, and A. Courville. System- atic Generalization: What Is Required and Can It Be Learned? arXiv:1811.12889 [cs], Nov. 2018. [4] J. Bastings, M. Baroni, J. Weston, K. Cho, and D. Kiela. Jump to better conclusions: SCAN both left and right. arXiv:1809.04640 [cs], Sept. 2018. [5] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, C. Gulcehre, F. Song, A. Ballard, J. Gilmer, G. Dahl, A. Vaswani, K. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu. Relational inductive biases, deep learning, and graph networks. arXiv:1806.01261 [cs, stat], June 2018. [6] A. Caramazza and E. B. Zurif. Dissociation of algorithmic and heuristic processes in language comprehension: Evidence from aphasia. Brain and Language, 3(4):572–582, Oct. 1976. ISSN 0093-934X. doi: 10.1016/0093-934X(76)90048-1. [7] N. Chomsky, editor. Syntactic Structures. Mouton & Co., The Hague, Jan. 1957. [8] R. Dessì and M. Baroni. CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks. arXiv:1905.08527 [cs], May 2019. [9] J. A. Fodor and Z. W. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3–71, Apr. 1988. [10] R. Hu, J. Andreas, M. Rohrbach, T. Darrell, and K. Saenko. Learning to Reason: End-to-End Module Networks for Visual Question Answering. arXiv:1704.05526 [cs], Apr. 2017. [11] D. A. Hudson and C. D. Manning. Compositional attention networks for machine reasoning. arXiv:1803.03067 [cs], Mar. 2018. [12] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. Girshick. CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. arXiv:1612.06890 [cs], Dec. 2016. [13] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs], Dec. 2014. [14] T. Kriete, D. C. Noelle, J. D. Cohen, and R. C. O’Reilly. Indirection and symbol-like processing in the prefrontal cortex and basal ganglia. Proceedings of the National Academy of Sciences, 110 (41):16390–16395, Oct. 2013. ISSN 0027-8424, 1091-6490. doi: 10.1073/pnas.1303547110. 9 [15] B. M. Lake and M. Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. arXiv:1711.00350 [cs], Oct. 2017. [16] B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people. The Behavioral and Brain Sciences, 40:e253, Jan. 2017. ISSN 1469-1825. doi: 10.1017/S0140525X16001837. [17] J. Loula, M. Baroni, and B. M. Lake. Rearranging the familiar: Testing compositional general- ization in recurrent networks. arXiv:1807.07545 [cs], July 2018. [18] A. H. Marblestone, G. Wayne, and K. P. Kording. 
Toward an Integration of Deep Learning and Neuroscience. Frontiers in Computational Neuroscience, 10, 2016. ISSN 1662-5188. doi: 10.3389/fncom.2016.00094.

[19] G. Marcus. Deep learning: A critical appraisal. Jan. 2018.

[20] E. K. Miller. The "working" of working memory. Dialogues in Clinical Neuroscience, 15(4): 411–418, Dec. 2013. ISSN 1294-8322.

[21] E. K. Miller and J. D. Cohen. An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24:167–202, 2001.

[22] R. C. O'Reilly and M. J. Frank. Making working memory work: A computational model of learning in the prefrontal cortex and basal ganglia. Neural Computation, 18(2):283–328, 2006.

[23] E. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. Courville. FiLM: Visual Reasoning with a General Conditioning Layer. arXiv:1709.07871 [cs, stat], Sept. 2017.

[24] A. Santoro, D. Raposo, D. G. T. Barrett, M. Malinowski, R. Pascanu, P. Battaglia, and T. Lillicrap. A simple neural network module for relational reasoning. arXiv:1706.01427 [cs], June 2017.

[25] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to Sequence Learning with Neural Networks. arXiv:1409.3215 [cs], Sept. 2014.

[26] S. L. Thompson-Schill. Dissecting the language organ: A new look at the role of Broca's area in language processing. 2004.

[27] K. Yi, J. Wu, C. Gan, A. Torralba, P. Kohli, and J. B. Tenenbaum. Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding. arXiv:1810.02338 [cs], Oct. 2018.

# 7 Supplementary materials

# 7.1 SCAN dataset details

The SCAN dataset [15] generates sequences of commands using the phrase-structure grammar described in Figure 5. This simple grammar is not recursive, and so can generate a finite number of command sequences (20,910 total).

C → S and S | S after S | S
S → V twice | V thrice | V
V → D[1] opposite D[2] | D[1] around D[2] | D | U
D → U left | U right | turn left | turn right
U → walk | look | run | jump

Figure 5: Phrase-structure grammar used to generate SCAN dataset. Figure reproduced from [15].

These commands are interpreted according to the rules shown in Figure 6. Although the grammar used to generate and interpret the commands is simple compared to any natural language, it captures the basic properties that are important for testing compositionality (e.g. modifying relationships, discrete grammatical roles, etc.). The add-primitive splits (described in main text) are meant to be analogous to the capacity of humans to generalize the usage of a novel verb (e.g. "dax") to many constructions [15].

[walk] = WALK; [look] = LOOK; [run] = RUN; [jump] = JUMP
[turn left] = LTURN; [turn right] = RTURN
[u left] = LTURN [u]; [u right] = RTURN [u]
[turn opposite left] = LTURN LTURN; [turn opposite right] = RTURN RTURN
[u opposite left] = [turn opposite left] [u]; [u opposite right] = [turn opposite right] [u]
[turn around left] = LTURN LTURN LTURN LTURN; [turn around right] = RTURN RTURN RTURN RTURN
[u around left] = LTURN [u] LTURN [u] LTURN [u] LTURN [u]
[u around right] = RTURN [u] RTURN [u] RTURN [u] RTURN [u]
[x1 twice] = [x1] [x1]; [x1 thrice] = [x1] [x1] [x1]
[x1 and x2] = [x1] [x2]; [x1 after x2] = [x2] [x1]

Figure 6: Rules for interpreting command sequences to generate actions in SCAN dataset. Figure reproduced from [15].

# 7.2 Experimental procedure details

The cluster used for all experiments consists of 3 nodes, with 68 cores in total (48 times Intel(R) Xeon(R) CPU E5-2650 v4 at 2.20GHz, 20 times Intel(R) Xeon(R) CPU E5-2650 v3 at 2.30GHz), with 128GB of RAM each, connected through a 56Gbit InfiniBand network.
It has 8 Pascal Titan X GPUs and runs Ubuntu 16.04. All experiments were conducted with the SCAN dataset as it was originally published [15]. No data were excluded, and no preprocessing was done except to encode words in the input and action sequences into one-hot vectors, and to add special tokens for start-of-sequence and end-of-sequence tokens. Train and test sets were kept as they were in the original dataset, but following [4], we used early stopping by validating on a 20% held out sample of the training set. All reported results are from runs of 200,000 iterations with a batch size of 1. Except for the additional batch of 25 runs for the add-jump split, each architecture was trained 5 times with different random seeds for initialization, to measure variability in results. All experiments were implemented in PyTorch.

Initial experimentation included different implementations of the assumption that syntactic information be separated from semantic information. After the architecture described in the main text showed promising results, a hyperparameter search was conducted to determine optimization (stochastic gradient descent vs. Adam), RNN-type (GRU vs. LSTM), regularizers (dropout, weight decay), and number of layers (1 vs. 2 layers for encoder and decoder RNNs). We found that the Adam optimizer [13] with a learning rate of 0.001, two layers in the encoder RNN and 1 layer in the decoder RNN, and dropout worked the best, so all further experiments used these specifications. Then, a grid-search was conducted to find the number of hidden units (in both semantic and syntactic streams) and dropout rate. We tried hidden dimensions ranging from 50 to 400, and dropout rates ranging from 0.0 to 0.5. The best model used an LSTM with 2 layers and 200 hidden units in the encoder, and an LSTM with 1 layer and 400 hidden units in the decoder, and used 120-dimensional semantic vectors, and a dropout rate of 0.5. The results for this model are reported in the main text. All additional experiments were done with models derived from this one, with the same hyperparameter settings.

All evaluation runs are reported in the main text: for each evaluation except for the add-jump split, models were trained 5 times with different random seeds, and performance was measured with means and standard deviations of accuracy. For the add-jump split, we included 25 runs to get a more accurate assessment of performance. This revealed a strong skew in the distribution of results, so we included the median as the main measure of performance. Occasionally, the model did not train at all due to an unknown error (possibly very poor random initialization, high learning rate or numerical error). For this reason, we excluded runs in which training accuracy did not get above 10%. No other runs were excluded.

# 7.3 Skew of add-jump results

As mentioned in the results section of the main text, we found that test accuracy on the add-jump split was variable and highly skewed. Figure 7 shows a histogram of these results (proportion correct). The model performs near-perfectly most of the time, but is also prone to catastrophic failures. This may be because, at least for our model, the add-jump split represents a highly nonlinear problem in the sense that slight differences in the way the primitive verb "jump" is encoded during training can have huge differences for how the model performs on more complicated constructions.
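Given this skew, reporting the median and quantiles of per-seed accuracy alongside the mean is more informative than the mean alone. A minimal sketch of such a per-seed summary (the accuracy values below are placeholders, not reported results):

```python
import numpy as np

# Placeholder per-seed test accuracies (proportions); substitute the 25
# add-jump runs here -- these numbers are illustrative only.
accs = np.array([0.99, 0.98, 0.97, 0.95, 0.91, 0.88, 0.10, 0.05])

print(f"mean   = {accs.mean():.3f}")
print(f"median = {np.median(accs):.3f}")
print(f"IQR    = {np.percentile(accs, 25):.3f} - {np.percentile(accs, 75):.3f}")
# A coarse histogram makes the catastrophic-failure mode visible.
counts, edges = np.histogram(accs, bins=10, range=(0.0, 1.0))
print(dict(zip(np.round(edges[:-1], 1), counts)))
```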
We recommend that future experiments with this kind of compositional generalization problem take note of this phenomenon, and conduct especially comprehensive analyses of variability in results. Future research will also be needed to better understand the factors that determine this variability, and whether it can be overcome with other priors or regularization techniques.

[Figure 7: Histogram of test accuracies (proportion correct, from 0.0 to 1.0) across all 25 runs of the add-jump split.]

# 7.4 Supplementary experiments

# 7.4.1 Testing nonlinear semantics

Our main hypothesis is that the separation between sequential information used for alignment (syntax) and information about the meanings of individual words (semantics) encourages systematicity. The results reported in the main text are largely consistent with this hypothesis, as shown by the performance of the Syntactic Attention model on the compositional generalization tests of the SCAN dataset. However, it is possible that the simplicity of the semantic stream in the model is also important for improving compositional generalization. To test this, we replaced the linear layer in the semantic stream with a nonlinear neural network. From the model description in the main text:

p(yi|y1, y2, ..., yi−1, x) = f(di),   (7)

In the original model, f was parameterized with a simple linear layer, but here we use a two-layer feedforward network with a ReLU nonlinearity, before a softmax is applied to generate a distribution over the possible actions. We tested this model on the add-primitive splits of the SCAN dataset. The results (mean (%) with standard deviations) are shown in Table 3, with comparison to the baseline Syntactic Attention model.

Table 3: Results of nonlinear semantics experiment. Star* indicates median of 25 runs.

Model                | Add turn left | Add jump
Nonlinear semantics  | 99.0 ± 1.7    | 84.4 ± 14.1
Syntactic Attention  | 99.9 ± 0.16   | 91.0* ± 27.4

The results show that this modification did not substantially degrade compositional generalization performance, suggesting that the success of the Syntactic Attention model does not depend on the parameterization of the semantic stream with a simple linear function.

# 7.4.2 Add-jump split with additional examples

The original SCAN dataset was published with compositional generalization splits that have more than one example of the held-out primitive verb [15]. The training sets in these splits of the dataset include 1, 2, 4, 8, 16, or 32 random samples of command sequences with the "jump" command, allowing for a more fine-grained measurement of the ability to generalize the usage of a primitive verb from few examples. For each number of "jump" commands included in the training set, five different random samples were taken to capture any variance in results due to the selection of particular commands to train on. Lake and Baroni [15] found that their best model (an LSTM without an attention mechanism) did not generalize well (below 39%), even when it was trained on 8 random examples that included the "jump" command, but that the addition of further examples to the training set improved performance. Subsequent work showed better performance at lower numbers of "jump" examples, with GRUs augmented with an attention mechanism ("+ attn"), and either with or without a dependence in the decoder on the previous target ("- dep") [4]. Here, we compare the Syntactic Attention model to these results.
The Syntactic Attention model shows a substantial improvement over previously reported results at the lowest numbers of "jump" examples used for training (see Figure 8 and Table 4). Compositional generalization performance is already quite high at 1 example, and at 2 examples is almost perfect (99.997% correct).

[Figure 8 (bar plot): test accuracy (proportion correct) vs. number of "jump" examples in training (1, 2, 4, 8, 16, 32) for GRU + Attn, GRU + Attn - dep, and Syntactic Attention.] Figure 8: Compositional generalization performance on add-jump split with additional examples. Syntactic Attention model is compared to previously reported models [4] on test accuracy as command sequences with "jump" are added to the training set. Mean accuracy (proportion correct) was computed with 5 different random samples of "jump" commands. Error bars represent standard deviations.

Table 4: Results of Syntactic Attention compared to models of Bastings et al. [4] on jump-split with additional examples. Mean accuracy (%, rounded to tenths) is shown with standard deviations. Same data as depicted in Figure 8. Columns give the number of "jump" commands in the training set.

Model                | 1          | 2           | 4           | 8         | 16          | 32
GRU + attn           | 58.2±12.0  | 67.8±3.4    | 80.3±7.0    | 88.0±6.0  | 98.3±1.8    | 99.6±0.2
GRU + attn - dep     | 70.9±11.5  | 61.3±13.5   | 83.5±6.1    | 99.0±0.4  | 99.7±0.2    | 100.0±0.0
Syntactic Attention  | 84.4±28.5  | 100.0±0.01  | 100.0±0.02  | 99.9±0.2  | 100.0±0.01  | 99.9±0.2

# 7.4.3 Template splits

The compositional generalization splits of the SCAN dataset were originally designed to test for the ability to generalize known primitive verbs to valid unseen constructions [15]. Further work with SCAN augmented this set of tests to include compositional generalization based not on known verbs but on known templates [17]. These template splits included the following (see Figure 9 for examples):

• Jump around right: All command sequences with the phrase "jump around right" are held out of the training set and subsequently tested.

• Primitive right: All command sequences containing primitive verbs modified by "right" are held out of the training set and subsequently tested.

• Primitive opposite right: All command sequences containing primitive verbs modified by "opposite right" are held out of the training set and subsequently tested.

• Primitive around right: All command sequences containing primitive verbs modified by "around right" are held out of the training set and subsequently tested.

Condition                 | Example train commands                                | Example test commands
Jump around right         | "jump left", "jump around left", "walk around right"  | "jump around right", "jump around right and walk"
Primitive right           | "jump left", "walk around right"                      | "jump right", "walk right"
Primitive opposite right  | "jump left", "jump opposite left", "walk right"       | "jump opposite right", "walk opposite right"
Primitive around right    | "jump left", "jump around left", "walk right"         | "jump around right", "walk around right"

Figure 9: Table of example command sequences for each template split. Reproduced from [17].

Results of the Syntactic Attention model on these template splits are compared to those originally published [17] in Table 5. The model, like the one reported in [17], performs well on the jump around right split, consistent with the idea that this task does not present a problem for neural networks. The rest of the results are mixed: Syntactic Attention shows good compositional generalization performance on the Primitive right split, but fails on the Primitive opposite right and Primitive around right splits.
All of the template tasks require models to generalize based on the symmetry between "left" and "right" in the dataset. However, in the opposite right and around right splits, this symmetry is substantially violated, as one of the two prepositional phrases in which they can occur is never seen with "right." Further research is required to determine whether a model implementing similar principles to Syntactic Attention can perform well on this task.

Table 5: Results of Syntactic Attention compared to models of Loula et al. [17] on template splits of SCAN dataset. Mean accuracy (%) is shown with standard deviations. P = Primitive.

Model                     | Jump around right | P right     | P opposite right | P around right
LSTM (Loula et al. [17])  | 98.43±0.54        | 23.49±8.09  | 47.62±17.72      | 2.46±2.68
Syntactic Attention       | 98.9±2.3          | 99.1±1.8    | 10.5±8.8         | 28.9±34.8

# 7.5 Visualizing attention

The way that the attention mechanism of Bahdanau et al. [2] is set up allows for easy visualization of the model's attention. Here, we visualize the attention distributions over the words in the command sequence at each step during the decoding process. In the following figures (Figures 10 to 15), the attention weights on each command (in the columns of the image) are shown for each of the model's outputs (in the rows of the image) for some illustrative examples. Darker blue indicates a higher weight. The examples are shown in pairs for a model trained and tested on the add-jump split, with one example drawn from the training set and a corresponding example drawn from the test set. Examples are shown in increasing complexity, with a failure mode depicted in Figure 15. In general, it can be seen that although the attention distributions on the test examples are not exactly the same as those from the corresponding training examples, they are usually good enough for the model to produce the correct action sequence. This shows the model's ability to apply the same syntactic rules it learned on the other verbs to the novel verb "jump." In the example shown in Figure 15, the model fails to attend to the correct sequence of commands, resulting in an error.
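Because the attention weights αij are explicit quantities in the model, visualizations like those in Figures 10–15 can be produced with a few lines of matplotlib. A minimal sketch, with our own variable names and a toy (uniform) weight matrix standing in for real decoder outputs:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(alpha, commands, actions, title=""):
    """Heatmap of attention weights: one row per output action, one column
    per input command token; darker cells indicate higher weight."""
    alpha = np.asarray(alpha)                    # shape (n_outputs, n_inputs)
    fig, ax = plt.subplots()
    ax.imshow(alpha, cmap="Blues", vmin=0.0, vmax=1.0)
    ax.set_xticks(range(len(commands)))
    ax.set_xticklabels(commands, rotation=45, ha="right")
    ax.set_yticks(range(len(actions)))
    ax.set_yticklabels(actions)
    ax.set_title(title)
    fig.tight_layout()
    return fig

# Toy example with uniform weights (real weights come from the decoder);
# token names are illustrative.
cmds = ["<SOS>", "jump", "left", "<EOS>"]
acts = ["LTURN", "JUMP", "<EOS>"]
plot_attention(np.full((len(acts), len(cmds)), 1.0 / len(cmds)), cmds, acts,
               title="jump left")
```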
[Figures 10–15 show pairs of attention heatmaps (input commands on the columns, output actions on the rows); each figure pairs a test example containing "jump" with the corresponding training example containing "walk".]

Figure 10: Attention distributions: correct example ("jump left" / "walk left")

Figure 11: Attention distributions: correct example ("jump twice" / "walk twice")

Figure 12: Attention distributions: correct example ("jump opposite left" / "walk opposite left")

Figure 13: Attention distributions: correct example ("jump around left" / "walk around left")

Figure 14: Attention distributions: correct example ("turn around right and jump around right twice" / "turn around right and walk around right twice")

Figure 15: Attention distributions: incorrect example ("jump opposite right twice after jump left twice" / "walk opposite right twice after walk left twice")
{ "id": "1905.08527" }
1905.01969
Poly-encoders: Transformer Architectures and Pre-training Strategies for Fast and Accurate Multi-sentence Scoring
The use of deep pre-trained bidirectional transformers has led to remarkable progress in a number of applications (Devlin et al., 2018). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on three existing tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.
http://arxiv.org/pdf/1905.01969
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, Jason Weston
cs.CL, cs.AI
ICLR 2020
null
cs.CL
20190422
20200325
0 2 0 2 r a M 5 2 ] L C . s c [ 4 v 9 6 9 1 0 . 5 0 9 1 : v i X r a Published as a conference paper at ICLR 2020 # Poly-encoders: architectures and pre-training strategies for fast and accurate multi-sentence scoring # Samuel Humeau∗, Kurt Shuster∗, Marie-Anne Lachaux, Jason Weston Facebook AI Research {samuelhumeau,kshuster,malachaux,jase}@fb.com # Abstract The use of deep pre-trained transformers has led to remarkable progress in a num- ber of applications (Devlin et al., 2019). For tasks that make pairwise compar- isons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new trans- former architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks. # 1 Introduction Recently, substantial improvements to state-of-the-art benchmarks on a variety of language under- standing tasks have been achieved through the use of deep pre-trained language models followed by fine-tuning (Devlin et al., 2019). In this work we explore improvements to this approach for the class of tasks that require multi-sentence scoring: given an input context, score a set of candidate labels, a setup common in retrieval and dialogue tasks, amongst others. Performance in such tasks has to be measured via two axes: prediction quality and prediction speed, as scoring many candidates can be prohibitively slow. The current state-of-the-art focuses on using BERT models for pre-training (Devlin et al., 2019), which employ large text corpora on general subjects: Wikipedia and the Toronto Books Corpus (Zhu et al., 2015). Two classes of fine-tuned architecture are typically built on top: Bi-encoders and Cross-encoders. Cross-encoders (Wolf et al., 2019; Vig & Ramea, 2019), which perform full (cross) self-attention over a given input and label candidate, tend to attain much higher accuracies than their counterparts, Bi-encoders (Mazar´e et al., 2018; Dinan et al., 2019), which perform self-attention over the input and candidate label separately and combine them at the end for a final representa- tion. As the representations are separate, Bi-encoders are able to cache the encoded candidates, and reuse these representations for each input resulting in fast prediction times. Cross-encoders must recompute the encoding for each input and label; as a result, they are prohibitively slow at test time. In this work, we provide novel contributions that improve both the quality and speed axes over the current state-of-the-art. We introduce the Poly-encoder, an architecture with an additional learnt at- tention mechanism that represents more global features from which to perform self-attention, result- ing in performance gains over Bi-encoders and large speed gains over Cross-Encoders. To pre-train our architectures, we show that choosing abundant data more similar to our downstream task also brings significant gains over BERT pre-training. 
This is true across all different architecture choices and downstream tasks we try. We conduct experiments comparing the new approaches, in addition to analysis of what works best for various setups of existing methods, on four existing datasets in the domains of dialogue and in- formation retrieval (IR), with pre-training strategies based on Reddit (Mazar´e et al., 2018) compared # ∗ Joint First Authors. 1 Published as a conference paper at ICLR 2020 to Wikipedia/Toronto Books (i.e., BERT). We obtain a new state-of-the-art on all four datasets with our best architectures and pre-training strategies, as well as providing practical implementations for real-time use. Our code and models will be released open-source. # 2 Related Work The task of scoring candidate labels given an input context is a classical problem in machine learn- ing. While multi-class classification is a special case, the more general task involves candidates as structured objects rather than discrete classes; in this work we consider the inputs and the candidate labels to be sequences of text. There is a broad class of models that map the input and a candidate label separately into a com- mon feature space wherein typically a dot product, cosine or (parameterized) non-linearity is used to measure their similarity. We refer to these models as Bi-encoders. Such methods include vector space models (Salton et al., 1975), LSI (Deerwester et al., 1990), supervised embeddings (Bai et al., 2009; Wu et al., 2018) and classical siamese networks (Bromley et al., 1994). For the next utterance prediction tasks we consider in this work, several Bi-encoder neural approaches have been con- sidered, in particular Memory Networks (Zhang et al., 2018a) and Transformer Memory networks (Dinan et al., 2019) as well as LSTMs (Lowe et al., 2015) and CNNs (Kadlec et al., 2015) which encode input and candidate label separately. A major advantage of Bi-encoder methods is their abil- ity to cache the representations of a large, fixed candidate set. Since the candidate encodings are independent of the input, Bi-encoders are very efficient during evaluation. Researchers have also studied a more rich class of models we refer to as Cross-encoders, which make no assumptions on the similarity scoring function between input and candidate label. Instead, the concatenation of the input and a candidate serve as a new input to a nonlinear function that scores their match based on any dependencies it wants. This has been explored with Sequential Matching Network CNN-based architectures (Wu et al., 2017), Deep Matching Networks (Yang et al., 2018), Gated Self-Attention (Zhang et al., 2018b), and most recently transformers (Wolf et al., 2019; Vig & Ramea, 2019; Urbanek et al., 2019). For the latter, concatenating the two sequences of text results in applying self-attention at every layer. This yields rich interactions between the input context and the candidate, as every word in the candidate label can attend to every word in the input context, and vice-versa. Urbanek et al. (2019) employed pre-trained BERT models, and fine-tuned both Bi- and Cross-encoders, explicitly comparing them on dialogue and action tasks, and finding that Cross-encoders perform better. However, the performance gains come at a steep computational cost. Cross-encoder representations are much slower to compute, rendering some applications infeasible. # 3 Tasks We consider the tasks of sentence selection in dialogue and article search in IR. 
The former is a task extensively studied and recently featured in two competitions: the Neurips ConvAI2 competition (Dinan et al., 2020), and the DSTC7 challenge, Track 1 (Yoshino et al., 2019; Jonathan K. Kummer- feld & Lasecki, 2018; Chulaka Gunasekara & Lasecki, 2019). We compare on those two tasks and in addition, we also test on the popular Ubuntu V2 corpus (Lowe et al., 2015). For IR, we use the Wikipedia Article Search task of Wu et al. (2018). The ConvAI2 task is based on the Persona-Chat dataset (Zhang et al., 2018a) which involves dia- logues between pairs of speakers. Each speaker is given a persona, which is a few sentences that describe a character they will imitate, e.g. “I love romantic movies”, and is instructed to get to know the other. Models should then condition their chosen response on the dialogue history and the lines of persona. As an automatic metric in the competition, for each response, the model has to pick the correct annotated utterance from a set of 20 choices, where the remaining 19 were other randomly chosen utterances from the evaluation set. Note that in a final system however, one would retrieve from the entire training set of over 100k utterances, but this is avoided for speed reasons in common evaluation setups. The best performing competitor out of 23 entrants in this task achieved 80.7% accuracy on the test set utilizing a pre-trained Transformer fine-tuned for this task (Wolf et al., 2019). The DSTC7 challenge (Track 1) consists of conversations extracted from Ubuntu chat logs, where one partner receives technical support for various Ubuntu-related problems from the other. The 2 Published as a conference paper at ICLR 2020 best performing competitor (with 20 entrants in Track 1) in this task achieved 64.5% R@1 (Chen & Wang, 2019). Ubuntu V2 is a similar but larger popular corpus, created before the competition (Lowe et al., 2015); we report results for this dataset as well, as there are many existing results on it. Finally, we evaluate on Wikipedia Article Search (Wu et al., 2018). Using the 2016-12-21 dump of English Wikipedia (∼5M articles), the task is given a sentence from an article as a search query, find the article it came from. Evaluation ranks the true article (minus the sentence) against 10,000 other articles using retrieval metrics. This mimics a web search like scenario where one would like to search for the most relevant articles (web documents). The best reported method is the learning- to-rank embedding model, StarSpace, which outperforms fastText, SVMs, and other baselines. We summarize all four datasets and their statistics in Table 1. Train Ex. Valid Ex. Test Ex. Eval Cands per Ex. ConvAI2 DTSC7 Ubuntu V2 Wiki Article Search 1,000,000 131,438 19,560 7,801 18,920 6634 10 20 100,000 10,000 5,000 100 5,035,182 9,921 9,925 10,001 Table 1: Datasets used in this paper. # 4 Methods In this section we describe the various models and methods that we explored. # 4.1 Transformers and Pre-training Strategies Transformers Our Bi-, Cross-, and Poly-encoders, described in sections 4.2, 4.3 and 4.4 respec- tively, are based on large pre-trained transformer models with the same architecture and dimension as BERT-base (Devlin et al., 2019), which has 12 layers, 12 attention heads, and a hidden size of 768. As well as considering the BERT pre-trained weights, we also explore our own pre-training schemes. Specifically, we pre-train two more transformers from scratch using the exact same archi- tecture as BERT-base. 
One uses a similar training setup as in BERT-base, training on 150 million examples of [INPUT, LABEL] extracted from Wikipedia and the Toronto Books Corpus, while the other is trained on 174 million examples of [INPUT, LABEL] extracted from the online platform Reddit (Mazar´e et al., 2018), which is a dataset more adapted to dialogue. The former is performed to verify that reproducing a BERT-like setting gives us the same results as reported previously, while the latter tests whether pre-training on data more similar to the downstream tasks of interest helps. For training both new setups we used XLM (Lample & Conneau, 2019).

Input Representation Our pre-training input is the concatenation of input and label [INPUT, LABEL], where both are surrounded with the special token [S], following Lample & Conneau (2019). When pre-training on Reddit, the input is the context, and the label is the next utterance. When pre-training on Wikipedia and Toronto Books, as in Devlin et al. (2019), the input is one sentence and the label the next sentence in the text. Each input token is represented as the sum of three embeddings: the token embedding, the position (in the sequence) embedding and the segment embedding. Segments for input tokens are 0, and for label tokens are 1.

Pre-training Procedure Our pre-training strategy involves training with a masked language model (MLM) task identical to the one in Devlin et al. (2019). In the pre-training on Wikipedia and Toronto Books we add a next-sentence prediction task identical to BERT training. In the pre-training on Reddit, we add a next-utterance prediction task, which is slightly different from the previous one as an utterance can be composed of several sentences. During training 50% of the time the candidate is the actual next sentence/utterance and 50% of the time it is a sentence/utterance randomly taken from the dataset. We alternate between batches of the MLM task and the next-sentence/next-utterance prediction task. Like in Lample & Conneau (2019) we use the Adam optimizer with learning rate of 2e-4, β1 = 0.9, β2 = 0.98, no L2 weight decay, linear learning rate warmup, and inverse square root decay of the learning rate. We use a dropout probability of 0.1 on all layers, and a batch of 32000 tokens composed of concatenations [INPUT, LABEL] with similar lengths. We train the model on 32 GPUs for 14 days.

Fine-tuning After pre-training, one can then fine-tune for the multi-sentence selection task of choice, in our case one of the four tasks from Section 3. We consider three architectures with which we fine-tune the transformer: the Bi-encoder, Cross-encoder and newly proposed Poly-encoder.

# 4.2 Bi-encoder

In a Bi-encoder, both the input context and the candidate label are encoded into vectors:

yctxt = red(T1(ctxt))    ycand = red(T2(cand))

where T1 and T2 are two transformers that have been pre-trained following the procedure described in 4.1; they initially start with the same weights, but are allowed to update separately during fine-tuning. T(x) = h1, .., hN is the output of a transformer T and red(·) is a function that reduces that sequence of vectors into one vector. As the input and the label are encoded separately, segment tokens are 0 for both. To resemble what is done during our pre-training, both the input and label are surrounded by the special token [S] and therefore h1 corresponds to [S].
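As a rough PyTorch-style sketch of the Bi-encoder just described, using the first transformer output as red(·) and the dot-product scoring with in-batch negatives detailed in the Scoring paragraph below; the encoder stubs and names here are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiEncoder(nn.Module):
    """Two separately fine-tuned encoders; candidates can be pre-encoded and
    cached, so inference reduces to one dot product per candidate."""
    def __init__(self, context_encoder, candidate_encoder):
        super().__init__()
        self.ctxt_enc = context_encoder    # any module returning (batch, N, d)
        self.cand_enc = candidate_encoder

    def encode(self, encoder, tokens):
        h = encoder(tokens)                # (batch, N, d) per-token outputs
        return h[:, 0]                     # red(.): first output, i.e. the [S] token

    def forward(self, ctxt_tokens, cand_tokens):
        y_ctxt = self.encode(self.ctxt_enc, ctxt_tokens)   # (batch, d)
        y_cand = self.encode(self.cand_enc, cand_tokens)   # (batch, d)
        # In-batch negatives: each context is scored against every candidate
        # in the batch; the matching (diagonal) candidate is the positive.
        scores = y_ctxt @ y_cand.t()                       # (batch, batch)
        labels = torch.arange(scores.size(0), device=scores.device)
        return F.cross_entropy(scores, labels), scores

# Toy usage with stub encoders that simply embed token ids per position.
d, vocab = 16, 100
model = BiEncoder(nn.Embedding(vocab, d), nn.Embedding(vocab, d))
loss, scores = model(torch.randint(0, vocab, (4, 10)),
                     torch.randint(0, vocab, (4, 12)))
```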
We considered three ways of reducing the output into one representation via red(·): choose the first output of the transformer (corresponding to the special token [S]), compute the average over all outputs, or the average over the first m ≤ N outputs. We compare them in Table 7 in the Appendix. We use the first output of the transformer in our experiments as it gives slightly better results.

Scoring The score of a candidate candi is given by the dot-product s(ctxt, candi) = yctxt · ycandi. The network is trained to minimize a cross-entropy loss in which the logits are yctxt · ycand1, ..., yctxt · ycandn, where cand1 is the correct label and the others are chosen from the training set. Similar to Mazar´e et al. (2018), during training we consider the other labels in the batch as negatives. This allows for much faster training, as we can reuse the embeddings computed for each candidate, and also use a larger batch size; e.g., in our experiments on ConvAI2, we were able to use batches of 512 elements.

Inference speed In the setting of retrieval over known candidates, a Bi-encoder allows for the precomputation of the embeddings of all possible candidates of the system. After the context embedding yctxt is computed, the only operation remaining is a dot product between yctxt and every candidate embedding, which can scale to millions of candidates on a modern GPU, and potentially billions using nearest-neighbor libraries such as FAISS (Johnson et al., 2019).

# 4.3 Cross-encoder

The Cross-encoder allows for rich interactions between the input context and candidate label, as they are jointly encoded to obtain a final representation. Similar to the procedure in pre-training, the context and candidate are surrounded by the special token [S] and concatenated into a single vector, which is encoded using one transformer. We consider the first output of the transformer as the context-candidate embedding:

yctxt,cand = h1 = first(T(ctxt, cand))

where first is the function that takes the first vector of the sequence of vectors produced by the transformer. By using a single transformer, the Cross-encoder is able to perform self-attention between the context and candidate, resulting in a richer extraction mechanism than the Bi-encoder. As the candidate label can attend to the input context during the layers of the transformer, the Cross-encoder can produce a candidate-sensitive input representation, which the Bi-encoder cannot. For example, this allows it to select useful input features per candidate.

Scoring To score one candidate, a linear layer W is applied to the embedding yctxt,cand to reduce it from a vector to a scalar:

s(ctxt, candi) = yctxt,candi W

Similarly to what is done for the Bi-encoder, the network is trained to minimize a cross entropy loss where the logits are s(ctxt, cand1), ..., s(ctxt, candn), where cand1 is the correct candidate and the others are negatives taken from the training set. Unlike in the Bi-encoder, we cannot recycle the other labels of the batch as negatives, so we use external negatives provided in the training set. The Cross-encoder uses much more memory than the Bi-encoder, resulting in a much smaller batch size.

Inference speed Unfortunately, the Cross-encoder does not allow for precomputation of the candidate embeddings. At inference time, every candidate must be concatenated with the input context and must go through a forward pass of the entire model. Thus, this method cannot scale to a large amount of candidates. We discuss this bottleneck further in Section 5.4.

Figure 1: Diagrams of the three model architectures we consider: (a) Bi-encoder, (b) Cross-encoder, (c) Poly-encoder. (a) The Bi-encoder encodes the context and candidate separately, allowing for the caching of candidate representations during inference. (b) The Cross-encoder jointly encodes the context and candidate in a single transformer, yielding richer interactions between context and candidate at the cost of slower computation. (c) The Poly-encoder combines the strengths of the Bi-encoder and Cross-encoder by both allowing for caching of candidate representations and adding a final attention mechanism between global features of the input and a given candidate to give richer interactions before computing a final score.

# 4.4 Poly-encoder

The Poly-encoder architecture aims to get the best of both worlds from the Bi- and Cross-encoder. A given candidate label is represented by one vector as in the Bi-encoder, which allows for caching candidates for fast inference time, while the input context is jointly encoded with the candidate, as in the Cross-encoder, allowing the extraction of more information.

The Poly-encoder uses two separate transformers for the context and label like a Bi-encoder, and the candidate is encoded into a single vector ycandi. As such, the Poly-encoder method can be implemented using a precomputed cache of encoded responses. However, the input context, which is typically much longer than a candidate, is represented with m vectors (y1 ctxt, .., ym ctxt) instead of just one as in the Bi-encoder, where m will influence the inference speed. To obtain these m global features that represent the input, we learn m context codes (c1, ..., cm), where ci extracts representation yi ctxt by attending over all the outputs of the previous layer. That is, we obtain yi ctxt using:

yi ctxt = Σj wci,j hj   where   (wci,1, .., wci,N) = softmax(ci · h1, .., ci · hN)

The m context codes are randomly initialized, and learnt during finetuning. Finally, given our m global context features, we attend over them using ycandi as the query:

yctxt = Σi wi yi ctxt   where   (w1, .., wm) = softmax(ycandi · y1 ctxt, .., ycandi · ym ctxt)

The final score for that candidate label is then yctxt · ycandi as in a Bi-encoder. As m < N, where N is the number of tokens, and the context-candidate attention is only performed at the top layer, this is far faster than the Cross-encoder's full self-attention.

# 5 Experiments

We perform a variety of experiments to test our model architectures and training strategies over four tasks. For metrics, we measure Recall@k where each test example has C possible candidates to select from, abbreviated to R@k/C, as well as mean reciprocal rank (MRR).

# 5.1 Bi-encoders and Cross-encoders

We first investigate fine-tuning the Bi- and Cross-encoder architectures initialized with the weights provided by Devlin et al. (2019), studying the choice of other hyperparameters (we explore our own pre-training schemes in section 5.3). In the case of the Bi-encoder, we can use a large number of negatives by considering the other batch elements as negative training samples, avoiding recomputation of their embeddings. On 8 Nvidia Volta v100 GPUs and using half-precision operations (i.e. float16 operations), we can reach batches of 512 elements on ConvAI2. Table 2 shows that in this setting, we obtain higher performance with a larger batch size, i.e. more negatives, where 511 negatives yields the best results. For the other tasks, we keep the batch size at 256, as the longer sequences in those datasets use more memory. The Cross-encoder is more computationally intensive, as the embeddings for the (context, candidate) pair must be recomputed each time. We thus limit its batch size to 16 and provide negatives randomly sampled from the training set. For DSTC7 and Ubuntu V2, we choose 15 such negatives; for ConvAI2, the dataset provides 19 negatives.

Table 2: Validation performance on ConvAI2 after fine-tuning a Bi-encoder pre-trained with BERT, averaged over 5 runs. The batch size is the number of training negatives + 1 as we use the other elements of the batch as negatives during training.

Negatives | 31   | 63   | 127  | 255  | 511
R@1/20    | 81.0 | 81.7 | 82.3 | 83.0 | 83.3

The above results are reported with Bi-encoder aggregation based on the first output. Choosing the average over all outputs instead is very similar but slightly worse (83.1, averaged over 5 runs). We also tried to add further non-linearities instead of the inner product of the two representations, but could not obtain improved results over the simpler architecture (results not shown).

We tried two optimizers: Adam (Kingma & Ba, 2015) with weight decay of 0.01 (as recommended by Devlin et al. (2019)) and Adamax (Kingma & Ba, 2015) without weight decay; based on validation set performance, we choose to fine-tune with Adam when using the BERT weights. The learning rate is initialized to 5e-5 with a warmup of 100 iterations for Bi- and Poly-encoders, and 1000 iterations for the Cross-encoder. The learning rate decays by a factor of 0.4 upon plateau of the loss evaluated on the valid set every half epoch. In Table 3 we show validation performance when fine-tuning various layers of the weights provided by Devlin et al. (2019), using the Adam optimizer with weight decay. Fine-tuning the entire network is important, with the exception of the word embeddings.

With the setups described above, we fine-tune the Bi- and Cross-encoders on the datasets, and report the results in Table 4. On the first three tasks, our Bi-encoders and Cross-encoders outperform the best existing approaches in the literature when we fine-tune from BERT weights. E.g., the Bi-encoder reaches 81.7% R@1 on ConvAI2 and 66.8% R@1 on DSTC7, while the Cross-encoder achieves higher scores of 84.8% R@1 on ConvAI2 and 67.4% R@1 on DSTC7. Overall, Cross-encoders outperform all previous approaches on the three dialogue tasks, including our Bi-encoders (as expected).
We do not report fine-tuning of BERT for Wikipedia IR as we cannot guarantee the test set is not part of the pre-training for that dataset. In addition, Cross-encoders are also too slow to evaluate on the evaluation setup of that task, which has 10k candidates.

Fine-tuned parameters     Bi-encoder   Cross-encoder
Top layer                 74.2         80.6
Top 4 layers              82.0         86.3
All but Embeddings        83.3         87.3
Every Layer               83.0         86.6

Table 3: Validation performance (R@1/20) on ConvAI2 using pre-trained weights of BERT-base with different parameters fine-tuned. Average over 5 runs (Bi-encoders) or 3 runs (Cross-encoders).

Dataset split metric
(Wolf et al., 2019)   (Gu et al., 2018) 60.8   (Chen & Wang, 2019) 64.5   (Yoon et al., 2018) -   (Dong & Huang, 2018) -   (Wu et al., 2018) -
pre-trained BERT weights from (Devlin et al., 2019) - Toronto Books + Wikipedia: Bi-encoder, Poly-encoder 16, Poly-encoder 64, Poly-encoder 360, Cross-encoder
Our pre-training on Toronto Books + Wikipedia: Bi-encoder, Poly-encoder 16, Poly-encoder 64, Poly-encoder 360, Cross-encoder
Our pre-training on Reddit: Bi-encoder, Poly-encoder 16, Poly-encoder 64, Poly-encoder 360, Cross-encoder

Table 4: Test performance of Bi-, Poly- and Cross-encoders on our selected tasks.

# 5.2 Poly-encoders

We train the Poly-encoder using the same batch sizes and optimizer choices as in the Bi-encoder experiments. Results are reported in Table 4 for various values of m context vectors.

The Poly-encoder outperforms the Bi-encoder on all the tasks, with more codes generally yielding larger improvements. Our recommendation is thus to use as large a code size as compute time allows (see Sec. 5.4). On DSTC7, the Poly-encoder architecture with BERT pretraining reaches 68.9% R@1 with 360 intermediate context codes; this actually outperforms the Cross-encoder result (67.4%) and is noticeably better than our Bi-encoder result (66.8%). Similar conclusions are found on Ubuntu V2 and ConvAI2, although in the latter Cross-encoders give slightly better results.

We note that since reporting our results, the authors of Li et al. (2019) have conducted a human evaluation study on ConvAI2, in which our Poly-encoder architecture outperformed all other models compared against, both generative and retrieval based, including the winners of the competition.

Scoring time (ms)     CPU 1k   CPU 100k   GPU 1k   GPU 100k
Bi-encoder            115      160        19       22
Poly-encoder 16       122      678        18       38
Poly-encoder 64       126      692        23       46
Poly-encoder 360      160      837        57       88
Cross-encoder         21.7k    2.2M*      2.6k     266k*

Table 5: Average time in milliseconds to predict the next dialogue utterance from C possible candidates on ConvAI2. * are inferred.

# 5.3 Domain-specific Pre-training

We fine-tune our Reddit-pre-trained transformer on all four tasks; we additionally fine-tune a transformer that was pre-trained on the same datasets as BERT, specifically Toronto Books + Wikipedia. When using our pre-trained weights, we use the Adamax optimizer and optimize all the layers of the transformer including the embeddings. As we do not use weight decay, the weights of the final layer are much larger than those in the final layer of BERT; to avoid saturation of the attention layer in the Poly-encoder, we re-scaled the last linear layer so that the standard deviation of its output matched that of BERT, which we found necessary to achieve good results. We report results of fine-tuning with our pre-trained weights in Table 4.
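The re-scaling of the final linear layer mentioned above can be sketched as follows; the helper function is hypothetical, and in practice the target standard deviation would be measured from the corresponding BERT layer rather than hard-coded.

```python
import torch

@torch.no_grad()
def rescale_output_layer(linear, sample_inputs, target_std):
    """Scale a final nn.Linear so the std of its outputs matches target_std.

    linear:        the last linear layer of our pre-trained transformer.
    sample_inputs: (B, d_in) activations fed into that layer.
    target_std:    output std measured from the corresponding BERT layer.
    """
    current_std = linear(sample_inputs).std()
    scale = target_std / current_std
    linear.weight.mul_(scale)
    if linear.bias is not None:
        linear.bias.mul_(scale)

layer = torch.nn.Linear(768, 768)
# 0.8 is a made-up reference value used only to make the example runnable.
rescale_output_layer(layer, torch.randn(1024, 768), target_std=0.8)
```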
We show that pre-training on Reddit gives further state-of- the-art performance over our previous results with BERT, a finding that we see for all three dialogue tasks, and all three architectures. The results obtained with fine-tuning on our own transformers pre-trained on Toronto Books + Wikipedia are very similar to those obtained with the original BERT weights, indicating that the choice of dataset used to pre-train the models impacts the final results, not some other detail in our training. Indeed, as the two settings pre-train with datasets of similar size, we can conclude that choosing a pre-training task (e.g. dialogue data) that is similar to the downstream tasks of interest (e.g. dialogue) is a likely explanation for these performance gains, in line with previous results showing multi-tasking with similar tasks is more useful than with dissimilar ones (Caruana, 1997). 5.4 Inference Speed An important motivation for the Poly-encoder architecture is to achieve better results than the Bi- encoder while also performing at a reasonable speed. Though the Cross-encoder generally yields strong results, it is prohibitively slow. We perform speed experiments to determine the trade-off of improved performance from the Poly-encoder. Specifically, we predict the next utterance for 100 dialogue examples in the ConvAI2 validation set, where the model scores C candidates (in this case, chosen from the training set). We perform these experiments on both CPU-only and GPU setups. CPU computations were run on an 80 core Intel Xeon processor CPU E5-2698. GPU computations were run on a single Nvidia Quadro GP100 using cuda 10.0 and cudnn 7.4. We show the average time per example for each architecture in Table 5. The difference in timing between the Bi-encoder and the Poly-encoder architectures is rather minimal when there are only 1000 candidates for the model to consider. The difference is more pronounced when considering 100k candidates, a more realistic setup, as we see a 5-6x slowdown for the Poly-encoder variants. Nevertheless, both models are still tractable. The Cross-encoder, however, is 2 orders of magnitude slower than the Bi-encoder and Poly-encoder, rendering it intractable for real-time inference, e.g. when interacting with a dialogue agent, or retrieving from a large set of documents. Thus, Poly- encoders, given their desirable performance and speed trade-off, are the preferred method. We additionally report training times in the Appendix, Table 6. Poly-encoders also have the benefit of being 3-4x faster to train than Cross-encoders (and are similar in training time to Bi-encoders). 8 Published as a conference paper at ICLR 2020 # 6 Conclusion In this paper we present new architectures and pre-training strategies for deep bidirectional trans- formers in candidate selection tasks. We introduced the Poly-encoder method, which provides a mechanism for attending over the context using the label candidate, while maintaining the ability to precompute each candidate’s representation, which allows for fast real-time inference in a produc- tion setup, giving an improved trade off between accuracy and speed. We provided an experimental analysis of those trade-offs for Bi-, Poly- and Cross-encoders, showing that Poly-encoders are more accurate than Bi-encoders, while being far faster than Cross-encoders, which are impractical for real-time use. In terms of training these architectures, we showed that pre-training strategies more closely related to the downstream task bring strong improvements. 
In particular, pre-training from scratch on Reddit allows us to outperform the results we obtain with BERT, a result that holds for all three model architectures and all three dialogue datasets we tried. However, the methods introduced in this work are not specific to dialogue, and can be used for any task where one is scoring a set of candidates, which we showed for an information retrieval task as well. # References Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, Olivier Chapelle, and Kilian Weinberger. Supervised semantic indexing. In Proceedings of the 18th ACM conference on Information and knowledge management, pp. 187–196. ACM, 2009. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S¨ackinger, and Roopak Shah. Signature verifi- cation using a” siamese” time delay neural network. In Advances in neural information processing systems, pp. 737–744, 1994. Rich Caruana. Multitask learning. Machine learning, 28(1):41–75, 1997. Qian Chen and Wen Wang. Sequential attention-based network for noetic end-to-end response selection. CoRR, abs/1901.02609, 2019. URL http://arxiv.org/abs/1901.02609. Lazaros Polymenakos Chulaka Gunasekara, Jonathan K. Kummerfeld and Walter S. Lasecki. Dstc7 In 7th Edition of the Dialog System Technol- task 1: Noetic end-to-end response selection. ogy Challenges at AAAI 2019, January 2019. URL http://workshop.colips.org/dstc7/ papers/dstc7_task1_final_report.pdf. Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 41 (6):391–407, 1990. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Conference on Learning Representations (ICLR), 2019. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. The second conversational intelligence challenge (convai2). In Sergio Escalera and Ralf Herbrich (eds.), The NeurIPS ’18 Competition, pp. 187–208, Cham, 2020. Springer International Publish- ing. ISBN 978-3-030-29135-8. Jianxiong Dong and Jim Huang. Enhance word representation for out-of-vocabulary on ubuntu dialogue corpus. CoRR, abs/1802.02614, 2018. URL http://arxiv.org/abs/1802.02614. 9 Published as a conference paper at ICLR 2020 Jia-Chen Gu, Zhen-Hua Ling, Yu-Ping Ruan, and Quan Liu. Building sequential inference models for end-to-end response selection. CoRR, abs/1812.00686, 2018. URL http://arxiv.org/ abs/1812.00686. J. Johnson, M. Douze, and H. Jgou. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, pp. 1–1, 2019. ISSN 2372-2096. doi: 10.1109/TBDATA.2019.2921572. Joseph Peper Vignesh Athreya Chulaka Gunasekara Jatin Ganhotra Siva Sankalp Patel Lazaros Poly- menakos Jonathan K. Kummerfeld, Sai R. 
Gouravajhala and Walter S. Lasecki. Analyzing as- sumptions in conversation disentanglement research through the lens of a new dataset and model. ArXiv e-prints, October 2018. URL https://arxiv.org/pdf/1810.11118.pdf. Rudolf Kadlec, Martin Schmid, and Jan Kleindienst. Improved deep learning baselines for ubuntu corpus dialogs. CoRR, abs/1510.03753, 2015. URL http://arxiv.org/abs/1510.03753. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980. Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS), 2019. Margaret Li, Jason Weston, and Stephen Roller. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087, 2019. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In SIGDIAL Conference, 2015. Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Raison, and Antoine Bordes. Training millions of personalized dialogue agents. In EMNLP, 2018. Gerard Salton, Anita Wong, and Chung-Shu Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620, 1975. Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rockt¨aschel, Douwe Kiela, Arthur Szlam, and Jason Weston. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 673–683, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1062. Jesse Vig and Kalai Ramea. Comparison of transfer-learning approaches for response selection in multi-turn conversations. Workshop on DSTC7, 2019. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. transfer learning approach for neural network based conversational agents. arXiv:1901.08149, 2019. Transfertransfo: A arXiv preprint Ledell Yu Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. In Thirty-Second AAAI Conference on Artificial Intelligence, Starspace: Embed all the things! 2018. Yu Ping Wu, Wei Chung Wu, Chen Xing, Ming Zhou, and Zhoujun Li. Sequential matching net- work: A new architecture for multi-turn response selection in retrieval-based chatbots. In ACL, 2017. Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W Bruce Croft, Jun Huang, and Haiqing Chen. Response ranking with deep matching networks and external knowledge in information-seeking conversation systems. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 245–254. ACM, 2018. 10 Published as a conference paper at ICLR 2020 Seunghyun Yoon, Joongbo Shin, and Kyomin Jung. Learning to rank question-answer pairs using hierarchical recurrent encoder with latent topic clustering. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1575–1584, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1142. 
Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D’Haro, Lazaros Polymenakos, R. Chu- laka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, William B. Dolan, Xiang Gao, Huda AlAmri, Tim K. Marks, Devi Parikh, and Dhruv Batra. Dialog system technology challenge 7. CoRR, abs/1901.03461, 2019. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 2204–2213, Melbourne, Australia, July 2018a. Association for Computational Linguistics. Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. Modeling multi-turn conversation with deep utterance aggregation. In COLING, 2018b. Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan R. Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. 2015 IEEE International Conference on Computer Vision (ICCV), pp. 19–27, 2015. 11 Published as a conference paper at ICLR 2020 # A Training Time We report the training time on 8 GPU Volta 100 for the 3 datasets considered and for 4 types of models in Table 6. Dataset Bi-encoder Poly-encoder 16 Poly-encoder 64 Cross-encoder64 ConvAI2 DSTC7 UbuntuV2 4.9 5.5 5.7 13.5 2.0 2.7 2.8 9.4 7.9 8.0 8.0 39.9 Table 6: Training time in hours. # B Reduction layer in Bi-encoder We provide in Table 7 the results obtained for different types of reductions on top of the Bi-encoder. Specifically we compare the Recall@1/20 on the ConvAI2 validation set when taking the first output of BERT, the average of the first 16 outputs, the average of the first 64 outputs and all of them except the first one ([S]). Setup First output Avg first 16 outputs Avg first 64 outputs Avg all outputs ConvAI2 valid Recall@1/20 83.3 82.9 82.7 83.1 Table 7: Bi-encoder results on the ConvAI2 valid set for different choices of function red(·). # C Alternative Choices for Context Vectors We considered a few other ways to derive the context vectors (y1 the output (h1 ctxt, ..., hN ctxt) of the underlying transformer: ctxt, ..., ym ctxt) of the Poly-encoder from • Learn m codes (c1, ..., cm), where ci extracts representation yi ctxt by attending over all the ctxt). This method is denoted “Poly-encoder (Learnt-codes)” or “Poly- ctxt, ..., hN outputs (h1 encoder (Learnt-m)”, and is the method described in section 4.4 ctxt). This method is denoted “Poly-encoder (First m outputs)” or “Poly-encoder (First-m)”. Note that when N < m, only m vectors are considered. Consider the last m outputs. • Consider the last m outputs concatenated with the first one, h1 ctxt which plays a particular role in BERT as it corresponds to the special token [S]. The performance of those four methods is evaluated on the validation set of Convai2 and DSTC7 and reported on Table 8. The first two methods are shown in Figure 2. We additionally provide the inference time for a given number of candidates coming from the Convai2 dataset on Table 9. 
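For concreteness, the first two reduction choices described above can be sketched as follows, reusing the illustrative tensor conventions from the earlier snippets (h_ctxt is the N x d matrix of context transformer outputs); this is a paraphrase of the idea, not the authors' implementation.

```python
import torch

def context_vectors_first_m(h_ctxt, m):
    """Poly-encoder (First-m): simply take the first m transformer outputs."""
    return h_ctxt[:m]                                # (min(N, m), d)

def context_vectors_learnt(h_ctxt, codes):
    """Poly-encoder (Learnt-codes): m codes attend over all N outputs."""
    w = torch.softmax(codes @ h_ctxt.T, dim=-1)      # (m, N)
    return w @ h_ctxt                                # (m, d)
```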
12 Published as a conference paper at ICLR 2020 Dataset split metric (Wolf et al., 2019) (Chen & Wang, 2019) 1 Attention Code Learnt-codes First m outputs Last m outputs Last m outputs and h1 4 Attention Codes Learnt-codes First m outputs Last m outputs Last m outputs and h1 16 Attention Codes Learnt-codes First m outputs Last m outputs Last m outputs and h1 64 Attention Codes Learnt-codes First m outputs Last m outputs Last m outputs and h1 360 Attention Codes Learnt-codes First m outputs Last m outputs Last m outputs and h1 ctxt ctxt ctxt ctxt ConvAI2 dev R@1/20 82.1 - test R@1/20 80.7 - 81.9 ± 0.3 83.2 ± 0.2 82.9 ± 0.1 - 81.0 ± 0.1 81.5 ± 0.1 81.0 ± 0.1 - 83.8 ± 0.2 83.4 ± 0.2 82.8 ± 0.2 82.9 ± 0.1 82.2 ± 0.5 81.6 ± 0.1 81.3 ± 0.4 81.4 ± 0.2 84.4 ± 0.1 85.2 ± 0.1 83.9 ± 0.2 83.8 ± 0.3 83.2 ± 0.1 83.9 ± 0.2 82.0 ± 0.4 81.7 ± 0.3 84.9 ± 0.1 86.0 ± 0.2 84.9 ± 0.3 85.0 ± 0.2 83.7 ± 0.2 84.2 ± 0.2 82.9 ± 0.2 83.2 ± 0.2 85.3 ± 0.3 86.3 ± 0.1 86.3 ± 0.1 86.2 ± 0.3 83.7 ± 0.2 84.6 ± 0.3 84.7 ± 0.3 84.5 ± 0.4 DSTC 7 dev R@1/100 - 57.3 test R@1/100 - 64.5 56.2 ± 0.1 56.4 ± 0.3 56.1 ± 0.4 - 66.9 ± 0.7 66.8 ± 0.7 67.2 ± 1.1 - 56.5 ± 0.5 56.9 ± 0.5 56.0 ± 0.5 55.8 ± 0.3 66.8 ± 0.7 67.2 ± 1.3 65.8 ± 0.5 66.1 ± 0.8 57.7 ± 0.2 56.1 ± 1.7 56.1 ± 0.3 56.1 ± 0.3 67.8 ± 0.3 66.8 ± 1.1 66.2 ± 0.7 66.6 ± 0.2 58.3 ± 0.4 57.7 ± 0.6 57.0 ± 0.2 57.3 ± 0.3 67.0 ± 0.9 67.1 ± 0.1 66.5 ± 0.5 67.1 ± 0.5 57.7 ± 0.3 58.1 ± 0.4 58.0 ± 0.4 58.3 ± 0.4 68.9 ± 0.4 66.8 ± 0.7 68.1 ± 0.5 68.0 ± 0.8 Table 8: Validation and test performance of Poly-encoder variants, with weights initialized from (Devlin et al., 2019). Scores are shown for ConvAI2 and DSTC 7 Track 1. Bold numbers indicate the highest performing variant within that number of codes. Scoring time (ms) CPU GPU Candidates Bi-encoder Poly-encoder (First m outputs) 16 Poly-encoder (First m outputs) 64 Poly-encoder (First m outputs) 360 Poly-encoder (Learnt-codes) 16 Poly-encoder (Learnt-codes) 64 Poly-encoder (Learnt-codes) 360 Cross-encoder 1k 115 119 124 120 122 126 160 21.7k 100k 160 551 570 619 678 692 837 2.2M* 1k 19 17 17 17 18 23 57 2.6k 100k 22 37 39 45 38 46 88 266k* Table 9: Average time in milliseconds to predict the next dialogue utterance from N possible candi- dates. * are inferred. 13 Published as a conference paper at ICLR 2020 Legend Token | Vector Leamed Model Attention/ Parameter Aggregation Score Score Dim Reduction —— t Cot Eb and Eri t t Context Aggregator Candidate Aggregator | f t if f f f f f i f Out, i | Out,2 |. «[ Out, N, Out, 1 || Out,2 |. ./ Out, N, Outi] [Our 2) . «| Out,2 | . .[ Out, N) rs + t t f t f f Context Encoder Candidate Encoder t f f f il il f i t f Ind |tn.2 +++ linn, Int) in2) ++ jin, Ny Int) |in,2 In,2_ +++ InN, (a) Bi-encoder (b) Cross-encoder ext Emb| >@Score txt Emb ——+(.)--Score f Attention _- — Emb 1 G05 aN Gan emb, + + Cand emt Out]... [our | _ on | t t Code 1 L»Attention, ..~ [Code m /» Attention Select First m €N Vectors Candidate Aggregator | — yr Candidate Aggregator t t t —t t = —— tt t Out, f | Out, 2 |, .[ Out, N, Out,2 [Out 2 |. | Out, N Out, 1 || Oue,2 |. .[ OW, N, | Out, ][ Out, } . .[ t t f t t f t t t t f f Context Encoder Candidate Encoder Context Encoder Candidate Encoder t f t f f f f t it f t i tnt) |In,2) +++ |In,N, Int |In,2) ++ |In,N, Ind) in,2 +++ InN, tnt} in,2) ++ linn, (©) Poly-encoder (First-m) (d) Poly-encoder (Learnt-m) Figure 2: (a) The Bi-encoder (b) The Cross-encoder (c) The Poly-encoder with first m vectors. (d) The Poly-encoder with m learnt codes. 
Dataset split metric ConvAI2 dev R@1/20 test R@1/20 DSTC 7 dev test R@1/100 R@1/100 R@10/100 MRR dev R@1/10 Ubuntu v2 R@1/10 test R@5/10 MRR Hugging Face (Wolf et al., 2019) (Chen & Wang, 2019) - 64.5 - - (Dong & Huang, 2018) pre-trained weights from (Devlin et al., 2019) - Toronto Books + Wikipedia 83.3 ± 0.2 81.7 ± 0.2 56.5 ± 0.4 66.8 ± 0.7 Bi-encoder 85.2 ± 0.1 83.9 ± 0.2 56.7 ± 0.2 67.0 ± 0.9 Poly-encoder (First-m) 16 84.4 ± 0.1 83.2 ± 0.1 57.7 ± 0.2 67.8 ± 0.3 Poly-encoder (Learnt-m) 16 86.0 ± 0.2 84.2 ± 0.2 57.1 ± 0.2 66.9 ± 0.7 Poly-encoder (First-m) 64 84.9 ± 0.1 83.7 ± 0.2 58.3 ± 0.4 67.0 ± 0.9 Poly-encoder (Learnt-m) 64 86.3 ± 0.1 84.6 ± 0.3 57.8 ± 0.5 67.0 ± 0.5 Poly-encoder (First-m) 360 Poly-encoder (Learnt-m) 360 85.3 ± 0.3 83.7 ± 0.2 57.7 ± 0.3 68.9 ± 0.4 87.1 ± 0.1 84.8 ± 0.3 59.4 ± 0.4 67.4 ± 0.7 Cross-encoder Our pre-training on Toronto Books + Wikipedia 84.6 ± 0.1 82.0 ± 0.1 54.9 ± 0.5 64.5 ± 0.5 Bi-encoder 84.1 ± 0.2 81.4 ± 0.2 53.9 ± 2.7 63.3 ± 2.9 Poly-encoder (First-m) 16 85.4 ± 0.2 82.7 ± 0.1 56.0 ± 0.4 65.3 ± 0.9 Poly-encoder (Learnt-m) 16 86.1 ± 0.4 83.9 ± 0.3 55.6 ± 0.9 64.3 ± 1.5 Poly-encoder (First-m) 64 85.6 ± 0.1 83.3 ± 0.1 56.2 ± 0.4 65.8 ± 0.7 Poly-encoder (Learnt-m) 64 86.6 ± 0.3 84.4 ± 0.2 57.5 ± 0.4 66.5 ± 1.2 Poly-encoder (First-m) 360 Poly-encoder (Learnt-m) 360 86.1 ± 0.1 83.8 ± 0.1 56.5 ± 0.8 65.8 ± 0.7 87.3 ± 0.5 84.9 ± 0.3 57.7 ± 0.5 65.3 ± 1.0 Cross-encoder Our pre-training on Reddit 86.9 ± 0.1 84.8 ± 0.1 60.1 ± 0.4 70.9 ± 0.5 Bi-encoder 89.0 ± 0.1 86.4 ± 0.3 60.4 ± 0.3 70.7 ± 0.7 Poly-encoder (First-m) 16 88.6 ± 0.3 86.3 ± 0.3 61.1 ± 0.4 71.6 ± 0.6 Poly-encoder (Learnt-m) 16 89.5 ± 0.1 87.3 ± 0.2 61.0 ± 0.4 70.9 ± 0.6 Poly-encoder (First-m) 64 89.0 ± 0.1 86.5 ± 0.2 60.9 ± 0.6 71.2 ± 0.8 Poly-encoder (Learnt-m) 64 90.0 ± 0.1 87.3 ± 0.1 61.1 ± 1.9 70.9 ± 2.1 Poly-encoder (First-m) 360 Poly-encoder (Learnt-m) 360 89.2 ± 0.1 86.8 ± 0.1 61.2 ± 0.2 71.4 ± 1.0 90.3 ± 0.2 87.9 ± 0.2 63.9 ± 0.3 71.7 ± 0.3 Cross-encoder 82.1 80.7 - - - 57.3 - - - 90.2 - 89.0 ± 1.0 88.8 ± 0.3 88.6 ± 0.2 89.1 ± 0.2 89.2 ± 0.2 89.6 ± 0.9 89.9 ± 0.5 90.5 ± 0.3 88.1 ± 0.2 87.2 ± 1.5 88.2 ± 0.7 87.8 ± 0.4 88.4 ± 0.3 89.0 ± 0.5 88.5 ± 0.6 89.7 ± 0.5 90.6 ± 0.3 91.0 ± 0.4 91.3 ± 0.3 91.5 ± 0.5 91.3 ± 0.4 91.5 ± 0.9 91.1 ± 0.3 92.4 ± 0.5 - - - - - 73.5 - - - - 75.9 - 97.3 - 84.8 74.6 ± 0.5 80.9 ± 0.6 80.6 ± 0.4 98.2 ± 0.1 88.0 ± 0.3 74.6 ± 0.6 81.7 ± 0.5 81.4 ± 0.6 98.2 ± 0.1 88.5 ± 0.4 75.1 ± 0.2 81.5 ± 0.1 81.2 ± 0.2 98.2 ± 0.0 88.3 ± 0.1 74.7 ± 0.4 82.2 ± 0.6 81.9 ± 0.5 98.4 ± 0.0 88.8 ± 0.3 74.7 ± 0.6 81.8 ± 0.1 81.3 ± 0.2 98.2 ± 0.1 88.4 ± 0.1 75.0 ± 0.6 82.7 ± 0.4 82.2 ± 0.6 98.4 ± 0.1 89.0 ± 0.4 76.2 ± 0.2 81.5 ± 0.1 80.9 ± 0.1 98.1 ± 0.0 88.1 ± 0.1 75.6 ± 0.4 83.3 ± 0.4 82.8 ± 0.3 98.4 ± 0.1 89.4 ± 0.2 72.6 ± 0.4 80.9 ± 0.5 80.8 ± 0.5 98.4 ± 0.1 88.2 ± 0.4 71.6 ± 2.4 80.8 ± 0.5 80.6 ± 0.4 98.4 ± 0.1 88.1 ± 0.3 73.2 ± 0.7 84.0 ± 0.1 83.4 ± 0.2 98.7 ± 0.0 89.9 ± 0.1 72.5 ± 1.0 80.9 ± 0.6 80.7 ± 0.6 98.4 ± 0.0 88.2 ± 0.4 73.5 ± 0.5 84.0 ± 0.1 83.4 ± 0.1 98.7 ± 0.0 89.9 ± 0.0 74.4 ± 0.7 81.3 ± 0.6 81.1 ± 0.4 98.4 ± 0.2 88.4 ± 0.3 73.6 ± 0.6 84.2 ± 0.2 83.7 ± 0.0 98.7 ± 0.1 90.1 ± 0.0 73.8 ± 0.6 83.2 ± 0.8 83.1 ± 0.7 98.7 ± 0.1 89.7 ± 0.5 78.1 ± 0.3 83.7 ± 0.7 83.6 ± 0.7 98.8 ± 0.1 90.1 ± 0.4 78.0 ± 0.5 84.3 ± 0.3 84.3 ± 0.2 98.9 ± 0.0 90.5 ± 0.1 78.4 ± 0.4 86.1 ± 0.1 86.0 ± 0.1 99.0 ± 0.1 91.5 ± 0.1 78.0 ± 0.3 84.0 ± 0.4 83.9 ± 0.4 98.8 ± 0.0 90.3 ± 0.3 78.2 ± 0.7 86.2 ± 0.1 85.9 ± 0.1 99.1 ± 0.0 91.5 ± 0.1 77.9 ± 1.6 84.8 ± 0.5 84.6 ± 0.5 98.9 ± 0.1 90.7 ± 0.3 78.3 ± 0.7 86.3 
± 0.1 85.9 ± 0.1 99.1 ± 0.0 91.5 ± 0.0 79.0 ± 0.2 86.7 ± 0.1 86.5 ± 0.1 99.1 ± 0.0 91.9 ± 0.0 Table 10: Validation and test performances of Bi-, Poly- and Cross-encoders. Scores are shown for ConvAI2, DSTC7 Track 1 and Ubuntu v2, and the previous state-of-the-art models in the literature. 14
{ "id": "1909.03087" }
1904.10079
The MineRL 2019 Competition on Sample Efficient Reinforcement Learning using Human Priors
Though deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples. As state-of-the-art reinforcement learning (RL) systems require an exponentially increasing number of samples, their development is restricted to a continually shrinking segment of the AI community. Likewise, many of these systems cannot be applied to real-world problems, where environment samples are expensive. Resolution of these limitations requires new, sample-efficient methods. To facilitate research in this direction, we introduce the MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors. The primary goal of the competition is to foster the development of algorithms which can efficiently leverage human demonstrations to drastically reduce the number of samples needed to solve complex, hierarchical, and sparse environments. To that end, we introduce: (1) the Minecraft ObtainDiamond task, a sequential decision making environment requiring long-term planning, hierarchical control, and efficient exploration methods; and (2) the MineRL-v0 dataset, a large-scale collection of over 60 million state-action pairs of human demonstrations that can be resimulated into embodied trajectories with arbitrary modifications to game state and visuals. Participants will compete to develop systems which solve the ObtainDiamond task with a limited number of samples from the environment simulator, Malmo. The competition is structured into two rounds in which competitors are provided several paired versions of the dataset and environment with different game textures. At the end of each round, competitors will submit containerized versions of their learning algorithms and they will then be trained/evaluated from scratch on a hold-out dataset-environment pair for a total of 4-days on a prespecified hardware platform.
http://arxiv.org/pdf/1904.10079
William H. Guss, Cayden Codel, Katja Hofmann, Brandon Houghton, Noboru Kuno, Stephanie Milani, Sharada Mohanty, Diego Perez Liebana, Ruslan Salakhutdinov, Nicholay Topin, Manuela Veloso, Phillip Wang
cs.LG, cs.AI, stat.ML
accepted at NeurIPS 2019, 28 pages
null
cs.LG
20190422
20210119
NeurIPS 2019 Competition: The MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors

William H. Guss∗† Brandon Houghton‡† Nicholay Topin‡† Cayden Codel‡ Noboru Kuno‡§ Diego Perez Liebana‡∗∗ Manuela Veloso‡† Katja Hofmann‡§ Stephanie Milani‡¶ Sharada Mohanty‡‖ Ruslan Salakhutdinov‡† Phillip Wang‡†

November 26, 2021

# Competition Overview

Though deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples. As state-of-the-art reinforcement learning (RL) systems require an exponentially increasing number of samples, their development is restricted to a continually shrinking segment of the AI community. Likewise, many of these systems cannot be applied to real-world problems, where environment samples are expensive. Resolution of these limitations requires new, sample-efficient methods. To facilitate research in this direction, we propose the MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors. The primary goal of the competition is to foster the development of algorithms which can efficiently leverage human demonstrations to drastically reduce the number of samples needed to solve complex, hierarchical, and sparse environments. To that end, we introduce: (1) the Minecraft ObtainDiamond task, a sequential decision making environment requiring long-term planning, hierarchical control, and efficient exploration methods; and (2) the MineRL-v0 dataset, a large-scale collection of over 60 million state-action pairs of human demonstrations that can be resimulated into embodied agent trajectories with arbitrary modifications to game state and visuals.

Participants will compete to develop systems which solve the ObtainDiamond task with a limited number of samples from the environment simulator, Malmo [11]. The competition is structured into two rounds in which competitors are provided several paired versions of the dataset and environment with different game textures and shaders. At the end of each round, competitors will submit containerized versions of their learning algorithms to the AICrowd platform where they will then be trained from scratch on a hold-out dataset-environment pair for a total of 4 days on a pre-specified hardware platform. Each submission will then be automatically ranked according to the final performance of the trained agent.

∗Lead organizer: [email protected]
†Affiliation: Carnegie Mellon University
‡Equal contribution: Organizer names are ordered alphabetically, with the exception of the lead organizer. Competitions are extremely complicated endeavors involving a huge amount of organizational overhead from the development of complicated software packages to event logistics and evaluation. It is impossible to estimate the total contributions of all involved at the onset.
§Affiliation: Microsoft Research
¶Affiliation: University of Maryland
‖Affiliation: AICrowd
∗∗Affiliation: Queen Mary University of London

# Keywords

Learning, Reinforcement Learning, Imitation Learning, Sample Efficiency, Games.

# Competition type

Regular.

# 1 Competition description

# 1.1 Background and impact

Many of the recent, most celebrated successes of artificial intelligence (AI), such as AlphaStar, AlphaGo, OpenAI Five, and their derivative systems, utilize deep reinforcement learning to achieve human or super-human level performance in sequential decision-making tasks.
As established by Amodei and Hernandez [1], these improvements to the state-of- the-art have thus far required exponentially increasing computational power to achieve such performance. In part, this is due to an increase in the computation required per environment-sample; however, the most significant change is the number of environment- samples required for training. For example, DQN [13], A3C [14], and Rainbow DQN [9] have been applied to ATARI 2600 games [2] and require from 44 to over 200 million frames (200 to over 900 hours) to achieve human-level performance. On more complex domains: OpenAI Five utilizes 11,000+ years of Dota 2 gameplay [18], AlphaGoZero uses 4.9 million games of self-play in Go [23], and AlphaStar uses 200 years of Starcraft II gameplay [5]. Due to the growing computational requirements, a shrinking portion of the AI commu- nity has the resources to improve these systems and reproduce state-of-the-art results. Additionally, the application of many reinforcement learning techniques to real-world chal- lenges, such as self-driving vehicles, is hindered by the raw number of required samples. In these real-world domains, policy roll-outs can be costly and simulators are not yet accurate enough to yield policies robust to real-world conditions. One well-known way to reduce the environment sample-complexity of the aforemen- tioned methods is to leverage human priors and demonstrations of the desired behavior. 2 Techniques utilizing trajectory examples, such as imitation learning and Bayesian reinforce- ment learning, have been successfully applied to older benchmarks and real-world problems where samples from the environment are costly. In many simple games with singular tasks, such as the Atari 2600, OpenAI Gym, and TORCS environments, imitation learning can drastically reduce the number of environment samples needed through pretraining and hybrid RL techniques [10, 4, 19, 8]. Further, in some real-world tasks, such as robotic manipulation [7, 6] and self-driving [3], in which it is expensive to gather a large number of samples from the environment, imitation-based methods are often the only means of generating solutions using few samples. Despite their success, these techniques are still not sufficiently sample-efficient for application to many real-world domains. Impact. To that end, the central aim of our proposed competition is the advancement and development of novel, sample-efficient methods which leverage human priors for se- quential decision-making problems. Due to the competition’s design, organizational team, and support, we are confident that the competition will catalyze research towards the de- ployment of reinforcement learning in the real world, democratized access to AI/ML, and reproducibility. By enforcing constraints on the computation and sample budgets of the considered techniques, we believe that the methods developed during the competition will broaden participation in deep RL research by lowering the computational barrier to entry. While computational resources inherently have a cost barrier, large-scale, open-access datasets can be widely used. To that end, we center our proposed competition around techniques which leverage the newly introduced MineRL dataset. 
To maximize the development of domain-agnostic techniques that enable the application of deep reinforcement learning to sample-limited, real-world domains, such as robotics, we carefully developed a novel data-pipeline and hold-out environment evaluation scheme with AICrowd to prevent the over-engineering of submissions to the competition task.

The proposed competition is ambitious, so we have taken meaningful steps to ensure its smooth execution. Specifically, we secured several crucial partnerships with organizations and individuals. Our primary partner, Microsoft Research, is providing significant computational resources to enable direct, fair evaluation of the participants' training procedures. We developed a relationship with AICrowd.com to provide the submission orchestration platform for our competition, as well as continued support throughout the competition to ensure that participants can easily submit their algorithms. In addition, we have partnered with Preferred Networks to provide a set of standard baseline implementations including standard reinforcement learning techniques, hierarchical methods, and basic imitation learning methods.

# 1.1.1 Domain Interest

Minecraft is a compelling domain for the development of reinforcement and imitation learning based methods because of the unique challenges it presents: Minecraft is a 3D, first-person, open-world game centered around the gathering of resources and creation of structures and items. Notably, the procedurally generated world is composed of discrete blocks that allow modification; over the course of gameplay, players change their surroundings by gathering resources (such as wood from trees) and constructing structures (such as shelter and storage). Since Minecraft is an embodied domain and the agent's surroundings are varied and dynamic, it presents many of the same challenges as real-world robotics domains. Therefore, solutions created for this competition are a step toward applying these same methods to real-world problems.

Figure 1: A subset of the Minecraft item hierarchy (totaling 371 unique items). Each node is a unique Minecraft item, block, or non-player character, and a directed edge between two nodes denotes that one is a prerequisite for another. Each item presents its own unique set of challenges, so coverage of the full hierarchy by one player takes several hundred hours.

An additional reason Minecraft is an appealing competition domain is its popularity as a video game; of all games ever released, it has the second-most total copies sold. Given its popularity, potential participants are more likely to be familiar with it than other domains based on video games. Likewise, the competition will be of greater interest due to its relationship with such a well-known game. Furthermore, there is existing research interest in Minecraft. With the development of Malmo [11], a simulator for Minecraft, the environment has garnered great research interest: many researchers [22, 24, 16] have leveraged Minecraft's massive hierarchality and expressive power as a simulator to make great strides in language-grounded, interpretable multi-task option-extraction, hierarchical lifelong learning, and active perception.
How- ever, much of the existing research utilizes toy tasks in Minecraft, often restricted to 2D movement, discrete positions, or artificially confined maps unrepresentative of the intrinsic complexity that human players typically face. These restrictions reflect the difficulty of the domain, the challenge of coping with fully-embodied human state- and action-spaces, and the complexity exhibited in optimal human policies. Our competition and the release of the large-scale MineRL-v0 dataset of human demonstrations will serve to catalyze research 4 on this domain in two ways: (1) our preliminary results indicate that through imitation learning, basic reinforcement learning approaches can finally deal directly with the full, unrestricted state- and action-space of Minecraft; and (2) due to the difficult and crucial research challenges exhibited on the primary competition task, ObtainDiamond, we believe that the competition will bring work on the Minecraft domain to the fore of sample-efficient reinforcement learning research. # 1.2 Novelty Reinforcement Learning. To date, all existing reinforcement learning competitions have focused on the development of policies or meta-policies which perform well on ex- tremely complex domains or generalize across a distribution of tasks [12, 15, 20]. However, the focus of these competitions is performing well on a given domain and not the develop- ment of robust algorithms that are applicable to a broad set of domains. Often, the winning submissions are the result of massive amounts of computational resources or highly specific, hand-engineered features. In contrast, our competition is the first of its kind to directly consider the efficiency of the training procedures of different algorithms. We evaluate submissions solely on their ability to perform well within a strict compu- tation and environment-sample budget. Moreover, we are uniquely positioned to propose such a competition due to the nature of our human demonstration dataset and environ- ment: our dataset is constructed by directly recording the game-state as human experts play, so we are able to later make multiple renders of both the environment and data with varied lighting, geometry, textures, and gamestate dynamics, thus yielding development, validation, and hold-out evaluation dataset/environment pairs. As a result, competitors are naturally prohibited from hand-engineering or warm-starting their learning algorithms and winning solely due to resource advantages. Imitation Learning. To our knowledge, no competitions have explicitly focused on the use of imitation learning alongside reinforcement learning. This is in large part due to a lack of large-scale, publicly available datasets of human or expert demonstrations. Our competition is the first to explicitly involve and encourage the use of imitation learning to solve the given task, and in that capacity, we release the largest-ever dataset of hu- man demonstrations on an embodied domain. The large number of trajectories and rich demonstration-performance annotations enable the application of many standard imitation learning techniques and encourage further development of new ones that use hierarchical labels, varying agent performance levels, and auxiliary state information. Minecraft. A few competitions have already used Minecraft due to its expressive power as a domain. 
The first one was The Malm¨o Collaborative AI Challenge1, in which agents 1https://www.microsoft.com/en-us/research/academic-program/collaborative-ai-challenge 5 worked in pairs to solve a collaborative task in a decentralized manner. Later, C. Salge et al. [21] organized the Generative Design in Minecraft (GDMC): Settlement Generation Competition, in which participants were asked to implement methods that would procedu- rally build complete cities in any given, unknown landscape. These two contests highlight the versatility of this framework as a benchmark for different AI tasks. In 2019, Perez-Liebana et al. [20] organized the Multi-Agent Reinforcement Learning in Malm ¨O (MARL ¨O) competition. This competition pitted groups of agents to compete against each other in three different games. Each og the games was parameterizable to prevent the agents from overfitting to specific visuals and layouts. The objective of the competition was to build an agent that would learn, in a cooperative or competitive multi- agent task, to play the games in the presence of other agents. The MARL ¨O competition successfuly attracted a large number of entries from both existing research institutions and the general public, indicating a broad level of accessibility and excitement for the Minecraft domain within and outside of the existing research community. In comparison with previous contests, our competition tackles one main task and pro- vides a massive number of hierarchical subtasks and demonstrations (see Section 1.3). The main task and its subtasks are not trivial; however, agent progress can be easily measured, which allows for a clear comparison between submitted methods. Further, the target of the present competition is to promote research on efficient learning, focusing directly on the sample- and computational-efficiency of the submitted algorithms. # 1.3 Data For this competition, we introduce two main components: a set of sequential deci- sion making environments in Minecraft and a corresponding public large-scale dataset of human demonstrations. # 1.3.1 Environment We define one primary competition envi- ronment, ObtainDiamond, and six other auxiliary environments that encompass a significant portion of human Minecraft play. We select these environment domains to highlight many of the hardest challenges in reinforcement learning, such as sparse re- wards, long reward horizons, and efficient hierarchical planning. #Â¥*94t®: greechop: obtai ea Obtain year: optainzron Piekaxe: Obtain Diamond: — — Figure 2: Images of various stages of six of seven total environments. 6 Primary Environment. The main task of the competition is solving the ObtainDiamond environment. In this environment, the agent begins in a random starting location without any items, and is tasked with obtaining a diamond. The agent receives a high reward for obtaining a diamond as well as smaller, auxillary rewards for obtaining prerequisite items. Episodes end due to the agent (a) dying, (b) successfully obtaining a diamond, or (c) reaching the maximum step count of 18000 frames (15 minutes). The ObtainDiamond environment is a difficult environment for a number of reasons. Diamonds only exist in a small portion of the world and are 2-10 times rarer than other ores in Minecraft. Additionally, obtaining a diamond requires many prerequisite items. For these reasons, it is practically impossible for an agent to obtain a diamond via naive random exploration. Auxillary Environments. 
We provide six auxiliary environments (in four families), which we believe will be useful for solving ObtainDiamond (see Section 1.3.4):

1. Navigate: In this environment, the agent must move to a goal location. This represents a basic primitive used in many tasks throughout Minecraft. In addition to standard observations, the agent has access to a "compass" observation, which points to a set location, 64 meters from the start location. The agent is given a sparse reward (+100 upon reaching the goal, at which point the episode terminates). We also support a dense, reward-shaped version of Navigate, in which the agent is given a reward every tick corresponding to the change in distance between the agent and the goal.

2. Treechop: In this environment, the agent must collect wood, a key resource in Minecraft and a prerequisite item for diamonds. The agent begins in a forest biome (near many trees) with an iron axe for cutting trees. The agent is given +1 reward for obtaining each unit of wood, and the episode terminates once the agent obtains 64 units or the step limit is reached.

3. Obtain<Item>: We include three additional obtain environments, similar to that of ObtainDiamond, but with different goal items to obtain. They are:
(a) CookedMeat: cooked meat of a (cow, chicken, sheep, or pig), which is necessary for survival in Minecraft. In this environment, the agent is given a specific kind of meat to obtain.
(b) Bed: made out of dye, wool, and wood, an item that is also vital to Minecraft survival. In this environment, the agent is given a specific color of bed to create.
(c) IronPickaxe: a direct prerequisite item of the diamond. It is significantly easier to solve than ObtainDiamond: iron is 20 times more common in the Minecraft world than diamonds, and this environment is typically solved by humans in less than 10 minutes.

4. Survival: This environment is the standard, open-ended game mode used by most human players when playing the game casually. There is no specified reward function in this case, but data from this environment can be used to help train agents in more structured tasks, such as ObtainDiamond.

# 1.3.2 Dataset

The MineRL-v0 dataset consists of over 60 million state-action-(reward) tuples of recorded human demonstrations over the seven environments mentioned above. Each trajectory is contiguously sampled every Minecraft game tick (at 20 game ticks per second). Each state is comprised of an RGB video frame of the player's point-of-view and a comprehensive set of features from the game-state at that tick: player inventory, item collection events, distances to objectives, player attributes (health, level, achievements), and details about the current GUI the player has open. The action recorded at each tick consists of: all the keyboard presses, the change in view pitch and yaw (mouse movements), player GUI interactions, and agglomerative actions such as item crafting.

Figure 3: A diagram of the MineRL data collection platform. Our system renders demonstrations from packet-level data, so we can easily rerender our data with different parameters.

Human trajectories are accompanied by a large set of automatically generated annotations. For all of the environments, we include metrics which indicate the quality of the demonstration, such as timestamped rewards, number of no-ops, number of deaths, and total score. Additionally, trajectory meta-data includes timestamped markers for hierarchical labelings; e.g.
when a house-like structure is built or certain objectives such as chopping down a tree are met. Data is made available both in the competition materials as well as through a standalone website http://minerl.io # 1.3.3 Data Collection For this competition, we use our novel platform for the collection of player trajectories in Minecraft, enabling the construction of the MineRL-v0 dataset. As shown in Figure 3, our platform consists of (1) a public game server and website, where we obtain permission to record trajectories of Minecraft players in natural gameplay; (2) a custom Minecraft client plugin, which records all packet level communication between the client and the 8 server, so we can re-simulate and re-render human demonstrations with modifications to the game state and graphics; and (3) a data processing pipeline, which enables us to produce automatically annotated datasets of task demonstrations. Data Acquisition. Minecraft players find the MineRL server on standard Minecraft server lists. Players first use our webpage to provide IRB2 consent to have their gameplay anonymously recorded. Then, they download a plugin for their Minecraft client, which records and streams users’ client-server game packets to the MineRL data repository. When playing on our server, users select an environment to solve and receive in-game currency proportional to the amount of reward obtained. For the Survival environment (where there is no known reward function), players receive rewards only for duration of gameplay, so as not to impose an artificial reward function. Data Pipeline. Our data pipeline allows us to resimulate recorded trajectories into several algorithmically consumable formats. The pipeline serves as an extension to the core Minecraft game code and synchronously resends each recorded packet from the MineRL data repository to a Minecraft client using our custom API for automatic annotation and game-state modification. This API allows us to add annotations based on any aspect of the game state accessible from existing Minecraft simulators. Notably, it allows us to rerender the same data with different textures, shaders, and lighting-conditions which we will use to create test and validation environment-dataset pairs for this competition. # 1.3.4 Data Usefulness Human Performance. A majority of the human demonstrations in the dataset fall within the range of expert level play. Figure 4 shows the distribution over tra- jectory length for each environment. The red region in each histogram denotes the range of times which correspond to play at an expert level, computed as the aver- age time required for task completion by players with at least five years of Minecraft experience. The large number of expert samples and rich labelings of demonstration performance enable application of many standard imitation learning techniques which as- sume optimality of the base policy. In addition, the beginner and intermediate level tra- + swiase Obtainged cbtainoiamond Hil ya 1 fo. $e 8 bw mh 0 em Um "7 yalowund cana ‘och ” Alii yN i | ”}~ 2 a @ of o © o ter % ime ites o completion 2The data collection study was approved by Carnegie Mellon University’s institutional review board as STUDY2018 00000364. 9 malmo_env env = malmo_env observation = env done done env action env.action_space observation, reward, done, info = env action Figure 6: Example code for running a single episode of a random agent in ObtainDiamond. 
jectories allow for the further development of techniques which leverage imperfect demon- strations. Hierarchality. Minecraft is deeply hier- archical as shown in Figure 1, and the MineRL data collection platform is de- signed to capture these hierarchies both ex- plicitly and implicitly. Due to the sub- task labelings provided in MineRL-v0, we can inspect and quantify the extent to which these environments overlap. Fig- ure 5 shows precedence frequency graphs constructed from MineRL trajectories on the ObtainDiamond, Obtain CookedMeat, The and ObtainIronPickaxe tasks. policies for obtaining a diamond con- sist of subpolicies which obtain wood, stone, crafting tables, and furnaces, all of which appear in ObtainIronPickaxe and ObtainCookedMeat as well. There is even greater overlap between ObtainDiamond and ObtainIronPickaxe: most of the item hierarchy for ObtainDiamond consists of the hierarchy for ObtainIronPickaxe. a © «6 e bd bd = Ne es * * ee® L/ wih en/ 4 T ys! t cil Tt i 4 il oxfo z , On ou ° i x [2 t een , se iN ss ° oe be a 1. vee ° qg Bsa 1 e 1 Diamond CookedMeat IronPickaxe # graphs for (mid- The thick- # of times a # item B. Interface Participants will be provided with an OpenAI Gym[17] wrapper for the envi- ronment and a simple interface for loading demonstrations from the MineRL-v0 dataset as illustrated in figures 6, 7, and 8. This makes interacting with the environment and our data as simple as a few lines of code. Our data will be released in the form of Numpy .npz files composed of state-action-reward tuples in vector form, and can be found along with accompanying documentation on the competition website. 10 minerl dat = minerl e dat trajectory = dat state, action trajectory state, action Figure 7: Utilizing individual trajectories of the MineRLdataset. t minerl dat = minerl dat.batch_size dat.seq_len dat = dat e dat batch = minerl Figure 8: Using the MineRLwrapper to filter demonstrations based on metadata # 1.4 Tasks and application scenarios # 1.4.1 Task The primary task of the competition is solving the ObtainDiamond environment. As pre- viously described (see Section 1.3), agents begin at a random position on a randomly generated Minecraft survival map with no items in their inventory. The task consists of controlling an embodied agent to obtain a single diamond. This task can only be accom- plished by navigating the complex item hierarchy of Minecraft. The learning algorithm will have direct access to a 64x64 pixel point-of-view observation from the perspective of the embodied Minecraft agent, as well as a set of discrete observations of the agent’s inventory for every item required for obtaining a diamond (see Figure 5). The action space of the agent is the Cartesian product of continuous view adjustment (turning and pitching), bi- nary movement commands (left/right, forward/backward), and discrete actions for placing blocks, crafting items, smelting items, and mining/hitting enemies. The agent is rewarded for completing the full task. Due to the difficulty of the task, the agent is also rewarded for reaching a set of milestones of increasing difficulty that form a set of prerequisites for the full task (see Section 1.5). The competition task embodies two crucial challenges in reinforcement learning: sparse rewards and long time horizons. 
The sparsity of the posed task (in both its time struc- ture and long time horizon) necessitates the use of efficient exploration techniques, human priors for policy bootstrapping, or reward shaping via inverse reinforcement learning tech- niques. Although this task is challenging, preliminary results indicate the potential of existing and new methods utilizing human demonstrations to make progress in solving it (see Section 1.6). 11 Progress towards solving the ObtainDiamond environment under strict sample com- plexity constraints lends itself to the development of sample-efficient–and therefore more computationally accessible–sequential decision making algorithms. In particular, because we maintain multiple versions of the dataset and environment for development, validation, and evaluation, it is difficult to engineer domain-specific solutions to the competition chal- lenge. The best performing techniques must explicitly implement strategies that efficiently leverage human priors across general domains. In this sense, the application scenarios of the competition are those which stand to benefit from the development of such algorithms; to that end, we believe that this competition is a step towards democratizing access to deep reinforcement learning based techniques and enabling their application to real-world problems. # 1.5 Metrics Milestone Reward Milestone Reward log planks stick crafting table wooden pickaxe stone 1 2 4 4 8 16 furnace stone pickaxe iron ore iron ingot iron pickaxe diamond 32 32 64 128 256 1024 Following training, participants will be evaluated on the average score of their model over 500 episodes. Scores are com- puted as the sum of the milestone rewards achieved by the agent in a given episode as outlined in Table 1. A milestone is reached when an agent obtains the first instance of the specified item. Ties are broken by the number of episodes required to achieve the last milestone. An automatic evaluation script will be included with starter code. For official evaluation and validation, a fixed map seed will be selected for each episode. These seeds will not be available to participants during the competition. # 1.6 Baselines, code, and material provided Preliminary Results We present pre- liminary results showing the usefulness of the data for improving sample efficiency and overall performance. We compare al- gorithms by the highest average reward ob- tained over a 100-episode window during training. We also report the performance of random policies and 50th percentile human performance. The results are summarized in Table 2. wo f Rewerd — Pretained 00N a “ mo a # In the presented comparison, the DQN Figure 9: Performance graphs over time with DQN and PreDQN on Navigate(Dense) is an implementation of Double Dueling DQN and Behavioral Cloning is a supervised learn- ing method trained on expert trajectories. PreDQN denotes a version of DQN pretrained 12 Treechop Navigate (S) Navigate (D) DQN (Minh et al., 2015[13]) A2C (Minh et al. 2016[14]) Behavioral Cloning PreDQN 3.73 ± 0.61 2.61 ± 0.50 43.9 ± 31.46 4.16 ± 0.82 0.00 ± 0.00 0.00 ± 0.00 4.23 ± 4.15 6.00 ± 4.65 55.59 ± 11.38 -0.97 ± 3.23 5.57 ± 6.00 94.96 ± 13.42 Human Random 64.00 ± 0.00 3.81 ± 0.57 100.00 ± 0.00 1.00 ± 1.95 164.00 ± 0.00 -4.37 ± 5.10 Table 2: Results in Treechop, Navigate (S)parse, and Navigate (D)ense, over the best 100 contiguous episodes. ± denotes standard deviation. Note: humans achieve the maximum score for all environments shown. 
Of the environments in Table 2, Treechop exhibits the largest difference: on average, humans achieve a score of 64, but reinforcement agents achieve scores of less than 4. These results suggest that our environments are quite challenging, especially given that the Obtain<Item> environments build upon the Treechop environment by requiring the completion of several additional sub-goals. We hypothesize that a large source of difficulty stems from the environment's inherent long-horizon credit assignment problems. For example, it is hard for agents to learn to navigate through water because it takes many transitions before the agent dies by drowning.

In light of these difficulties, our data is useful in improving performance and sample efficiency: in all environments, methods that leverage human data perform better. As seen in Figure 9, the methods using expert demonstrations achieved higher reward per episode and attained high performance using fewer samples. Expert demonstrations are particularly helpful in environments where random exploration is unlikely to yield any reward, like Navigate (Sparse). These preliminary results indicate that human demonstrations will be crucial in solving the main competition environment.

Planned Baselines. We will provide baselines using state-of-the-art RL algorithms such as DQN, A2C, and PPO, as well as alternatives of each pretrained on human data. We also provide code for imitation learning algorithms such as Behavioral Cloning and GAIL.

Starting Code and Documentation. We will release an open-source GitHub repository with starting code including the baselines mentioned above, an OpenAI Gym interface for the Minecraft simulator, and a data-loader to accompany the data, which we will release on http://minerl.io/docs/. Additionally, we will release a public Docker container for ease of use.

# 1.7 Tutorial and documentation

A competition page that will contain instructions, documentation, and updates to the competition can be found at http://minerl.io/competition.

# 2 Organizational aspects

# 2.1 Protocol

# 2.1.1 Submission Protocol

The evaluation of the submissions will be managed by AICrowd.com, an open-source platform for organizing machine learning competitions. Throughout the competition, participants will work on their code bases as git repositories on https://gitlab.aicrowd.com. Participants must package their intended software runtime in their repositories. Doing so ensures that the AICrowd evaluators can automatically build relevant Docker images from their repositories and orchestrate them as needed. This approach also ensures that all successfully-evaluated, user-submitted code is both versioned across time and completely reproducible.

Software Runtime Packaging. Packaging and specification of the software runtime is among the most time consuming (and frustrating) tasks for many participants. To simplify this step, we will support numerous approaches to easily package the software runtime with the help of aicrowd-repo2docker (https://pypi.org/project/aicrowd-repo2docker/). aicrowd-repo2docker is a tool that lets participants specify their runtime using Anaconda environment exports, requirements.txt, or a traditional Dockerfile.
This significantly decreases the barrier to entry for less technically-inclined participants by transforming an irritating debug cycle into a deterministic one-liner that performs the work behind the scenes.

Submission Mechanism. Participants will collaborate on their git repository throughout the competition. Whenever they are ready to make a submission, they will create and push a git tag, which triggers the evaluation pipeline.

Orchestration of the Submissions. The ability to reliably orchestrate user submissions over large periods of time is a key determining feature of the success of the proposed competition. We will use the evaluators of AICrowd, which use custom Kubernetes clusters to orchestrate the submissions against pre-agreed resource usage constraints. The same setup has previously been successfully used in numerous other machine learning competitions, such as NeurIPS 2017: Learning to Run Challenge, NeurIPS 2018: AI for Prosthetics Challenge, NeurIPS 2018: Adversarial Vision Challenge, and the 2018 MarLO challenge. The evaluation setup allows for evaluations over arbitrarily long time periods, and can also privately provide feedback about the current state of the evaluation to the respective participants.

# 2.1.2 General Competition Structure

Round 1: General Entry. In this round, participants will register on the competition website and receive the following materials:

• Starter code for running the Malmo environments for the competition task.
• Basic baseline implementations provided by Preferred Networks and the competition organizers (see Section 1.6).
• Two different renders of the human demonstration dataset (one for methods development, the other for validation) with modified textures, lighting conditions, and minor game state changes.
• The Docker images and Azure quick-start template that the competition organizers will use to validate the training performance of the competitors' models.
• Several scripts enabling the procurement of the standard cloud compute³ used to evaluate the sample-efficiency of participants' submissions.

³ For this competition we will specifically be restricting competitors to NC6 v2 Azure instances with 6 CPU cores, 112 GiB RAM, 736 GiB SSD, and a single NVIDIA P100 GPU.

Competitors will use the provided human demonstrations to develop and test procedures for efficiently training models to solve the competition task. When satisfied with their models, participants will follow the submission protocols (described in Section 2.1.1) to submit their code for evaluation. The automated evaluation setup will evaluate the submissions against the validation environment to compute and report the metrics (described in Section 1.5) on the leaderboard of the competition. Because the full training phase is quite resource intensive, it will not be possible to run the training for all the submissions in this round; however, the evaluator will ensure that the submitted code includes the relevant subroutines for training the models by running a short integration test on the training code before doing the actual evaluation on the validation environment.

Once Round 1 is complete, the organizers will examine the code repositories of the top 20 submissions to ensure compliance with the competition rules. The top 20 submissions which comply with the competition rules will then automatically be trained on the validation dataset and environment by the competition orchestration platform. The resulting trained models will then be evaluated again over several hundred episodes.
Their performance will be compared with the submission's final model performance during Round 1 to ensure that no warm-starting or adversarial modifications of the evaluation harness were made. For those submissions whose end-of-round and organizer-run performance distributions disagree, the offending teams will be contacted for appeal. If no appeal is made, the organizers will remove those submissions from the competition and then evaluate a corresponding number of submissions beyond the original top 20 selected.

When twenty fully-compliant and qualified submissions are determined, their submissions will automatically go through the training process on the hold-out evaluation environment and dataset to seed the leaderboard of the subsequent round. The code repositories associated with the corresponding submissions will be forked and scrubbed of any files larger than 15MB to ensure that participants are not using any pretrained models (pretrained on the dataset of this competition) in the subsequent round.

Round 2: Finals. In this round, the top 20 performing teams will continue to develop their algorithms. Their work will be evaluated against a confidential, held-out test environment and test dataset, to which they will not have access. Specifically, participants will be able to make a submission (as described in Section 2.1.1) twice during Round 2, and the automated evaluator will evaluate their algorithms on the test dataset and simulator and compute and report the metrics back to the participants. This is done to prevent competitors from over-fitting to the training and validation datasets/simulators. All submitted code repositories will be scrubbed to remove any files larger than 30MB to ensure participants are not checking in any model weights pretrained on the previously released training dataset. While the container running the submitted code will not have external network access, relevant exceptions are added to ensure participants can download and use the pretrained models included in popular frameworks such as PyTorch and TensorFlow. Participants can request to add network exceptions for any other publicly available pretrained models, which will be validated by AICrowd on a case-by-case basis.

Further, participants will submit a written report/workshop submission describing their technical approach to the problem; this report will be used to bolster the impact of this competition on sample-efficient reinforcement learning research. At the end of the second period, the competition organizers will execute a final run of the participants' algorithms and the winners will be selected for each of the competition tracks.

User Submitted Code. At the end of the competition, all the participants will be provided a time window of 3 weeks to appeal the mandatory open-sourcing policy and categorically object if they do not want their code repositories associated with this competition to be open sourced. Such appeals will be handled by the competition organizers, but competitors are typically prohibited from participating in the competition if they are not willing to release their submissions publicly for reproducibility. As a default configuration, all the associated code repositories will be made public and available at https://gitlab.aicrowd.com after the 3-week window at the end of the competition.

NeurIPS Workshop.
After winners have been selected, there will be a public NeurIPS workshop to exhibit the technical approaches developed during the competition. At the workshop, we will feature talks by several researchers in sample-efficient reinforcement learning and AI democratization. To that end, we plan to contact Jia Li, an adjunct professor at Stanford University involved in democratizing AI for healthcare, Emma Brunskill, an assistant professor at Stanford University whose research focuses on efficient and hierarchical RL, and Michael Littman, a professor at Brown University whose research focuses on learning from demonstrations. Further details of this workshop are to be determined.

# 2.2 Rules

The aim of the competition is to develop sample-efficient training algorithms. Therefore, we discourage the use of environment-specific, hand-engineered features because they do not demonstrate fundamental algorithmic improvements. The following rules attempt to capture the spirit of the competition, and any submissions found to be violating the rules may be deemed ineligible for participation by the organizers.

• The submission must train a machine learning model. A manually specified policy may not be used as a component of this model.
• Submissions may re-use open-source code with proper attribution. At the end of the competition, submissions need to be open-sourced to enable reproducibility.
• Participants are limited to the provided dataset; no additional resources, whether included in the source or available over the internet, may be used.
• In Round 1, participants must submit their code along with self-reported scores. The submitted code must finish training within two days on the provided platform and attain a final performance not significantly less than the self-reported score. This training must be "from scratch" (i.e., no information may be carried over from previous training through saved model weights or otherwise). Submissions which fail to train or which do not attain the self-reported score are not eligible for the next round.
• Agents will be evaluated using unique seeds per episode. These seeds will not be available to participants until the competition has completed.
• During the evaluation of the submitted code, the individual containers will not have access to any external network to avoid any information leak.

Cheating. We have designed the competition to prevent rule breaking and to discourage submissions that circumvent the competition goals. First, the competitors' submissions are tested on variants of the environment/data with different textures and lighting, discouraging any priors that are not trained from scratch. Inherent stochasticity in the environment, such as different world and spawn locations, discourages the use of hard-coded policies. Furthermore, we will use automatic evaluation scripts to verify the participants' submitted scores in the first round and perform a manual code review of the finalists of each round in the competition. We highlight that the evaluation dataset/environment pair on which participants will be evaluated is completely inaccessible to competitors, and measures are taken to prevent information leak.

[Figure 10: proposed timeline for the competition.]

# 2.3 Schedule and readiness

# 2.3.1 Schedule

Given the difficulty of the problem posed, ample time shall be given to allow participants to fully realize their solutions.
Our proposed timeline gives competitors over 120 days to prepare, evaluate, and receive feedback on their solutions before the end of the first round.

Mar 23: Competition Accepted.
Apr 3: Dataset Starter-kit Completed - Demonstration code for leveraging the MineRL-v0 dataset is finalized and posted.
Apr 16: Submission Framework Completed - Submission framework is finalized, enabling the submission and automatic evaluation of models via aicrowd-repo2docker.
Apr 28: Baselines Completed - Baselines developed by Preferred Networks (PFN) are finalized and integrated into starting materials.
Jun 1: First Round Begins - Participants are invited to download starting materials and baselines and to begin developing their submissions.
Sep 22: End of First Round - Submissions for consideration for entry into the final round are closed. Models will be evaluated by organizers and partners.
Sep 27: First Round Results Posted - Official results will be posted, notifying finalists.
Sep 30: Final Round Begins - Finalists are invited to submit their models against the held-out validation texture pack to ensure their models generalize well.
Oct 25: End of Final Round - Submissions from finalists are closed and organizers begin training finalists' latest submissions for evaluation.
Nov 12: Final Results Posted - Official results of model training and evaluation are posted.
Dec 1: Special Awards Posted - Additional awards granted by the advisory committee are posted.
Dec 8: NeurIPS 2019 - Winning teams will be invited to the conference to present their results.

# 2.3.2 Readiness

At the time of writing this proposal, the following key milestones are complete: the dataset is fully collected, cleaned, and automatically annotated; the competition environments have been finalized and implemented; the advisory committee is fully established; the partnerships with Microsoft, Preferred Networks, and AICrowd have been confirmed and all parties are working closely to release the competition on schedule; correspondence with several affinity groups has been made, and a specific plan for attracting underrepresented groups is finalized; and the major software components of the competition infrastructure have been developed. If accepted to the NeurIPS competition track, there are no major roadblocks preventing the execution of the competition.

# 2.4 Competition promotion

Partnership with Affinity Groups. We plan to partner with a number of affinity groups to promote the participation of groups traditionally underrepresented at NeurIPS in our competition. Specifically, we reached out to organizers of Women in Machine Learning (WiML), LatinX in AI (LXAI), Black in AI (BAI), and Queer in Artificial Intelligence. We also reached out to organizations, such as Deep Learning Indaba and Data Science Africa, to work with them to determine how to increase participation of individuals often underrepresented in North American conferences and competitions.

Promotion through General Mailing Lists. To promote participation in the competition, we plan to distribute the call to general technical mailing lists, such as Robotics Worldwide and Machine Learning News, company mailing lists, such as DeepMind's internal mailing list, and institutional mailing lists. We plan to promote participation of underrepresented groups in the competition by distributing the call to affinity group mailing lists, including, but not limited to, Women in Machine Learning (WiML), LatinX in AI (LXAI), Black in AI (BAI), and Queer in AI.
Furthermore, we plan to reach out to researchers and/or lab directors who are members of underrepresented groups, such as those employed at historically black or all-female universities and colleges, to encourage 19 their participation in the competition. By contacting these researchers, we will be able to promote the competition to individuals who are not on any of the aforementioned mailing lists, but are still members of underrepresented groups. Media Coverage To increase general interest and excitement surrounding the competi- tion, we will reach out to the media coordinator at Carnegie Mellon University. By doing so, our competition will be promoted by popular online magazines and websites, such as Wired. We will also post about the competition on relevant popular subreddits, such as r/machinelearning and /r/datascience, and promote it through social media. We will utilize our industry and academic partners to post on their various social media platforms, such as the Carnegie Mellon University Twitter and the Microsoft Facebook page. Promotion at Conferences. Several of our advisors will directly promote the compe- tition via keynote talks at various AI/ML related conferences including: Structure and priors in RL workshop at ICLR (early May, http://spirl.info/2019/about/), RLDM (early July, http://rldm.org/), Industry day at CoG (August, http://ieee-cog.org), and the ReWork Deep Learning Summit (https://www.re-work.co/). # 3 Resources 3.1 Organizing team # 3.1.1 Organizers William H. Guss. William Guss is a Ph.D. candidate in the Machine Learning Depart- ment at CMU and co-founder of Infoplay AI. He is advised by Dr. Ruslan Salakhutdinov and his research spans sample-efficient reinforcement learning, natural language process- ing, and deep learning theory. William completed his bachelors in Pure Mathematics at UC Berkeley where he was awarded the Regents’ and Chancellor’s Scholarship, the highest honor awarded to incoming undergraduates. During his time at Berkeley, William received the Amazon Alexa Prize Grant for the development of conversational AI and co-founded Machine Learning at Berkeley. William is from Salt Lake City, Utah and grew up in an economically impacted, low-income neighborhood without basic access to computational resources. As a result, William is committed to working towards developing research and initiatives which promote socioeconomically-equal access to AI/ML systems and their de- velopment. Cayden Codel. Cayden Codel is an undergraduate computer science student at Carnegie Mellon University interested in machine learning and cybersecurity. Since June 2018, he has developed and helped manage the MineRL data collection pipeline, expanded the available Malmo testing environments, and built many Minecraft server features and minigames. 20 Katja Hofmann. Katja Hofmann is a Senior Researcher at the Machine Intelligence and Perception group at Microsoft Research Cambridge. Her research focuses on reinforcement learning with applications in video games, as she believes that games will drive a trans- formation of how people interact with AI technology. She is the research lead of Project Malmo, which uses the popular game Minecraft as an experimentation platform for devel- oping intelligent technology, and has previously co-organized two competitions based on the Malmo platform. Her long-term goal is to develop AI systems that learn to collaborate with people, to empower their users and help solve complex real-world problems. Brandon Houghton. 
Brandon Houghton is a Research Associate at CMU and co- creator of the MineRL dataset. Graduating from the School of Computer Science at Carnegie Mellon University in the Fall of 2018, Brandon has worked on many machine learning projects, such as discovering invariants in physical systems as well as learning lane boundaries for autonomous driving. Noboru Kuno. Noboru Kuno is a Senior Research Program Manager at Microsoft Re- search in Redmond, USA. He is a member of Artificial Intelligence Engaged team of Microsoft Research Outreach. He leads the design, launch and development of research programs for AI projects such as Project Malmo, working in partnership with research communities and universities worldwide. Stephanie Milani. Stephanie Milani is a Canadian-American Computer Science and Psychology undergraduate student at the University of Maryland, Baltimore County. She will be joining Carnegie Mellon’s Machine Learning Department as a Ph.D. student in September 2019. Her general research interest is in sequential decision-making problems, with an emphasis on reinforcement learning. Previously, she conducted research in hi- erarchical model-based reinforcement learning and planning, reinforcement learning and planning that integrates human norms, and the intersection of behavioral psychology and neuroscience. Since 2016, she has worked to increase the participation of underrepresented minorities in CS and AI by developing curriculum at the local and state level, co-founding a mentoring and tutoring program between UMBC and a local middle school, organizing out- reach events to introduce middle and high school students to CS, and leading efforts within the UMBC Computer Science Education community. She has been nationally recognized for her outreach efforts in CS education through a Newman Civic Fellowship. Sharada Mohanty. Sharada Mohanty is the CEO and Co-founder of AICrowd, an opensource platform encouraging reproducible artificial intelligence research. He was the co-organizer of many large-scale machine learning competitions, such as NeurIPS 2017: Learning to Run Challenge, NeurIPS 2018: AI for Prosthetics Challenge, NeurIPS 2018: Adversarial Vision Challenge, and the 2018 MarLO Challenge. During his Ph.D. at EPFL, 21 he worked on numerous problems at the intersection of AI and health, with a strong inter- est in reinforcement learning. In his current role, he focuses on building better engineering tools for AI researchers and making research in AI accessible to a larger community of engineers. Diego Perez Liebana. Diego Perez Liebana is a Lecturer in Computer Games and AI at QMUL and holds a Ph.D. in CS from the University of Essex (2015). His research interests are search algorithms, evolutionary computation, and reinforcement learning ap- plied to real-time games and general video game playing. He has published more than 60 papers in leading conferences and journals in the area, including best paper awards (CIG, EvoStar). He is the main organizer behind popular AI game-based competitions in the field, serves as a reviewer in top conferences and journals, and is general chair of the up- coming IEEE Conference on Games (QMUL, 2019). He has experience in the videogames industry with titles published for both PC and consoles, and also developing AI tools for games. Diego previously organized the MarLO competition on multi-agent reinforcement learning in Minecraft. Ruslan Salakhutdinov. Ruslan Salakhutdinov received his Ph.D. 
in machine learning (computer science) from the University of Toronto in 2009. After spending two post- doctoral years at the Massachusetts Institute of Technology Artificial Intelligence Lab, he joined the University of Toronto as an Assistant Professor in the Department of Computer Science and Department of Statistics. In February of 2016, he joined the Machine Learning Department at Carnegie Mellon University as an Associate Professor. Ruslan’s primary interests lie in deep learning, machine learning, and large-scale optimization. His main research goal is to understand the computational and statistical principles required for discovering structure in large amounts of data. He is an action editor of the Journal of Ma- chine Learning Research and served on the senior programme committee of several learning conferences including NeurIPS and ICML. He is an Alfred P. Sloan Research Fellow, Mi- crosoft Research Faculty Fellow, Canada Research Chair in Statistical Machine Learning, a recipient of the Early Researcher Award, Connaught New Researcher Award, Google Faculty Award, Nvidia’s Pioneers of AI award, and is a Senior Fellow of the Canadian Institute for Advanced Research. Nicholay Topin. Nicholay Topin is a Machine Learning Ph.D. student advised by Dr. Manuela Veloso at Carnegie Mellon University. His current research focus is explainable deep reinforcement learning systems. Previously, he has worked on knowledge transfer for reinforcement learning and learning acceleration for deep learning architectures. Manuela Veloso. Manuela Veloso is a Herbert A. Simon University Professor at Carnegie Mellon University and the head of AI research at JPMorgan Chase. She received her Ph.D. 22 in computer science from Carnegie Mellon University in 1992. Since then, she has been a faculty member at the Carnegie Mellon School of Computer Science. Her research focuses on artificial intelligence and robotics, across a range of planning, execution, and learning algorithms. She cofounded the RoboCup Federation and served as president of AAAI from 2011 to 2016. She is a AAAI, IEEE, AAAS, and ACM fellow. Phillip Wang. Phillip Wang is an undergraduate computer science student at CMU and a core contributor to the MineRL dataset. He has previously worked on NLP at Microsoft, Computer Vision at Cruise Automation, and Software Engineering at Facebook. He is cur- rently interested in Deep Reinforcement Learning, and has previously conducted research in meta-learning, generative vision models, and voting rules. He also enjoys building out random engineering projects, including a dating app that acquired 20k+ users, 3D holo- graphic video chat, and an online multiplayer real time strategy game that topped the charts of ProductHunt games. # 3.1.2 Advisors Chelsea Finn. Chelsea Finn is a research scientist at Google Brain and a post-doctoral scholar at UC Berkeley. In September 2019, she will be joining Stanford’s computer sci- ence department as an assistant professor. Her research interests lie in the ability to enable robots and other agents to develop broadly intelligent behavior through interaction. During her Ph.D., Finn developed deep learning algorithms for concurrently learning visual percep- tion and control in robotic manipulation skills, inverse reinforcement methods for scalable acquisition of nonlinear reward functions, and meta-learning algorithms that can enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. 
Finn received her Bachelors degree in Electrical Engineering and Computer Science at MIT. Her research has been recognized through an NSF graduate fellowship, a Facebook fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg. With Sergey Levine and John Schulman, Finn also designed and taught a course on deep reinforcement learning, with thousands of followers online. Throughout her career, she has sought to increase the representation of underrepresented minorities within CS and AI by developing an AI outreach camp at Berkeley and a mentoring program across three universities, and leading efforts within the WiML and Berkeley WiCSE communities of women researchers. Sergey Levine. Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. 23 Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. He has previously served as the general chair for the Conference on Robot Learning, program co-chair for the International Conference on Learning Representations, and organizer for numerous workshops at ICML, NeurIPS, and RSS. He has also served as co-organizer on the Learning to Run and AI for Prosthetics NeurIPS competitions. Harm van Seijen. Harm van Seijen is the team lead of the Reinforcement Learning team at Microsoft Research Montr´eal, which focuses on fundamental challenges in rein- forcement learning. Areas of research within reinforcement learning that he is currently very interested in are transfer learning, continual learning, hierarchical approaches, and multi-agent systems. In his most recent project, the team developed an approach to break down a complex task into many smaller ones, called the hybrid reward architecture. Using this architecture, they were able to achieve the highest possible score of 999,990 points on the challenging Atari 2600 game Ms. Pac-Man. Oriol Vinyals. Oriol Vinyals is a Research Scientist at Google DeepMind, working in deep learning. Prior to joining DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley and is a recipient of the 2016 MIT TR35 innovator award. His research has been featured multiple times at the New York Times, BBC, etc., and his articles have been cited over 29000 times. His academic involvement includes program chair for the International Conference on Learning Representations (ICLR) of 2017 and 2018. Some of his contributions are used in Google Translate, Text-To-Speech, and Speech recognition, used by billions. At DeepMind he con- tinues working on his areas of interest, which include artificial intelligence, with particular emphasis on machine learning, deep learning and reinforcement learning. # 3.1.3 Partners and Sponsors Microsoft Research. Microsoft Research is the research subsidiary of Microsoft. It is dedicated to conducting both basic and applied research in computer science and software engineering. 
It is collaborating with academic, government and industry researchers to advance the state of the art of computer science. Microsoft supports this competition by providing a substantial amount of cloud computing resource as necessary to help this competition operate smoothly. Further, Microsoft will provide computation/travel grants to enable the broadest set of groups to participate. Microsoft also provides technical advise for the competition and supports the communication for the organizer to reach out to relevant audience. Preferred Networks, Inc. Preferred Networks (PFN) is known as the company behind Chainer, the first deep learning framework to adopt the define-by-run paradigm for intu- itive modeling of neural networks. PFN also actively develops ChainerRL, a flexible and 24 comprehensive deep reinforcement learning library built on top of Chainer. ChainerRL contains high-quality, efficient implementations of deep reinforcement learning algorithms spanning multiple common benchmark tasks and environments. PFN is happy to be a partner of this competition and provide baseline implementations based on ChainerRL en- abling contestants to quickly and easily understand how the environment works, prototype new reinforcement learning algorithms and realize their own solutions for the competition. # 3.2 Resources provided by organizers, including prizes Mentorship. We will facilitate a community forum through a publicly available discord server to enable participants to ask questions, provide feedback, and engage meaningfully with our organizers and advisory board. We hope to foster an active community to col- laborate on these hard problems and will award small prizes to members with the most helpful votes at the end of the first round. Computing Resources. In concert with our efforts to provide open, democratized ac- cess to AI, through our generous sponsor, Microsoft, we will provide 50 large compute grants totaling $40,000 USD for teams that self identify as lacking access to the necessary compute power to participate in the competition. We will also provide groups with the evaluation resources for their experiments in Round 2. We will work with various affinity groups to ensure that selection of recipients for these resources reflects our commitment to enabling the participation of underrepresented groups and competitors from universities without access to large amounts of funding and resources. Travel Grants and Scholarships. The competition organizers are committed to in- creasing the participation of groups traditionally underrepresented in reinforcement learn- ing and, more generally, in machine learning (including, but not limited to: women, LGBTQ individuals, underrepresented racial and ethnic minorities, and individuals with disabilities). To that end, we will offer Inclusion@NeurIPS scholarships/travel grants for Round 1 participants who are traditionally underrepresented at NeurIPS to attend the conference. These individuals will be able to apply online for these grants; their applica- tions will be evaluated by the competition organizers and partner affinity groups. We also plan to provide travel grants to enable all of the top participants from Round 2 to attend our NeurIPS workshop. Prizes. Currently in discussion with sponsors / partners. # 3.3 Support and facilities requested Due to the quality of sponsorships and industry partnerships we have secured for the competition thus far, we only request facility resources. 
We aim to host a NeurIPS 2019 25 Workshop on the competition with approximately 250 seats. We will reserve spots for guest speakers, organizers, Round 2 participants, and Round 1 participants attending NeurIPS. We request poster stands or materials for hanging the posters of the Round 2 participants. Additionally, we will need a projector, podium, and elevated stage so that guest speakers, finalists, and organizers can present and address the workshop attendees. # References [1] Dario Amodei and Danny Hernandez. https://blog.openai.com/ai-and-compute/, May 2018. URL https://blog.openai.com/ai-and-compute/. [2] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. JAIR, 47:253–279, 2013. [3] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016. [4] Gabriel V Cruz Jr, Yunshu Du, and Matthew E Taylor. Pre-training neural net- works with human demonstrations for deep reinforcement learning. arXiv preprint arXiv:1709.04083, 2017. [5] DeepMind. game alphastar-mastering-real-time-strategy-game-starcraft-ii/. [6] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In ICML, pages 49–58, 2016. [7] Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. arXiv preprint arXiv:1709.04905, 2017. [8] Yang Gao, Ji Lin, Fisher Yu, Sergey Levine, Trevor Darrell, et al. Reinforcement learning from imperfect demonstrations. arXiv preprint arXiv:1802.05313, 2018. [9] Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: In Thirty-Second AAAI Combining improvements in deep reinforcement learning. Conference on Artificial Intelligence, 2018. [10] Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. Deep q-learning 26 from demonstrations. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. [11] Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The malmo platform for artificial intelligence experimentation. In IJCAI, pages 4246–4247, 2016. [12 Lukasz Kidzitiski, Sharada P Mohanty, Carmichael F Ong, Jennifer L Hicks, Sean F Carroll, Sergey Levine, Marcel Salathé, and Scott L Delp. Learning to run challenge: Synthesizing physiologically accurate motion using deep reinforcement learning. In The NIPS’17 Competition: Building Intelligent Systems, pages 101-120. Springer, 2018. [13] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. [14] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, pages 1928–1937, 2016. [15] Alex Nichol, Vicki Pfau, Christopher Hesse, Oleg Klimov, and John Schulman. Gotta learn fast: A new benchmark for generalization in rl. arXiv preprint arXiv:1804.03720, 2018. 
[16] Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in minecraft. arXiv preprint arXiv:1605.09128, 2016. [17] OpenAI. Universe, Mar 2017. URL https://blog.openai.com/universe/. [18] OpenAI. Openai five, Sep 2018. URL https://blog.openai.com/openai-five/. [19] Ameya Panse, Tushar Madheshia, Anand Sriraman, and Shirish Karande. Imitation learning on atari using non-expert human annotations. 2018. [20] Diego Perez-Liebana, Katja Hofmann, Sharada Prasanna Mohanty, Noburu Kuno, Andre Kramer, Sam Devlin, Raluca D Gaina, and Daniel Ionita. The Multi- Agent Reinforcement Learning in Malm ¨O (MARL ¨O) Competition. arXiv preprint arXiv:1901.08129, 2019. [21] Christoph Salge, Michael Cerny Green, Rodgrigo Canaan, and Julian Togelius. Gen- erative Design in Minecraft (GDMC): Settlement Generation Competition. In Pro- ceedings of the 13th International Conference on the Foundations of Digital Games, page 49. ACM, 2018. 27 [22] Tianmin Shu, Caiming Xiong, and Richard Socher. Hierarchical and interpretable skill acquisition in multi-task reinforcement learning. arXiv preprint arXiv:1712.07294, 2017. [23] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mas- tering the game of go without human knowledge. Nature, 550(7676):354, 2017. [24] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in minecraft. In Thirty-First AAAI, 2017. 28
{ "id": "1804.03720" }
1904.09751
The Curious Case of Neural Text Degeneration
Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators. The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. In this paper, we reveal surprising distributional differences between human text and machine text. In addition, we find that decoding strategies alone can dramatically affect the quality of machine text, even when generated from exactly the same neural language model. Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation. By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better demonstrates the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence.
http://arxiv.org/pdf/1904.09751
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi
cs.CL
Published in ICLR 2020
null
cs.CL
20190422
20200214
Published as a conference paper at ICLR 2020

# THE CURIOUS CASE OF NEURAL TEXT DEGENERATION

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi

†Paul G. Allen School of Computer Science & Engineering, University of Washington ‡Allen Institute for Artificial Intelligence §Department of Computer Science, University of Cape Town {ahai,dul2,mbforbes,yejin}@cs.washington.edu, [email protected]

# ABSTRACT

Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration — output text that is bland, incoherent, or gets stuck in repetitive loops.

To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models than previous decoding strategies. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass.

To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is currently the best available decoding strategy for generating long-form text that is both high-quality — as measured by human evaluation — and as diverse as human-written text.

Context: In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

Beam Search, b=32: "The study, published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), was conducted by researchers from the Universidad Nacional Autonoma de México (UNAM) and the Universidad Nacional Autonoma de México (UNAM/Universidad Nacional Autónoma de México/Universidad Nacional Autonoma de México/Universidad Nacional Autonoma de México/Universidad Nacional Autonoma de ...

Pure Sampling: They were cattle called Bolivian Cavalleros; they live in a remote desert uninterrupted by town, and they speak huge, beautiful, paradisiacal Bolivian linguistic thing. They say, 'Lunch, marge.' They don't tell what the lunch is," director Professor Chuperas Omwell told Sky News. "They've only been talking to scientists, like we're being interviewed by TV reporters. We don't even stick around to be interviewed by TV reporters. Maybe that's how they figured out that they're cosplaying as the Bolivian Cavalleros."

Figure 1: Even with substantial human context and the powerful GPT-2 Large language model, Beam Search (size 32) leads to degenerate repetition (highlighted in blue) while pure sampling leads to incoherent gibberish (highlighted in red). When b ≥ 64, both GPT-2 Large and XL (774M and 1542M parameters, respectively) prefer to stop generating immediately after the given context.
# 1 INTRODUCTION

On February 14th 2019, OpenAI surprised the scientific community with an impressively high-quality article about Ovid's Unicorn, written by GPT-2.¹ Notably, the top-quality generations obtained from the model rely on randomness in the decoding method, in particular through top-k sampling that samples the next word from the top k most probable choices (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019), instead of aiming to decode text that maximizes likelihood.

¹https://openai.com/blog/better-language-models/

In fact, decoding strategies that optimize for output with high probability, such as beam search, lead to text that is incredibly degenerate, even when using state-of-the-art models such as GPT-2 Large, as shown in Figure 1. This may seem counter-intuitive, as one would expect that good models would assign higher probability to more human-like, grammatical text. Indeed, language models do generally assign high scores to well-formed text, yet the highest scores for longer texts are often generic, repetitive, and awkward. Figure 2 exposes how different the distribution of probabilities assigned to beam search decoded text and naturally occurring text really are.

Perhaps equally surprising is the right side of Figure 1, which shows that pure sampling — sampling directly from the probabilities predicted by the model — results in text that is incoherent and almost unrelated to the context. Why is text produced by pure sampling so degenerate? In this work we show that the "unreliable tail" is to blame. This unreliable tail is composed of tens of thousands of candidate tokens with relatively low probability that are over-represented in the aggregate.

To overcome these issues we introduce Nucleus Sampling (§3.1). The key intuition of Nucleus Sampling is that the vast majority of probability mass at each time step is concentrated in the nucleus, a small subset of the vocabulary that tends to range between one and a thousand candidates. Instead of relying on a fixed top-k, or using a temperature parameter to control the shape of the distribution without sufficiently suppressing the unreliable tail, we propose sampling from the top-p portion of the probability mass, expanding and contracting the candidate pool dynamically.

In order to compare current methods to Nucleus Sampling, we compare the distributional properties of generated text to the reference distribution along various axes, such as the likelihood of veering into repetition and the perplexity of generated text. The latter reveals that text generated by maximization or top-k sampling is too probable, indicating a lack of diversity and divergence in vocabulary usage from the human distribution. On the other hand, pure sampling produces text that is significantly less likely than the gold, corresponding to lower generation quality.

Vocabulary usage and Self-BLEU (Zhu et al., 2018) statistics reveal that high values of k are needed to make top-k sampling match human statistics. Yet, generations based on high values of k often have high variance in likelihood, hinting at qualitatively observable incoherency issues.
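Self-BLEU is the diversity statistic referenced in the preceding paragraph; a rough sketch of how it can be computed is shown below. The whitespace tokenization and smoothing choice are assumptions for illustration and do not necessarily match the setup of Zhu et al. (2018) or of this paper; higher values indicate that the generations repeat each other.

```python
# Sketch of Self-BLEU: each generation is scored with BLEU against all the others.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generations, max_n=4):
    smooth = SmoothingFunction().method1
    weights = tuple(1.0 / max_n for _ in range(max_n))
    tokenized = [g.split() for g in generations]  # simple whitespace tokenization
    scores = []
    for i, hypothesis in enumerate(tokenized):
        references = tokenized[:i] + tokenized[i + 1:]
        scores.append(sentence_bleu(references, hypothesis,
                                    weights=weights, smoothing_function=smooth))
    return sum(scores) / len(scores)

print(self_bleu(["the cat sat on the mat",
                 "the cat sat on a mat",
                 "a dog ran in the park"]))
```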
Nucleus Sampling can easily match reference perplexity through tuning the value of p, avoiding the incoherence caused by setting k high enough to match distributional statistics.

Finally, we perform Human Unified with Statistical Evaluation (HUSE; Hashimoto et al., 2019) to jointly assess the overall quality and diversity of the decoding strategies, which cannot be captured using either human or automatic evaluation alone. The HUSE evaluation demonstrates that Nucleus Sampling is the best overall decoding strategy. We include generated examples for qualitative analysis – see Figure 3 for a representative example, and further examples in the appendix.²

[Figure 2 plots: "Beam Search Text is Less Surprising", per-token probabilities over timesteps for a Beam Search continuation and for the human continuation of the same context, shown above the following two excerpts.]

Beam Search: ...to provide an overview of the current state-of-the-art in the field of computer vision and machine learning, and to provide an overview of the current state-of-the-art in the field of computer vision and machine learning, and to provide an overview of the current state-of-the-art in the field of computer vision and machine learning, and to provide an overview of the current state-of-the-art in the field of computer vision and machine learning...

Human: ...which grant increased life span and three years warranty. The Antec HCG series consists of five models with capacities spanning from 400W to 900W. Here we should note that we have already tested the HCG-620 in a previous review and were quite satisfied with its performance. In today's review we will rigorously test the Antec HCG-520, which as its model number implies, has 520W capacity and contrary to Antec's strong beliefs in multi-rail PSUs is equipped...

Figure 2: The probability assigned to tokens generated by Beam Search and humans, given the same context. Note the increased variance that characterizes human text, in contrast with the endless repetition of text decoded by Beam Search.

²Code and all generations are available at https://github.com/ari-holtzman/degen

# 2 BACKGROUND

# 2.1 TEXT GENERATION DECODING STRATEGIES

A number of recent works have alluded to the disadvantages of generation by maximization, which tend to generate output with high grammaticality but low diversity (Kulikov et al., 2019; Holtzman et al., 2018; Fan et al., 2018). Generative Adversarial Networks (GANs) have been a prominent research direction (Yu et al., 2017; Xu et al., 2018), but recent work has shown that when quality and diversity are considered jointly, GAN-generated text fails to outperform generations from language models (Caccia et al., 2018; Tevet et al., 2019; Semeniuta et al., 2018). Work on neural dialog systems has proposed methods for diverse beam search, using a task-specific diversity scoring function or constraining beam hypotheses to be sufficiently different (Li et al., 2016a; Vijayakumar et al., 2018; Kulikov et al., 2019; Pal et al., 2006). While such utility functions encourage desirable properties in generations, they do not remove the need to choose an appropriate decoding strategy, and we believe that Nucleus Sampling will have complementary advantages in such approaches. Finally, Welleck et al. (2020) begin to address the problem of neural text degeneration through an "unlikelihood loss", which decreases training loss on repeated tokens and thus implicitly reduces gradients on frequent tokens as well.
Our focus is on exposing neural text degeneration and providing a decoding solution that can be used with arbitrary models, but future work will likely combine training-time and inference-time solutions.

# 2.2 OPEN-ENDED VS DIRECTED GENERATION

Many text generation tasks are defined through (input, output) pairs, such that the output is a constrained transformation of the input. Example applications include machine translation (Bahdanau et al., 2015), data-to-text generation (Wiseman et al., 2017), and summarization (Nallapati et al., 2016). We refer to these tasks as directed generation. Typically encoder-decoder architectures are used, often with an attention mechanism (Bahdanau et al., 2015; Luong et al., 2015) or using attention-based architectures such as the Transformer (Vaswani et al., 2017). Generation is usually performed using beam search; since output is tightly scoped by the input, repetition and genericness are not as problematic. Still, similar issues have been reported when using large beam sizes (Koehn & Knowles, 2017) and more recently with exact inference (Stahlberg & Byrne, 2019), a counter-intuitive observation since more comprehensive search helps maximize probability.

Open-ended generation, which includes conditional story generation and contextual text continuation (as in Figure 1), has recently become a promising research direction due to significant advances in neural language models (Clark et al., 2018; Holtzman et al., 2018; Fan et al., 2018; Peng et al., 2018; Radford et al., 2019). While the input context restricts the space of acceptable output generations, there is a considerable degree of freedom in what can plausibly come next, unlike in directed generation settings. Our work addresses the challenges faced by neural text generation with this increased level of freedom, but we note that some tasks, such as goal-oriented dialog, may fall somewhere in between open-ended and directed generation.

# 3 LANGUAGE MODEL DECODING

Given an input text passage as context, the task of open-ended generation is to generate text that forms a coherent continuation from the given context. More formally, given a sequence of m tokens x1 . . . xm as context, the task is to generate the next n continuation tokens to obtain the completed sequence x1 . . . xm+n. We assume that models compute P(x1:m+n) using the common left-to-right decomposition of the text probability,

P(x_{1:m+n}) = \prod_{i=1}^{m+n} P(x_i \mid x_1 \ldots x_{i-1}),    (1)

which is used to generate the continuation token-by-token using a particular decoding strategy.

Maximization-based decoding. The most commonly used decoding objective, in particular for directed generation, is maximization-based decoding. Assuming that the model assigns higher probability to higher quality text, these decoding strategies search for the continuation with the highest likelihood. Since finding the optimum argmax sequence from recurrent neural language models or Transformers is not tractable (Chen et al., 2018), common practice is to use beam search (Li et al., 2016b; Shen et al., 2017; Wiseman et al., 2017). However, several recent studies on open-ended generation have reported that maximization-based decoding does not lead to high quality text (Fan et al., 2018; Holtzman et al., 2018).
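To make the decomposition in equation (1) and the maximization objective concrete, the following toy sketch scores a sequence with the left-to-right factorization and decodes greedily (beam search with beam size 1). The tiny vocabulary and the stand-in `next_token_probs` model are illustrative assumptions, not part of the paper's experimental setup.

```python
import numpy as np

VOCAB = ["<eos>", "the", "whale", "stranded", "coast", "on"]

def next_token_probs(prefix):
    # Toy stand-in for P(x_i | x_1 ... x_{i-1}): an arbitrary conditional
    # distribution derived from the prefix (fixed within a run).
    seed = abs(hash(tuple(prefix))) % (2 ** 32)
    logits = np.random.default_rng(seed).normal(size=len(VOCAB))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def sequence_log_prob(tokens):
    # log P(x_{1:n}) = sum_i log P(x_i | x_{<i}), the decomposition in equation (1)
    return sum(np.log(next_token_probs(tokens[:i])[VOCAB.index(t)])
               for i, t in enumerate(tokens))

def greedy_decode(context, max_len=8):
    # Maximization-based decoding with beam size 1: always pick the argmax token.
    tokens = list(context)
    for _ in range(max_len):
        tokens.append(VOCAB[int(np.argmax(next_token_probs(tokens)))])
        if tokens[-1] == "<eos>":
            break
    return tokens

continuation = greedy_decode(["the", "whale"])
print(continuation, sequence_log_prob(continuation))
```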
[Figure 3 shows continuations of the initial sentence (from WebText) "An unprecedented number of mostly young whales have become stranded on the West Australian coast since 2008." under several decoding strategies:]

Beam Search, b=16: The number of stranded whales has increased by more than 50 per cent in the past year, with the number of stranded whales on the West Australian coast increasing by more than 50 per cent in the past year. The number of whales stranded on the West Australian coast has increased by more than 50 per cent in the past year, with the number of stranded whales on the West Australian coast increasing by more than 50 per cent in the past year.

Pure Sampling: The Australian Food Safety Authority has warned Australia's beaches may be revitalised this year because healthy seabirds and seals have been on the move. More than 50,000 seabirds, sea mammals and seahorses have been swept into the sea by the Holden ZZ CS118 and Adelaide Airport CS300 from 2013. A major white-bat and umidauda migration across Australia is under way in Australia for the first time, with numbers reaching an estimated 50,000.

Sampling, t=0.9: Last week's intense storms and a series of powerful cyclones have been officially blamed for the deaths of at least nine large fin whales near Whitsundays - the largest loss of any species globally. The fin whales: packed in the belly of one killer whale thrashing madly in fear as another tries to bring it to safety. When the colossal animal breached the waters of Whitsundays, he'd been seen tagged for a decade.

Pumping Station #3 shut down due to construction damage Find more at: www.abc.net.au/environment/species-worry/ in-the-top-10-killer-whale-catastrophes-in-history.html "In the top 10 killer whale catastrophes in history: 1) 1986: Up to 12 orcas struck by lightning; many drowned and many more badly injured.

The whale's fate was confirmed late last week when the animal was found by fishermen off the coast of Bundaberg. Experts believe the whale was struck by a fishing vessel off the coast of Bundaberg, and died after being sucked into the ocean. The whale's fate was confirmed late last week when the animal was found by fishermen off the coast of Bundaberg.

Nucleus, p=0.95: There has been an unprecedented number of calves caught in the nets of whaling stations that operate in WA. Pilot whales continue to migrate to feeding grounds to feed their calves. They are now vulnerable due to the decline of wild populations; they are restricted to one breeding site each year. Image copyright Yoon Bo Kim But, with sharp decline in wild populations the size of the Petrels are shrinking and dwindling population means there will only be room for a few new fowl.

WebText: Poor nutrition has led to a rise in the number of stranded humpback whales on the West Australian coast, veterinary researchers have said. Carly Holyoake, from Murdoch University, at the Australian Veterinary Association's annual conference in Perth on Wednesday, said an unprecedented number of mostly young whales had become stranded on the coast since 2008.

Figure 3: Example generations continuing an initial sentence. Maximization and top-k truncation methods lead to copious repetition (highlighted in blue), while sampling with and without temperature tends to lead to incoherence (highlighted in red). Nucleus Sampling largely avoids both issues.

# 3.1 NUCLEUS SAMPLING

We propose a new stochastic decoding method: Nucleus Sampling. The key idea is to use the shape of the probability distribution to determine the set of tokens to be sampled from. Given a distribution P(x | x_{1:i-1}), we define its top-p vocabulary V^{(p)} ⊂ V as the smallest set such that

\sum_{x \in V^{(p)}} P(x \mid x_{1:i-1}) \ge p.    (2)
Let p' = \sum_{x \in V^{(p)}} P(x \mid x_{1:i-1}). The original distribution is re-scaled to a new distribution, from which the next word is sampled:

P'(x \mid x_{1:i-1}) =
\begin{cases}
P(x \mid x_{1:i-1}) / p' & \text{if } x \in V^{(p)} \\
0 & \text{otherwise.}
\end{cases}   (3)

In practice this means selecting the highest probability tokens whose cumulative probability mass exceeds the pre-chosen threshold p. The size of the sampling set will adjust dynamically based on the shape of the probability distribution at each time step. For high values of p, this is a small subset of the vocabulary that takes up the vast majority of the probability mass — the nucleus.

[Figure 4 here: per-step token probabilities for the phrase "I don't know." repeated 200 times.]

Figure 4: The probability of a repeated phrase increases with each repetition, creating a positive feedback loop. We found this effect to hold for the vast majority of phrases we tested, regardless of phrase length or if the phrases were sampled randomly rather than taken from human text.

[Figure 5 here: two example next-token distributions over partial human sentences, one labeled Flat Distribution and one labeled Peaked Distribution.]

Figure 5: The probability mass assigned to partial human sentences. Flat distributions lead to many moderately probable tokens, while peaked distributions concentrate most probability mass into just a few tokens. The presence of flat distributions makes the use of a small k in top-k sampling problematic, while the presence of peaked distributions makes large k's problematic.

3.2 TOP-k SAMPLING

Top-k sampling has recently become a popular alternative sampling procedure (Fan et al., 2018; Holtzman et al., 2018; Radford et al., 2019). Nucleus Sampling and top-k both sample from truncated Neural LM distributions, differing only in the strategy of where to truncate. Choosing where to truncate can be interpreted as determining the generative model's trustworthy prediction zone.

At each time step, the top k possible next tokens are sampled from according to their relative probabilities. Formally, given a distribution P(x | x_{1:i-1}), we define its top-k vocabulary V^{(k)} ⊂ V as the set of size k which maximizes \sum_{x \in V^{(k)}} P(x \mid x_{1:i-1}). Let p' = \sum_{x \in V^{(k)}} P(x \mid x_{1:i-1}). The distribution is then re-scaled as in Equation 3, and sampling is performed based on that distribution. Note that the scaling factor p' can vary wildly at each time step, in contrast to Nucleus Sampling.

Difficulty in choosing a suitable value of k. While top-k sampling leads to considerably higher quality text than either beam search or sampling from the full distribution, the use of a constant k is sub-optimal across varying contexts. As illustrated on the left of Figure 5, in some contexts the head of the next word distribution can be flat across tens or hundreds of reasonable options (e.g. nouns or verbs in generic contexts), while in other contexts most of the probability mass is concentrated in one or a small number of tokens, as on the right of the figure. Therefore if k is small, in some contexts there is a risk of generating bland or generic text, while if k is large the top-k vocabulary will include inappropriate candidates which will have their probability of being sampled increased by the renormalization. Under Nucleus Sampling, the number of candidates considered rises and falls dynamically, corresponding to the changes in the model's confidence region over the vocabulary, which top-k sampling fails to capture for any one choice of k.
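To make the two truncation rules concrete, the sketch below filters a single next-token distribution with each of them before sampling. This is a minimal illustration in PyTorch under our own naming and with toy logits; it is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def top_p_filter(logits: torch.Tensor, p: float) -> torch.Tensor:
    """Nucleus (top-p) truncation: keep the smallest prefix of the sorted
    distribution whose cumulative probability reaches p, then renormalize."""
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep a token if the mass accumulated *before* it is still below p;
    # this always retains at least the single most probable token.
    keep = cumulative - sorted_probs < p
    filtered = torch.zeros_like(probs)
    filtered[sorted_idx[keep]] = sorted_probs[keep]
    return filtered / filtered.sum()  # divide by p', as in Equation 3

def top_k_filter(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Top-k truncation: keep the k most probable tokens, then renormalize."""
    probs = F.softmax(logits, dim=-1)
    topk_probs, topk_idx = torch.topk(probs, k)
    filtered = torch.zeros_like(probs)
    filtered[topk_idx] = topk_probs
    return filtered / filtered.sum()

# Toy example: a peaked distribution over a 10-token vocabulary.
logits = torch.tensor([5.0, 3.0, 2.5, 0.5, 0.1, 0.0, -1.0, -2.0, -3.0, -4.0])
nucleus_dist = top_p_filter(logits, p=0.95)
topk_dist = top_k_filter(logits, k=4)
next_from_nucleus = torch.multinomial(nucleus_dist, num_samples=1)
next_from_topk = torch.multinomial(topk_dist, num_samples=1)
print(int((nucleus_dist > 0).sum()), int((topk_dist > 0).sum()))
```

On a peaked distribution the nucleus may contain only two or three tokens while top-k always keeps exactly k; on a flat distribution the nucleus grows, which is the adaptivity described above.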
3.3 SAMPLING WITH TEMPERATURE

Another common approach to sampling-based generation is to shape a probability distribution through temperature (Ackley et al., 1985). Temperature sampling has been applied widely to text generation (Ficler & Goldberg, 2017; Fan et al., 2018; Caccia et al., 2018). Given the logits u_{1:|V|} and temperature t, the softmax is re-estimated as

p(x = V_l \mid x_{1:i-1}) = \frac{\exp(u_l / t)}{\sum_{l'} \exp(u_{l'} / t)}.   (4)

Setting t ∈ [0, 1) skews the distribution towards high probability events, which implicitly lowers the mass in the tail distribution. Low temperature sampling has also been used to partially alleviate the issues of top-k sampling discussed above, by shaping the distribution before top-k sampling (Radford et al., 2018; Fan et al., 2018). However, recent analysis has shown that, while lowering the temperature improves generation quality, it comes at the cost of decreasing diversity (Caccia et al., 2018; Hashimoto et al., 2019).

4 LIKELIHOOD EVALUATION

4.1 EXPERIMENTAL SETUP

While many neural network architectures have been proposed for language modeling, including LSTMs (Sundermeyer et al., 2012) and convolutional networks (Dauphin et al., 2017), the Transformer architecture (Vaswani et al., 2017) has been the most successful in the extremely large-scale training setups in recent literature (Radford et al., 2018; 2019). In this study we use the Generatively Pre-trained Transformer, version 2 (GPT2; Radford et al., 2019), which was trained on WebText, a 40GB collection of text scraped from the web (available at https://github.com/openai/gpt-2-output-dataset). We perform experiments using the Large model (762M parameters). Our analysis is based on generating 5,000 text passages, which end upon reaching an end-of-document token or a maximum length of 200 tokens. Texts are generated conditionally, conditioned on the initial paragraph (restricted to 1-40 tokens) of documents in the held-out portion of WebText, except where otherwise mentioned.

4.2 PERPLEXITY

Our first evaluation is to compute the perplexity of generated text under various decoding strategies, according to the model that is being generated from. We compare these perplexities against that of the gold text (Figure 6). Importantly, we argue that the optimal generation strategy should produce text whose perplexity is close to that of the gold text: even though the model has the ability to generate text that has lower perplexity (higher probability), such text tends to have low diversity and to get stuck in repetition loops, as shown in §5 and illustrated in Figure 4. We see that the perplexity of text obtained from pure sampling is worse than the perplexity of the gold text. This indicates that the model is confusing itself: sampling too many unlikely tokens and creating a context that makes it difficult to recover the human distribution of text, as in Figure 1. Yet, setting the temperature lower creates diversity and repetition issues, as we shall see in §5. Even with our relatively fine-grained parameter sweep, Nucleus Sampling obtains the perplexity closest to that of human text, as shown in Table 1.
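A minimal sketch of this conditional perplexity computation is shown below. It assumes the Hugging Face transformers causal-LM interface; the "gpt2" checkpoint and the example strings are stand-ins for the GPT2-Large setup rather than the authors' exact evaluation code.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # stand-in for GPT2-Large
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def continuation_perplexity(context: str, continuation: str) -> float:
    """Perplexity of `continuation` given `context`, under the model itself."""
    ctx_ids = tokenizer.encode(context, return_tensors="pt")
    cont_ids = tokenizer.encode(continuation, return_tensors="pt")
    input_ids = torch.cat([ctx_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits             # [1, seq_len, |V|]
    log_probs = torch.log_softmax(logits, dim=-1)
    offset = ctx_ids.size(1)
    nll = 0.0
    # Each continuation token is predicted from the position just before it.
    for j in range(cont_ids.size(1)):
        token = input_ids[0, offset + j]
        nll -= log_probs[0, offset + j - 1, token].item()
    return math.exp(nll / cont_ids.size(1))

print(continuation_perplexity(
    "An unprecedented number of mostly young whales",
    " have become stranded on the West Australian coast."))
```

Averaging this quantity over generated continuations, and comparing it against the value obtained on the gold continuations, reproduces the comparison made in this section.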
| Method | Perplexity | Self-BLEU4 | Zipf Coefficient | Repetition % | HUSE |
|---|---|---|---|---|---|
| Human | 12.38 | 0.31 | 0.93 | 0.28 | - |
| Greedy | 1.50 | 0.50 | 1.00 | 73.66 | - |
| Beam, b=16 | 1.48 | 0.44 | 0.94 | 28.94 | - |
| Stochastic Beam, b=16 | 19.20 | 0.28 | 0.91 | 0.32 | - |
| Pure Sampling | 22.73 | 0.28 | 0.93 | 0.22 | 0.67 |
| Sampling, t=0.9 | 10.25 | 0.35 | 0.96 | 0.66 | 0.79 |
| Top-k=40 | 6.88 | 0.39 | 0.96 | 0.78 | 0.19 |
| Top-k=640 | 13.82 | 0.32 | 0.96 | 0.28 | 0.94 |
| Top-k=40, t=0.7 | 3.48 | 0.44 | 1.00 | 8.86 | 0.08 |
| Nucleus p=0.95 | 13.13 | 0.32 | 0.95 | 0.36 | 0.97 |

Table 1: Main results for comparing all decoding methods with selected parameters of each method. The numbers closest to human scores are in bold except for HUSE (Hashimoto et al., 2019), a combined human and statistical evaluation, where the highest (best) value is bolded. For Top-k and Nucleus Sampling, HUSE is computed with interpolation rather than truncation (see §6.1).

[Figure 6 here: conditional PPL as a function of decoding parameter, with panels for Beam Search (beam width), Sampling (temperature), Top-k at t=1.0 and t=0.7 (k), and Nucleus Sampling (p); the human level is marked in each panel.]

Figure 6: Perplexities of generations from various decoding methods. Note that beam search has unnaturally low perplexities. A similar effect is seen using a temperature of 0.7 with top-k as in both Radford et al. (2019) and Fan et al. (2018). Sampling, Top-k, and Nucleus can all be calibrated to human perplexities, but the first two face coherency issues when their parameters are set this high.

4.3 NATURAL LANGUAGE DOES NOT MAXIMIZE PROBABILITY

One might wonder if the issue with maximization is a search error, i.e., that there are higher quality sentences to which the model assigns higher probability than to the decoded ones, and beam search has simply failed to find them. Yet Figures 2 & 6 show that the per-token probability of natural text is, on average, much lower than that of text generated by beam search. Natural language rarely remains in a high probability zone for multiple consecutive time steps, instead veering into lower-probability but more informative tokens. Nor does natural language tend to fall into repetition loops, even though the model tends to assign high probability to this, as seen in Figure 4.

Why is human-written text not the most probable text? We conjecture that this is an intrinsic property of human language. Language models that assign probabilities one word at a time without a global model of the text will have trouble capturing this effect. Grice's Maxims of Communication (Grice, 1975) show that people optimize against stating the obvious. Thus, making every word as predictable as possible will be disfavored. This makes solving the problem simply by training larger models or improving neural architectures using standard per-word learning objectives unlikely: such models are forced to favor the lowest common denominator, rather than informative language.

5 DISTRIBUTIONAL STATISTICAL EVALUATION

5.1 ZIPF DISTRIBUTION ANALYSIS

In order to compare generations to the reference text, we begin by analyzing their use of vocabulary. Zipf's law suggests that there is an exponential relationship between the rank of a word and its frequency in text. The Zipfian coefficient s can be used to compare the distribution in a given text
to a theoretically perfect exponential curve, where s = 1 (Piantadosi, 2014). Figure 7 shows the vocabulary distributions along with estimated Zipf coefficients for selected parameters of different decoding methods. As expected, pure sampling is the closest to the human distribution, followed by Nucleus Sampling. The visualization of the distribution shows that pure sampling slightly overestimates the use of rare words, likely one reason why pure sampling also has higher perplexity than human text. Furthermore, lower temperature sampling avoids sampling these rare words from the tail, which is why it has been used in some recent work (Fan et al., 2018; Radford et al., 2019).

[Figure 7 here: log-frequency vs. rank plot of vocabulary distributions; the legend reports Zipf coefficients of s=0.926 for t=1.0, s=0.967 for b=16, s=0.934 for gold, s=1.000 for k=40/t=0.7, s=0.949 for p=0.95, and s=0.958 for k=640.]

Figure 7: A rank-frequency plot of the distributional differences between n-gram frequencies of human and machine text. Sampling and Nucleus Sampling are by far the closest to the human distribution, while Beam Search clearly follows a very different distribution than natural language.

[Figure 8 here: Self-BLEU4 and Self-BLEU5 of generations over 5000 documents, for the stochastic decoding methods across their parameter ranges.]

Figure 8: Self-BLEU calculated on the unconditional generations produced by stochastic decoding methods; lower Self-BLEU scores imply higher diversity. Horizontal blue and orange lines represent human self-BLEU scores. Note how common values of t ∈ [0.5, 1] and k ∈ [1, 100] result in high self-similarity, whereas "normal" values of p ∈ [0.9, 1) closely match the human distribution of text.

5.2 SELF-BLEU

We follow previous work and compute Self-BLEU (Zhu et al., 2018) as a metric of diversity. Self-BLEU is calculated by computing the BLEU score of each generated document using all other generations in the evaluation set as references. Due to the expense of computing such an operation, we sample 1000 generations, each of which is compared with all 4999 other generations as references. A lower Self-BLEU score implies higher diversity. Figure 8 shows that Self-BLEU results largely follow that of the Zipfian distribution analysis as a diversity measure. It is worth noting that very high values of k and t are needed to get close to the reference distribution, though these result in unnaturally high perplexity (§4).
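Both distributional checks are straightforward to reproduce; the sketch below estimates a Zipf coefficient with a least-squares fit in log-log space and computes Self-BLEU with NLTK. It is a simplified stand-in for the evaluation described here, and the toy `generations` list is ours.

```python
from collections import Counter
import numpy as np
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def zipf_coefficient(texts, top_n=5000):
    """Fit log(frequency) = -s * log(rank) + c over the most frequent words."""
    counts = Counter(word for text in texts for word in text.split())
    freqs = np.array(sorted(counts.values(), reverse=True)[:top_n], dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
    return -slope  # s near 1 matches a theoretically perfect Zipfian curve

def self_bleu4(texts):
    """Average BLEU-4 of each text against all other texts as references."""
    tokenized = [text.split() for text in texts]
    smooth = SmoothingFunction().method1
    scores = []
    for i, hypothesis in enumerate(tokenized):
        references = tokenized[:i] + tokenized[i + 1:]
        scores.append(sentence_bleu(references, hypothesis,
                                    weights=(0.25, 0.25, 0.25, 0.25),
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)

generations = ["the whale was found off the coast of bundaberg",
               "a storm was blamed for the deaths of nine whales",
               "the whale was found off the coast last week"]
print(zipf_coefficient(generations), self_bleu4(generations))
```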
[Figure 9 here: percentage of generations ending in a repetition loop, as a function of p in Nucleus Sampling and t in temperature sampling (x-axis), with additional points for greedy decoding, beam search at several beam widths b, and top-k sampling at several values of k.]

Figure 9: We visualize how often different decoding methods get "stuck" in loops within the first 200 tokens. A phrase (minimum length 2) is considered a repetition when it repeats at least three times at the end of the generation. We label points with their parameter values except for t and p which follow the x-axis. Values of k greater than 100 are rarely used in practice and values of p are usually in [0.9, 1); therefore Nucleus Sampling is far closer to the human distribution in its usual parameter range. Sampling with temperatures lower than 0.9 severely increases repetition. Finally, although beam search becomes less repetitive according to this metric as beam width increases, this is largely because average length gets shorter as b increases (see Appendix A).

5.3 REPETITION

One attribute of text quality that we can quantify is repetition. Figure 9 shows that Nucleus Sampling and top-k sampling have the least repetition for reasonable parameter ranges. Generations from temperature sampling have more repetition unless very high temperatures are used, which we have shown negatively affects coherence (as measured by high perplexity). Further, all stochastic methods face repetition issues when their tuning parameters are set too low, which tends to over-truncate, mimicking greedy search. Therefore we conclude that only Nucleus Sampling satisfies all the distributional criteria for desirable generations.

6 HUMAN EVALUATION

6.1 HUMAN UNIFIED WITH STATISTICAL EVALUATION (HUSE)

Statistical evaluations are unable to measure the coherence of generated text properly. While the metrics in previous sections gave us vital insights into the different decoding methods we compare, human evaluation is still required to get a full measure of the quality of the generated text. However, pure human evaluation does not take into account the diversity of the generated text; therefore we use HUSE (Hashimoto et al., 2019) to combine human and statistical evaluation. HUSE is computed by training a discriminator to distinguish between text drawn from the human and model distributions, based on only two features: the probability assigned by the language model, and human judgements of the typicality of generations. Text that is close to the human distribution in terms of quality and diversity should perform well on both likelihood evaluation and human judgements.

As explored in the previous sections, the current best-performing decoding methods rely on truncation of the probability distribution, which yields a probability of 0 for the vast majority of potential tokens. Initial exploration of applying HUSE directly led to top-k and Nucleus Sampling receiving scores of nearly 0 due to truncation, despite humans favoring these methods. As a proxy, when generating the text used to compute HUSE, we interpolate (with mass 0.1) the original probability distribution with the top-k and Nucleus Sampling distribution, smoothing the truncated distribution.

For each decoding algorithm we annotate 200 generations for typicality, with each generation receiving 20 annotations from 20 different annotators. This results in a total of 4000 annotations per decoding scheme. We use a KNN classifier to compute HUSE, as in the original paper, with k = 13 neighbors, which we found led to the highest accuracy in discrimination. The results in Table 1 show that Nucleus Sampling obtains the highest HUSE score, with Top-k sampling performing second best.

6.2 QUALITATIVE ANALYSIS

Figure 3 shows representative example generations. Unsurprisingly, beam search gets stuck in a repetition loop it cannot escape. Of the stochastic decoding schemes, the output of full sampling is clearly the hardest to understand, even inventing a new word "umidauda", apparently a species of bird. The generation produced by Nucleus Sampling isn't perfect – the model appears to confuse whales with birds, and begins writing about those instead.
Yet, top-k sampling immediately veers off into an unrelated event. When top-k sampling is combined with a temperature of 0.7, as is commonly done (Radford et al., 2019; Fan et al., 2018), the output devolves into repetition, exhibiting the classic issues of low-temperature decoding. More generations are available in Appendix B. # 7 CONCLUSION This paper provided a deep analysis into the properties of the most common decoding methods for open-ended language generation. We have shown that likelihood maximizing decoding causes repe- tition and overly generic language usage, while sampling methods without truncation risk sampling from the low-confidence tail of a model’s predicted distribution. Further, we proposed Nucleus Sam- pling as a solution that captures the region of confidence of language models effectively. In future work, we wish to dynamically characterize this region of confidence and include a more semantic utility function to guide the decoding process. # ACKNOWLEDGMENTS This research was supported in part by NSF (IIS-1524371), the National Science Foundation Gradu- ate Research Fellowship under Grant No. DGE1256082, DARPA CwC through ARO (W911NF15- 1- 0543), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), the South African Centre for Artificial Intelligence Research, and the Allen Institute for AI. # REFERENCES David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive science, 9(1):147–169, 1985. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. Proceedings of the 2015 International Conference on Learning Representations, 2015. Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Char- In Critiquing and Correcting Trends in Machine Learning: lin. Language gans falling short. NeurIPS 2018 Workshop, 2018. URL http://arxiv.org/abs/1811.02549. Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. Recurrent neu- ral networks as weighted language recognizers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2261–2271, New Orleans, Louisiana, June 2018. Elizabeth Clark, Yangfeng Ji, and Noah A. Smith. Neural text generation in stories using entity rep- resentations as context. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2250–2260, New Orleans, Louisiana, June 2018. Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learn- ing, pp. 933–941, 2017. 10 Published as a conference paper at ICLR 2020 Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 889–898, 2018. Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pp. 94–104, 2017. H Paul Grice. Logic and conversation. In P Cole and J L Morgan (eds.), Speech Acts, volume 3 of Syntax and Semantics, pp. 41–58. Academic Press, 1975. Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 
Unifying human and statistical evaluation for natural language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. In Proceedings of the Association for Computational Linguistics, 2018. Philipp Koehn and Rebecca Knowles. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pp. 28–39, 2017. Ilya Kulikov, Alexander H Miller, Kyunghyun Cho, and Jason Weston. Importance of search and evaluation strategies in neural dialogue modeling. International Conference on Natural Language Generation, 2019. Jiwei Li, Will Monroe, and Dan Jurafsky. A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562, 2016a. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. Deep rein- forcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1192–1202, 2016b. Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based In Proceedings of the 2015 Conference on Empirical Methods in neural machine translation. Natural Language Processing, pp. 1412–1421, 2015. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. Abstractive In Proceedings of The 20th text summarization using sequence-to-sequence rnns and beyond. SIGNLL Conference on Computational Natural Language Learning, pp. 280–290, 2016. Chris Pal, Charles Sutton, and Andrew McCallum. Sparse forward-backward using minimum diver- gence beams for fast training of conditional random fields. In 2006 IEEE International Confer- ence on Acoustics Speech and Signal Processing Proceedings, volume 5, May 2006. Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. Towards controllable story In Proceedings of the First Workshop on Storytelling, pp. 43–49, New Orleans, generation. Louisiana, June 2018. doi: 10.18653/v1/W18-1505. Steven T Piantadosi. Zipfs word frequency law in natural language: A critical review and future directions. Psychonomic bulletin & review, 21(5):1112–1130, 2014. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language under- standing by generative pre-training, 2018. URL https://s3-us-west-2.amazonaws. com/openai-assets/research-covers/language-unsupervised/ language_understanding_paper.pdf. Unpublished manuscript. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. URL https: Language models are unsupervised multitask learners, February 2019. //d4mucfpksywv.cloudfront.net/better-language-models/language_ models_are_unsupervised_multitask_learners.pdf. Unpublished manuscript. Stanislau Semeniuta, Aliaksei Severyn, and Sylvain Gelly. On accurate evaluation of gans for lan- guage generation. arXiv preprint arXiv:1806.04936, 2018. 11 Published as a conference paper at ICLR 2020 Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pp. 6830–6841, 2017. Felix Stahlberg and Bill Byrne. On nmt search errors and model errors: Cat got your tongue? 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3347–3353, 2019.

Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association, 2012.

Guy Tevet, Gavriel Habib, Vered Shwartz, and Jonathan Berant. Evaluating text GANs as language models. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2241–2247, 2019.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.

Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. Diverse beam search for improved description of complex scenes. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. Neural text generation with unlikelihood training. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Sam Wiseman, Stuart Shieber, and Alexander Rush. Challenges in data-to-document generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2253–2263, Copenhagen, Denmark, September 2017.

Jingjing Xu, Xuancheng Ren, Junyang Lin, and Xu Sun. Diversity-promoting GAN: A cross-entropy based generative adversarial network for diversified text generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3940–3949, Brussels, Belgium, October 2018.

Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI, 2017.

Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. Texygen: A benchmarking platform for text generation models. SIGIR, 2018.

# A BEAM WIDTH EFFECT

[Figure 10 here: bar chart of the total number of trigrams and the number of distinct trigrams over 5000 generations, for beam widths b=4, b=8, b=16 and for gold text.]

Figure 10: The total number of trigrams produced by Beam Search with varying beam widths, with gold (human) data for comparison. Note how the average length of generations goes down linearly with beam width, while the number of distinct trigrams stays constant and extremely low in comparison to gold data.

# B EXAMPLE GENERATIONS

We include a set of examples for further qualitative comparison.

[Figure 11 here: generations from WebText, Beam Search (b=16), Pure Sampling, Sampling (t=0.9), and Nucleus Sampling (p=0.95) continuing the tag line "Top Customer Questions", together with the true WebText continuation; the full text of each generation is omitted.]

Figure 11: More example generations from an initial tag line. All generations available at https://github.com/ari-holtzman/degen

[Figure 12 here: generations from the same set of decoding methods continuing the tag line "So what's new in my life?"; the full text of each generation is omitted.]

Figure 12: More example generations from an initial tag line. Note that Pure Sampling and Nucleus Sampling are the only algorithms that can escape the repetition loop, with Nucleus Sampling's generation far closer in style to the ground truth text. All generations available at https://github.com/ari-holtzman/degen

[Figure 13 here: generations from the same set of decoding methods continuing the tag line "University of Wisconsin — Madison"; the full text of each generation is omitted.]

Figure 13: More example generations from an initial tag line. All generations available at https://github.com/ari-holtzman/degen
{ "id": "1611.08562" }
1904.09728
SocialIQA: Commonsense Reasoning about Social Interactions
We introduce Social IQa, the first largescale benchmark for commonsense reasoning about social situations. Social IQa contains 38,000 multiple choice questions for probing emotional and social intelligence in a variety of everyday situations (e.g., Q: "Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?" A: "Make sure no one else could hear"). Through crowdsourcing, we collect commonsense questions along with correct and incorrect answers about social interactions, using a new framework that mitigates stylistic artifacts in incorrect answers by asking workers to provide the right answer to a different but related question. Empirical results show that our benchmark is challenging for existing question-answering models based on pretrained language models, compared to human performance (>20% gap). Notably, we further establish Social IQa as a resource for transfer learning of commonsense knowledge, achieving state-of-the-art performance on multiple commonsense reasoning tasks (Winograd Schemas, COPA).
http://arxiv.org/pdf/1904.09728
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, Yejin Choi
cs.CL
the first two authors contributed equally; accepted to EMNLP 2019; camera ready version
null
cs.CL
20190422
20190909
9 1 0 2 p e S 9 ] L C . s c [ 3 v 8 2 7 9 0 . 4 0 9 1 : v i X r a # SOCIAL IQA: Commonsense Reasoning about Social Interactions Maarten Sap* °° Hannah Rashkin* °° Derek Chen’ Ronan Le Bras® Yejin Choi®° © Allen Institute for Artificial Intelligence, Seattle, WA, USA “Paul G. Allen School of Computer Science & Engineering, Seattle, WA, USA {msap, hrashkin, dchen14, yejin}@cs.washington.edu {ronanlb}@allenai.org # Abstract We introduce SOCIAL IQA, the first large- scale benchmark for commonsense reasoning about social situations. SOCIAL IQA contains 38,000 multiple choice questions for prob- ing emotional and social intelligence in a va- riety of everyday situations (e.g., Q: “Jor- dan wanted to tell Tracy a secret, so Jor- dan leaned towards Tracy. Why did Jordan do this?” A: “Make sure no one else could hear”). Through crowdsourcing, we collect commonsense questions along with correct and incorrect answers about social interac- tions, using a new framework that mitigates stylistic artifacts in incorrect answers by ask- ing workers to provide the right answer to a different but related question. Empirical re- sults show that our benchmark is challenging for existing question-answering models based on pretrained language models, compared to human performance (>20% gap). Notably, we further establish SOCIAL IQA as a re- source for transfer learning of commonsense knowledge, achieving state-of-the-art perfor- mance on multiple commonsense reasoning tasks (Winograd Schemas, COPA). # Introduction Social and emotional intelligence enables humans to reason about the mental states of others and their likely actions (Ganaie and Mudasir, 2015). For example, when someone spills food all over the floor, we can infer that they will likely want to clean up the mess, rather than taste the food off the floor or run around in the mess (Figure 1, middle). This example illustrates how Theory of Mind, i.e., the ability to reason about the implied emotions and behavior of others, enables humans to nav- igate social situations ranging from simple con- versations with friends to complex negotiations in courtrooms (Apperly, 2010). # REASONING ABOUT MOTIVATION REASONING ABOUT MOTIVATION Tracy had accidentally pressed upon Austin in the small elevator and it was awkward. (a) get very close to Austin (b) squeeze into the elevator ¥ (c) get flirty with Austin Why did Tracy do this? REASONING ABOUT WHAT HAPPENS NEXT Alex spilled the food she just prepared all over the floor and it made a huge mess. (a) taste the food (b) mop up ¥ (c) run around in the mess What will Alex want to do next? REASONING ABOUT EMOTIONAL REACTIONS In the school play, Robin played a hero in the struggle to the death with the angry villain. (a) sorry for the villain (b) hopeful that Robin will succeed ¥ (c) like Robin should lose How would others feel afterwards? Figure 1: Three context-question-answers triples from SOCIAL IQA, along with the type of reasoning required to answer them. In the top example, humans can triv- ially infer that Tracy pressed upon Austin because there was no room in the elevator. Similarly, in the bottom example, commonsense tells us that people typically root for the hero, not the villain. While humans trivially acquire and develop such social reasoning skills (Moore, 2013), this is still a challenge for machine learning models, in part due to the lack of large-scale resources to train and evaluate modern AI systems’ social and emotional intelligence. 
Although recent ad- vances in pretraining large language models have yielded promising improvements on several com- monsense inference tasks, these models still strug- gle to reason about social situations, as shown in this and previous work (Davis and Marcus, Both authors contributed equally. 2015; Nematzadeh et al., 2018; Talmor et al., 2019). This is partly due to language models being trained on written text corpora, where reporting bias of knowledge limits the scope of common- sense knowledge that can be learned (Gordon and Van Durme, 2013; Lucy and Gauthier, 2017). In this work, we introduce Social Intelligence QA (SOCIAL IQA), the first large-scale resource to learn and measure social and emotional intel- ligence in computational models.1 SOCIAL IQA contains 38k multiple choice questions regard- ing the pragmatic implications of everyday, social events (see Figure 1). To collect this data, we de- sign a crowdsourcing framework to gather con- texts and questions that explicitly address social commonsense reasoning. Additionally, by com- bining handwritten negative answers with adver- sarial question-switched answers (Section 3.3), we minimize annotation artifacts that can arise from crowdsourcing incorrect answers (Schwartz et al., 2017; Gururangan et al., 2018). This dataset remains challenging for AI sys- tems, with our best performing baseline reaching 64.5% (BERT-large), significantly lower than hu- man performance. We further establish SOCIAL IQA as a resource that enables transfer learning for other commonsense challenges, through se- quential finetuning of a pretrained language model on SOCIAL IQA before other tasks. Specifically, we use SOCIAL IQA to set a new state-of-the-art on three commonsense challenge datasets: COPA (Roemmele et al., 2011) (83.4%), the original Winograd (Levesque, 2011) (72.5%), and the ex- tended Winograd dataset from Rahman and Ng (2012) (84.0%). Our contributions are as follows: (1) We cre- ate SOCIAL IQA, the first large-scale QA dataset aimed at testing social and emotional intelligence, containing over 38k QA pairs. (2) We introduce question-switching, a technique to collect incor- rect answers that minimizes stylistic artifacts due (3) We establish to annotator cognitive biases. baseline performance on our dataset, with BERT- large performing at 64.5%, well below human per- formance. (4) We achieve new state-of-the-art ac- curacies on COPA and Winograd through sequen- tial finetuning on SOCIAL IQA, which implicitly endows models with social commonsense knowl- edge. # 1Available at https://tinyurl.com/socialiqa SOCIAL IQA # QA tuples train dev test total 33,410 1,954 2,224 37,588 Train statistics Average # tokens context question answers (all) answers (correct) answers (incorrect) 14.04 6.12 3.60 3.65 3.58 Unique # tokens context question answers (all) answers (correct) answers (incorrect) 15,764 1,165 12,285 7,386 10,514 Average freq. of answers answers (correct) answers (incorrect) 1.37 1.47 Table 1: Data statistics for SOCIAL IQA. # 2 Task description SOCIAL IQA aims to measure the social and emotional intelligence of computational models through multiple choice question answering (QA). In our setup, models are confronted with a ques- tion explicitly pertaining to an observed context, where the correct answer can be found among three competing options. 
By design, the questions require inferential rea- soning about the social causes and effects of situa- tions, in line with the type of intelligence required for an AI assistant to interact with human users (e.g., know to call for help when an elderly per- son falls; Pollack, 2005). As seen in Figure 1, correctly answering questions requires reasoning about motivations, emotional reactions, or likely preceding and following actions. Performing these inferences is what makes us experts at navigat- ing social situations, and is closely related to The- ory of Mind, i.e., the ability to reason about the beliefs, motivations, and needs of others (Baron- Cohen et al., 1985).2 Endowing machines with this type of intelligence has been a longstanding but elusive goal of AI (Gunning, 2018). 2 Theory of Mind is well developed in most neurotypical adults (Ganaie and Mudasir, 2015), but can be influenced by age, culture, or developmental disorders (Korkmaz, 2011). # ATOMIC As a starting point for our task creation, we draw upon social commonsense knowledge from ATOMIC (Sap et al., 2019) to seed our contexts and question types. ATOMIC is a large knowledge graph that contains inferential knowledge about the causes and effects of 24k short events. Each triple in ATOMIC consists of an event phrase with person-centric variables, one of nine inference di- mensions, and an inference object (e.g., “PersonX pays for PersonY’s ”, “xAttrib”, “generous”). The nine inference dimensions in ATOMIC cover causes of an event (e.g., “X needs money”), its ef- fects on the agent (e.g., “X will get thanked”) and its effect on other participants (e.g., “Y will want to see X again”); see Sap et al. (2019) for details. Given this base, we generate natural language contexts that represent specific instantiations of the event phrases found in the knowledge graph. Furthermore, the questions created probe the com- monsense reasoning required to navigate such contexts. Critically, since these contexts are based off of ATOMIC, they explore a diverse range of motivations and reactions, as well as likely pre- ceding or following actions. # 3 Dataset creation SOCIAL IQA contains 37,588 multiple choice questions with three answer choices per question. Questions and answers are gathered through three phases of crowdsourcing aimed to collect the con- text, the question, and a set of positive and negative answers. We run crowdsourcing tasks on Ama- zon Mechanical Turk (MTurk) to create each of the three components, as described below. # 3.1 Event Rewriting In order to cover a variety of social situations, we use the base events from ATOMIC as prompts for context creation. As a pre-processing step, we run an MTurk task that asks workers to turn an ATOMIC event (e.g., “PersonX spills all over the floor”) into a sentence by adding names, fixing potential grammar errors, and filling in placehold- ers (e.g., “Alex spilled food all over the floor.”).3 # 3.2 Context, Question, & Answer Creation Next, we run a task where annotators create full context-question-answers triples. We auto- matically generate question templates covering 3This task paid $0.35 per event. Alex spilt food all over the floor and it made a huge mess. WHAT HAPPENS NEXT What will Alex want to do next? “mop up ¥ give up and order take out What did Alex need to do before this? 
v have slippery hands v get ready to eat X have slippery hands X get ready to eat Figure 2: Question-Switching Answers (QSA) are col- lected as the correct answers to the wrong question that targets a different type of inference (here, reasoning about what happens before instead of after an event). the nine commonsense inference dimensions in ATOMIC.4 Crowdsourcers are prompted with an event sentence and an inference question to turn into a more detailed context5 (e.g. “Alex spilled food all over the floor and it made a huge mess.”) and an edited version of the question if needed for improved specificity (e.g. “What will Alex want to do next?”). Workers are also asked to contribute two potential correct answers. # 3.3 Negative Answers In addition to correct answers, we collect four in- correct answer options, of which we filter out two. To create incorrect options that are adversarial for models but easy for humans, we use two different approaches to the collection process. These two methods are specifically designed to avoid differ- ent types of annotation artifacts, thus making it more difficult for models to rely on data biases. We integrate and filter answer options and validate final QA tuples with human rating tasks. Handwritten Incorrect Answers (HIA) The first method involves eliciting handwritten incor- rect answers that require reasoning about the con- text. These answers are handwritten to be similar to the correct answers in terms of topic, length, and style but are subtly incorrect. Two of these answers are collected during the same MTurk task as the original context, questions, and correct an- swers. We will refer to these negative responses as handwritten incorrect answers (HIA). Question-Switching Answers (QSA) We col- lect a second set of negative (incorrect) answer 4We do not generate templates if the ATOMIC dimension is annotated as “none.” 5Workers were asked to contribute a context 7-25 words longer than the event sentence. wants reactions (e.g., What will Kai want to do next?) (Gp oanceulls Resin esl 29% afterwards?) 21% needs effects descriptions motivations q (e.g., How would —(e.g., Why did (SG TERE ea Whetwil you describe Alex?) Sydney do this?) y . PP 15% 12% do before this?) Sasha?) 12% 11% Figure 3: SOCIAL IQA contains several question types which cover different types of inferential reasoning. Ques- tion types are derived from ATOMIC inference dimensions. candidates by switching the questions asked about the context, as shown in Figure 2. We do this to avoid cognitive biases and annotation artifacts in the answer candidates, such as those caused by writing incorrect answers or negations (Schwartz et al., 2017; Gururangan et al., 2018). In this crowdsourcing task, we provide the same context as the original question, as well as a question au- tomatically generated from a different but similar ATOMIC dimension,6 and ask workers to write two correct answers. We refer to these negative responses as question-switching answers (QSA). By including answers to a different question about the same context, we ensure that these ad- versarial responses have the stylistic qualities of correct answers and strongly relate to the con- text topic, while still being incorrect, making it difficult for models to simply perform pattern- matching. To verify this, we compare valence, arousal, and dominance (VAD) levels across an- swer types, computed using the VAD lexicon by Mohammad (2018). 
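A small sketch of this effect-size computation is given below. It assumes word-level valence scores have already been loaded from the NRC VAD lexicon into a dictionary; the toy lexicon entries and answer lists are placeholders, not the paper's actual data or pipeline.

```python
import numpy as np

# Placeholder for scores loaded from the NRC VAD lexicon (Mohammad, 2018):
# word -> valence in [0, 1]; arousal and dominance would be handled the same way.
valence = {"mop": 0.4, "mess": 0.2, "taste": 0.7, "run": 0.6, "clean": 0.8}

def mean_lexicon_scores(answers, lexicon):
    """Average lexicon score of the in-vocabulary words of each answer."""
    scores = []
    for answer in answers:
        hits = [lexicon[w] for w in answer.lower().split() if w in lexicon]
        if hits:
            scores.append(float(np.mean(hits)))
    return np.array(scores)

def cohens_d(x, y):
    """Cohen's d with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

correct_answers = ["mop up the mess", "clean the floor", "taste the food"]
qsa_answers = ["run to get a mop", "taste the mess", "clean their hands"]
d = cohens_d(mean_lexicon_scores(correct_answers, valence),
             mean_lexicon_scores(qsa_answers, valence))
print(f"valence effect size d = {d:.2f}")
```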
Figure 4 shows effect sizes (Cohen’s d) of the differences in VAD means, where the magnitude of effect size indicates how different the answer types are stylistically. Indeed, QSA and correct answers differ substantially less than HIA answers (|d|≤.1).7 # 3.4 QA Tuple Creation As the final step of the pipeline, we aggregate the data into three-way multiple choice questions. For each created context-question pair contributed by crowdsourced workers, we select a random cor- rect answer and the incorrect answers that are least entailed by the correct one, following inspiration from Zellers et al. (2019a). fel 0.4 arousal £ dominance 2 sc 0.3 ° Fe valence $B 02 & 04 = 0 HIA-corr HIA-QSA corr-QSA Figure 4: Magnitude of effect sizes (Cohen’s d) when comparing average dominance, arousal and valence values of different answer types where larger |d| in- dicates more stylistic difference. For valence (senti- ment polarity) and dominance, the effect sizes compar- ing QSA and correct answers are much smaller, indi- cating that these are more similar tonally. Notably, all three answer types have comparable levels of arousal (intensity). answer to the question provided.8 In order to en- sure even higher quality, we validate the dev and test data a second time with five workers. Our final dataset contains questions for which the correct answer was determined by human majority vot- ing, discarding cases without a majority vote. We also apply a lightweight form of adversarial filter- ing to make the task more challenging by using a deep stylistic classifier to remove easier examples on the dev and test sets (Sakaguchi et al., 2019).9 To obtain human performance, we run a sepa- rate task asking three new workers to select the correct answer on a random subset of 900 dev and 900 test examples. Human performance on these subsets is 87% and 84%, respectively. # 3.5 Data Statistics For the training data, we validate our QA tu- ples through a multiple-choice crowdsourcing task where three workers are asked to select the right To keep contexts separate across train/dev/test sets, we assign SOCIAL IQA contexts to the same partition as the ATOMIC event the context was based on. Shown in Table 1 (top), this yields a 6Using the following three groupings of ATOMIC dimen- sions: {xWant, oWant, xNeed, xIntent}, {xReact oReact, xAttr}, and {xEffect, oEffect}. (Sawilowsky, 2009). We find similarly small effect sizes using other senti- ment/emotion lexicons. 8Agreement on this task was high (Cohen’s κ=.70) 9We also tried filtering to remove examples from the train- ing set but found it did not significantly change performance. We will release tags for the easier training examples with the full data. total set of around 33k training, 2k dev, and 2k test tuples. We additionally include statistics on word counts and vocabulary of the training data. We report the averages of correct and incorrect an- swers in terms of: token length, number of unique tokens, and number of times a unique answer ap- pears in the dataset. Note that due to our three-way multiple choice setup, there are twice as many in- correct answers which influences these statistics. We also include a breakdown (Figure 3) across question types, which we derive from ATOMIC inference dimensions.10 In general, questions re- lating to what someone will feel afterwards or what they will likely do next are more common in SOCIAL IQA. Conversely, questions pertaining to (potentially involuntary) effects of situations on people are less frequent. 
# 4 Methods We establish baseline performance on SOCIAL IQA, using large pretrained language models based on the Transformer architecture (Vaswani et al., 2017). Namely, we finetune OpenAI-GPT (Radford et al., 2018) and BERT (Devlin et al., 2019), which have both shown remarkable im- provements on a variety of tasks. OpenAI-GPT is a uni-directional language model trained on the BookCorpus (Zhu et al., 2015), whereas BERT is a bidirectional language model trained on both the BookCorpus and English Wikipedia. As per pre- vious work, we finetune the language model rep- resentations but fully learn the classifier specific parameters described below. Multiple classify sequences using these language models, we follow the multiple-choice setup implementation by the respective authors, as described below. First, we concatenate the context, question, and answer, using the model specific separator tokens. For OpenAI-GPT, the format becomes start <context> <question> delimiter <answer> classify , where start , delimiter , and classify are special function tokens. For BERT, the format is similar, but the classifier token comes before the context.11 For each triple, we then compute a score l by 10We group agent and theme ATOMIC dimensions to- gether (e.g., “xReact” and “oReact” become the “reactions” question type). 11BERT’s format is [CLS] <context> [UNUSED] <question> [SEP] <answer> [SEP] Model Accuracy (%) Dev Test Random baseline GPT BERT-base BERT-large 33.3 63.3 63.3 66.0 33.3 63.0 63.1 64.5 w/o context w/o question w/o context, question 52.7 52.1 45.5 – – – Human 86.9* 84.4* Table 2: Experimental results. We additionally perform an ablation by removing contexts and questions, veri- fying that both are necessary for BERT-large’s perfor- mance. Human evaluation results are obtained using 900 randomly sampled examples. passing the hidden representation from the classi- fier token hCLS ∈ RH through an MLP: l = W2 tanh(W1hCLS + b1) where W1 ∈ RH×H , b1 ∈ RH and W2 ∈ R1×H . Finally, we normalize scores across all triples for a given context-question pair using a softmax layer. The model’s predicted answer corresponds to the triple with the highest probability. # 5 Experiments # 5.1 Experimental Set-up We train our models on the 33k SOCIAL IQA train- ing instances, selecting hyperparameters based on the best performing model on our dev set, for which we then report test results. Specifically, we perform finetuning through a grid search over the hyper-parameter settings (with a learning rate in {1e−5, 2e−5, 3e−5}, a batch size in {3, 4, 8}, and a number of epochs in {3, 4, 10}) and report the maximum performance. Models used in our experiments vary in sizes: OpenAI-GPT (117M parameters) has a hid- den size H=768, BERT-base (110M params) and BERT-large (340M params) hidden sizes of H=768 and H=1024, respectively. We train us- ing the HuggingFace PyTorch (Paszke et al., 2017) implementation.12 12https://github.com/huggingface/ pytorch-pretrained-BERT Context Question Answer Jesse was pet sitting for Addison, What does Jesse (a) feed the dog (1) so Jesse came to Addison’s need to do Vv & (b) get a key from Addison house and walked their dog. before this? (c) walk the dog Kai handed back the computer to (2) Will after using it to buy a product off Amazon. What will Kai want to do next? a) wanted to save money on shipping b) Wait for the package c) Wait for the computer v ® Remy gave Skylar, the concierge, (3) her account so that she could check into the hotel. What will Remy want to do next? 
a) lose her credit card b) arrive at a hotel c) get the key from Skylar co Vv Sydney woke up and was ready (4) to start the day. They put on their What will Sydney want to @ (a) go to bed b) go to the pool ( ( ( ( ( ( v (c) go to work ( ( ( ( ( ( clothes. do next? Kai grabbed Carson’s tools for How would -@ (a) inconvenienced (5) him because Carson could not Carson feelasa ¥ b) grateful get them. result? c) angry Although Aubrey was older and = How would a) they need to practice more (6) stronger, they lost to Alexinarm Alex feel as a 2 (b) ashamed wrestling. result? Vv c) boastful (a) wanted to save money on shipping (b) Wait for the package (c) Wait for the computer Table 3: Example CQA triples from the SOCIAL IQA dev set with BERT-large’s predictions (4: BERT’s predic- tion, Vv: true correct answer). The model predicts correctly in (1) and (2) and incorrectly in the other four examples shown here. Examples (3) and (4) illustrate the model choosing answers that might have happened before, or that might happen much later after the context, as opposed to right after the context situation. In Examples (5) and (6), the model chooses answers that may apply to people other than the ones being asked about. # 5.2 Results Our results (Table 2) show that SOCIAL IQA is still a challenging benchmark for existing com- putational models, compared to human perfor- mance. Our best performing model, BERT-large, outperforms other models by several points on the dev and test set. We additionally ablate our best model’s representation by removing the context and question from the input, confirming that rea- soning over both is necessary for this task. Error Analysis We include a breakdown of our best model’s performance on various question types in Figure 6 and specific examples of errors in the last four rows of Table 3. Overall, questions related to pre-conditions of the context (people’s motivations, actions needed before the context) are less challenging for the model. Conversely, the model seems to struggle more with questions re- lating to (potentially involuntary) effects, stative descriptions, and what people will want to do next. Learning Curve To better understand the ef- fect of dataset scale on model performance on our task, we simulate training situations with lim- ited knowledge. We present the learning curve of BERT-large’s performance on the dev set as it is trained on more training set examples (Fig- ure 5). Although the model does significantly im- prove over a random baseline of 33% with only a few hundred examples, the performance only starts to converge after around 20k examples, pro- viding evidence that large-scale benchmarks are required for this type of reasoning. Examples of errors in Table 3 further indicate that, instead of doing advanced reasoning about situations, models may only be learning lexical as- sociations between the context, question, and an- swers, as hinted at by Marcus (2018) and Zellers et al. (2019b). This leads the model to select answers with incorrect timing (examples 3 and 4) or answers pertaining to the wrong partici- pants (examples 5 and 6), despite being trained on large amounts of examples that specifically distin- guish proper timing and participants. For instance, in (3) and (4), the model selects answers which 100% 80% 60% 40% dev acc. 20% 0% 100 1000 10000 num. 
Examples of errors in Table 3 further indicate that, instead of doing advanced reasoning about situations, models may only be learning lexical associations between the context, question, and answers, as hinted at by Marcus (2018) and Zellers et al. (2019b). This leads the model to select answers with incorrect timing (examples 3 and 4) or answers pertaining to the wrong participants (examples 5 and 6), despite being trained on large amounts of examples that specifically distinguish proper timing and participants. For instance, in (3) and (4), the model selects answers that are incorrectly timed with respect to the context and question (e.g., "arrive at a hotel" is something Remy likely did before checking in with the concierge, not afterwards). Additionally, the model often chooses answers related to a person other than the one asked about. In (6), after the arm wrestling, though it is likely that Aubrey will feel ashamed, the question relates to what Alex might feel, not Aubrey.

Figure 6: Average dev accuracy of BERT-large on different question types. While questions about effects and motivations are easier, the model still finds wants and descriptions more challenging.

Overall, our results illustrate how reasoning about social situations still remains a challenge for these models, compared to humans who can trivially reason about the causes and effects for multiple participants. We expect that this task would benefit from models capable of more complex reasoning about entity state, or models that are more explicitly endowed with commonsense (e.g., from knowledge graphs like ATOMIC).

# 6 SOCIAL IQA for Transfer Learning

In addition to being the first large-scale benchmark for social commonsense, we also show that SOCIAL IQA can improve performance on downstream tasks that require commonsense, namely the Winograd Schema Challenge and the Choice of Plausible Alternatives task. We achieve state of the art performance on both tasks by sequentially finetuning on SOCIAL IQA before the task itself.

COPA The Choice of Plausible Alternatives task (COPA; Roemmele et al., 2011) is a two-way multiple choice task which aims to measure commonsense reasoning abilities of models. The dataset contains 1,000 questions (500 dev, 500 test) that ask about the causes and effects of a premise. This has been a challenging task for computational systems, partially due to the limited amount of training data available. As done previously (Goodwin et al., 2012; Luo et al., 2016), we finetune our models on the dev set, and report performance only on the test set.

Winograd Schema The Winograd Schema Challenge (WSC; Levesque, 2011) is a well-known challenge framed as a coreference resolution task. It contains a collection of 273 short sentences in which a pronoun must be resolved to one of two antecedents (e.g., in "The city councilmen refused the demonstrators a permit because they feared violence", they refers to the councilmen). Because of data scarcity in WSC, Rahman and Ng (2012) created 943 Winograd-style sentence pairs (1886 sentences in total), henceforth referred to as DPR, which has been shown to be slightly less challenging than WSC for computational models.

We evaluate on these two benchmarks. While the DPR dataset is split into train and test sets (Rahman and Ng, 2012), the WSC dataset contains a single (test) set of only 273 instances for evaluation purposes only. Therefore, we use the DPR dataset as training set when evaluating on the WSC dataset.

# 6.1 Sequential Finetuning

We first finetune BERT-large on SOCIAL IQA, which reaches 66% on our dev set (Table 2). We then finetune that model further on the task-specific datasets, considering the same set of hyperparameters as in §5.1. On each of the test sets, we report best, mean, and standard deviation of all models, and compare sequential finetuning results to a BERT-large baseline.

| Task | Model | Best | Mean | Std |
|---|---|---|---|---|
| COPA | Sasaki et al. (2017) | 71.2 | – | – |
| COPA | BERT-large | 80.8 | 75.0 | 3.0 |
| COPA | BERT-SOCIAL IQA | **83.4** | 80.1 | 2.0 |
| WSC | Kocijan et al. (2019) | 72.5 | – | – |
| WSC | BERT-large | 67.0 | 65.5 | 1.0 |
| WSC | BERT-SOCIAL IQA | **72.5** | 69.6 | 1.7 |
| DPR | Peng et al. (2015) | 76.4 | – | – |
| DPR | BERT-large | 79.4 | 71.2 | 3.8 |
| DPR | BERT-SOCIAL IQA | **84.0** | 81.7 | 1.2 |

Table 4: Accuracy (%). Sequential finetuning of BERT-large on SOCIAL IQA before the task yields state of the art results (bolded) on COPA (Roemmele et al., 2011), the Winograd Schema Challenge (Levesque, 2011), and DPR (Rahman and Ng, 2012). For comparison, we include previous published state of the art performance.

Results Shown in Table 4, sequential finetuning on SOCIAL IQA yields substantial improvements over the BERT-only baseline (between 2.6 and 5.5% max performance increases), as well as a general increase in performance stability (i.e., lower standard deviations). As hinted at by Phang et al. (2019), this suggests that BERT-large can benefit from both the large scale and the QA format of commonsense knowledge in SOCIAL IQA, which it struggles to learn from small benchmarks only. Notably, we find that sequentially finetuned BERT-SOCIAL IQA achieves state-of-the-art results on all three tasks, showing improvements over previous best performing models.13

Effect of scale and knowledge type To better understand these improvements in downstream task performance, we investigate the impact on COPA performance of sequential finetuning on less SOCIAL IQA training data (Figure 7), as well as the impact of the type of commonsense knowledge used in sequential finetuning. As expected, the downstream performance on COPA improves when using a model pretrained on more of SOCIAL IQA, indicating that the scale of the dataset is one factor that helps in the fine-tuning. However, when using SWAG (a similarly sized dataset) instead of SOCIAL IQA for sequential finetuning, the downstream performance on COPA is lower (76.2%). This indicates that, in addition to its large scale, the social and emotional nature of the knowledge in SOCIAL IQA enables improvements on these downstream tasks.

13 Note that OpenAI-GPT was reported to achieve 78.6% on COPA, but that result was not published, nor discussed in the OpenAI-GPT white paper (Radford et al., 2018).

Figure 7: Effect of finetuning BERT-large on varying sizes of the SOCIAL IQA training set on the dev accuracy of COPA. As expected, the more SOCIAL IQA instances the model is finetuned on, the better the accuracy on COPA.

# 7 Related Work

Commonsense Benchmarks: Commonsense benchmark creation has been well-studied by previous work. Notably, the Winograd Schema Challenge (WSC; Levesque, 2011) and the Choice Of Plausible Alternatives dataset (COPA; Roemmele et al., 2011) are expert-curated collections of commonsense QA pairs that are trivial for humans to solve. Whereas WSC requires physical and social commonsense knowledge to solve, COPA targets the knowledge of causes and effects surrounding social situations. While both benchmarks are of high quality and created by experts, their small scale (150 and 1,000 examples, respectively) poses a challenge for modern modelling techniques, which require many training instances.
More recently, Talmor et al. (2019) introduce CommonsenseQA, containing 12k multiple-choice questions. Crowdsourced using ConceptNet (Speer and Havasi, 2012), these questions mostly probe knowledge related to factual and physical commonsense (e.g., "Where would I not want a fox?"). In contrast, SOCIAL IQA explicitly separates contexts from questions, and focuses on the types of commonsense inferences humans perform when navigating social situations.

Commonsense Knowledge Bases: In addition to large-scale benchmarks, there is a wealth of work aimed at creating commonsense knowledge repositories (Speer and Havasi, 2012; Sap et al., 2019; Zhang et al., 2017; Lenat, 1995; Espinosa and Lieberman, 2005; Gordon and Hobbs, 2017) that can be used as resources in downstream reasoning tasks. While SOCIAL IQA is formatted as a natural language QA benchmark, rather than a taxonomic knowledge base, it can also be used as a resource for external tasks, as we have demonstrated experimentally.

Constrained or Adversarial Data Collection: Various work has investigated ways to circumvent annotation artifacts that result from crowdsourcing. Sharma et al. (2018) extend the Story Cloze data by severely restricting the incorrect story ending generation task, reducing the sentiment and negation artifacts. Rajpurkar et al. (2018) create an adversarial version of the extractive question-answering challenge, SQuAD (Rajpurkar et al., 2016), by creating 50k unanswerable questions. Instead of using human-generated incorrect answers, Zellers et al. (2018, 2019b) use adversarial filtering of machine-generated incorrect answers to minimize surface patterns. Our dataset also aims to reduce annotation artifacts by using a multi-stage annotation pipeline in which we collect negative responses from multiple methods, including a unique adversarial question-switching technique.

# 8 Conclusion

We present SOCIAL IQA, the first large-scale benchmark for social commonsense. Consisting of 38k multiple-choice questions, SOCIAL IQA covers various types of inference about people's actions being described in situational contexts. We design a crowdsourcing framework for collecting QA pairs that reduces stylistic artifacts of negative answers through an adversarial question-switching method. Despite human performance of close to 90%, computational approaches based on large pretrained language models only achieve accuracies up to 65%, suggesting that these social inferences are still a challenge for AI systems. In addition to providing a new benchmark, we demonstrate how transfer learning from SOCIAL IQA to other commonsense challenges can yield significant improvements, achieving new state-of-the-art performance on both the COPA and Winograd Schema Challenge datasets.

# Acknowledgments

We thank Chandra Bhagavatula, Hannaneh Hajishirzi, and other members of the UW NLP and AI2 community for helpful discussions and feedback throughout this project. We also thank the anonymous reviewers for their insightful comments and suggestions. This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), Samsung Research, and the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082.

# References

Ian Apperly. 2010. Mindreaders: The cognitive basis of "theory of mind". Psychology Press.

Simon Baron-Cohen, Alan M Leslie, and Uta Frith. 1985. Does the autistic child have a "theory of mind"? Cognition, 21(1):37–46.

Ernest Davis and Gary Marcus. 2015.
Commonsense reasoning and commonsense knowledge in artificial intelligence. Commun. ACM, 58:92–103. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL. Jos´e H. Espinosa and Henry Lieberman. 2005. Event- net: Inferring temporal relations between common- sense events. In MICAI. MY Ganaie and Hafiz Mudasir. 2015. A Study of So- cial Intelligence & Academic Achievement of Col- lege Students of District Srinagar, J&K, India. Jour- nal of American Science, 11(3):23–27. Travis Goodwin, Bryan Rink, Kirk Roberts, and Sanda M Harabagiu. 2012. UTDHLT: Copacetic system for choosing plausible alternatives. In NAACL workshop on SemEval, pages 461–466. As- sociation for Computational Linguistics. Andrew S Gordon and Jerry R Hobbs. 2017. A Formal Theory of Commonsense Psychology: How People Think People Think. Cambridge University Press. Jonathan Gordon and Benjamin Van Durme. 2013. Re- porting bias and knowledge acquisition. In Proceed- ings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC ’13, pages 25–30, New York, NY, USA. ACM. David Gunning. 2018. Machine common sense con- cept paper. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in nat- ural language inference data. In NAACL-HLT. Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. 2019. A surprisingly robust trick for the winograd schema challenge. In ACL. Baris Korkmaz. 2011. Theory of mind and neurodevel- opmental disorders of childhood. Pediatr Res, 69(5 Pt 2):101R–8R. Douglas B Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11):33–38. Hector J. Levesque. 2011. The winograd schema chal- lenge. In AAAI Spring Symposium: Logical Formal- izations of Commonsense Reasoning. Li Lucy and Jon Gauthier. 2017. Are distributional representations ready for the real world? evaluating word vectors for grounded perceptual meaning. In RoboNLP@ACL. Zhiyi Luo, Yuchen Sha, Kenny Q Zhu, Seung-won Hwang, and Zhongyuan Wang. 2016. Common- sense causal reasoning between short texts. In Fif- teenth International Conference on the Principles of Knowledge Representation and Reasoning. Gary Marcus. 2018. Deep learning: A critical ap- praisal. CoRR, abs/1801.00631. Saif Mohammad. 2018. Obtaining reliable human rat- ings of valence, arousal, and dominance for 20,000 In Proceedings of the 56th Annual english words. Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 174–184. Chris Moore. 2013. The development of commonsense psychology. Psychology Press. Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, and Thomas L. Griffiths. 2018. Evaluating theory of mind in question answering. In EMNLP. Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. In HLT- Solving hard coreference problems. NAACL. Jason Phang, Thibault F´evry, and Samuel R. Bowman. 2019. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. CoRR, abs/1811.01088. Martha E. Pollack. 2005. Intelligent technology for an aging population: The use of ai to assist elders with cognitive impairment. AI Magazine, 26:9–24. 
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd schema challenge. In EMNLP-CoNLL '12, pages 777–789, Stroudsburg, PA, USA. Association for Computational Linguistics.

Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.

Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. WinoGrande: An adversarial Winograd schema challenge at scale. ArXiv, abs/1907.10641.

Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. In AAAI.

Shota Sasaki, Sho Takase, Naoya Inoue, Naoaki Okazaki, and Kentaro Inui. 2017. Handling multiword expressions in causality estimation. In IWCS.

Shlomo S. Sawilowsky. 2009. New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8(2):597–599.

Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, and Noah A. Smith. 2017. The effect of different writing tasks on linguistic style: A case study of the ROC story cloze task. In CoNLL.

Rishi Kant Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story ending biases in the story cloze test. In ACL.

Robyn Speer and Catherine Havasi. 2012. Representing general relational knowledge in ConceptNet 5. In LREC.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In NAACL.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019a. From recognition to cognition: Visual commonsense reasoning. In CVPR.

Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019b. HellaSwag: Can a machine really finish your sentence? In ACL.

Sheng Zhang, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2017. Ordinal common-sense inference. Transactions of the Association of Computational Linguistics, 5(1):379–395.

Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan R. Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19–27.
{ "id": "1806.03822" }
1904.09675
BERTScore: Evaluating Text Generation with BERT
We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTScore is more robust to challenging examples when compared to existing metrics.
http://arxiv.org/pdf/1904.09675
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi
cs.CL
Code available at https://github.com/Tiiiger/bert_score; To appear in ICLR2020
null
cs.CL
20190421
20200224
Published as a conference paper at ICLR 2020

# BERTSCORE: EVALUATING TEXT GENERATION WITH BERT

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi
Department of Computer Science and Cornell Tech, Cornell University; ASAPP Inc.
{vk352, fw245, kilian}@cornell.edu, [email protected], [email protected]

# ABSTRACT

We propose BERTSCORE, an automatic evaluation metric for text generation. Analogously to common metrics, BERTSCORE computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTSCORE correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTSCORE is more robust to challenging examples when compared to existing metrics.

# 1 INTRODUCTION

Automatic evaluation of natural language generation, for example in machine translation and caption generation, requires comparing candidate sentences to annotated references. The goal is to evaluate semantic equivalence. However, commonly used methods rely on surface-form similarity only. For example, BLEU (Papineni et al., 2002), the most common machine translation metric, simply counts n-gram overlap between the candidate and the reference. While this provides a simple and general measure, it fails to account for meaning-preserving lexical and compositional diversity.

In this paper, we introduce BERTSCORE, a language generation evaluation metric based on pre-trained BERT contextual embeddings (Devlin et al., 2019). BERTSCORE computes the similarity of two sentences as a sum of cosine similarities between their tokens' embeddings.

BERTSCORE addresses two common pitfalls in n-gram-based metrics (Banerjee & Lavie, 2005). First, such methods often fail to robustly match paraphrases. For example, given the reference people like foreign cars, BLEU and METEOR (Banerjee & Lavie, 2005) incorrectly give a higher score to people like visiting places abroad compared to consumers prefer imported cars. This leads to performance underestimation when semantically-correct phrases are penalized because they differ from the surface form of the reference. In contrast to string matching (e.g., in BLEU) or matching heuristics (e.g., in METEOR), we compute similarity using contextualized token embeddings, which have been shown to be effective for paraphrase detection (Devlin et al., 2019). Second, n-gram models fail to capture distant dependencies and penalize semantically-critical ordering changes (Isozaki et al., 2010). For example, given a small window of size two, BLEU will only mildly penalize swapping of cause and effect clauses (e.g. A because B instead of B because A), especially when the arguments A and B are long phrases. In contrast, contextualized embeddings are trained to effectively capture distant dependencies and ordering.

We experiment with BERTSCORE on machine translation and image captioning tasks using the outputs of 363 systems by correlating BERTSCORE and related metrics to available human judgments. Our experiments demonstrate that BERTSCORE correlates highly with human evaluations.
In machine translation, BERTSCORE shows stronger system-level and segment-level correlations with human judgments than existing metrics on multiple common benchmarks and demonstrates strong model selection performance compared to BLEU. We also show that BERTSCORE is well-correlated with human annotators for image captioning, surpassing SPICE, a popular task-specific metric (Anderson et al., 2016). Finally, we test the robustness of BERTSCORE on the adversarial paraphrase dataset PAWS (Zhang et al., 2019), and show that it is more robust to adversarial examples than other metrics. The code for BERTSCORE is available at https://github.com/Tiiiger/bert_score.

∗Equal contribution. †Work done at Cornell.

# 2 PROBLEM STATEMENT AND PRIOR METRICS

Natural language text generation is commonly evaluated using annotated reference sentences. Given a reference sentence x tokenized to k tokens (x1, . . . , xk) and a candidate x̂ tokenized to l tokens (x̂1, . . . , x̂l), a generation evaluation metric is a function f(x, x̂) ∈ R. Better metrics have a higher correlation with human judgments. Existing metrics can be broadly categorized into using n-gram matching, edit distance, embedding matching, or learned functions.

2.1 n-GRAM MATCHING APPROACHES

The most commonly used metrics for generation count the number of n-grams that occur in the reference x and candidate x̂. The higher the n is, the more the metric is able to capture word order, but it also becomes more restrictive and constrained to the exact form of the reference.

Formally, let $S_x^n$ and $S_{\hat{x}}^n$ be the lists of token n-grams ($n \in \mathbb{Z}^{+}$) in the reference x and candidate x̂ sentences. The number of matched n-grams is $\sum_{w \in S_{\hat{x}}^n} \mathbb{I}[w \in S_x^n]$, where $\mathbb{I}[\cdot]$ is an indicator function. The exact match precision (Exact-Pn) and recall (Exact-Rn) scores are:

$$\text{Exact-P}_n = \frac{\sum_{w \in S_{\hat{x}}^n} \mathbb{I}[w \in S_x^n]}{|S_{\hat{x}}^n|} \qquad \text{and} \qquad \text{Exact-R}_n = \frac{\sum_{w \in S_x^n} \mathbb{I}[w \in S_{\hat{x}}^n]}{|S_x^n|}.$$

Several popular metrics build upon one or both of these exact matching scores.

BLEU The most widely used metric in machine translation is BLEU (Papineni et al., 2002), which includes three modifications to Exact-Pn. First, each n-gram in the reference can be matched at most once. Second, the number of exact matches is accumulated for all reference-candidate pairs in the corpus and divided by the total number of n-grams in all candidate sentences. Finally, very short candidates are discouraged using a brevity penalty. Typically, BLEU is computed for multiple values of n (e.g. n = 1, 2, 3, 4) and the scores are averaged geometrically. A smoothed variant, SENTBLEU (Koehn et al., 2007), is computed at the sentence level. In contrast to BLEU, BERTSCORE is not restricted to maximum n-gram length, but instead relies on contextualized embeddings that are able to capture dependencies of potentially unbounded length.

METEOR METEOR (Banerjee & Lavie, 2005) computes Exact-P1 and Exact-R1 while allowing backing-off from exact unigram matching to matching word stems, synonyms, and paraphrases. For example, running may match run if no exact match is possible. Non-exact matching uses an external stemmer, a synonym lexicon, and a paraphrase table. METEOR 1.5 (Denkowski & Lavie, 2014) weighs content and function words differently, and also applies importance weighting to different matching types. The more recent METEOR++ 2.0 (Guo & Hu, 2019) further incorporates a learned external paraphrase resource.
Because METEOR requires external resources, only five languages are supported with the full feature set, and eleven are partially supported. Similar to METEOR, BERTSCORE allows relaxed matches, but relies on BERT embeddings that are trained on large amounts of raw text and are currently available for 104 languages. BERTSCORE also supports importance weighting, which we estimate with simple corpus statistics. Other Related Metrics NIST (Doddington, 2002) is a revised version of BLEU that weighs each n-gram differently and uses an alternative brevity penalty. ∆BLEU (Galley et al., 2015) modifies multi-reference BLEU by including human annotated negative reference sentences. CHRF (Popovi´c, 2015) compares character n-grams in the reference and candidate sentences. CHRF++ (Popovi´c, 2017) extends CHRF to include word bigram matching. ROUGE (Lin, 2004) is a commonly used metric for summarization evaluation. ROUGE-n (Lin, 2004) computes Exact-Rn (usually n = 1, 2), while ROUGE-L is a variant of Exact-R1 with the numerator replaced by the length of the longest common subsequence. CIDER (Vedantam et al., 2015) is an image captioning metric that computes 2 Published as a conference paper at ICLR 2020 cosine similarity between tf–idf weighted n-grams. We adopt a similar approach to weigh tokens differently. Finally, Chaganty et al. (2018) and Hashimoto et al. (2019) combine automatic metrics with human judgments for text generation evaluation. 2.2 EDIT-DISTANCE-BASED METRICS Several methods use word edit distance or word error rate (Levenshtein, 1966), which quantify similarity using the number of edit operations required to get from the candidate to the refer- ence. TER (Snover et al., 2006) normalizes edit distance by the number of reference words, and ITER (Panja & Naskar, 2018) adds stem matching and better normalization. PER (Tillmann et al., 1997) computes position independent error rate, CDER (Leusch et al., 2006) models block reorder- ing as an edit operation. CHARACTER (Wang et al., 2016) and EED (Stanchev et al., 2019) operate on the character level and achieve higher correlation with human judgements on some languages. 2.3 EMBEDDING-BASED METRICS Word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Grave et al., 2018; Nguyen et al., 2017; Athiwaratkun et al., 2018) are learned dense token representations. MEANT 2.0 (Lo, 2017) uses word embeddings and shallow semantic parses to compute lexical and structural similarity. YISI-1 (Lo et al., 2018) is similar to MEANT 2.0, but makes the use of semantic parses optional. Both methods use a relatively simple similarity computation, which inspires our approach, including using greedy matching (Corley & Mihalcea, 2005) and experimenting with a similar importance weighting to YISI-1. However, we use contextual embeddings, which capture the specific use of a token in a sentence, and potentially capture sequence information. We do not use external tools to generate linguistic structures, which makes our approach relatively simple and portable to new languages. Instead of greedy matching, WMD (Kusner et al., 2015), WMDO (Chow et al., 2019), and SMS (Clark et al., 2019) propose to use optimal matching based on earth mover’s distance (Rubner et al., 1998). The tradeoff1 between greedy and optimal matching was studied by Rus & Lintean (2012). Sharma et al. (2018) compute similarity with sentence-level representations. In contrast, our token-level computation allows us to weigh tokens differently according to their importance. 
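Before moving on to learned metrics, the exact-match scores from Section 2.1 can be illustrated with a short sketch. This is our own toy example, not any of the cited toolkits: it treats n-grams as tuples over whitespace tokens and omits BLEU's clipping and brevity penalty.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of all n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def exact_p_r(reference, candidate, n=1):
    """Exact-P_n and Exact-R_n as defined in Section 2.1."""
    ref, cand = ngrams(reference.split(), n), ngrams(candidate.split(), n)
    matched_p = sum(c for g, c in cand.items() if g in ref)   # candidate n-grams present in the reference
    matched_r = sum(c for g, c in ref.items() if g in cand)   # reference n-grams present in the candidate
    return matched_p / max(sum(cand.values()), 1), matched_r / max(sum(ref.values()), 1)

# Only "cars" overlaps, so both unigram precision and recall are 0.25
print(exact_p_r("people like foreign cars", "consumers prefer imported cars", n=1))
```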
2.4 LEARNED METRICS

Various metrics are trained to optimize correlation with human judgments. BEER (Stanojević & Sima'an, 2014) uses a regression model based on character n-grams and word bigrams. BLEND (Ma et al., 2017) uses regression to combine 29 existing metrics. RUSE (Shimanaka et al., 2018) combines three pre-trained sentence embedding models. All these methods require costly human judgments as supervision for each dataset, and risk poor generalization to new domains, even within a known language and task (Chaganty et al., 2018). Cui et al. (2018) and Lowe et al. (2017) train a neural model to predict if the input text is human-generated. This approach also has the risk of being optimized to existing data and generalizing poorly to new data. In contrast, the model underlying BERTSCORE is not optimized for any specific evaluation task.

# 3 BERTSCORE

Given a reference sentence x = (x1, . . . , xk) and a candidate sentence x̂ = (x̂1, . . . , x̂l), we use contextual embeddings to represent the tokens, and compute matching using cosine similarity, optionally weighted with inverse document frequency scores. Figure 1 illustrates the computation.

Figure 1: Illustration of the computation of the recall metric RBERT. Given the reference x and candidate x̂, we compute BERT embeddings and pairwise cosine similarity. We highlight the greedy matching in red, and include the optional idf importance weighting.

Token Representation We use contextual embeddings to represent the tokens in the input sentences x and x̂. In contrast to prior word embeddings (Mikolov et al., 2013; Pennington et al., 2014), contextual embeddings, such as BERT (Devlin et al., 2019) and ELMO (Peters et al., 2018), can generate different vector representations for the same word in different sentences depending on the surrounding words, which form the context of the target word. The models used to generate these embeddings are most commonly trained using various language modeling objectives, such as masked word prediction (Devlin et al., 2019).

We experiment with different models (Section 4), using the tokenizer provided with each model. Given a tokenized reference sentence x = (x1, . . . , xk), the embedding model generates a sequence of vectors (x1, . . . , xk). Similarly, the tokenized candidate x̂ = (x̂1, . . . , x̂m) is mapped to (x̂1, . . . , x̂m). The main model we use is BERT, which tokenizes the input text into a sequence of word pieces (Wu et al., 2016), where unknown words are split into several commonly observed sequences of characters. The representation for each word piece is computed with a Transformer encoder (Vaswani et al., 2017) by repeatedly applying self-attention and nonlinear transformations in an alternating fashion. BERT embeddings have been shown to benefit various NLP tasks (Devlin et al., 2019; Liu, 2019; Huang et al., 2019; Yang et al., 2019a).

Similarity Measure The vector representation allows for a soft measure of similarity instead of exact-string (Papineni et al., 2002) or heuristic (Banerjee & Lavie, 2005) matching. The cosine similarity of a reference token xi and a candidate token x̂j is $\frac{\mathbf{x}_i^\top \hat{\mathbf{x}}_j}{\|\mathbf{x}_i\|\,\|\hat{\mathbf{x}}_j\|}$. We use pre-normalized vectors, which reduces this calculation to the inner product $\mathbf{x}_i^\top \hat{\mathbf{x}}_j$. While this measure considers tokens in isolation, the contextual embeddings contain information from the rest of the sentence.

1 We provide an ablation study of this design choice in Appendix C.
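As a small illustration of this similarity measure (ours, not the released implementation): once the token embeddings are L2-normalized, the full matrix of pairwise cosine similarities is a single matrix product.

```python
import torch
import torch.nn.functional as F

def similarity_matrix(ref_emb: torch.Tensor, cand_emb: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between reference and candidate token embeddings.

    ref_emb: (k, d) reference token vectors; cand_emb: (l, d) candidate token vectors.
    After normalization, cosine similarity reduces to an inner product, so the
    returned (k, l) matrix has entry (i, j) = cos(x_i, x_hat_j).
    """
    return F.normalize(ref_emb, dim=-1) @ F.normalize(cand_emb, dim=-1).T
```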
BERTSCORE The complete score matches each token in x to a token in x̂ to compute recall, and each token in x̂ to a token in x to compute precision. We use greedy matching to maximize the matching similarity score,2 where each token is matched to the most similar token in the other sentence. We combine precision and recall to compute an F1 measure. For a reference x and candidate x̂, the recall, precision, and F1 scores are:

$$R_{\mathrm{BERT}} = \frac{1}{|x|}\sum_{x_i \in x}\max_{\hat{x}_j \in \hat{x}} \mathbf{x}_i^\top \hat{\mathbf{x}}_j, \qquad P_{\mathrm{BERT}} = \frac{1}{|\hat{x}|}\sum_{\hat{x}_j \in \hat{x}}\max_{x_i \in x} \mathbf{x}_i^\top \hat{\mathbf{x}}_j, \qquad F_{\mathrm{BERT}} = 2\,\frac{P_{\mathrm{BERT}} \cdot R_{\mathrm{BERT}}}{P_{\mathrm{BERT}} + R_{\mathrm{BERT}}}.$$

Importance Weighting Previous work on similarity measures demonstrated that rare words can be more indicative for sentence similarity than common words (Banerjee & Lavie, 2005; Vedantam et al., 2015). BERTSCORE enables us to easily incorporate importance weighting. We experiment with inverse document frequency (idf) scores computed from the test corpus. Given M reference sentences $\{x^{(i)}\}_{i=1}^{M}$, the idf score of a word-piece token w is

$$\mathrm{idf}(w) = -\log \frac{1}{M}\sum_{i=1}^{M} \mathbb{I}[w \in x^{(i)}],$$

where $\mathbb{I}[\cdot]$ is an indicator function. We do not use the full tf-idf measure because we process single sentences, where the term frequency (tf) is likely 1. For example, recall with idf weighting is

$$R_{\mathrm{BERT}} = \frac{\sum_{x_i \in x} \mathrm{idf}(x_i)\,\max_{\hat{x}_j \in \hat{x}} \mathbf{x}_i^\top \hat{\mathbf{x}}_j}{\sum_{x_i \in x} \mathrm{idf}(x_i)}.$$

Because we use reference sentences to compute idf, the idf scores remain the same for all systems evaluated on a specific test set. We apply plus-one smoothing to handle unknown word pieces.

2 We compare greedy matching with optimal assignment in Appendix C.

Baseline Rescaling Because we use pre-normalized vectors, our computed scores have the same numerical range of cosine similarity (between −1 and 1). However, in practice we observe scores in a more limited range, potentially because of the learned geometry of contextual embeddings. While this characteristic does not impact BERTSCORE's capability to rank text generation systems, it makes the actual score less readable. We address this by rescaling BERTSCORE with respect to its empirical lower bound b as a baseline. We compute b using Common Crawl monolingual datasets.3 For each language and contextual embedding model, we create 1M candidate-reference pairs by grouping two random sentences. Because of the random pairing and the corpus diversity, each pair has very low lexical and semantic overlapping.4 We compute b by averaging BERTSCORE computed on these sentence pairs. Equipped with baseline b, we rescale BERTSCORE linearly. For example, the rescaled value $\hat{R}_{\mathrm{BERT}}$ of $R_{\mathrm{BERT}}$ is:

$$\hat{R}_{\mathrm{BERT}} = \frac{R_{\mathrm{BERT}} - b}{1 - b}.$$

After this operation $\hat{R}_{\mathrm{BERT}}$ is typically between 0 and 1. We apply the same rescaling procedure for $P_{\mathrm{BERT}}$ and $F_{\mathrm{BERT}}$. This method does not affect the ranking ability and human correlation of BERTSCORE, and is intended solely to increase the score readability.

# 4 EXPERIMENTAL SETUP

We evaluate our approach on machine translation and image captioning.

Contextual Embedding Models We evaluate twelve pre-trained contextual embedding models, including variants of BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019b), XLNet (Yang et al., 2019b), and XLM (Lample & Conneau, 2019). We present the best-performing models in Section 5. We use the 24-layer RoBERTa-large model5 for English tasks, the 12-layer BERT-Chinese model for Chinese tasks, and the 12-layer cased multilingual BERT model for other languages.6 We show the performance of all other models in Appendix F.
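Pulling the pieces of Section 3 together, the greedy matching itself fits in a few lines. The sketch below is our illustration, not the released implementation: it assumes L2-normalized token embeddings as input, shows idf weighting for recall only (precision is weighted analogously over candidate tokens), and uses a hypothetical baseline argument for the rescaling step.

```python
import torch

def bert_score(ref_emb, cand_emb, ref_idf=None, baseline=0.0):
    """Greedy-matching BERTScore from pre-normalized token embeddings.

    ref_emb: (k, d) reference vectors; cand_emb: (l, d) candidate vectors;
    ref_idf: optional (k,) idf weights for reference tokens (uniform if None);
    baseline: empirical lower bound b used for the optional linear rescaling.
    """
    sim = ref_emb @ cand_emb.T                                   # (k, l) cosine similarities
    weights = torch.ones(sim.size(0)) if ref_idf is None else ref_idf
    recall = (weights * sim.max(dim=1).values).sum() / weights.sum()
    precision = sim.max(dim=0).values.mean()                     # best reference match per candidate token
    f1 = 2 * precision * recall / (precision + recall)
    rescale = lambda s: (s - baseline) / (1 - baseline)
    return rescale(precision), rescale(recall), rescale(f1)
```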
Contextual embedding models generate embedding representations at every layer in the encoder network. Past work has shown that intermediate layers produce more effective representations for semantic tasks (Liu et al., 2019a). We use the WMT16 dataset (Bojar et al., 2016) as a validation set to select the best layer of each model (Appendix B).

Machine Translation Our main evaluation corpus is the WMT18 metric evaluation dataset (Ma et al., 2018), which contains predictions of 149 translation systems across 14 language pairs, gold references, and two types of human judgment scores. Segment-level human judgments assign a score to each reference-candidate pair. System-level human judgments associate each system with a single score based on all pairs in the test set. WMT18 includes translations from English to Czech, German, Estonian, Finnish, Russian, and Turkish, and from the same set of languages to English. We follow the WMT18 standard practice and use absolute Pearson correlation |ρ| and Kendall rank correlation τ to evaluate metric quality, and compute significance with the Williams test (Williams, 1959) for |ρ| and bootstrap re-sampling for τ as suggested by Graham & Baldwin (2014). We compute system-level scores by averaging BERTSCORE for every reference-candidate pair. We also experiment with hybrid systems by randomly sampling one candidate sentence from one of the available systems for each reference sentence (Graham & Liu, 2016). This enables system-level experiments with a higher number of systems. Human judgments of each hybrid system are created by averaging the WMT18 segment-level human judgments for the corresponding sentences in the sampled data. We compare BERTSCOREs to one canonical metric for each category introduced in Section 2, and include the comparison with all other participating metrics from WMT18 in Appendix F.

In addition to the standard evaluation, we design model selection experiments. We use 10K hybrid systems super-sampled from WMT18. We randomly select 100 out of 10K hybrid systems, and rank them using the automatic metrics. We repeat this process 100K times. We report the percentage of the metric ranking agreeing with the human ranking on the best system (Hits@1). In Tables 23–28, we include two additional measures to the model selection study: (a) the mean reciprocal rank of the top metric-rated system according to the human ranking, and (b) the difference between the human score of the top human-rated system and that of the top metric-rated system. Additionally, we report the same study on the WMT17 (Bojar et al., 2017) and the WMT16 (Bojar et al., 2016) datasets in Appendix F.7 This adds 202 systems to our evaluation.

3 https://commoncrawl.org/
4 BLEU computed on these pairs is around zero.
5 We use the tokenizer provided with each model. For all Hugging Face models that use the GPT-2 tokenizer, at the time of our experiments, the tokenizer adds a space to the beginning of each sentence.
6 All the models used are from https://github.com/huggingface/pytorch-transformers.
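As a sketch of this evaluation protocol (our own, with invented variable names rather than the official WMT tooling), the system-level agreement numbers reported below only require one metric score and one human score per system:

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr

def system_level_agreement(metric_scores, human_scores, n_trials=1000, sample_size=100, seed=0):
    """Absolute Pearson |rho|, Kendall tau, and a Hits@1-style model-selection rate."""
    metric_scores, human_scores = np.asarray(metric_scores), np.asarray(human_scores)
    abs_rho = abs(pearsonr(metric_scores, human_scores)[0])
    tau = kendalltau(metric_scores, human_scores)[0]
    rng, hits = np.random.default_rng(seed), 0
    for _ in range(n_trials):
        idx = rng.choice(len(metric_scores), size=min(sample_size, len(metric_scores)), replace=False)
        # the metric scores a "hit" when its top-ranked system is also the top system under human judgment
        hits += int(metric_scores[idx].argmax() == human_scores[idx].argmax())
    return abs_rho, tau, hits / n_trials
```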
| Language pair | BLEU | ITER | RUSE | YiSi-1 | PBERT | RBERT | FBERT | FBERT (idf) |
|---|---|---|---|---|---|---|---|---|
| en↔cs (5/5) | .970/.995 | .975/.915 | .981/– | .950/.987 | .980/.994 | .998/.997 | .990/.997 | .985/.995 |
| en↔de (16/16) | .971/.981 | .990/.984 | .997/– | .992/.985 | .998/.988 | .997/.990 | .999/.989 | .999/.990 |
| en↔et (14/14) | .986/.975 | .975/.981 | .990/– | .979/.979 | .990/.981 | .986/.980 | .990/.982 | .992/.981 |
| en↔fi (9/12) | .973/.962 | .996/.973 | .991/– | .973/.940 | .995/.957 | .997/.980 | .998/.972 | .992/.972 |
| en↔ru (8/9) | .979/.983 | .937/.975 | .988/– | .991/.992 | .982/.990 | .995/.989 | .990/.990 | .991/.991 |
| en↔tr (5/8) | .657/.826 | .861/.865 | .853/– | .958/.976 | .791/.935 | .054/.879 | .499/.908 | .826/.941 |
| en↔zh (14/14) | .978/.947 | .980/– | .981/– | .951/.963 | .981/.954 | .990/.976 | .988/.967 | .989/.973 |

Table 1: Absolute Pearson correlations with system-level human judgments on WMT18. For each language pair, the left number is the to-English correlation, and the right is the from-English. We bold correlations of metrics not significantly outperformed by any other metric under the Williams test for that language pair and direction. The numbers in parentheses are the number of systems used for each language pair and direction.

| Language pair | BLEU | ITER | RUSE | YiSi-1 | PBERT | RBERT | FBERT | FBERT (idf) |
|---|---|---|---|---|---|---|---|---|
| en↔cs | .956/.993 | .966/.865 | .974/– | .942/.985 | .965/.989 | .989/.995 | .978/.993 | .982/.995 |
| en↔de | .969/.977 | .990/.978 | .996/– | .991/.983 | .995/.983 | .997/.991 | .998/.988 | .998/.988 |
| en↔et | .981/.971 | .975/.982 | .988/– | .976/.976 | .990/.970 | .982/.979 | .989/.978 | .988/.979 |
| en↔fi | .962/.958 | .989/.966 | .983/– | .964/.938 | .976/.951 | .989/.977 | .983/.969 | .989/.969 |
| en↔ru | .972/.977 | .943/.965 | .982/– | .985/.989 | .976/.988 | .988/.989 | .985/.989 | .983/.987 |
| en↔tr | .586/.796 | .742/.872 | .780/– | .881/.942 | .846/.936 | .540/.872 | .760/.910 | .453/.877 |
| en↔zh | .968/.941 | .978/– | .973/– | .943/.957 | .975/.950 | .981/.980 | .981/.969 | .980/.963 |

Table 2: Absolute Pearson correlations with system-level human judgments on WMT18. We use 10K hybrid super-sampled systems for each language pair and direction. For each language pair, the left number is the to-English correlation, and the right is the from-English. Bolding criteria are the same as in Table 1.

Image Captioning We use the human judgments of twelve submission entries from the COCO 2015 Captioning Challenge. Each participating system generates a caption for each image in the COCO validation set (Lin et al., 2014), and each image has approximately five reference captions. Following Cui et al. (2018), we compute the Pearson correlation with two system-level metrics: the percentage of captions that are evaluated as better or equal to human captions (M1) and the percentage of captions that are indistinguishable from human captions (M2). We compute BERTSCORE with multiple references by scoring the candidate with each available reference and returning the highest score. We compare with eight task-agnostic metrics: BLEU (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004), CIDER (Vedantam et al., 2015), BEER (Stanojević & Sima'an, 2014), EED (Stanchev et al., 2019), CHRF++ (Popović, 2017), and CHARACTER (Wang et al., 2016).
We also compare with two task-specific metrics: SPICE (Anderson et al., 2016) and LEIC (Cui et al., 2018). SPICE is computed using the similarity of scene graphs parsed from the reference and candidate captions. LEIC is trained to predict if a caption is written by a human given the image.

7 For WMT16, we only conduct segment-level experiments on to-English pairs due to errors in the dataset.

| Language pair | BLEU | ITER | RUSE | YiSi-1 | PBERT | RBERT | FBERT | FBERT (idf) |
|---|---|---|---|---|---|---|---|---|
| en↔cs | .134/.151 | .154/.000 | .214/– | .159/.178 | .173/.180 | .163/.184 | .175/.184 | .179/.178 |
| en↔de | .803/.610 | .814/.692 | .823/– | .809/.671 | .706/.663 | .804/.730 | .824/.703 | .824/.722 |
| en↔et | .756/.618 | .742/.733 | .785/– | .749/.671 | .764/.771 | .770/.722 | .769/.763 | .760/.764 |
| en↔fi | .461/.088 | .475/.111 | .487/– | .467/.230 | .498/.078 | .494/.148 | .501/.082 | .503/.082 |
| en↔ru | .228/.519 | .234/.532 | .248/– | .248/.544 | .255/.545 | .260/.542 | .262/.544 | .265/.539 |
| en↔tr | .095/.029 | .102/.030 | .109/– | .108/.398 | .140/.372 | .005/.030 | .142/.031 | .004/.030 |
| en↔zh | .658/.515 | .673/– | .670/– | .613/.594 | .661/.551 | .677/.657 | .673/.629 | .678/.595 |

Table 3: Model selection accuracies (Hits@1) on WMT18 hybrid systems. We report the average of 100K samples, and the 0.95 confidence intervals are below 10−3. We bold the highest numbers for each language pair and direction.

| Language pair | BLEU | ITER | RUSE | YiSi-1 | PBERT | RBERT | FBERT | FBERT (idf) |
|---|---|---|---|---|---|---|---|---|
| en↔cs (5k/5k) | .233/.389 | .198/.333 | .347/– | .319/.496 | .387/.541 | .388/.570 | .404/.562 | .408/.553 |
| en↔de (78k/20k) | .415/.620 | .396/.610 | .498/– | .488/.691 | .541/.715 | .546/.728 | .550/.728 | .550/.721 |
| en↔et (57k/32k) | .285/.414 | .235/.392 | .368/– | .351/.546 | .389/.549 | .391/.594 | .397/.586 | .395/.585 |
| en↔fi (16k/10k) | .154/.355 | .128/.311 | .273/– | .231/.504 | .283/.486 | .304/.565 | .296/.546 | .293/.537 |
| en↔ru (10k/22k) | .228/.330 | .139/.291 | .311/– | .300/.407 | .345/.414 | .343/.420 | .353/.423 | .346/.425 |
| en↔tr (9k/1k) | .145/.261 | -.029/.236 | .259/– | .234/.418 | .280/.328 | .290/.411 | .292/.399 | .296/.406 |
| en↔zh (33k/29k) | .178/.311 | .144/– | .218/– | .211/.323 | .248/.337 | .255/.367 | .264/.364 | .260/.366 |

Table 4: Kendall correlations with segment-level human judgments on WMT18. For each language pair, the left number is the to-English correlation, and the right is the from-English. We bold correlations of metrics not significantly outperformed by any other metric under bootstrap sampling for that language pair and direction. The numbers in parentheses are the number of candidate-reference sentence pairs for each language pair and direction.

# 5 RESULTS

Machine Translation Tables 1–3 show system-level correlation to human judgements, correlations on hybrid systems, and model selection performance. We observe that BERTSCORE is consistently a top performer. In to-English results, RUSE (Shimanaka et al., 2018) shows competitive performance. However, RUSE is a supervised method trained on WMT16 and WMT15 human judgment data. In cases where RUSE models were not made available, such as for our from-English experiments, it is not possible to use RUSE without additional data and training.

Table 4 shows segment-level correlations. We see that BERTSCORE exhibits significantly higher performance compared to the other metrics. The large improvement over BLEU stands out, making BERTSCORE particularly suitable to analyze specific examples, where SENTBLEU is less reliable. In Appendix A, we provide qualitative examples to illustrate the segment-level performance difference between SENTBLEU and BERTSCORE. At the segment-level, BERTSCORE even significantly outperforms RUSE.
Overall, we find that applying importance weighting using idf at times provides small benefit, but in other cases does not help. Understanding better when such importance weighting is likely to help is an important direction for future work, and likely depends on the domain of the text and the available test data. We continue without idf weighting for the rest of our experiments. While recall RBERT, precision PBERT, and F1 FBERT alternate as the best measure in different settings, F1 FBERT performs reliably well across all the different settings. Our overall recommendation is therefore to use F1. We present additional results using the full set of 351 systems and evaluation metrics in Tables 12–28 in the appendix, including for experiments with idf importance weighting, different contextual embedding models, and model selection.

Image Captioning Table 5 shows correlation results for the COCO Captioning Challenge. BERTSCORE outperforms all task-agnostic baselines by large margins. Image captioning presents a challenging evaluation scenario, and metrics based on strict n-gram matching, including BLEU and ROUGE, show weak correlations with human judgments. idf importance weighting shows significant benefit for this task, suggesting people attribute higher importance to content words. Finally, LEIC (Cui et al., 2018), a trained metric that takes images as additional inputs and is optimized specifically for the COCO data and this set of systems, outperforms all other methods.

| Metric | M1 | M2 |
|---|---|---|
| BLEU | -0.019* | -0.005* |
| METEOR | 0.606* | 0.594* |
| ROUGE-L | 0.090* | 0.096* |
| CIDER | 0.438* | 0.440* |
| SPICE | 0.759* | 0.750* |
| LEIC | 0.939* | 0.949* |
| BEER | 0.491 | 0.562 |
| EED | 0.545 | 0.599 |
| CHRF++ | 0.702 | 0.729 |
| CHARACTER | 0.800 | 0.801 |
| PBERT | -0.105 | -0.041 |
| RBERT | 0.888 | 0.863 |
| FBERT | 0.322 | 0.350 |
| RBERT (idf) | 0.917 | 0.889 |

Table 5: Pearson correlation on the 2015 COCO Captioning Challenge. The M1 and M2 measures are described in Section 4. LEIC uses images as additional inputs. Numbers with ∗ are cited from Cui et al. (2018). We bold the highest correlations of task-specific and task-agnostic metrics.

| Type | Method | QQP | PAWSQQP |
|---|---|---|---|
| Trained on QQP (supervised) | DecAtt | 0.939* | 0.263 |
| | DIIN | 0.952* | 0.324 |
| | BERT | 0.963* | 0.351 |
| Trained on QQP + PAWSQQP (supervised) | DecAtt | – | 0.511 |
| | DIIN | – | 0.778 |
| | BERT | – | 0.831 |
| Metric (not trained on QQP or PAWSQQP) | BLEU | 0.707 | 0.527 |
| | METEOR | 0.755 | 0.532 |
| | ROUGE-L | 0.740 | 0.536 |
| | CHRF++ | 0.577 | 0.608 |
| | BEER | 0.741 | 0.564 |
| | EED | 0.743 | 0.611 |
| | CHARACTER | 0.698 | 0.650 |
| | PBERT | 0.757 | 0.687 |
| | RBERT | 0.744 | 0.685 |
| | FBERT | 0.761 | 0.685 |
| | FBERT (idf) | 0.777 | 0.693 |

Table 6: Area under ROC curve (AUC) on QQP and PAWSQQP datasets. The scores of trained DecAtt (Parikh et al., 2016), DIIN (Gong et al., 2018), and fine-tuned BERT are reported by Zhang et al. (2019). Numbers with ∗ are scores on the held-out test set of QQP. We bold the highest correlations of task-specific and task-agnostic metrics.

Speed Despite the use of a large pre-trained model, computing BERTSCORE is relatively fast. We are able to process 192.5 candidate-reference pairs/second using a GTX-1080Ti GPU. The complete WMT18 en-de test set, which includes 2,998 sentences, takes 15.6sec to process, compared to 5.4sec with SacreBLEU (Post, 2018), a common BLEU implementation. Given the sizes of commonly used test and validation sets, the increase in processing time is relatively marginal, and BERTSCORE is a good fit for use during validation (e.g., for stopping) and testing, especially when compared to the time costs of other development stages.
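For readers who want to reproduce this kind of scoring run, the repository linked in Section 1 ships a pip-installable package with a single scoring entry point. The snippet below reflects the interface advertised in that repository's README; treat the package name, argument names, and defaults as assumptions that may change between versions.

```python
# pip install bert-score   (package name as advertised in the repository README)
from bert_score import score

candidates = ["people like visiting places abroad", "consumers prefer imported cars"]
references = ["people like foreign cars", "people like foreign cars"]

# Returns torch tensors with one precision/recall/F1 value per candidate-reference pair.
P, R, F1 = score(candidates, references, lang="en")
print(F1.tolist())
```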
# 6 ROBUSTNESS ANALYSIS

We test the robustness of BERTSCORE using adversarial paraphrase classification. We use the Quora Question Pair corpus (QQP; Iyer et al., 2017) and the adversarial paraphrases from the Paraphrase Adversaries from Word Scrambling dataset (PAWS; Zhang et al., 2019). Both datasets contain pairs of sentences labeled to indicate whether they are paraphrases or not. Positive examples in QQP are real duplicate questions, while negative examples are related, but different questions. Sentence pairs in PAWS are generated through word swapping. For example, in PAWS, Flights from New York to Florida may be changed to Flights from Florida to New York and a good classifier should identify that these two sentences are not paraphrases. PAWS includes two parts: PAWSQQP, which is based on the QQP data, and PAWSWiki. We use the PAWSQQP development set which contains 667 sentences. For the automatic metrics, we use no paraphrase detection training data. We expect that pairs with higher scores are more likely to be paraphrases. To evaluate the automatic metrics on QQP, we use the first 5,000 sentences in the training set instead of the test set because the test labels are not available. We treat the first sentence as the reference and the second sentence as the candidate.

Table 6 reports the area under ROC curve (AUC) for existing models and automatic metrics. We observe that supervised classifiers trained on QQP perform worse than random guess on PAWSQQP, which shows these models predict the adversarial examples are more likely to be paraphrases. When
For non-English language, the multilingual BERTmulti is a suitable choice although BERTSCORE computed with this model has less stable performance on low-resource languages. We report the optimal hyperparameter for all models we experimented with in Appendix B Briefly following our initial preprint publication, Zhao et al. (2019) published a concurrently devel- oped method related to ours, but with a focus on integrating contextual word embeddings with earth mover’s distance (EMD; Rubner et al., 1998) rather than our simple matching process. They also propose various improvements compared to our use of contextualized embeddings. We study these improvements in Appendix C and show that integrating them into BERTSCORE makes it equivalent or better than the EMD-based approach. Largely though, the effect of the different improvements on BERTSCORE is more modest compared to their method. Shortly after our initial publication, YiSi-1 was updated to use BERT embeddings, showing improved performance (Lo, 2019). This further corroborates our findings. Other recent related work includes training a model on top of BERT to maximize the correlation with human judgments (Mathur et al., 2019) and evaluating gen- eration with a BERT model fine-tuned on paraphrasing (Yoshimura et al., 2019). More recent work shows the potential of using BERTSCORE for training a summarization system (Li et al., 2019) and for domain-specific evaluation using SciBERT (Beltagy et al., 2019) to evaluate abstractive text summarization (Gabriel et al., 2019). In future work, we look forward to designing new task-specific metrics that use BERTSCORE as a subroutine and accommodate task-specific needs, similar to how Wieting et al. (2019) suggests to use semantic similarity for machine translation training. Because BERTSCORE is fully differentiable, it also can be incorporated into a training procedure to compute a learning loss that reduces the mismatch between optimization and evaluation objectives. # ACKNOWLEDGEMENT This research is supported in part by grants from the National Science Foundation (III-1618134, III- 1526012, IIS1149882, IIS-1724282, TRIPODS-1740822, CAREER-1750499), the Office of Naval Research DOD (N00014-17-1-2175), and the Bill and Melinda Gates Foundation, SAP, Zillow, Workday, and Facebook Research. We thank Graham Neubig and David Grangier for for their insightful comments. We thank the Cornell NLP community including but not limited to Claire Cardie, Tianze Shi, Alexandra Schofield, Gregory Yauney, and Rishi Bommasani. We thank Yin Cui and Guandao Yang for their help with the COCO 2015 dataset. 9 Published as a conference paper at ICLR 2020 # REFERENCES Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. SPICE: Semantic proposi- tional image caption evaluation. In ECCV, 2016. Ben Athiwaratkun, Andrew Wilson, and Anima Anandkumar. Probabilistic fasttext for multi-sense word embeddings. In ACL, 2018. Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for mt evaluation with im- proved correlation with human judgments. In IEEvaluation@ACL, 2005. Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A pretrained language model for scientific text. ArXiv, 2019. Ondˇrej Bojar, Yvette Graham, Amir Kamran, and MiloÅ¡ Stanojevi´c. Results of the WMT16 metrics shared task. In WMT, 2016. Ondˇrej Bojar, Yvette Graham, and Amir Kamran. Results of the WMT17 metrics shared task. In WMT, 2017. Arun Chaganty, Stephen Mussmann, and Percy Liang. 
The price of debiasing automatic metrics in natural language evalaution. In ACL, 2018. Julian Chow, Lucia Specia, and Pranava Madhyastha. WMDO: Fluency-based word mover’s dis- tance for machine translation evaluation. In WMT, 2019. Elizabeth Clark, Asli Celikyilmaz, and Noah A. Smith. Sentence mover’s similarity: Automatic evaluation for multi-sentence texts. In ACL, 2019. Courtney Corley and Rada Mihalcea. Measuring the semantic similarity of texts. In ACL Workshop, EMSEE ’05, 2005. Yin Cui, Guandao Yang, Andreas Veit, Xun Huang, and Serge J. Belongie. Learning to evaluate image captioning. In CVPR, 2018. Michael Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. In WMT@ACL, 2014. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019. George Doddington. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In HLT, 2002. William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In IWP, 2005. Saadia Gabriel, Antoine Bosselut, Ari Holtzman, Kyle Lo, Asli Çelikyilmaz, and Yejin Choi. Co- operative generator-discriminator networks for abstractive summarization with narrative flow. ArXiv, 2019. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Mar- garet Mitchell, Jianfeng Gao, and William B. Dolan. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In ACL, 2015. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In ICML, 2017. Yichen Gong, Heng Luo, and Jian Zhang. Natural language inference over interaction space. In ICLR, 2018. Yvette Graham and Timothy Baldwin. Testing for significance of increased correlation with human judgment. In EMNLP, 2014. 10 Published as a conference paper at ICLR 2020 Yvette Graham and Qun Liu. Achieving accurate conclusions in evaluation of automatic machine translation metrics. In NAACL, 2016. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. Learning word vectors for 157 languages. arXiv preprint arXiv:1802.06893, 2018. Yinuo Guo and Junfeng Hu. Meteor++ 2.0: Adopt syntactic level paraphrase knowledge into ma- chine translation evaluation. In WMT, 2019. Tatsu Hashimoto, Hugh Zhang, and Percy Liang. Unifying human and statistical evaluation for natural language generation. In NAACL-HLT, 2019. Chenyang Huang, Amine Trabelsi, and Osmar R Zaïane. ANA at semeval-2019 task 3: Contex- tual emotion detection in conversations through hierarchical LSTMs and BERT. arXiv preprint arXiv:1904.00132, 2019. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. Automatic evaluation of translation quality for distant language pairs. In EMNLP, 2010. Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. First quora dataset release: Question pairs. https://tinyurl.com/y2y8u5ed, 2017. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In ACL, 2007. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. In ICML, 2015. 
Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv, 2019.

Gregor Leusch, Nicola Ueffing, and Hermann Ney. CDER: Efficient MT evaluation using block movements. In EACL, 2006.

Vladimir Iosifovich Levenshtein. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10, 1966.

Siyao Li, Deren Lei, Pengda Qin, and William Yang Wang. Deep reinforcement learning with distributional semantic rewards for abstractive summarization. In EMNLP-IJCNLP, 2019.

Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In ACL, 2004.

Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.

Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. Linguistic knowledge and transferability of contextual representations. arXiv preprint arXiv:1903.08855, 2019a.

Yang Liu. Fine-tune BERT for extractive summarization. arXiv preprint arXiv:1903.10318, 2019.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv, abs/1907.11692, 2019b.

Chi-kiu Lo. MEANT 2.0: Accurate semantic mt evaluation for any output language. In WMT, 2017.

Chi-kiu Lo. YiSi - a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources. In WMT, 2019.

Chi-kiu Lo, Michel Simard, Darlene Stewart, Samuel Larkin, Cyril Goutte, and Patrick Littell. Accurate semantic textual similarity for cleaning noisy parallel corpora using semantic machine translation evaluation metric: The NRC supervised submissions to the parallel corpus filtering task. In WMT, 2018.

Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. Towards an automatic Turing test: Learning to evaluate dialogue responses. In ACL, 2017.

Qingsong Ma, Yvette Graham, Shugen Wang, and Qun Liu. Blend: a novel combined mt metric based on direct assessment – casict-dcu submission to WMT17 metrics task. In WMT, 2017.

Qingsong Ma, Ondřej Bojar, and Yvette Graham. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance. In WMT, 2018.

Nitika Mathur, Timothy Baldwin, and Trevor Cohn. Putting evaluation in context: Contextual embeddings improve machine translation evaluation. In ACL, 2019.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.

Dai Quoc Nguyen, Dat Quoc Nguyen, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. A mixture model for learning multi-sense word embeddings. In ACL, 2017.

Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In WMT, 2018.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038, 2019.

Joybrata Panja and Sudip Kumar Naskar. Iter: Improving translation edit rate through optimizable edit costs. In WMT, 2018.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In ACL, 2002.
Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In EMNLP, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In EMNLP, 2014.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. Deep contextualized word representations. In NAACL-HLT, 2018.

Maja Popović. chrf: character n-gram f-score for automatic mt evaluation. In WMT@ACL, 2015.

Maja Popović. chrf++: words helping character n-grams. In WMT, 2017.

Matt Post. A call for clarity in reporting BLEU scores. In WMT, 2018.

Nils Reimers and Iryna Gurevych. Alternative weighting schemes for elmo embeddings. arXiv preprint arXiv:1904.02954, 2019.

Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. A metric for distributions with applications to image databases. In ICCV. IEEE, 1998.

Vasile Rus and Mihai Lintean. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP. ACL, 2012.

Andreas Rücklé, Steffen Eger, Maxime Peyrard, and Iryna Gurevych. Concatenated power mean word embeddings as universal cross-lingual sentence representations. arXiv, 2018.

Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation. arXiv preprint arXiv:1706.09799, 2018.

Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. Ruse: Regressor using sentence embeddings for automatic machine translation evaluation. In WMT, 2018.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. A study of translation edit rate with targeted human annotation. In AMTA, 2006.

Peter Stanchev, Weiyue Wang, and Hermann Ney. EED: Extended edit distance measure for machine translation. In WMT, 2019.

Miloš Stanojević and Khalil Sima’an. Beer: Better evaluation as ranking. In WMT, 2014.

Christoph Tillmann, Stephan Vogel, Hermann Ney, Arkaitz Zubiaga, and Hassan Sawaf. Accelerated dp based search for statistical translation. In EUROSPEECH, 1997.

Kristina Toutanova, Chris Brockett, Ke M Tran, and Saleema Amershi. A dataset and evaluation metrics for abstractive compression of sentences and short paragraphs. In EMNLP, 2016.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.

Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. CIDEr: Consensus-based image description evaluation. In CVPR, 2015.

Weiyue Wang, Jan-Thorsten Peter, Hendrik Rosendahl, and Hermann Ney. Character: Translation edit rate on character level. In WMT, 2016.

John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. Beyond BLEU: Training neural machine translation with semantic similarity. In ACL, 2019.

Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In ACL, 2018.

Evan James Williams. Regression analysis. Wiley, 1959.

Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In ICLR, 2019.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V.
Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. Wei Yang, Haotian Zhang, and Jimmy Lin. Simple applications of BERT for ad hoc document retrieval. arXiv preprint arXiv:1903.10972, 2019a. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv, 2019b. Ryoma Yoshimura, Hiroki Shimanaka, Yukio Matsumura, Hayahide Yamagishi, and Mamoru Ko- machi. Filtering pseudo-references by paraphrasing for automatic evaluation of machine transla- tion. In WMT, 2019. 13 Published as a conference paper at ICLR 2020 Yuan Zhang, Jason Baldridge, and Luheng He. PAWS: Paraphrase adversaries from word scram- bling. arXiv preprint arXiv:1904.01130, 2019. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance. In EMNLP, 2019. 14 Published as a conference paper at ICLR 2020 Case No. Reference and Candidate Pairs Human U E L B > T R E B F 1. 2. 3. 4. 5. x: At the same time Kingfisher is closing 60 B&Q outlets across the country ˆx: At the same time, Kingfisher will close 60 B & Q stores nationwide x: Hewlett-Packard to cut up to 30,000 jobs ˆx: Hewlett-Packard will reduce jobs up to 30.000 x: According to opinion in Hungary, Serbia is “a safe third country". ˆx: According to Hungarian view, Serbia is a “safe third country." x: Experts believe November’s Black Friday could be holding back spending. ˆx: Experts believe that the Black Friday in November has put the brakes on spending x: And it’s from this perspective that I will watch him die. ˆx: And from this perspective, I will see him die. 38 119 23 73 37 125 39 96 147 111 T R E B F > U E L B 6. 8. 7. 9. 10. x: In their view the human dignity of the man had been violated. ˆx: Look at the human dignity of the man injured. x: For example when he steered a shot from Ideye over the crossbar in the 56th minute. ˆx: So, for example, when he steered a shot of Ideye over the latte (56th). x: A good prank is funny, but takes moments to reverse. ˆx: A good prank is funny, but it takes only moments before he becomes a boomerang. x: I will put the pressure on them and onus on them to make a decision. ˆx: I will exert the pressure on it and her urge to make a decision. x: Transport for London is not amused by this flyposting "vandalism." ˆx: Transport for London is the Plaka animal "vandalism" is not funny. 500 516 495 507 527 470 524 424 471 527 n a m u H > T R E B F 11. 12. 13. 14. 15. x: One big obstacle to access to the jobs market is the lack of knowledge of the German language. ˆx: A major hurdle for access to the labour market are a lack of knowledge of English. x: On Monday night Hungary closed its 175 km long border with Serbia. ˆx: Hungary had in the night of Tuesday closed its 175 km long border with Serbia. x: They got nothing, but they were allowed to keep the clothes. ˆx: You got nothing, but could keep the clothes. 
x: A majority of Republicans don’t see Trump’s temperament as a problem. ˆx: A majority of Republicans see Trump’s temperament is not a problem. x:His car was still running in the driveway. ˆx: His car was still in the driveway. 558 413 428 290 299 131 135 174 34 49 T R E B F > n a m u H 16. 17. 18. 19. 20. x: Currently the majority of staff are men. ˆx: At the moment the men predominate among the staff. x: There are, indeed, multiple variables at play. ˆx: In fact, several variables play a role. x: One was a man of about 5ft 11in tall. ˆx: One of the men was about 1,80 metres in size. x: All that stuff sure does take a toll. ˆx: All of this certainly exacts its toll. x: Wage gains have shown signs of picking up. ˆx: Increases of wages showed signs of a recovery. 77 30 124 90 140 525 446 551 454 464 BLEU 530 441 465 492 414 115 185 152 220 246 313 55 318 134 71 553 552 528 547 514 Table 7: Examples sentences where similarity ranks assigned by Human, FBERT, and BLEU differ significantly on WMT16 German-to-English evaluation task. x: gold reference, ˆx: candidate outputs of MT systems. Rankings assigned by Human, FBERT, and BLEU are shown in the right three columns. The sentences are ranked by the similarity, i.e. rank 1 is the most similar pair assigned by a score. An ideal metric should rank similar to humans. # A QUALITATIVE ANALYSIS We study BERTSCORE and SENTBLEU using WMT16 German-to-English (Bojar et al., 2016). We rank all 560 candidate-reference pairs by human score, BERTSCORE, or SENTBLEU from most similar to least similar. Ideally, the ranking assigned by BERTSCORE and SENTBLEU should be similar to the ranking assigned by the human score. Table 7 first shows examples where BERTSCORE and SENTBLEU scores disagree about the ranking for a candidate-reference pair by a large number. We observe that BERTSCORE is effectively able to capture synonyms and changes in word order. For example, the reference and candidate sentences in pair 3 are almost identical except that the candidate replaces opinion in Hungary with Hungarian view and switches the order of the quotation mark (“) and a. While BERTSCORE ranks the pair relatively high, SENTBLEU judges the pair as dissimilar, because it cannot match synonyms and is sensitive to the small word order changes. Pair 5 shows a set of changes that preserve the semantic meaning: replacing to cut with will reduce and swapping the order of 30,000 and jobs. BERTSCORE ranks the candidate translation similar to the human judgment, whereas SENTBLEU ranks it much lower. We also see that SENTBLEU potentially over-rewards n-gram overlap, even when phrases are used very differently. In pair 6, both the candidate and the reference contain the human dignity of the man. Yet the two sentences convey very different meaning. BERTSCORE agrees with the human judgment and ranks the pair low. In contrast, SENTBLEU considers the pair as relatively similar because of the significant word overlap. 15 Published as a conference paper at ICLR 2020 Reference: people enjoy driving foreign cars . Reference: where did you live before you moved to prague ? Candidate: Bo candidate: | | CO ccc Most Similar Least Similar Figure 2: BERTSCORE visualization. The cosine similarity of each word matching in PBERT are color-coded. The bottom half of Table 7 shows examples where BERTSCORE and human judgments disagree about the ranking. We observe that BERTSCORE finds it difficult to detect factual errors. 
For example, BERTSCORE assigns high similarity to pair 11, where the translation replaces German language with English, and to pair 12, where the translation incorrectly outputs Tuesday when it is supposed to generate Monday. BERTSCORE also fails to identify that 5ft 11in is equivalent to 1.80 metres in pair 18. As a result, BERTSCORE assigns low similarity to the eighth pair in Table 7. SENTBLEU also suffers from these limitations.

Figure 2 visualizes the BERTSCORE matching of two pairs of candidate and reference sentences. The figure illustrates how FBERT matches synonymous phrases, such as imported cars and foreign cars. We also see that FBERT effectively matches words even given a high ordering distortion, for example the token people in the figure.

# B REPRESENTATION CHOICE

As suggested by previous works (Peters et al., 2018; Reimers & Gurevych, 2019), selecting a good layer or a good combination of layers from the BERT model is important. In designing BERTSCORE, we use WMT16 segment-level human judgment data as a development set to facilitate our representation choice. For Chinese models, we tune with the WMT17 “en-zh” data because the language pair “en-zh” is not available in WMT16. In Figure 3, we plot the change of human correlation of FBERT over different layers of BERT, RoBERTa, XLNet, and XLM models. Based on results from different models, we identify a common trend that FBERT computed with intermediate representations tends to work better. We tune the number of layers to use for a range of publicly available models.8 Table 8 shows the results of our hyperparameter search.

Model                            Total Number of Layers   Best Layer
bert-base-uncased                12                       9
bert-large-uncased               24                       18
bert-base-cased-finetuned-mrpc   12                       9
bert-base-multilingual-cased     12                       9
bert-base-chinese                12                       8
roberta-base                     12                       10
roberta-large                    24                       17
roberta-large-mnli               24                       19
xlnet-base-cased                 12                       5
xlnet-large-cased                24                       7
xlm-mlm-en-2048                  12                       7
xlm-mlm-100-1280                 16                       11

Table 8: Recommended layer of representation to use for BERTSCORE. The layers are chosen based on a held-out validation set (WMT16).

8 https://huggingface.co/pytorch-transformers/pretrained_models.html

[Figure 3: five panels (BERT models, RoBERTa models, XLNet models, XLM models, BERT-Chinese); x-axis: layer, y-axis: Pearson correlation of FBERT with human assessment.]

Figure 3: Pearson correlation of FBERT computed with different models, across different layers, with segment-level human judgments on the WMT16 to-English machine translation task. The WMT17 English-Chinese data is used for the BERT Chinese model. Layer 0 corresponds to using BPE embeddings. Consistently, correlation drops significantly in the final layers.
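The layer choices in Table 8 can be applied directly when extracting token embeddings. The snippet below is a minimal sketch, not the reference BERTSCORE implementation: it assumes the current HuggingFace transformers API (the paper links the older pytorch-transformers model list), and the model name and layer index simply follow Table 8; the helper name and variables are our own.

```python
# Minimal sketch: extract layer-17 hidden states of roberta-large
# (the best layer reported in Table 8) for a batch of sentences.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "roberta-large"
BEST_LAYER = 17  # from Table 8

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def embed(sentences, layer=BEST_LAYER):
    """Return L2-normalized token embeddings from the chosen layer."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**batch)
    # hidden_states is a tuple of (num_layers + 1) tensors; index 0 is the
    # embedding layer, so index `layer` is the output of transformer layer `layer`.
    hidden = outputs.hidden_states[layer]
    hidden = torch.nn.functional.normalize(hidden, dim=-1)
    return hidden, batch["attention_mask"]

emb, mask = embed(["people enjoy driving foreign cars ."])
print(emb.shape)  # (batch, seq_len, 1024) for roberta-large
```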
# C ABLATION STUDY OF MOVERSCORE

Word Mover’s Distance (WMD; Kusner et al., 2015) is a semantic similarity metric that relies on word embeddings and optimal transport. MOVERSCORE (Zhao et al., 2019) combines contextual embeddings and WMD for text generation evaluation. In contrast, BERTSCORE adopts a greedy approach to aggregate token-level information. In addition to using WMD for generation evaluation, Zhao et al. (2019) also introduce various other improvements. We do a detailed ablation study to understand the benefit of each improvement, and to investigate whether it can be applied to BERTSCORE. We use a 12-layer uncased BERT model on the WMT17 to-English segment-level data, the same setting as Zhao et al. (2019).

We identify several differences between MOVERSCORE and BERTSCORE by analyzing the released source code. We isolate each difference, and mark it with a bracketed tag for our ablation study:

1. [MNLI] Use a BERT model fine-tuned on MNLI (Williams et al., 2018).
2. [PMEANS] Apply power means (Rücklé et al., 2018) to aggregate the information of different layers.9
3. [IDF-L] For reference sentences, instead of computing the idf scores on the 560 sentences in the segment-level data ([IDF-S]), compute the idf scores on the 3,005 sentences in the system-level data.
4. [SEP] For candidate sentences, recompute the idf scores on the candidate sentences. The weighting of reference tokens is kept the same as in [IDF-S].
5. [RM] Exclude punctuation marks and sub-word tokens, except the first sub-word in each word, from the matching.

We follow the setup of Zhao et al. (2019) and use their released fine-tuned BERT model to conduct the experiments. Table 9 shows the results of our ablation study. We report correlations for the two variants of WMD that Zhao et al. (2019) study: unigrams (WMD1) and bigrams (WMD2). Our FBERT corresponds to the vanilla setting, and the importance-weighted variant corresponds to the [IDF-S] setting. The complete MOVERSCORE metric corresponds to [IDF-S]+[SEP]+[PMEANS]+[MNLI]+[RM]. We make several observations. First, for all language pairs except fi-en and lv-en, we can replicate the reported performance. For these two language pairs, Zhao et al. (2019) did not release their implementations at the time of publication.10 Second, we confirm the effectiveness of [PMEANS] and [MNLI]. In Appendix F, we study more pre-trained models and further corroborate this conclusion. However, the contribution of other techniques, including [RM] and [SEP], seems less stable. Third, replacing greedy matching with WMD does not lead to consistent improvement. In fact, oftentimes BERTSCORE is the better metric when given the same setup. In general, for any given language pair, BERTSCORE is always among the best performing ones. Given the current results, it is not clear that WMD is better than greedy matching for text generation evaluation.

9 Zhao et al. (2019) use the embeddings from the last five layers of BERT and L2-normalize the embedding vectors at each layer before computing the P-MEANs and L2-normalizing the concatenated P-MEANs.
10 A public comment on the project page indicates that some of the techniques are not applied for these two language pairs (https://github.com/AIPHES/emnlp19-moverscore/issues/1).
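To make the contrast with the optimal-transport formulation concrete, the sketch below is our own illustrative implementation of the greedy matching used by BERTSCORE (not the released code): it assumes pre-computed, L2-normalized token embeddings for one reference–candidate pair and optional importance weights such as the idf weights of the [IDF-S] setting. No transport plan is solved; each token simply takes the similarity of its single best match.

```python
import torch

def greedy_bertscore(ref_emb, cand_emb, ref_idf=None, cand_idf=None):
    """Greedy-matching BERTScore from pre-computed token embeddings.

    ref_emb:  (num_ref_tokens, dim)  L2-normalized reference embeddings
    cand_emb: (num_cand_tokens, dim) L2-normalized candidate embeddings
    ref_idf / cand_idf: optional importance weights (e.g. idf scores)
    Returns (precision, recall, F1) as scalar tensors.
    """
    # Cosine similarity between every reference / candidate token pair.
    sim = ref_emb @ cand_emb.T  # (num_ref, num_cand)

    # Each reference token greedily takes its best candidate match (recall side),
    # and each candidate token its best reference match (precision side).
    recall_scores = sim.max(dim=1).values     # (num_ref,)
    precision_scores = sim.max(dim=0).values  # (num_cand,)

    if ref_idf is None:
        ref_idf = torch.ones_like(recall_scores)
    if cand_idf is None:
        cand_idf = torch.ones_like(precision_scores)

    recall = (ref_idf * recall_scores).sum() / ref_idf.sum()
    precision = (cand_idf * precision_scores).sum() / cand_idf.sum()
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```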
19 Published as a conference paper at ICLR 2020 Ablation Metric cs-en de-en fi-en lv-en ru-en tr-en zh-en Vanilla WMD1 WMD2 FBERT 0.628 0.638 0.659 0.655 0.661 0.680 0.795 0.797 0.817 0.692 0.695 0.702 0.701 0.700 0.719 0.715 0.728 0.727 0.699 0.714 0.717 IDF-S WMD1 WMD2 FBERT 0.636 0.643 0.657 0.662 0.662 0.681 0.824 0.821 0.823 0.709 0.708 0.713 0.716 0.712 0.725 0.728 0.732 0.718 0.713 0.715 0.711 IDF-L WMD1 WMD2 FBERT 0.633 0.641 0.655 0.659 0.661 0.682 0.825 0.822 0.823 0.708 0.708 0.713 0.716 0.713 0.726 0.727 0.730 0.718 0.715 0.716 0.712 IDF-L + SEP WMD1 WMD2 FBERT 0.651 0.659 0.664 0.660 0.662 0.681 0.819 0.816 0.818 0.703 0.702 0.709 0.714 0.712 0.724 0.724 0.729 0.716 0.715 0.715 0.710 IDF-L + SEP + RM WMD1 WMD2 FBERT 0.651 0.664 0.659 0.686 0.687 0.695 0.803 0.797 0.800 0.681 0.679 0.683 0.730 0.728 0.734 0.730 0.735 0.722 0.720 0.718 0.712 IDF-L + SEP + PMEANS WMD1 WMD2 FBERT 0.658 0.667 0.671 0.663 0.665 0.682 0.820 0.817 0.819 0.707 0.707 0.708 0.717 0.717 0.725 0.725 0.727 0.715 0.712 0.712 0.704 IDF-L + SEP + MNLI WMD1 WMD2 FBERT 0.659 0.664 0.668 0.679 0.682 0.701 0.822 0.819 0.825 0.732 0.731 0.737 0.718 0.715 0.727 0.746 0.748 0.744 0.725 0.722 0.725 IDF-L + SEP + PMEANS + MNLI WMD1 WMD2 FBERT 0.672 0.677 0.682 0.686 0.690 0.707 0.831 0.828 0.836 0.738 0.736 0.741 0.725 0.722 0.732 0.753 0.755 0.751 0.737 0.735 0.736 IDF-L + SEP + PMEANS + MNLI + RM WMD1 WMD2 FBERT 0.670 0.679 0.676 0.708 0.709 0.717 0.821 0.814 0.824 0.717 0.716 0.719 0.738 0.736 0.740 0.762 0.762 0.757 0.744 0.738 0.738 Table 9: Ablation Study of MOVERSCORE and BERTSCORE using Pearson correlations on the WMT17 to-English segment-level data. Correlations that are not outperformed by others for that language pair under Williams Test are bolded. We observe that using WMD does not consistently improve BERTSCORE. 20 Published as a conference paper at ICLR 2020 Type Metric Meaning Grammar Combined BERTSCORE PBERT RBERT FBERT 0.36 0.64 0.58 0.47 0.29 0.41 0.46 0.52 0.56 Common metrics BLEU METEOR ROUGE-L SARI 0.46 0.53 0.51 0.50 0.13 0.11 0.16 0.15 0.33 0.36 0.38 0.37 Best metrics according to Toutanova et al. (2016) SKIP-2+RECALL+MULT-PROB PARSE-2+RECALL+MULT-MAX PARSE-2+RECALL+MULT-PROB 0.59 N/A 0.57 N/A 0.35 0.35 0.51 0.52 0.52 Table 10: Pearson correlations with human judgments on the MSR Abstractive Text Compression Dataset. # D ADDITIONAL EXPERIMENTS ON ABSTRACTIVE TEXT COMPRESSION We use the human judgments provided from the MSR Abstractive Text Compression Dataset (Toutanova et al., 2016) to illustrate the applicability of BERTSCORE to abstractive text compression evaluation. The data includes three types of human scores: (a) meaning: how well a compressed text preserve the meaning of the original text; (b) grammar: how grammatically correct a compressed text is; and (c) combined: the average of the meaning and the grammar scores. We follow the experimental setup of Toutanova et al. (2016) and report Pearson correlation between BERTSCORE and the three types of human scores. Table 10 shows that RBERT has the highest cor- relation with human meaning judgments, and PBERT correlates highly with human grammar judg- ments. FBERT provides a balance between the two aspects. 
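The correlations reported in Table 10 (and throughout this appendix) are standard correlations between metric scores and human scores over the same examples. As a small, purely hypothetical illustration of this evaluation protocol (the score arrays below are placeholders, not data from the paper), both Pearson and Kendall correlations can be computed with scipy:

```python
# Illustrative only: computing metric-to-human correlations of the kind
# reported in Tables 10 and 12-22. The score arrays are made-up placeholders.
from scipy.stats import pearsonr, kendalltau

human_scores  = [0.10, 0.45, 0.30, 0.80, 0.65]   # e.g. human judgments per segment
metric_scores = [0.52, 0.61, 0.57, 0.83, 0.74]   # e.g. F_BERT for the same segments

pearson_r, _ = pearsonr(human_scores, metric_scores)
kendall_tau, _ = kendalltau(human_scores, metric_scores)
print(f"Pearson r = {pearson_r:.3f}, Kendall tau = {kendall_tau:.3f}")
```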
Task            Model                                  BLEU   ˆPBERT  ˆRBERT  ˆFBERT  PBERT   RBERT   FBERT
WMT14 En-De     ConvS2S (Gehring et al., 2017)         0.266  0.6099  0.6055  0.6075  0.8499  0.8482  0.8488
                Transformer-big∗∗ (Ott et al., 2018)   0.298  0.6587  0.6528  0.6558  0.8687  0.8664  0.8674
                DynamicConv∗∗∗ (Wu et al., 2019)       0.297  0.6526  0.6464  0.6495  0.8664  0.8640  0.8650
WMT14 En-Fr     ConvS2S (Gehring et al., 2017)         0.408  0.6998  0.6821  0.6908  0.8876  0.8810  0.8841
                Transformer-big (Ott et al., 2018)     0.432  0.7148  0.6978  0.7061  0.8932  0.8869  0.8899
                DynamicConv (Wu et al., 2019)          0.432  0.7156  0.6989  0.7071  0.8936  0.8873  0.8902
IWSLT14 De-En   Transformer-iwslt+ (Ott et al., 2019)  0.350  0.6749  0.6590  0.6672  0.9452  0.9425  0.9438
                LightConv (Wu et al., 2019)            0.348  0.6737  0.6542  0.6642  0.9450  0.9417  0.9433
                DynamicConv (Wu et al., 2019)          0.352  0.6770  0.6586  0.6681  0.9456  0.9425  0.9440

Table 11: BLEU scores and BERTSCOREs of publicly available pre-trained MT models in fairseq (Ott et al., 2019). We show both rescaled scores marked with ˆ and raw BERTSCOREs. ∗: trained on unconfirmed WMT data version, ∗∗: trained on WMT16 + ParaCrawl, ∗∗∗: trained on WMT16, +: trained by us using fairseq.

# E BERTSCORE OF RECENT MT MODELS

Table 11 shows the BLEU scores and the BERTSCOREs of pre-trained machine translation models on the WMT14 English-to-German, WMT14 English-to-French, and IWSLT14 German-to-English tasks. We used publicly available pre-trained models from fairseq (Ott et al., 2019).11 Because a pre-trained Transformer model on IWSLT is not released, we trained our own using the fairseq library. We use multilingual cased BERTbase12 for English-to-German and English-to-French pairs, and English uncased BERTbase13 for German-to-English pairs. Interestingly, the gap between a DynamicConv (Wu et al., 2019) trained on only WMT16 and a Transformer (Ott et al., 2018) trained on WMT16 and ParaCrawl14 (about 30× more training data) becomes larger when evaluated with BERTSCORE rather than BLEU.

11 Code and pre-trained model available at https://github.com/pytorch/fairseq.
12 Hash code: bert-base-multilingual-cased_L9_version=0.2.0
13 Hash code: roberta-large_L17_version=0.2.0
14 http://paracrawl.eu/download.html

# F ADDITIONAL RESULTS

In this section, we present additional experimental results:

1. Segment-level and system-level correlation studies on three years of the WMT metrics evaluation task (WMT16–18)
2. Model selection study on WMT18 10K hybrid systems
3. System-level correlation study on the 2015 COCO captioning challenge
4. Robustness study on PAWS-QQP.

Following BERT (Devlin et al., 2019), a variety of Transformer-based (Vaswani et al., 2017) pre-trained contextual embeddings have been proposed and released. We conduct additional experiments with four types of pre-trained embeddings: BERT, XLM (Lample & Conneau, 2019), XLNet (Yang et al., 2019b), and RoBERTa (Liu et al., 2019b). XLM (Cross-lingual Language Model) is a Transformer pre-trained on a translation language modeling task, which predicts masked tokens from a pair of sentences in two different languages, and on masked language modeling, using multi-lingual training data. Yang et al. (2019b) modify the Transformer architecture and pre-train it on a permutation language modeling task, resulting in some improvement on top of the original BERT when fine-tuned on several downstream tasks.
Liu et al. (2019b) introduce RoBERTa (Robustly optimized BERT approach) and demonstrate that an optimized BERT model is comparable to, or sometimes outperforms, an XLNet on downstream tasks. We perform a comprehensive study with the following pre-trained contextual embedding models:15

• BERT models: bert-base-chinese, bert-base-cased-mrpc, bert-base-uncased, bert-large-uncased, and bert-base-multilingual-cased
• RoBERTa models: roberta-base, roberta-large, and roberta-large-mnli
• XLNet models: xlnet-base-cased and xlnet-large-cased
• XLM models: xlm-mlm-en-2048 and xlm-mlm-100-1280

15 Denoted by names specified at https://huggingface.co/pytorch-transformers/pretrained_models.html.

F.1 WMT CORRELATION STUDY

Experimental setup Because of missing data in the released WMT16 dataset (Bojar et al., 2016), we are only able to experiment with to-English segment-level data, which contains the outputs of 50 different systems on 6 language pairs. We use this data as the validation set for hyperparameter tuning (Appendix B). Table 12 shows the Pearson correlations of all participating metrics and BERTSCOREs computed with different pre-trained models. Significance testing for this dataset does not include the baseline metrics because the released dataset does not contain the original outputs from the baseline metrics. We conduct significance testing between BERTSCORE results only.

The WMT17 dataset (Bojar et al., 2017) contains outputs of 152 different translation systems on 14 language pairs. We experiment on the segment-level and system-level data on both to-English and from-English language pairs. We exclude fi-en data from the segment-level experiment due to an error in the released data. We compare our results to all participating metrics and perform standard significance testing as done by Bojar et al. (2017). Tables 13–16 show the results.

The WMT18 dataset (Ma et al., 2018) contains outputs of 159 translation systems on 14 language pairs. In addition to the results in Tables 1–4, we complement the study with the correlations of all participating metrics in WMT18 and results from using different contextual models for BERTSCORE.

Results Tables 12–22 collectively showcase the effectiveness of BERTSCORE in correlating with human judgments. The improvement of BERTSCORE is more pronounced at the segment level than at the system level. We also see that more optimized or larger BERT models can produce better contextual representations (e.g., comparing FRoBERTa–Large and FBERT–Large). In contrast, the smaller XLNet performs better than a large one. Based on the evidence in Figure 8 and Tables 12–22, we
In general, we advise users to consider the target domain and languages when selecting the exact configuration to use. F.2 MODEL SELECTION STUDY Experimental setup Similar to Section 4, we use the 10K hybrid systems super-sampled from WMT18. We randomly select 100 out of 10K hybrid systems, rank them using automatic metrics, and repeat this process 100K times. We add to the results in the main paper (Table 3) performance of all participating metrics in WMT18 and results from using different contextual embedding models for BERTSCORE. We reuse the hybrid configuration and metric outputs released in WMT18. In addition to the Hits@1 measure, we evaluate the metrics using (a) mean reciprocal rank (MRR) of the top metric-rated system in human rankings, and (b) the absolute human score difference (Diff) between the top metric- and human-rated systems. Hits@1 captures a metric’s ability to select the best system. The other two measures quantify the amount of error a metric makes in the selection process. Tables 23–28 show the results from these experiments. Results The additional results further support our conclusion from Table 3: BERTSCORE demon- strates better model selection performance. We also observe that the supervised metric RUSE dis- plays strong model selection ability. IMAGE CAPTIONING ON COCO We follow the experimental setup described in Section 4. Table 29 shows the correlations of several pre-trained contextual embeddings. We observe that precision-based methods such as BLEU and PBERT are weakly correlated with human judgments on image captioning tasks. We hypothesize that this is because human judges prefer captions that capture the main objects in a picture for image captioning. In general, RBERT has a high correlation, even surpassing the task-specific metric SPICE Anderson et al. (2016). While the fine-tuned RoBERTa-Large model does not result in the highest correlation, it is one of the best metrics. F.4 ROBUSTNESS ANALYSIS ON PAWS-QQP We present the full results of the robustness study described in Section 6 in Table 30. In general, we observe that BERTSCORE is more robust than other commonly used metrics. BERTSCORE computed with the 24-layer RoBERTa model performs the best. Fine-tuning RoBERTa-Large on MNLI (Williams et al., 2018) can significantly improve the robustness against adversarial sentences. However, a fine-tuned BERT on MRPC (Microsoft Research Paraphrasing Corpus) (Dolan & Brock- ett, 2005) performs worse than its counterpart. 24 Published as a conference paper at ICLR 2020 Metric cs-en 560 de-en 560 fi-en 560 ro-en 560 ru-en 560 tr-en 560 DPMFCOMB METRICS-F COBALT-F. UPF-COBA. 
MPEDA CHRF2 CHRF3 CHRF1 UOW-REVAL WORDF3 WORDF2 WORDF1 SENTBLEU DTED 0.713 0.696 0.671 0.652 0.644 0.658 0.660 0.644 0.577 0.599 0.596 0.585 0.557 0.394 0.584 0.601 0.591 0.550 0.538 0.457 0.455 0.454 0.528 0.447 0.445 0.435 0.448 0.254 0.598 0.557 0.554 0.490 0.513 0.469 0.472 0.452 0.471 0.473 0.471 0.464 0.484 0.361 0.627 0.662 0.639 0.616 0.587 0.581 0.582 0.570 0.547 0.525 0.522 0.508 0.499 0.329 0.615 0.618 0.618 0.556 0.545 0.534 0.535 0.522 0.528 0.504 0.503 0.497 0.502 0.375 0.663 0.649 0.627 0.626 0.616 0.556 0.555 0.551 0.531 0.536 0.537 0.535 0.532 0.267 BEER 0.661 0.462 0.471 0.551 0.533 0.545 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (no idf) RBERT–Base (no idf) FBERT–Base (no idf) 0.729 0.741 0.747 0.723 0.745 0.747 0.617 0.639 0.640 0.638 0.656 0.663 0.719 0.616 0.661 0.662 0.638 0.666 0.651 0.693 0.723 0.700 0.697 0.714 0.684 0.660 0.672 0.633 0.653 0.662 0.678 0.660 0.688 0.696 0.674 0.703 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.697 0.723 0.725 0.713 0.727 0.735 0.618 0.636 0.644 0.613 0.631 0.637 0.614 0.587 0.617 0.630 0.573 0.620 0.676 0.667 0.691 0.693 0.666 0.700 0.62 0.648 0.654 0.635 0.642 0.658 0.695 0.664 0.702 0.691 0.662 0.697 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.756 0.768 0.774 0.758 0.771 0.774 0.671 0.684 0.693 0.653 0.680 0.678 0.701 0.677 0.705 0.704 0.661 0.700 0.723 0.720 0.736 0.734 0.718 0.740 0.678 0.686 0.701 0.685 0.687 0.701 0.706 0.699 0.717 0.705 0.692 0.711 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.738 0.745 0.761 0.751 0.744 0.767 0.642 0.669 0.674 0.626 0.652 0.653 0.671 0.645 0.686 0.678 0.638 0.688 0.712 0.698 0.732 0.723 0.699 0.737 0.669 0.682 0.697 0.685 0.685 0.705 0.671 0.653 0.689 0.668 0.657 0.685 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.757 0.765 0.780 0.771 0.762 0.786 0.702 0.713 0.724 0.682 0.695 0.704 0.709 0.686 0.728 0.705 0.683 0.727 0.735 0.718 0.753 0.727 0.711 0.747 0.721 0.714 0.738 0.714 0.708 0.732 0.676 0.676 0.709 0.681 0.678 0.711 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.777 0.790 0.795 0.794 0.792 0.804 0.718 0.731 0.736 0.695 0.706 0.710 0.733 0.702 0.733 0.731 0.694 0.729 0.744 0.741 0.757 0.752 0.737 0.760 0.729 0.727 0.744 0.732 0.724 0.742 0.747 0.732 0.756 0.747 0.733 0.754 PXLNet–Base RXLNet–Base FXLNet–Base PXLNet–Base (idf) RXLNet–Base (idf) FXLNet–Base (idf) 0.708 0.728 0.727 0.726 0.734 0.739 0.612 0.630 0.631 0.618 0.633 0.633 0.639 0.617 0.640 0.655 0.618 0.649 0.650 0.645 0.659 0.678 0.66 0.681 0.606 0.621 0.626 0.629 0.635 0.643 0.690 0.675 0.695 0.700 0.682 0.702 PXL-NET–LARGE RXL-NET–LARGE FXL-NET–LARGE PXL-NET–LARGE (idf) RXL-NET–LARGE (idf) FXL-NET–LARGE (idf) 0.710 0.732 0.733 0.728 0.735 0.742 0.577 0.600 0.600 0.574 0.592 0.592 0.643 0.610 0.643 0.652 0.597 0.643 0.647 0.636 0.655 0.669 0.642 0.670 0.616 0.627 0.637 0.633 0.629 0.645 0.684 0.668 0.691 0.681 0.662 0.685 # Setting # Unsupervised # Supervised # Pre-Trained # PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.688 0.715 0.713 0.728 0.730 0.739 0.569 0.603 0.597 0.576 0.597 0.594 0.613 0.577 0.610 0.649 0.591 0.636 0.645 0.645 0.657 0.681 0.659 0.682 0.583 0.609 0.610 0.604 0.622 0.626 (0.636-—«0.682—(0.626 0.659 0.644 0.668 0.683 0.669 0.691 Table 12: Pearson 
correlations with segment-level human judgments on WMT16 to-English trans- lations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples. 25 Published as a conference paper at ICLR 2020 Metric cs-en 560 de-en 560 fi-en 560 lv-en 560 ru-en 560 tr-en 560 zh-en 560 CHRF CHRF++ MEANT 2.0 MEANT 2.0-NOSRL SENTBLEU TREEAGGREG UHH_TSKM 0.514 0.523 0.578 0.566 0.435 0.486 0.507 0.531 0.534 0.565 0.564 0.432 0.526 0.479 0.671 0.678 0.687 0.682 0.571 0.638 0.600 0.525 0.520 0.586 0.573 0.393 0.446 0.394 0.599 0.588 0.607 0.591 0.484 0.555 0.465 0.607 0.614 0.596 0.582 0.538 0.571 0.478 0.591 0.593 0.639 0.630 0.512 0.535 0.477 AUTODA BEER BLEND BLEU2VEC NGRAM2VEC 0.499 0.511 0.594 0.439 0.436 0.543 0.530 0.571 0.429 0.435 0.673 0.681 0.733 0.590 0.582 0.533 0.515 0.577 0.386 0.383 0.584 0.577 0.622 0.489 0.490 0.625 0.600 0.671 0.529 0.538 0.583 0.582 0.661 0.526 0.520 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.625 0.653 0.654 0.626 0.652 0.657 0.659 0.645 0.671 0.668 0.658 0.680 0.808 0.782 0.811 0.819 0.789 0.823 0.688 0.662 0.692 0.708 0.678 0.712 0.698 0.678 0.707 0.719 0.696 0.725 0.713 0.716 0.731 0.702 0.703 0.718 0.675 0.715 0.714 0.667 0.712 0.711 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.599 0.613 0.627 0.609 0.611 0.633 0.630 0.620 0.647 0.630 0.628 0.649 0.788 0.754 0.792 0.801 0.759 0.803 0.657 0.616 0.656 0.680 0.633 0.678 0.659 0.650 0.676 0.676 0.665 0.690 0.710 0.685 0.717 0.712 0.687 0.719 0.681 0.705 0.712 0.682 0.703 0.713 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.638 0.661 0.666 0.644 0.665 0.671 0.685 0.676 0.701 0.692 0.686 0.707 0.816 0.782 0.814 0.827 0.796 0.829 0.717 0.693 0.723 0.728 0.712 0.738 0.719 0.705 0.730 0.729 0.729 0.745 0.746 0.744 0.760 0.734 0.733 0.746 0.693 0.730 0.731 0.689 0.730 0.729 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.639 0.648 0.675 0.629 0.652 0.673 0.663 0.652 0.683 0.655 0.646 0.673 0.801 0.768 0.818 0.804 0.773 0.823 0.689 0.651 0.693 0.702 0.667 0.708 0.688 0.669 0.707 0.711 0.676 0.719 0.700 0.684 0.718 0.707 0.689 0.721 0.704 0.734 0.740 0.700 0.734 0.739 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.658 0.685 0.710 0.644 0.683 0.703 0.724 0.714 0.745 0.721 0.705 0.737 0.811 0.778 0.833 0.815 0.783 0.838 0.743 0.711 0.756 0.740 0.718 0.761 0.727 0.718 0.746 0.734 0.720 0.752 0.720 0.713 0.751 0.736 0.726 0.764 0.744 0.759 0.775 0.734 0.751 0.767 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.694 0.706 0.722 0.686 0.697 0.714 0.736 0.725 0.747 0.733 0.717 0.740 0.822 0.785 0.822 0.836 0.796 0.835 0.764 0.732 0.764 0.772 0.741 0.774 0.741 0.741 0.758 0.760 0.753 0.773 0.754 0.750 0.767 0.767 0.757 0.776 0.737 0.760 0.765 0.738 0.762 0.767 PXLNET–Base RXLNET–Base FXLNET–Base PXLNET–Base (idf) RXLNET–Base (idf) FXLNET–Base (idf) 0.595 0.603 0.610 0.616 0.614 0.627 0.579 0.560 0.580 0.603 0.583 0.603 0.779 0.746 0.775 0.795 0.765 0.795 0.632 0.617 0.636 0.665 0.640 0.663 0.626 0.624 0.639 0.659 0.648 0.665 0.688 0.689 0.700 0.693 0.697 0.707 0.646 0.677 0.675 0.649 0.688 0.684 PXLNET–Large RXLNET–Large FXLNET–Large PXLNET–Large (idf) RXLNET–Large 
(idf) FXLNET–Large (idf) 0.620 0.622 0.635 0.635 0.626 0.646 0.622 0.601 0.627 0.633 0.611 0.636 0.796 0.758 0.794 0.808 0.770 0.809 0.648 0.628 0.654 0.673 0.646 0.675 0.648 0.645 0.664 0.672 0.661 0.682 0.694 0.684 0.705 0.688 0.682 0.700 0.660 0.701 0.698 0.649 0.700 0.695 PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.565 0.592 0.595 0.599 0.624 0.630 0.594 0.586 0.605 0.618 0.605 0.624 0.769 0.734 0.768 0.795 0.768 0.798 0.631 0.618 0.641 0.670 0.652 0.676 0.649 0.647 0.664 0.686 0.680 0.698 0.672 0.673 0.686 0.690 0.684 0.698 0.643 0.686 0.683 0.657 0.698 0.694 # Setting # Setting # Unsupervised # Supervised # Pre-Trained Table 13: Absolute Pearson correlations with segment-level human judgments on WMT17 to- English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples. 26 Published as a conference paper at ICLR 2020 Setting Metric en-cs 32K τ en-de 3K τ en-fi 3K τ en-lv 3K τ en-ru 560 |r| en-tr 247 τ Unsupervised AUTODA AUTODA-TECTO CHRF CHRF+ CHRF++ MEANT 2.0 MEANT 2.0-NOSRL SENTBLEU TREEAGGREG 0.041 0.336 0.376 0.377 0.368 - 0.395 0.274 0.361 0.099 - 0.336 0.325 0.328 0.350 0.324 0.269 0.305 0.204 - 0.503 0.514 0.484 - 0.565 0.446 0.509 0.130 - 0.420 0.421 0.417 - 0.425 0.259 0.383 0.511 - 0.605 0.609 0.604 - 0.636 0.468 0.535 0.409 - 0.466 0.474 0.466 - 0.482 0.377 0.441 Supervised BEER BLEND BLEU2VEC NGRAM2VEC 0.398 - 0.305 - 0.336 - 0.313 - 0.557 - 0.503 0.486 0.420 - 0.315 0.317 0.569 0.578 0.472 - 0.490 - 0.425 - PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 0.412 0.443 0.440 0.411 0.449 0.447 0.364 0.430 0.404 0.328 0.416 0.379 0.561 0.587 0.587 0.568 0.591 0.588 0.435 0.480 0.466 0.444 0.479 0.470 0.606 0.663 0.653 0.616 0.665 0.657 0.579 0.571 0.587 0.555 0.579 0.571 Pre-Trained PXLM–100 RXLM–100 FXLM–100 PXLM–100 (idf) RXLM–100 (idf) FXLM–100 (idf) 0.406 0.446 0.444 0.419 0.450 0.448 0.383 0.436 0.424 0.367 0.424 0.419 0.553 0.587 0.577 0.557 0.592 0.580 0.423 0.458 0.456 0.427 0.464 0.459 0.562 0.626 0.613 0.571 0.632 0.617 0.611 0.652 0.628 0.595 0.644 0.644 en-zh 560 |r| 0.609 - 0.608 - 0.602 0.727 0.705 0.642 0.566 0.622 - - - 0.759 0.804 0.806 0.741 0.796 0.793 0.722 0.779 0.778 0.719 0.770 0.771 Table 14: Absolute Pearson correlation (|r|) and Kendall correlation (τ ) with segment-level human judgments on WMT17 from-English translations. Correlations of metrics not significantly outper- formed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples. 
27 Published as a conference paper at ICLR 2020 Metric cs-en 4 de-en 11 fi-en 6 lv-en 9 ru-en 9 tr-en 10 zh-en 16 BLEU CDER CHARACTER CHRF CHRF++ MEANT 2.0 MEANT 2.0-NOSRL NIST PER TER TREEAGGREG UHH_TSKM WER 0.971 0.989 0.972 0.939 0.940 0.926 0.902 1.000 0.968 0.989 0.983 0.996 0.987 0.923 0.930 0.974 0.968 0.965 0.950 0.936 0.931 0.951 0.906 0.920 0.937 0.896 0.903 0.927 0.946 0.938 0.927 0.941 0.933 0.931 0.896 0.952 0.977 0.921 0.948 0.979 0.985 0.932 0.968 0.973 0.970 0.963 0.960 0.962 0.971 0.986 0.990 0.969 0.912 0.922 0.958 0.952 0.945 0.962 0.960 0.912 0.911 0.912 0.918 0.914 0.907 0.976 0.973 0.949 0.944 0.960 0.932 0.896 0.971 0.932 0.954 0.987 0.987 0.925 0.864 0.904 0.799 0.859 0.880 0.838 0.800 0.849 0.877 0.847 0.861 0.902 0.839 AUTODA BEER BLEND BLEU2VEC NGRAM2VEC 0.438 0.972 0.968 0.989 0.984 0.959 0.960 0.976 0.936 0.935 0.925 0.955 0.958 0.888 0.890 0.973 0.978 0.979 0.966 0.963 0.907 0.936 0.964 0.907 0.907 0.916 0.972 0.984 0.961 0.955 0.734 0.902 0.894 0.886 0.880 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.975 0.995 0.987 0.983 0.997 0.992 0.936 0.975 0.961 0.937 0.981 0.967 0.991 0.944 0.979 0.998 0.962 0.995 0.993 0.978 0.991 0.992 0.968 0.992 0.918 0.953 0.937 0.939 0.977 0.960 0.981 0.991 0.991 0.985 0.985 0.996 0.892 0.975 0.953 0.878 0.949 0.951 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.982 0.999 0.994 0.989 0.999 0.997 0.926 0.979 0.957 0.936 0.987 0.968 0.990 0.950 0.986 0.992 0.962 0.995 0.987 0.982 0.994 0.979 0.980 0.997 0.916 0.957 0.938 0.931 0.975 0.956 0.970 0.977 0.980 0.976 0.979 0.989 0.899 0.985 0.960 0.892 0.973 0.963 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.981 0.996 0.990 0.986 0.997 0.994 0.937 0.975 0.960 0.938 0.982 0.965 0.991 0.953 0.981 0.998 0.967 0.993 0.996 0.985 0.995 0.995 0.979 0.995 0.921 0.954 0.938 0.939 0.974 0.958 0.987 0.992 0.992 0.994 0.992 0.998 0.905 0.977 0.957 0.897 0.966 0.959 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.987 0.999 0.996 0.990 0.998 0.996 0.930 0.982 0.961 0.938 0.987 0.970 0.984 0.947 0.993 0.980 0.963 0.999 0.966 0.979 0.993 0.956 0.979 0.994 0.916 0.956 0.937 0.929 0.971 0.952 0.963 0.986 0.983 0.967 0.986 0.989 0.955 0.984 0.982 0.962 0.974 0.982 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.989 0.998 0.996 0.989 0.995 0.996 0.948 0.988 0.973 0.959 0.991 0.982 0.984 0.957 0.997 0.975 0.962 0.998 0.949 0.983 0.991 0.935 0.979 0.991 0.927 0.969 0.949 0.944 0.981 0.965 0.960 0.982 0.984 0.968 0.981 0.991 0.967 0.984 0.987 0.974 0.970 0.984 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.994 0.995 0.999 0.995 0.994 0.999 0.963 0.991 0.982 0.970 0.992 0.989 0.995 0.962 0.992 0.997 0.967 0.996 0.990 0.981 0.996 0.985 0.977 0.997 0.944 0.973 0.961 0.955 0.983 0.972 0.981 0.985 0.988 0.988 0.988 0.994 0.974 0.984 0.989 0.979 0.972 0.987 PXLNET–Base RXLNET–Base FXLNET–Base PXLNET–Base (idf) RXLNET–Base (idf) FXLNET–Base (idf) 0.988 0.999 0.996 0.992 0.999 0.998 0.938 0.978 0.963 0.951 0.986 0.974 0.993 0.956 0.986 0.998 0.968 0.996 0.993 0.977 0.991 0.996 0.973 0.994 0.914 0.946 0.932 0.930 0.964 0.950 0.974 0.981 0.981 0.982 0.987 0.990 0.960 0.980 0.978 0.939 0.955 0.970 PXLNET–Large RXLNET–Large FXLNET–Large PXLNET–Large (idf) 
RXLNET–Large (idf) FXLNET–Large (idf) 0.991 0.996 0.999 0.995 0.993 1.000 0.944 0.981 0.969 0.955 0.985 0.978 0.996 0.945 0.986 0.999 0.951 0.994 0.995 0.971 0.992 0.996 0.960 0.993 0.924 0.961 0.945 0.941 0.975 0.962 0.982 0.986 0.992 0.985 0.974 0.994 0.943 0.958 0.961 0.937 0.910 0.954 # Setting # Unsupervised # Supervised # Pre-Trained # PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.983 0.998 0.994 0.986 0.999 0.995 0.933 0.978 0.960 0.940 0.983 0.967 0.994 0.949 0.985 0.997 0.966 0.996 0.989 0.983 0.995 0.992 0.980 0.998 0.918 0.957 0.938 0.939 0.975 0.959 0.973 0.985 0.984 0.979 0.991 0.993 0.928 0.972 0.964 0.916 0.952 0.958 Table 15: Absolute Pearson correlations with system-level human judgments on WMT17 to-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems. 28 Published as a conference paper at ICLR 2020 Setting Metric en-cs 14 en-de 16 en-lv 17 en-ru 9 en-tr 8 en-zh 11 Unsupervised BLEU CDER CHARACTER CHRF CHRF++ MEANT 2.0 MEANT 2.0-NOSRL NIST PER TER TREEAGGREG UHH_TSKM WER 0.956 0.968 0.981 0.976 0.974 – 0.976 0.962 0.954 0.955 0.947 – 0.954 0.804 0.813 0.938 0.863 0.852 0.858 0.770 0.769 0.687 0.796 0.773 – 0.802 0.866 0.930 0.897 0.955 0.956 – 0.959 0.935 0.851 0.909 0.927 – 0.906 0.898 0.924 0.939 0.950 0.945 – 0.957 0.920 0.887 0.933 0.921 – 0.934 0.924 0.957 0.975 0.991 0.986 – 0.991 0.986 0.963 0.967 0.983 – 0.956 – – 0.933 0.976 0.976 0.956 0.943 – – – 0.938 – – Supervised AUTODA BEER BLEND BLEU2VEC NGRAM2VEC 0.975 0.970 – 0.963 – 0.603 0.842 – 0.810 – 0.729 0.930 – 0.859 0.862 0.850 0.944 0.953 0.903 – 0.601 0.980 – 0.911 – 0.976 0.914 – – – PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 0.959 0.982 0.976 0.963 0.985 0.979 0.798 0.909 0.859 0.760 0.907 0.841 0.960 0.957 0.959 0.960 0.955 0.958 0.946 0.980 0.966 0.947 0.981 0.968 0.981 0.979 0.980 0.984 0.984 0.984 0.970 0.994 0.992 0.971 0.982 0.991 Pre-Trained PXLM–100 RXLM–100 FXLM–100 PXLM–100 (idf) RXLM–100 (idf) FXLM–100 (idf) 0.967 0.980 0.979 0.968 0.981 0.979 0.825 0.902 0.868 0.809 0.894 0.856 0.965 0.965 0.969 0.965 0.964 0.966 0.953 0.982 0.971 0.955 0.984 0.973 0.974 0.977 0.976 0.980 0.983 0.982 0.977 0.979 0.986 0.975 0.968 0.979 Table 16: Absolute Pearson correlations with system-level human judgments on WMT17 from- English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems. 
29 Published as a conference paper at ICLR 2020 Metric cs-en 5K de-en 78K et-en 57K fi-en 16K ru-en 10K tr-en 9K zh-en 33K CHARACTER ITER METEOR++ SENTBLEU UHH_TSKM YISI-0 YISI-1 YISI-1 SRL 0.256 0.198 0.270 0.233 0.274 0.301 0.319 0.317 0.450 0.396 0.457 0.415 0.436 0.474 0.488 0.483 0.286 0.235 0.329 0.285 0.300 0.330 0.351 0.345 0.185 0.128 0.207 0.154 0.168 0.225 0.231 0.237 0.244 0.139 0.253 0.228 0.235 0.294 0.300 0.306 0.172 -0.029 0.204 0.145 0.154 0.215 0.234 0.233 0.202 0.144 0.179 0.178 0.151 0.205 0.211 0.209 BEER BLEND RUSE 0.295 0.322 0.347 0.481 0.492 0.498 0.341 0.354 0.368 0.232 0.226 0.273 0.288 0.290 0.311 0.229 0.232 0.259 0.214 0.217 0.218 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.349 0.370 0.373 0.352 0.368 0.375 0.522 0.528 0.531 0.524 0.536 0.535 0.373 0.378 0.385 0.382 0.388 0.393 0.264 0.291 0.287 0.27 0.300 0.294 0.325 0.333 0.341 0.326 0.340 0.339 0.264 0.257 0.266 0.277 0.284 0.289 0.232 0.244 0.243 0.235 0.244 0.243 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.343 0.370 0.366 0.348 0.379 0.373 0.520 0.524 0.529 0.522 0.531 0.534 0.365 0.373 0.377 0.371 0.383 0.383 0.247 0.277 0.271 0.25 0.285 0.274 0.333 0.34 0.342 0.318 0.339 0.342 0.25 0.261 0.263 0.256 0.266 0.275 0.227 0.244 0.242 0.224 0.242 0.242 PBERT–LARGE RBERT–LARGE FBERT–LARGE PBERT–LARGE (idf) RBERT–LARGE (idf) FBERT–LARGE (idf) 0.361 0.386 0.402 0.377 0.386 0.388 0.529 0.532 0.537 0.532 0.544 0.545 0.380 0.386 0.390 0.390 0.396 0.399 0.276 0.297 0.296 0.287 0.308 0.309 0.340 0.347 0.344 0.342 0.356 0.358 0.266 0.268 0.274 0.292 0.287 0.300 0.241 0.247 0.252 0.246 0.251 0.257 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.368 0.383 0.391 0.379 0.389 0.400 0.53 0.536 0.540 0.528 0.539 0.540 0.371 0.376 0.383 0.372 0.384 0.385 0.274 0.283 0.273 0.261 0.288 0.274 0.318 0.336 0.339 0.314 0.332 0.337 0.265 0.253 0.270 0.265 0.267 0.277 0.235 0.245 0.249 0.232 0.245 0.247 PRoBERTa–LARGE RRoBERTa–LARGE FRoBERTa–LARGE PRoBERTa–LARGE (idf) RRoBERTa–LARGE (idf) FRoBERTa–LARGE (idf) 0.387 0.388 0.404 0.391 0.386 0.408 0.541 0.546 0.550 0.540 0.548 0.550 0.389 0.391 0.397 0.387 0.394 0.395 0.283 0.304 0.296 0.280 0.305 0.293 0.345 0.343 0.353 0.334 0.338 0.346 0.280 0.290 0.292 0.284 0.295 0.296 0.248 0.255 0.264 0.252 0.252 0.260 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.397 0.404 0.418 0.414 0.412 0.417 0.549 0.553 0.557 0.552 0.555 0.559 0.396 0.393 0.402 0.399 0.400 0.403 0.299 0.313 0.312 0.301 0.316 0.309 0.351 0.351 0.362 0.349 0.357 0.357 0.295 0.279 0.290 0.306 0.289 0.307 0.253 0.253 0.258 0.249 0.258 0.258 PXLNet–Base RXLNet–Base FXLNet–Base PXLNet–Base (idf) RXLNet–Base (idf) FXLNet–Base (idf) 0.335 0.351 0.351 0.339 0.364 0.355 0.514 0.515 0.517 0.516 0.521 0.524 0.359 0.362 0.365 0.366 0.371 0.374 0.243 0.261 0.257 0.258 0.268 0.265 0.308 0.311 0.315 0.307 0.317 0.320 0.247 0.227 0.25 0.261 0.242 0.261 0.232 0.232 0.237 0.236 0.238 0.241 PXL-NET–LARGE RXL-NET–LARGE FXL-NET–LARGE PXL-NET–LARGE (idf) RXL-NET–LARGE (idf) FXL-NET–LARGE (idf) 0.344 0.358 0.357 0.348 0.366 0.375 0.522 0.524 0.530 0.520 0.529 0.530 0.371 0.374 0.380 0.373 0.378 0.382 0.252 0.275 0.265 0.260 0.278 0.274 0.316 0.332 0.334 0.319 0.331 0.332 0.264 0.249 0.263 0.265 0.266 0.274 0.233 0.239 0.238 0.235 0.241 0.240 PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En 
(idf) FXLM–En (idf) 0.349 0.358 0.358 0.355 0.362 0.367 0.516 0.518 0.525 0.527 0.528 0.531 0.366 0.364 0.373 0.374 0.376 0.382 0.244 0.264 0.259 0.254 0.274 0.273 0.310 0.320 0.322 0.311 0.333 0.330 0.259 0.244 0.258 0.28 0.26 0.275 0.233 0.237 0.238 0.238 0.24 0.246 # Setting # Unsupervised # Supervised # Pre-Trained Table 17: Kendall correlations with segment-level human judgments on WMT18 to-English transla- tions. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples. 30 Published as a conference paper at ICLR 2020 Setting Metric en-cs 5K en-de 20K en-et 32K en-fi 10K en-ru 22K en-tr 1K en-zh 29K Unsupervised CHARACTER ITER SENTBLEU YISI-0 YISI-1 YISI-1 SRL 0.414 0.333 0.389 0.471 0.496 - 0.604 0.610 0.620 0.661 0.691 0.696 0.464 0.392 0.414 0.531 0.546 - 0.403 0.311 0.355 0.464 0.504 - 0.352 0.291 0.330 0.394 0.407 - 0.404 0.236 0.261 0.376 0.418 - 0.313 - 0.311 0.318 0.323 0.310 Supervised BEER BLEND 0.518 - 0.686 - 0.558 - 0.511 - 0.403 0.394 0.374 - 0.302 - PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 0.541 0.570 0.562 0.525 0.569 0.553 0.715 0.728 0.728 0.7 0.727 0.721 0.549 0.594 0.586 0.54 0.601 0.585 0.486 0.565 0.546 0.495 0.561 0.537 0.414 0.420 0.423 0.423 0.423 0.425 0.328 0.411 0.399 0.352 0.420 0.406 0.337 0.367 0.364 0.338 0.374 0.366 Pre-Trained PXLM–100 RXLM–100 FXLM–100 PXLM–100 (idf) RXLM–100 (idf) FXLM–100 (idf) 0.496 0.564 0.533 0.520 0.567 0.554 0.711 0.724 0.727 0.710 0.722 0.724 0.561 0.612 0.599 0.572 0.609 0.601 0.527 0.584 0.573 0.546 0.587 0.584 0.417 0.418 0.421 0.421 0.420 0.422 0.364 0.432 0.408 0.370 0.439 0.389 0.340 0.363 0.362 0.328 0.365 0.355 Table 18: Kendall correlations with segment-level human judgments on WMT18 from-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of examples. 
31 Published as a conference paper at ICLR 2020 Metric cs-en 5 de-en 16 et-en 14 fi-en 9 ru-en 8 tr-en 5 zh-en 14 BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.970 0.972 0.970 0.975 0.945 0.954 0.970 0.950 0.952 0.951 0.956 0.950 0.965 0.971 0.980 0.993 0.990 0.991 0.984 0.985 0.970 0.980 0.961 0.994 0.992 0.995 0.986 0.990 0.979 0.975 0.978 0.983 0.983 0.990 0.989 0.991 0.975 0.979 0.981 0.973 0.984 0.989 0.996 0.971 0.975 0.993 0.968 0.982 0.961 0.978 0.973 0.977 0.979 0.980 0.991 0.937 0.995 0.973 0.967 0.970 0.980 0.968 0.988 0.991 0.992 0.657 0.664 0.782 0.861 0.864 0.970 0.159 0.533 0.547 0.041 0.954 0.958 0.869 0.978 0.982 0.950 0.980 0.962 0.968 0.931 0.975 0.981 0.975 0.957 0.951 0.962 BEER BLEND RUSE 0.958 0.973 0.981 0.994 0.991 0.997 0.985 0.985 0.990 0.991 0.994 0.991 0.982 0.993 0.988 0.870 0.801 0.853 0.976 0.976 0.981 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.965 0.994 0.982 0.961 0.996 0.981 0.995 0.991 0.994 0.993 0.994 0.995 0.986 0.979 0.983 0.987 0.977 0.984 0.973 0.992 0.986 0.988 0.995 0.995 0.976 0.991 0.985 0.976 0.995 0.988 0.941 0.067 0.949 0.984 0.874 0.994 0.974 0.988 0.984 0.973 0.983 0.981 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.957 0.992 0.975 0.957 0.991 0.975 0.994 0.994 0.995 0.997 0.997 0.998 0.989 0.983 0.987 0.989 0.981 0.987 0.953 0.988 0.975 0.967 0.994 0.985 0.976 0.993 0.986 0.975 0.993 0.987 0.798 0.707 0.526 0.894 0.052 0.784 0.977 0.990 0.986 0.980 0.987 0.987 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.978 0.997 0.989 0.977 0.998 0.989 0.992 0.990 0.992 0.992 0.993 0.993 0.987 0.985 0.987 0.988 0.983 0.986 0.971 0.990 0.983 0.986 0.996 0.993 0.977 0.992 0.985 0.976 0.995 0.987 0.920 0.098 0.784 0.980 0.809 0.976 0.978 0.990 0.986 0.977 0.986 0.984 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.970 0.996 0.984 0.966 0.995 0.981 0.995 0.996 0.997 0.993 0.998 0.998 0.991 0.982 0.989 0.991 0.981 0.989 0.998 0.998 0.999 0.994 0.998 0.997 0.976 0.994 0.987 0.977 0.995 0.988 0.796 0.477 0.280 0.880 0.230 0.741 0.980 0.991 0.989 0.984 0.989 0.990 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.980 0.998 0.990 0.972 0.996 0.985 0.998 0.997 0.999 0.997 0.997 0.999 0.990 0.986 0.990 0.993 0.984 0.992 0.995 0.997 0.998 0.985 0.997 0.992 0.982 0.995 0.990 0.982 0.995 0.991 0.791 0.054 0.499 0.920 0.578 0.826 0.981 0.990 0.988 0.983 0.989 0.989 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.989 1.000 0.996 0.986 0.999 0.995 0.998 0.996 0.998 0.998 0.997 0.998 0.994 0.988 0.992 0.994 0.986 0.991 0.998 0.996 0.998 0.993 0.997 0.996 0.985 0.995 0.992 0.986 0.993 0.993 0.908 0.097 0.665 0.989 0.633 0.963 0.982 0.991 0.989 0.985 0.990 0.990 PXLNET–Base RXLNET–Base FXLNET–Base PXLNET–Base (idf) RXLNET–Base (idf) FXLNET–Base (idf) 0.970 0.994 0.983 0.968 0.993 0.981 0.996 0.997 0.997 0.998 0.998 0.999 0.986 0.979 0.983 0.986 0.978 0.984 0.990 0.995 0.993 0.990 0.996 0.995 0.979 0.994 0.987 0.978 0.994 0.989 0.739 0.795 0.505 0.923 0.439 0.722 0.982 0.990 0.988 0.982 0.988 0.988 PXLNET–Large RXLNET–Large FXLNET–Large PXLNET–Large (idf) RXLNET–Large (idf) FXLNET–Large (idf) 0.969 0.995 0.983 0.963 0.992 0.978 0.998 0.997 0.998 0.996 0.997 0.997 0.986 
0.977 0.983 0.986 0.975 0.983 0.995 0.997 0.997 0.995 0.993 0.996 0.979 0.995 0.988 0.978 0.996 0.990 0.880 0.430 0.713 0.939 0.531 0.886 0.981 0.988 0.988 0.979 0.982 0.984 # Setting # Unsupervised # Supervised # Pre-Trained # PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.965 0.990 0.978 0.960 0.991 0.976 0.996 0.995 0.997 0.996 0.997 0.998 0.990 0.984 0.988 0.990 0.983 0.988 0.978 0.996 0.990 0.987 0.996 0.994 0.980 0.996 0.989 0.980 0.998 0.992 0.946 0.286 0.576 0.989 0.612 0.943 0.981 0.987 0.987 0.981 0.985 0.985 Table 19: Absolute Pearson correlations with system-level human judgments on WMT18 to-English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems. 32 Published as a conference paper at ICLR 2020 Setting Metric en-cs 5 en-de 16 en-et 14 en-fi 12 en-ru 9 en-tr 8 en-zh 14 Unsupervised BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.995 0.997 0.993 0.915 – 0.999 0.991 0.997 – 0.997 0.973 0.987 – 0.981 0.986 0.989 0.984 – 0.986 0.981 0.988 – 0.986 0.985 0.985 0.990 0.975 0.984 0.956 0.981 – 0.983 0.958 0.981 – 0.981 0.968 0.979 – 0.962 0.964 0.974 0.973 – 0.949 0.906 0.942 – 0.945 0.944 0.940 – 0.983 0.984 0.983 0.975 – 0.990 0.988 0.987 – 0.985 0.990 0.992 – 0.826 0.861 0.833 0.865 – 0.902 0.859 0.867 – 0.853 0.990 0.976 – 0.947 0.961 0.983 – – 0.950 0.964 0.963 – 0.957 0.957 0.963 0.952 Supervised BEER BLEND RUSE 0.992 – – 0.991 – – 0.980 – – 0.961 – – 0.988 0.988 – 0.965 – – 0.928 – – PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 0.994 0.997 0.997 0.992 0.997 0.995 0.988 0.990 0.989 0.986 0.993 0.990 0.981 0.980 0.982 0.974 0.982 0.981 0.957 0.980 0.972 0.954 0.982 0.972 0.990 0.989 0.990 0.991 0.992 0.991 0.935 0.879 0.908 0.969 0.901 0.941 0.954 0.976 0.967 0.954 0.984 0.973 Pre-Trained PXLM–100 RXLM–100 FXLM–100 PXLM–100 (idf) RXLM–100 (idf) FXLM–100 (idf) 0.984 0.991 0.988 0.982 0.993 0.989 0.992 0.992 0.993 0.992 0.993 0.993 0.993 0.992 0.993 0.994 0.991 0.994 0.972 0.989 0.986 0.975 0.989 0.985 0.993 0.992 0.993 0.993 0.993 0.993 0.962 0.895 0.935 0.968 0.911 0.945 0.965 0.983 0.976 0.964 0.986 0.979 Table 20: Absolute Pearson correlations with system-level human judgments on WMT18 from- English translations. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems. 
33 Published as a conference paper at ICLR 2020 Metric cs-en 10K de-en 10K et-en 10K fi-en 10K ru-en 10K tr-en 10K zh-en 10K BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.956 0.964 0.960 0.966 0.937 0.942 0.937 0.942 0.943 0.942 0.947 0.942 0.957 0.969 0.980 0.992 0.990 0.990 0.982 0.982 0.970 0.979 0.961 0.992 0.991 0.994 0.981 0.988 0.975 0.975 0.975 0.980 0.978 0.988 0.987 0.989 0.972 0.976 0.978 0.962 0.976 0.979 0.989 0.962 0.965 0.983 0.960 0.974 0.953 0.969 0.964 0.968 0.972 0.974 0.984 0.943 0.989 0.965 0.955 0.963 0.973 0.962 0.982 0.985 0.986 0.586 0.577 0.680 0.742 0.787 0.862 0.043 0.450 0.443 0.072 0.863 0.881 0.785 0.968 0.973 0.942 0.978 0.954 0.959 0.923 0.967 0.972 0.967 0.950 0.943 0.954 BEER BLEND RUSE 0.950 0.965 0.974 0.993 0.990 0.996 0.983 0.982 0.988 0.982 0.985 0.983 0.976 0.986 0.982 0.723 0.724 0.780 0.968 0.969 0.973 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.954 0.988 0.973 0.957 0.986 0.974 0.992 0.994 0.994 0.994 0.990 0.993 0.984 0.974 0.981 0.983 0.976 0.980 0.980 0.987 0.987 0.966 0.984 0.978 0.970 0.988 0.982 0.970 0.984 0.978 0.917 0.801 0.924 0.875 0.019 0.853 0.965 0.975 0.973 0.966 0.980 0.976 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.949 0.983 0.967 0.949 0.984 0.967 0.995 0.997 0.997 0.994 0.994 0.995 0.986 0.979 0.984 0.986 0.980 0.984 0.960 0.986 0.978 0.946 0.980 0.968 0.969 0.986 0.981 0.969 0.986 0.979 0.832 0.099 0.722 0.743 0.541 0.464 0.972 0.980 0.979 0.969 0.982 0.978 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.969 0.990 0.982 0.970 0.989 0.981 0.991 0.993 0.993 0.991 0.990 0.991 0.985 0.980 0.984 0.984 0.982 0.984 0.979 0.988 0.986 0.963 0.982 0.976 0.970 0.988 0.981 0.971 0.985 0.978 0.915 0.745 0.909 0.858 0.047 0.722 0.969 0.978 0.976 0.970 0.982 0.978 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.959 0.987 0.973 0.963 0.988 0.976 0.992 0.997 0.997 0.994 0.996 0.997 0.988 0.978 0.987 0.988 0.979 0.986 0.986 0.989 0.989 0.989 0.989 0.990 0.971 0.988 0.982 0.970 0.987 0.980 0.809 0.238 0.674 0.711 0.353 0.277 0.976 0.981 0.982 0.972 0.983 0.980 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.965 0.989 0.978 0.972 0.990 0.982 0.995 0.997 0.998 0.997 0.996 0.998 0.990 0.982 0.989 0.988 0.983 0.988 0.976 0.989 0.983 0.986 0.989 0.989 0.976 0.988 0.985 0.976 0.989 0.983 0.846 0.540 0.760 0.686 0.096 0.453 0.975 0.981 0.981 0.973 0.982 0.980 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.978 0.991 0.987 0.982 0.992 0.989 0.997 0.996 0.998 0.998 0.996 0.998 0.991 0.984 0.989 0.992 0.985 0.990 0.984 0.989 0.988 0.990 0.988 0.990 0.980 0.987 0.986 0.978 0.988 0.985 0.914 0.566 0.873 0.822 0.022 0.583 0.977 0.982 0.982 0.974 0.983 0.980 PXLNET–Base RXLNET–Base FXLNET–Base PXLNET–Base (idf) RXLNET–Base (idf) FXLNET–Base (idf) 0.960 0.985 0.974 0.962 0.986 0.975 0.997 0.997 0.998 0.995 0.996 0.996 0.984 0.975 0.981 0.983 0.976 0.980 0.982 0.988 0.986 0.982 0.987 0.985 0.972 0.988 0.982 0.972 0.987 0.981 0.849 0.303 0.628 0.657 0.666 0.259 0.974 0.980 0.980 0.974 0.982 0.980 PXLNET–Large RXLNET–Large FXLNET–Large PXLNET–Large (idf) RXLNET–Large (idf) FXLNET–Large (idf) 0.955 0.984 0.971 0.961 0.987 0.976 0.995 0.996 0.996 0.997 0.996 
0.997 0.983 0.972 0.980 0.983 0.975 0.980 0.986 0.984 0.987 0.987 0.989 0.989 0.972 0.989 0.984 0.973 0.988 0.982 0.875 0.491 0.821 0.816 0.320 0.623 0.970 0.975 0.976 0.973 0.981 0.980 # Setting # Setting # Unsupervised # Unsupervised # Supervised # Pre-Trained # PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.953 0.983 0.969 0.957 0.982 0.970 0.995 0.996 0.997 0.996 0.995 0.996 0.988 0.980 0.986 0.987 0.981 0.985 0.979 0.988 0.986 0.970 0.988 0.982 0.974 0.991 0.985 0.974 0.989 0.982 0.918 0.561 0.869 0.862 0.213 0.519 0.972 0.977 0.977 0.973 0.980 0.978 0.213-—-0.980 Table 21: Absolute Pearson correlations with human judgments on WMT18 to-English language pairs for 10K hybrid systems. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems. 34 Published as a conference paper at ICLR 2020 Setting Metric en-cs 10K en-de 10K en-et 10K en-fi 10K en-ru 10K en-tr 10K en-zh 10K Unsupervised BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.993 0.995 0.990 0.865 – 0.997 0.987 0.995 – 0.994 0.971 0.985 – 0.977 0.984 0.986 0.978 – 0.984 0.979 0.986 – 0.984 0.983 0.983 0.988 0.971 0.981 0.950 0.982 – 0.980 0.954 0.977 – 0.977 0.965 0.976 – 0.958 0.961 0.963 0.966 – 0.944 0.904 0.939 – 0.942 0.942 0.938 – 0.977 0.982 0.981 0.965 – 0.988 0.986 0.985 – 0.983 0.988 0.989 – 0.796 0.832 0.775 0.872 – 0.870 0.829 0.837 – 0.824 0.953 0.942 – 0.941 0.956 0.978 – – 0.944 0.950 0.959 – 0.954 0.951 0.957 0.948 Supervised BEER BLEND RUSE 0.990 – – 0.989 – – 0.978 – – 0.959 – – 0.986 0.986 – 0.933 – – 0.925 – – PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 0.989 0.995 0.993 0.992 0.995 0.995 0.983 0.991 0.988 0.986 0.988 0.988 0.970 0.979 0.978 0.978 0.977 0.979 0.951 0.977 0.969 0.954 0.976 0.969 0.988 0.989 0.989 0.988 0.987 0.987 0.936 0.872 0.910 0.903 0.850 0.877 0.950 0.980 0.969 0.950 0.972 0.963 Pre-Trained Table 22: Absolute Pearson correlations with human judgments on WMT18 from-English language pairs for 10K hybrid systems. Correlations of metrics not significantly outperformed by any other for that language pair are highlighted in bold. For each language pair, we specify the number of systems. 
35 Published as a conference paper at ICLR 2020 Metric cs-en de-en et-en fi-en ru-en tr-en zh-en BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.135 0.162 0.146 0.152 0.172 0.136 0.121 0.139 0.191 0.149 0.148 0.157 0.159 0.804 0.795 0.737 0.814 0.804 0.802 0.764 0.789 0.803 0.776 0.780 0.808 0.814 0.757 0.764 0.696 0.746 0.646 0.739 0.602 0.768 0.768 0.760 0.703 0.752 0.763 0.460 0.493 0.496 0.474 0.456 0.469 0.455 0.470 0.469 0.471 0.483 0.466 0.484 0.230 0.234 0.201 0.234 0.253 0.228 0.218 0.232 0.240 0.227 0.229 0.250 0.243 0.096 0.087 0.082 0.100 0.052 0.135 0.000 0.001 0.002 0.000 0.106 0.110 0.008 0.661 0.660 0.584 0.673 0.597 0.665 0.602 0.652 0.642 0.654 0.629 0.613 0.620 BEER BLEND RUSE 0.165 0.184 0.213 0.811 0.820 0.823 0.765 0.779 0.788 0.485 0.484 0.487 0.237 0.254 0.250 0.030 0.003 0.109 0.675 0.611 0.672 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.190 0.189 0.194 0.189 0.192 0.193 0.815 0.813 0.819 0.817 0.808 0.817 0.778 0.775 0.778 0.775 0.771 0.774 0.468 0.481 0.474 0.477 0.484 0.483 0.261 0.266 0.265 0.255 0.248 0.262 0.130 0.014 0.144 0.131 0.005 0.081 0.655 0.663 0.670 0.650 0.674 0.669 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.190 0.199 0.197 0.186 0.200 0.196 0.701 0.826 0.824 0.806 0.823 0.821 0.766 0.765 0.767 0.765 0.760 0.763 0.487 0.493 0.491 0.492 0.495 0.497 0.254 0.258 0.260 0.247 0.258 0.254 0.126 0.000 0.147 0.125 0.000 0.031 0.653 0.671 0.668 0.661 0.680 0.676 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.200 0.194 0.199 0.200 0.197 0.199 0.815 0.809 0.810 0.813 0.806 0.811 0.778 0.779 0.782 0.772 0.769 0.772 0.474 0.493 0.484 0.485 0.495 0.494 0.261 0.270 0.266 0.256 0.262 0.262 0.137 0.006 0.142 0.136 0.005 0.006 0.661 0.672 0.672 0.657 0.675 0.673 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.173 0.165 0.173 0.172 0.172 0.178 0.675 0.816 0.820 0.691 0.809 0.820 0.757 0.764 0.764 0.755 0.758 0.758 0.502 0.483 0.498 0.503 0.490 0.501 0.258 0.266 0.262 0.252 0.268 0.260 0.126 0.000 0.090 0.123 0.000 0.001 0.654 0.674 0.669 0.661 0.678 0.674 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.174 0.163 0.175 0.181 0.165 0.179 0.704 0.805 0.825 0.821 0.787 0.824 0.765 0.770 0.770 0.758 0.763 0.761 0.497 0.491 0.499 0.500 0.495 0.502 0.255 0.263 0.262 0.256 0.270 0.265 0.140 0.005 0.143 0.089 0.000 0.004 0.663 0.679 0.675 0.669 0.684 0.679 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.185 0.179 0.186 0.190 0.181 0.188 0.828 0.779 0.827 0.820 0.769 0.822 0.780 0.775 0.778 0.771 0.766 0.768 0.504 0.494 0.502 0.504 0.494 0.501 0.263 0.266 0.267 0.261 0.266 0.265 0.133 0.004 0.113 0.102 0.004 0.004 0.654 0.670 0.669 0.661 0.674 0.671 PXLNET–Base RXLNET–Base FXLNET–Base PXLNET–Base (idf) RXLNET–Base (idf) FXLNET–Base (idf) 0.186 0.182 0.186 0.178 0.183 0.182 0.771 0.823 0.824 0.819 0.817 0.821 0.762 0.764 0.765 0.756 0.754 0.755 0.496 0.496 0.499 0.506 0.501 0.505 0.247 0.256 0.253 0.241 0.256 0.250 0.153 0.000 0.049 0.130 0.000 0.000 0.658 0.671 0.673 0.656 0.673 0.670 PXLNET–Large RXLNET–Large FXLNET–Large PXLNET–Large (idf) RXLNET–Large (idf) FXLNET–Large (idf) 0.195 0.192 0.196 0.191 0.196 0.195 0.721 0.821 0.824 0.811 0.815 0.822 0.767 0.766 0.773 0.765 
0.762 0.764 0.493 0.494 0.496 0.500 0.495 0.499 0.152 0.260 0.261 0.167 0.259 0.256 0.144 0.001 0.155 0.144 0.000 0.046 0.661 0.659 0.675 0.657 0.673 0.674 # Setting # Setting # Unsupervised # Unsupervised # Supervised # Pre-Trained # PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.192 0.202 0.199 0.189 0.202 0.196 0.796 0.818 0.827 0.818 0.812 0.821 0.779 0.772 0.778 0.770 0.761 0.766 0.486 0.495 0.491 0.485 0.490 0.490 0.255 0.261 0.262 0.259 0.250 0.263 0.131 0.005 0.086 0.116 0.003 0.003 0.263-:0.003 0.665 0.662 0.674 0.662 0.668 0.672 Table 23: Model selection accuracies (Hits@1) on to-English WMT18 hybrid systems. We report 3. We bold the highest the average of 100K samples and the 0.95 confidence intervals are below 10− numbers for each language pair and direction. 36 Published as a conference paper at ICLR 2020 Metric cs-en de-en et-en fi-en ru-en tr-en zh-en BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.338 0.362 0.349 0.356 0.369 0.338 0.325 0.342 0.387 0.353 0.344 0.352 0.351 0.894 0.890 0.854 0.901 0.895 0.894 0.866 0.885 0.894 0.876 0.881 0.896 0.901 0.866 0.870 0.814 0.856 0.798 0.857 0.771 0.873 0.873 0.868 0.834 0.864 0.871 0.666 0.689 0.690 0.676 0.662 0.672 0.663 0.673 0.671 0.674 0.681 0.671 0.682 0.447 0.451 0.429 0.454 0.470 0.446 0.435 0.447 0.460 0.443 0.452 0.470 0.464 0.265 0.256 0.254 0.278 0.174 0.323 0.021 0.063 0.063 0.034 0.275 0.285 0.086 0.799 0.799 0.739 0.811 0.757 0.803 0.754 0.792 0.788 0.790 0.776 0.765 0.770 BEER BLEND RUSE 0.364 0.382 0.417 0.899 0.904 0.906 0.871 0.880 0.885 0.684 0.681 0.686 0.460 0.473 0.468 0.125 0.077 0.273 0.811 0.767 0.809 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.386 0.383 0.388 0.390 0.390 0.393 0.901 0.899 0.903 0.902 0.896 0.902 0.880 0.877 0.879 0.877 0.874 0.876 0.674 0.683 0.678 0.681 0.686 0.685 0.481 0.486 0.484 0.475 0.475 0.483 0.318 0.100 0.331 0.318 0.077 0.225 0.793 0.804 0.808 0.786 0.811 0.806 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.392 0.397 0.398 0.392 0.400 0.400 0.832 0.908 0.907 0.896 0.906 0.905 0.872 0.870 0.872 0.870 0.867 0.869 0.686 0.691 0.690 0.689 0.691 0.693 0.475 0.478 0.481 0.467 0.479 0.475 0.319 0.025 0.335 0.316 0.018 0.097 0.791 0.811 0.806 0.797 0.817 0.812 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.398 0.391 0.397 0.398 0.395 0.398 0.901 0.897 0.898 0.900 0.895 0.899 0.880 0.879 0.882 0.875 0.873 0.875 0.678 0.690 0.684 0.685 0.692 0.691 0.481 0.490 0.486 0.475 0.488 0.482 0.327 0.085 0.328 0.323 0.080 0.086 0.799 0.810 0.810 0.794 0.813 0.810 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.372 0.366 0.374 0.373 0.374 0.380 0.814 0.902 0.904 0.825 0.898 0.904 0.866 0.870 0.870 0.865 0.866 0.866 0.697 0.683 0.694 0.697 0.688 0.696 0.475 0.483 0.480 0.470 0.486 0.479 0.313 0.026 0.224 0.303 0.028 0.037 0.795 0.813 0.808 0.802 0.816 0.812 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.375 0.366 0.378 0.384 0.368 0.382 0.833 0.895 0.907 0.905 0.885 0.907 0.871 0.874 0.874 0.866 0.869 0.868 0.693 0.689 0.694 0.694 0.692 0.696 0.474 0.480 0.480 0.475 0.487 0.484 0.327 0.039 0.324 0.220 0.030 0.048 0.800 0.816 0.811 0.806 0.819 0.815 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) 
FRoBERTa–Large–MNLI (idf) 0.383 0.378 0.385 0.389 0.380 0.387 0.909 0.880 0.909 0.905 0.874 0.906 0.880 0.877 0.879 0.874 0.870 0.872 0.698 0.692 0.697 0.698 0.691 0.696 0.480 0.481 0.484 0.478 0.483 0.482 0.323 0.078 0.286 0.268 0.079 0.082 0.795 0.811 0.809 0.803 0.814 0.811 PXLNET–Base RXLNET–Base FXLNET–Base PXLNET–Base (idf) RXLNET–Base (idf) FXLNET–Base (idf) 0.385 0.381 0.385 0.381 0.384 0.384 0.875 0.907 0.907 0.904 0.903 0.905 0.869 0.869 0.871 0.864 0.863 0.864 0.692 0.693 0.694 0.699 0.696 0.699 0.469 0.477 0.476 0.464 0.479 0.472 0.342 0.026 0.128 0.289 0.013 0.032 0.796 0.809 0.810 0.794 0.812 0.809 PXLNET–Large RXLNET–Large FXLNET–Large PXLNET–Large (idf) RXLNET–Large (idf) FXLNET–Large (idf) 0.392 0.389 0.393 0.393 0.395 0.396 0.844 0.905 0.907 0.899 0.901 0.906 0.873 0.871 0.876 0.870 0.868 0.870 0.689 0.690 0.691 0.694 0.690 0.693 0.367 0.482 0.483 0.387 0.483 0.478 0.338 0.031 0.348 0.333 0.023 0.128 0.799 0.800 0.812 0.794 0.810 0.811 # Setting # Setting # Unsupervised # Unsupervised # Supervised # Pre-Trained # PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.394 0.401 0.400 0.391 0.402 0.398 0.891 0.903 0.909 0.903 0.900 0.905 0.880 0.875 0.878 0.874 0.868 0.871 0.685 0.692 0.689 0.684 0.688 0.688 0.476 0.483 0.483 0.480 0.477 0.487 0.322 0.082 0.234 0.293 0.068 0.079 »—-0.688-—0.477—0.068 -—0.487-—«0.079 0.802 0.803 0.811 0.797 0.806 0.809 Table 24: Mean Reciprocal Rank (MRR) of the top metric-rated system on to-English WMT18 hybrid systems. We report the average of 100K samples and the 0.95 confidence intervals are below 10− 37 Published as a conference paper at ICLR 2020 Metric cs-en de-en et-en fi-en ru-en tr-en zh-en BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 3.85 3.88 3.77 3.55 3.70 3.93 2.02 3.86 3.98 3.85 3.81 3.88 3.67 0.45 0.43 0.49 0.46 0.41 0.49 0.46 0.43 0.40 0.44 0.48 0.44 0.41 1.01 0.87 0.94 1.25 0.69 1.10 1.71 1.14 1.27 1.48 0.72 0.65 0.64 2.17 1.33 2.07 1.43 1.13 1.19 1.49 1.14 1.10 1.18 1.20 1.13 1.20 2.34 2.30 2.25 4.65 2.28 2.36 2.25 4.34 2.23 4.87 1.75 2.17 2.15 4.48 4.58 4.07 3.11 1.40 1.42 4.22 5.18 4.26 5.96 1.40 1.32 1.31 3.19 3.43 3.37 2.92 3.50 3.92 3.20 3.82 3.47 3.72 3.44 3.40 3.55 BEER BLEND RUSE 3.82 3.77 3.13 0.41 0.41 0.32 0.79 0.66 0.64 1.08 1.09 1.03 1.92 2.21 1.51 1.96 1.28 1.94 3.43 3.46 3.15 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 3.97 1.51 3.70 3.94 1.54 2.75 0.36 0.43 0.36 0.36 0.43 0.39 0.72 0.60 0.59 0.64 0.63 0.60 1.16 1.65 1.08 1.18 1.87 1.10 2.20 1.33 1.92 2.06 1.12 1.38 1.25 1.34 1.27 2.55 5.96 1.26 3.26 3.50 3.38 3.54 3.38 3.51 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 4.02 2.66 3.89 4.02 1.63 3.86 0.35 0.43 0.36 0.35 0.43 0.38 0.74 0.62 0.60 0.67 0.65 0.61 1.15 1.75 1.09 1.18 1.93 1.11 1.09 1.10 1.08 1.48 1.13 1.14 3.33 5.64 3.82 3.30 7.26 4.24 3.06 3.34 3.23 3.49 3.13 3.28 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 3.82 1.49 1.71 3.74 1.51 1.49 0.34 0.40 0.35 0.35 0.42 0.38 0.66 0.59 0.58 0.65 0.62 0.60 1.12 1.56 1.08 1.12 1.86 1.17 2.10 1.17 1.65 1.90 1.10 1.24 1.31 1.35 1.29 1.98 5.84 1.96 3.60 3.61 3.60 3.77 3.21 3.53 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 3.89 1.92 3.56 3.89 1.61 3.18 0.37 0.39 0.37 0.38 0.42 0.38 0.75 0.64 0.59 0.67 0.67 0.60 1.18 1.57 1.10 1.20 1.65 1.11 1.07 1.11 1.08 1.30 1.14 1.13 3.45 5.75 3.79 3.27 6.55 6.54 
2.62 3.13 2.90 3.47 2.95 3.11 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 3.64 1.60 2.38 2.70 1.55 1.68 0.36 0.37 0.35 0.36 0.39 0.37 0.71 0.64 0.58 0.69 0.66 0.59 1.10 1.51 1.06 1.13 1.59 1.08 1.03 1.09 1.05 1.08 1.10 1.08 2.69 3.91 3.57 3.18 6.66 5.58 2.57 3.27 2.95 2.89 3.18 2.91 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 2.14 1.45 1.42 1.55 1.45 1.42 0.35 0.37 0.35 0.35 0.39 0.36 0.61 0.64 0.59 0.60 0.64 0.60 1.07 1.49 1.07 1.08 1.65 1.10 1.09 1.10 1.07 1.12 1.09 1.08 1.21 4.42 1.27 1.54 5.89 3.80 3.35 3.55 3.41 3.87 3.32 3.45 PXLNET–Base RXLNET–Base FXLNET–Base PXLNET–Base (idf) RXLNET–Base (idf) FXLNET–Base (idf) 3.90 1.71 3.78 3.90 1.51 3.67 0.37 0.45 0.39 0.46 0.45 0.42 0.68 0.72 0.62 0.65 0.82 0.66 1.07 1.58 1.05 1.08 1.78 1.11 1.16 1.07 1.07 2.93 1.12 1.22 2.47 6.29 3.60 3.30 10.77 7.13 2.91 3.36 3.20 3.39 3.13 3.23 PXLNET–Large RXLNET–Large FXLNET–Large PXLNET–Large (idf) RXLNET–Large (idf) FXLNET–Large (idf) 3.94 2.23 3.84 3.92 1.60 3.80 0.37 0.41 0.36 0.41 0.43 0.38 0.71 0.69 0.60 0.64 0.78 0.63 1.10 1.34 1.03 1.12 1.70 1.06 21.10 1.07 1.07 21.10 1.09 1.09 1.85 4.46 3.38 3.24 6.13 3.72 2.90 3.40 3.22 3.37 3.20 3.25 # Setting # Unsupervised # Supervised Pre-Trained # PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 3.88 1.98 3.78 3.84 1.70 3.72 0.33 0.41 0.36 0.36 0.42 0.40 0.75 0.60 0.61 0.69 0.63 0.62 1.16 1.41 1.09 1.17 1.55 1.14 2.16 1.21 1.71 1.86 1.11 1.32 1.28 3.30 1.30 1.33 5.87 4.15 3.29 3.47 3.40 3.47 3.36 3.43 Table 25: Absolute Difference (×100) of the top metric-rated and the top human-rated system on to- English WMT18 hybrid systems. Smaller difference signify higher agreement with human scores. 3. We bold We report the average of 100K samples and the 0.95 confidence intervals are below 10− the lowest numbers for each language pair and direction. 38 Published as a conference paper at ICLR 2020 Setting Metric en-cs en-de en-et en-fi en-ru en-tr en-zh Unsupervised BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.151 0.163 0.135 0.000 – 0.182 0.179 0.175 – 0.155 0.154 0.178 – 0.611 0.663 0.737 0.691 – 0.662 0.555 0.657 – 0.643 0.674 0.670 0.708 0.617 0.731 0.639 0.734 – 0.549 0.454 0.550 – 0.552 0.622 0.674 – 0.087 0.081 0.492 0.112 – 0.083 0.062 0.065 – 0.067 0.356 0.230 – 0.519 0.541 0.543 0.534 – 0.537 0.535 0.545 – 0.538 0.523 0.548 – 0.029 0.032 0.027 0.031 – 0.033 0.032 0.029 – 0.029 0.383 0.396 – 0.515 0.552 0.667 – – 0.553 0.539 0.551 – 0.546 0.600 0.595 0.537 Supervised BEER BLEND RUSE 0.174 – – 0.670 – – 0.662 – – 0.113 – – 0.555 0.559 – 0.296 – – 0.531 – – PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 0.181 0.184 0.185 0.175 0.177 0.178 0.665 0.728 0.703 0.713 0.725 0.721 0.771 0.722 0.764 0.769 0.752 0.766 0.077 0.146 0.081 0.080 0.178 0.081 0.550 0.544 0.548 0.542 0.538 0.543 0.373 0.031 0.032 0.031 0.031 0.030 0.550 0.657 0.629 0.549 0.628 0.594 Pre-Trained PXLM–100 RXLM–100 FXLM–100 PXLM–100 (idf) RXLM–100 (idf) FXLM–100 (idf) 0.175 0.195 0.187 0.163 0.191 0.180 0.669 0.671 0.670 0.664 0.681 0.672 0.748 0.770 0.775 0.750 0.770 0.774 0.079 0.222 0.099 0.091 0.231 0.127 0.550 0.555 0.552 0.550 0.548 0.550 0.314 0.034 0.034 0.288 0.033 0.033 0.582 0.658 0.615 0.578 0.645 0.616 Table 26: Model selection accuracies (Hits@1) on to-English WMT18 hybrid systems. We report 3. 
We bold the highest the average of 100K samples and the 0.95 confidence intervals are below 10− numbers for each language pair and direction. 39 Published as a conference paper at ICLR 2020 Setting Metric en-cs en-de en-et en-fi en-ru en-tr en-zh Unsupervised BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 0.363 0.371 0.346 0.044 – 0.393 0.387 0.384 – 0.367 0.370 0.390 – 0.764 0.803 0.853 0.825 – 0.803 0.719 0.798 – 0.787 0.811 0.808 0.835 0.766 0.851 0.781 0.853 – 0.710 0.624 0.708 – 0.710 0.775 0.811 – 0.323 0.319 0.667 0.365 – 0.326 0.301 0.305 – 0.308 0.553 0.439 – 0.714 0.729 0.732 0.717 – 0.726 0.725 0.733 – 0.728 0.715 0.735 – 0.205 0.210 0.205 0.210 – 0.211 0.211 0.209 – 0.209 0.602 0.612 – 0.666 0.700 0.809 – – 0.698 0.678 0.695 – 0.696 0.753 0.750 0.691 Supervised BEER BLEND RUSE 0.388 – – 0.808 – – 0.804 – – 0.353 – – 0.739 0.742 – 0.507 – – 0.683 – – PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 0.395 0.401 0.400 0.390 0.395 0.395 0.805 0.849 0.832 0.839 0.847 0.844 0.876 0.844 0.872 0.875 0.864 0.873 0.314 0.368 0.317 0.320 0.398 0.319 0.736 0.732 0.735 0.730 0.727 0.730 0.586 0.212 0.214 0.213 0.212 0.212 0.694 0.802 0.775 0.691 0.776 0.739 Pre-Trained PXLM–100 RXLM–100 FXLM–100 PXLM–100 (idf) RXLM–100 (idf) FXLM–100 (idf) 0.391 0.413 0.404 0.377 0.409 0.396 0.808 0.809 0.809 0.805 0.816 0.810 0.862 0.876 0.878 0.863 0.876 0.878 0.316 0.435 0.333 0.326 0.444 0.355 0.735 0.738 0.737 0.735 0.733 0.735 0.522 0.216 0.216 0.497 0.214 0.214 0.733 0.803 0.767 0.729 0.793 0.767 Table 27: Mean Reciprocal Rank (MRR) of the top metric-rated system on to-English WMT18 hybrid systems. We report the average of 100K samples and the 0.95 confidence intervals are below 10− 40 Published as a conference paper at ICLR 2020 Setting Metric en-cs en-de en-et en-fi en-ru en-tr en-zh Unsupervised BLEU CDER CHARACTER ITER METEOR++ NIST PER TER UHH_TSKM WER YISI-0 YISI-1 YISI-1 SRL 1.26 1.25 1.23 1.25 – 1.24 1.25 1.21 – 1.22 1.25 1.22 – 6.36 6.70 6.90 9.14 – 5.28 6.62 6.02 – 6.15 6.62 6.27 6.57 2.59 1.90 2.19 2.52 – 2.55 4.92 4.34 – 4.19 1.53 1.21 – 0.92 1.41 4.35 1.52 – 1.02 7.43 2.17 – 2.43 1.46 1.13 – 0.76 0.87 0.93 1.35 – 0.75 0.68 0.73 – 0.72 0.75 0.71 – 9.40 9.37 5.22 7.33 – 8.82 9.76 8.80 – 9.28 3.47 3.51 – 3.01 1.75 1.64 – – 3.34 2.31 1.43 – 1.49 2.87 3.33 3.71 Supervised BEER BLEND RUSE 1.21 – – 5.96 – – 1.84 – – 0.77 – – 0.74 0.71 – 3.36 – – 1.96 – – PBERT–Multi RBERT–Multi FBERT–Multi PBERT–Multi (idf) RBERT–Multi (idf) FBERT–Multi (idf) 1.17 1.16 1.15 1.14 1.15 1.14 3.27 6.68 5.17 3.82 6.97 5.63 1.38 0.77 0.90 1.66 0.83 1.13 1.24 0.94 0.98 1.27 3.65 1.19 0.75 0.68 0.71 0.76 0.68 0.71 4.14 3.22 3.26 4.57 3.32 3.38 2.08 1.31 1.62 2.04 1.37 1.58 Pre-Trained 0.79 0.77 0.76 0.78 0.77 0.76 Table 28: Absolute Difference (×100) of the top metric-rated and the top human-rated system on to- English WMT18 hybrid systems. Smaller difference indicate higher agreement with human scores. 3. We bold We report the average of 100K samples and the 0.95 confidence intervals are below 10− the lowest numbers for each language pair and direction. 
41 Published as a conference paper at ICLR 2020 Metric M1 M2 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDER SPICE LEIC BEER EED CHRF++ CHARACTER 0.124∗ 0.037∗ 0.004∗ -0.019∗ 0.606∗ 0.090∗ 0.438∗ 0.759∗ 0.939∗ 0.491 0.545 0.702 0.800 0.135∗ 0.048∗ 0.016∗ -0.005∗ 0.594∗ 0.096∗ 0.440∗ 0.750∗ 0.949∗ 0.562 0.599 0.729 0.801 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.313 0.679 0.531 0.243 0.834 0.579 0.344 0.622 0.519 0.286 0.783 0.581 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.252 0.644 0.470 0.264 0.794 0.575 0.331 0.641 0.512 0.300 0.767 0.583 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.454 0.756 0.649 0.327 0.873 0.645 0.486 0.697 0.634 0.372 0.821 0.647 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) -0.223 0.827 0.176 -0.256 0.901 0.188 -0.179 0.800 0.191 -0.267 0.869 0.157 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) -0.105 0.888 0.322 0.063 0.917 0.519 -0.041 0.863 0.350 -0.011 0.889 0.453 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.129 0.820 0.546 0.081 0.906 0.605 0.208 0.823 0.592 0.099 0.875 0.596 PXLNet–Base RXLNet–Base FXLNet–Base PXLNet–Base (idf) RXLNet–Base (idf) FXLNet–Base (idf) -0.046 0.409 0.146 0.006 0.655 0.270 0.080 0.506 0.265 0.145 0.720 0.391 PXLNet–Large RXLNet–Large FXLNet–Large PXLNet–Large (idf) RXLNet–Large (idf) FXLNet–Large (idf) -0.188 0.178 -0.014 -0.186 0.554 0.151 -0.115 0.195 0.036 -0.072 0.555 0.234 PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.230 0.333 0.297 0.266 0.700 0.499 0.220 0.263 0.243 0.275 0.640 0.470 Table 29: Pearson correlation on the 2015 COCO Captioning Challenge. The M1 and M2 measures are described in Section 4. We bold the best correlating task-specific and task-agnostic metrics in each setting LEIC uses images as additional inputs. Numbers with ∗ are cited from Cui et al. (2018). 
42 Published as a conference paper at ICLR 2020 # Type Trained on QQP (supervised) # Trained on QQP + PAWSQQP (supervised) # Metric (Not trained on QQP or PAWSQQP) Method QQP PAWSQQP DecAtt DIIN BERT 0.939* 0.952* 0.963* 0.263 0.324 0.351 DecAtt DIIN BERT - - - 0.511 0.778 0.831 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CHRF++ BEER EED CHARACTER 0.737 0.720 0.712 0.707 0.755 0.740 0.577 0.741 0.743 0.698 0.402 0.548 0.527 0.527 0.532 0.536 0.608 0.564 0.611 0.650 PBERT–Base RBERT–Base FBERT–Base PBERT–Base (idf) RBERT–Base (idf) FBERT–Base (idf) 0.750 0.739 0.755 0.766 0.752 0.770 0.654 0.655 0.654 0.665 0.665 0.664 PBERT–Base–MRPC RBERT–Base–MRPC FBERT–Base–MRPC PBERT–Base–MRPC (idf) RBERT–Base–MRPC (idf) FBERT–Base–MRPC (idf) 0.742 0.729 0.746 0.752 0.737 0.756 0.615 0.617 0.614 0.618 0.619 0.617 PBERT–Large RBERT–Large FBERT–Large PBERT–Large (idf) RBERT–Large (idf) FBERT–Large (idf) 0.752 0.740 0.756 0.766 0.751 0.769 0.706 0.710 0.707 0.713 0.718 0.714 PRoBERTa–Base RRoBERTa–Base FRoBERTa–Base PRoBERTa–Base (idf) RRoBERTa–Base (idf) FRoBERTa–Base (idf) 0.746 0.736 0.751 0.760 0.745 0.765 0.657 0.656 0.654 0.666 0.666 0.664 PRoBERTa–Large RRoBERTa–Large FRoBERTa–Large PRoBERTa–Large (idf) RRoBERTa–Large (idf) FRoBERTa–Large (idf) 0.757 0.744 0.761 0.773 0.757 0.777 0.687 0.685 0.685 0.691 0.697 0.693 PRoBERTa–Large–MNLI RRoBERTa–Large–MNLI FRoBERTa–Large–MNLI PRoBERTa–Large–MNLI (idf) RRoBERTa–Large–MNLI (idf) FRoBERTa–Large–MNLI (idf) 0.763 0.750 0.766 0.783 0.767 0.784 0.767 0.772 0.770 0.756 0.764 0.759 PXLNet–Base RXLNet–Base FXLNet–Base PXLNet–Base (idf) RXLNet–Base (idf) FXLNet–Base (idf) 0.737 0.731 0.739 0.751 0.743 0.751 0.603 0.607 0.605 0.625 0.630 0.626 PXLNet–Large RXLNet–Large FXLNet–Large PXLNet–Large (idf) RXLNet–Large (idf) FXLNet–Large (idf) 0.742 0.734 0.744 0.759 0.749 0.760 0.593 0.598 0.596 0.604 0.610 0.606 PXLM–En RXLM–En FXLM–En PXLM–En (idf) RXLM–En (idf) FXLM–En (idf) 0.734 0.725 0.737 0.757 0.745 0.759 0.600 0.604 0.602 0.596 0.603 0.600 Table 30: Area under ROC curve (AUC) on QQP and PAWSQQP datasets. The scores of trained DecATT (Parikh et al., 2016), DIIN (Gong et al., 2018), and fine-tuned BERT are reported by Zhang et al. (2019). We bold the best task-specific and task-agnostic metrics. Numbers with ∗ are scores on the held-out test set of QQP. 43
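The tables above summarize several evaluation statistics: absolute Pearson correlation with human judgments, Kendall correlation, Hits@1, MRR, and area under the ROC curve. As a rough, illustrative sketch of how two of these statistics can be computed from a metric's scores (this is not the authors' evaluation code, all names below are ours, and the segment-level Kendall statistics in the paper follow the WMT formulation rather than a plain correlation):

```python
# Illustrative only: two of the statistics reported in the tables above,
# computed from a metric's scores with off-the-shelf library routines.
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score


def abs_pearson(metric_scores, human_scores):
    """Absolute Pearson correlation with human judgments (as in Tables 19-22)."""
    r, _ = pearsonr(metric_scores, human_scores)
    return abs(r)


def paraphrase_auc(labels, metric_scores):
    """Area under the ROC curve when a metric's score is used directly as a
    paraphrase-detection score (as in Table 30); labels are 1 for paraphrases."""
    return roc_auc_score(labels, metric_scores)


# Toy usage with made-up numbers.
print(abs_pearson([0.61, 0.48, 0.75, 0.52], [0.55, 0.40, 0.80, 0.58]))
print(paraphrase_auc([1, 0, 1, 0], [0.9, 0.3, 0.8, 0.4]))
```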
{ "id": "1904.01038" }
1904.09482
Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding
This paper explores the use of knowledge distillation to improve a Multi-Task Deep Neural Network (MT-DNN) (Liu et al., 2019) for learning text representations across multiple natural language understanding tasks. Although ensemble learning can improve model performance, serving an ensemble of large DNNs such as MT-DNN can be prohibitively expensive. Here we apply the knowledge distillation method (Hinton et al., 2015) in the multi-task learning setting. For each task, we train an ensemble of different MT-DNNs (teacher) that outperforms any single model, and then train a single MT-DNN (student) via multi-task learning to distill knowledge from these ensemble teachers. We show that the distilled MT-DNN significantly outperforms the original MT-DNN on 7 out of 9 GLUE tasks, pushing the GLUE benchmark (single model) to 83.7% (1.5% absolute improvement, based on the GLUE leaderboard at https://gluebenchmark.com/leaderboard as of April 1, 2019). The code and pre-trained models will be made publicly available at https://github.com/namisan/mt-dnn.
http://arxiv.org/pdf/1904.09482
Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao
cs.CL
8 pages, 2 figures and 3 tables
null
cs.CL
20190420
20190420
# Improving Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding

Xiaodong Liu1, Pengcheng He2, Weizhu Chen2, Jianfeng Gao1
1 Microsoft Research 2 Microsoft Dynamics 365 AI
{xiaodl,penhe,wzchen,jfgao}@microsoft.com

# Abstract

This paper explores the use of knowledge distillation to improve a Multi-Task Deep Neural Network (MT-DNN) (Liu et al., 2019) for learning text representations across multiple natural language understanding tasks. Although ensemble learning can improve model performance, serving an ensemble of large DNNs such as MT-DNN can be prohibitively expensive. Here we apply the knowledge distillation method (Hinton et al., 2015) in the multi-task learning setting. For each task, we train an ensemble of different MT-DNNs (teacher) that outperforms any single model, and then train a single MT-DNN (student) via multi-task learning to distill knowledge from these ensemble teachers. We show that the distilled MT-DNN significantly outperforms the original MT-DNN on 7 out of 9 GLUE tasks, pushing the GLUE benchmark (single model) to 83.7% (1.5% absolute improvement, based on the GLUE leaderboard at https://gluebenchmark.com/leaderboard as of April 1, 2019). The code and pre-trained models will be made publicly available at https://github.com/namisan/mt-dnn.

# 1 Introduction

Ensemble learning is an effective approach to improve model generalization, and has been used to achieve new state-of-the-art results in a wide range of natural language understanding (NLU) tasks, including question answering and machine reading comprehension (Devlin et al., 2018; Liu et al., 2018; Huang et al., 2017; Hancock et al., 2019). A recent survey is included in (Gao et al., 2019). However, these ensemble models typically consist of tens or hundreds of different deep neural network (DNN) models and are prohibitively expensive to deploy due to the computational cost of runtime inference. Recently, large-scale pre-trained models, such as BERT (Devlin et al., 2018) and GPT (Radford et al., 2018), have been used effectively as the base models for building task-specific NLU models via fine-tuning. The pre-trained models by themselves are already expensive to serve at runtime (e.g., BERT contains 24 transformer layers with 344 million parameters, and GPT-2 contains 48 transformer layers with 1.5 billion parameters), and ensembling multiplies this cost, making such ensembles impractical for online deployment.

Knowledge distillation is a process of distilling or transferring the knowledge from a (set of) large, cumbersome model(s) to a lighter, easier-to-deploy single model, without significant loss in performance (Bucilu et al., 2006; Hinton et al., 2015; Balan et al., 2015; Ba et al., 2016; Chen et al., 2015; Tan et al., 2019).

In this paper, we explore the use of knowledge distillation to improve a Multi-Task Deep Neural Network (MT-DNN) (Liu et al., 2019) for learning text representations across multiple NLU tasks. Since MT-DNN incorporates a pre-trained BERT model, its ensemble is expensive to serve at runtime. We extend the knowledge distillation method (Hinton et al., 2015) to the multi-task learning setting (Caruana, 1997; Xu et al., 2018; Collobert et al., 2011; Zhang and Yang, 2017; Liu et al., 2015). In the training process, we first pick a few tasks, each with an available task-specific training dataset which is stored in the form of (x, y) pairs, where x is an input and y is its correct target.
For each task, we train an ensemble of MT-DNN mod- els (teacher) that outperform the best single model. Although the ensemble model is not feasible for online deployment, it can be utilized, in an offline manner, to produce a set of soft targets for each x in the training dataset , which, for example, in a classification task are the class probabilities aver- aged over the ensemble of different models. Then, we train a single MT-DNN (student) via multi-task learning with the help of the teachers by using both the soft targets and correct targets across different tasks. We show in our experiments that knowledge distillation effectively transfers the generalization ability of the teachers to the student. As a re- sult, the distilled MT-DNN outperforms the vanilla MT-DNN that is trained in a normal way, as de- scribed in (Liu et al., 2019), on the same training data as was used to train the teachers. We validate the effectiveness of our approach on the General Language Understanding Evaluation (GLUE) dataset (Wang et al., 2019) which con- sists of 9 NLU tasks. We find that the distilled MT-DNN outperforms the vanilla MT-DNN on 7 tasks, including the tasks where we do not have teachers. This distilled model improves the GLUE benchmark (single model) to 83.7%, amounting to 3.2% absolute improvement over BERT and 1.5% absolute improvement over the previous state of the art model based on the GLUE leaderboard2 as of April 1, 2019. In the rest of the paper, Section 2 describes the MT-DNN of Liu et al. (2019) which is the base- line and vanilla model for this study. Section 3 de- scribes in detail knowledge distillation for multi- task learning. Section 4 presents our experiments on GLUE. Section 5 concludes the paper. # 2 MT-DNN The architecture of the MT-DNN model is shown in Figure 1. The lower layers are shared across all tasks, while the top layers represent task-specific outputs. The input X, which is a word sequence (either a sentence or a set of sentences packed to- gether) is first represented as a sequence of embed- ding vectors, one for each word, in l1. Then the transformer encoder captures the contextual infor- mation for each word via self-attention, and gen- erates a sequence of contextual embeddings in l2. This is the shared semantic representation that is trained by our multi-task objectives. Lexicon Encoder (l1): The input X = {x1, ..., xm} is a sequence of tokens of length m. Following Devlin et al. (2018), the first token x1 is always the [CLS] token. If X is packed by a set of sentences (X1, X2), we separate the these 2https://gluebenchmark.com sentences with special tokens [SEP]. The lexicon encoder maps X into a sequence of input embed- ding vectors, one for each token, constructed by summing the corresponding word, segment, and positional embeddings. Transformer Encoder (l2): We use a multi- layer bidirectional Transformer encoder (Vaswani et al., 2017) to map the input representation vec- tors (l1) into a sequence of contextual embedding vectors C ∈ Rd×m. This is the shared representa- tion across different tasks. Task-Specific Output Layers: We can incorpo- rate arbitrary natural language tasks, each with its task-specific output layers. For example, we im- plement the output layers as a neural decoder for text generation, a neural ranker for relevance rank- ing, a logistic regression for text classification, and so on. Below, we elaborate the implementation de- tail using text classification as an example. 
Suppose that x is the contextual embedding (l2) of the token [CLS], which can be viewed as the semantic representation of input sentence X. The probability that X is labeled as class c (i.e., the sentiment is positive or negative) is predicted by a logistic regression with softmax:

Pr(c|X) = softmax(W_t · x),  (1)

where W_t is the task-specific parameter matrix for task t.

# 2.1 The Training Procedure

The training procedure of MT-DNN consists of two stages: pre-training and multi-task learning (MTL). In the pre-training stage, Liu et al. (2019) used a publicly available pre-trained BERT model to initialize the parameters of the shared layers (i.e., the lexicon encoder and the transformer encoder).

In the MTL stage, mini-batch based stochastic gradient descent (SGD) is used to learn the model parameters (i.e., the parameters of all the shared layers and the task-specific layers), as shown in Algorithm 1. First, the training samples from multiple tasks (e.g., 9 GLUE tasks) are packed into mini-batches. We denote a mini-batch by b_t, indicating that it contains only the samples from task t. In each epoch, a mini-batch b_t is selected, and the model is updated according to the task-specific objective for task t, denoted by L_t(Θ). This approximately optimizes the sum of all multi-task objectives.

[Figure 1 here. The diagram shows the shared layers (a lexicon encoder producing input embeddings l1 and a Transformer encoder producing contextual embeddings l2) feeding four kinds of task-specific output layers: single-sentence classification (e.g., CoLA, SST-2), pairwise text similarity (e.g., STS-B), pairwise text classification (e.g., RTE, MNLI, WNLI, QQP, MRPC), and pairwise ranking (e.g., QNLI).]

Figure 1: Architecture of the MT-DNN model for representation learning (Liu et al., 2019). The lower layers are shared across all tasks while the top layers are task-specific. The input X (either a sentence or a set of sentences) is first represented as a sequence of embedding vectors, one for each word, in l1. Then the Transformer encoder captures the contextual information for each word and generates the shared contextual embedding vectors in l2. Finally, for each task, additional task-specific layers generate task-specific representations, followed by operations necessary for classification, similarity scoring, or relevance ranking.

Take text classification as an example. We use the cross-entropy loss as the objective in Line 3 of Algorithm 1:

− Σ_c 1(X, c) log(Pr(c|X)),  (2)

where 1(X, c) is the binary indicator (0 or 1) of whether class label c is the correct classification for X, and Pr(·) is defined by Equation 1. Then, in Line 5, the parameters of the shared layers and the output layers corresponding to task t are updated using the gradient computed in Line 4.

After MT-DNN is trained via MTL, it can be fine-tuned (or adapted) using task-specific labeled training data to perform prediction on any individual task, which can be a task used in the MTL stage or a new task that is related to the ones used in MTL. Liu et al. (2019) showed that the shared layers of MT-DNN produce more universal text representations than those of BERT.
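To make Equations 1 and 2 concrete, here is a minimal PyTorch-style sketch of a task-specific classification head on top of the shared [CLS] embedding. This is an illustration under our own naming conventions, not the released mt-dnn code; `TaskClassificationHead`, `hard_target_loss`, and the hidden size are ours.

```python
# Minimal sketch (not the released MT-DNN code): a task-specific softmax
# classification head on top of the shared [CLS] embedding, with the
# hard-target cross-entropy of Equation 2.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskClassificationHead(nn.Module):
    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        # W_t in Equation 1: one parameter matrix per task t.
        self.proj = nn.Linear(hidden_dim, num_classes, bias=False)

    def forward(self, cls_embedding: torch.Tensor) -> torch.Tensor:
        # cls_embedding: (batch, hidden_dim), the contextual embedding of [CLS]
        # produced by the shared Transformer encoder (l2).
        return self.proj(cls_embedding)


def hard_target_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Equation 2: -sum_c 1(X, c) log Pr(c|X), averaged over the mini-batch.
    return F.cross_entropy(logits, labels)


# Toy usage: a batch of 4 [CLS] vectors for a 2-class task (e.g., SST-2).
head = TaskClassificationHead(hidden_dim=768, num_classes=2)
cls = torch.randn(4, 768)
labels = torch.tensor([0, 1, 1, 0])
loss = hard_target_loss(head(cls), labels)
loss.backward()
```

In the actual model there is one such head per task, all sharing the same lower layers, which is what allows the per-batch update in Algorithm 1 to touch only the shared layers plus the head of the batch's task.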
As a result of these more universal shared representations, MT-DNN allows fine-tuning or adaptation with substantially fewer task-specific labels.

# 3 Knowledge Distillation

Algorithm 1: Training a MT-DNN model.
Initialize model parameters Θ randomly.
Initialize the shared layers (i.e., the lexicon encoder and the transformer encoder) using a pre-trained BERT model.
Set the max number of epochs: epoch_max.
// Prepare the data for T tasks.
for t in 1, 2, ..., T do
    Pack the dataset t into mini-batches: D_t.
end
for epoch in 1, 2, ..., epoch_max do
    1. Merge all the datasets: D = D_1 ∪ D_2 ... ∪ D_T
    2. Shuffle D
    for b_t in D do
        // b_t is a mini-batch of task t.
        3. Compute task-specific loss: L_t(Θ)
        4. Compute gradient: ∇(Θ)
        5. Update model: Θ = Θ − ε∇(Θ)
    end
end

The process of knowledge distillation for MTL is illustrated in Figure 2. First, we pick a few tasks where there are task-specific labeled training data. Then, for each task, we train an ensemble of different neural nets as a teacher. Each neural net is an instance of MT-DNN described in Section 2, and is fine-tuned using task-specific training data while the parameters of its shared layers are initialized using the MT-DNN model pre-trained on the GLUE dataset via MTL, as in Algorithm 1, and the parameters of its task-specific output layers are randomly initialized.

[Figure 2 here. The diagram shows multi-task teachers, one per task 1..T, producing soft targets Q_1(c|X, θ^1), ..., Q_T(c|X, θ^T) from the task-specific datasets D_1, ..., D_T; these feed the loss function used to train the single multi-task student P_t(c|X, θ), t = 1, ..., T, via back propagation.]

Figure 2: Process of knowledge distillation for multi-task learning. A set of tasks where there is task-specific labeled training data are picked. Then, for each task, an ensemble of different neural nets (teacher) is trained. The teacher is used to generate for each task-specific training sample a set of soft targets. Given the soft targets of the training datasets across multiple tasks, a single MT-DNN (student) is trained using multi-task learning and back propagation as described in Algorithm 1, except that if task t has a teacher, the task-specific loss in Line 3 is the average of two objective functions, one for the correct targets and the other for the soft targets assigned by the teacher.

For each task, a teacher generates a set of soft targets for each task-specific training sample. Take text classification as an example. A neural network model typically produces class probabilities using a softmax layer as in Equation 1. Let Q_k be the class probabilities produced by the k-th single network of the ensemble. The teacher produces the soft targets by averaging the class probabilities across networks:

Q = avg([Q_1, Q_2, ..., Q_K]).  (3)

We want to approximate the teacher using a student neural network model, which also has a softmax output for the same task Pr(c|X), as in Equation 1. Hence, we use the standard cross entropy loss:

− Σ_c Q(c|X) log(Pr(c|X)).  (4)

As pointed out by Hinton et al. (2015), the use of soft targets produced by the teacher is the key to successfully transferring the generalization ability of the teacher to the student. The relative probabilities of the teacher labels contain information about how the teacher generalizes. For example, the sentiment of the sentence "I really enjoyed the conversation with Tom" has a small chance of being classified as negative. But the sentence "Tom and I had an interesting conversation" can be either positive or negative, depending on its context if available, leading to a high entropy of the soft targets assigned by the teacher.
In these cases, the soft targets provide more information per training sample than the hard targets and less variance in the gradient between training samples. By opti- mizing the student for the soft targets produced by the teacher, we expect the student to learn to generalize in the same way as the teacher. In our case, each task-specific teacher is the average of a set of different neural networks, and thus general- izes well. The single MT-DNN (student) trained to generalize in the same way as the teachers is expected to do much better on test data than the vanilla MT-DNN that is trained in the normal way on the same training dataset. We will demonstrate in our experiments that this is indeed the case. Note that the above loss function differs from the cross entropy loss in Equation 2 in that the former uses the soft targets Q(c|X) while the lat- ter uses the hard correct target via the indicator 1(X, c). We also find that when the correct targets are known, the model performance can be signifi- cantly improved by training the distilled model on a combination of soft and hard targets. We do so by defining a loss function for each task that take a weighted average between the cross entropy loss with the correct targets as Equation 2 and the cross entropy with the soft targets as Equation 4. Hinton et al. (2015) suggested using a considerably lower weight on the first loss term. But in our experi- ments we do not observe any significant difference by using different weights for the two loss terms, respectively. Finally, given the soft targets of the training datasets across multiple tasks, the student MT- DNN can be trained using MTL as described in Algorithm 1, except that if task t has a teacher, the task-specific loss in Line 3 is the average of two objective functions, one for the correct targets and the other for the soft targets assigned by the teacher. # 4 Experiments We evaluate the MT-DNN trained using Knowl- edge Distillation, termed as MT-DNNKD in this section, on the General Language Understanding Evaluation (GLUE) benchmark. GLUE is a col- lection of nine NLU tasks as in Table 1, includ- ing question answering, sentiment analysis, text similarity and textual entailment. We refer read- ers to Wang et al. (2019) for a detailed description of GLUE. We compare MT-DNNKD with existing state-of-the-art models including BERT (Devlin et al., 2018), STILT (Phang et al., 2018), Snorkel MeTal (Hancock et al., 2019), and MT-DNN (Liu et al., 2019). Furthermore, we investigate the rel- ative contribution of using knowledge distillation for MTL with an ablation study. # 4.1 Implementation details Our implementation is based on the PyTorch im- plementations of MT-DNN3 and BERT4. We used Adamax (Kingma and Ba, 2014) as our optimizer with a learning rate of 5e-5 and a batch size of 32. The maximum number of epochs was set to 5. A linear learning rate decay schedule with warm-up over 0.1 was used, unless stated otherwise. We also set the dropout rate of all the task-specific layers as 0.1, except 0.3 for MNLI and 0.05 for CoLA/SST-2. To avoid the gradient explosion is- sue, we clipped the gradient norm within 1. All the # 3https://github.com/namisan/mt-dnn 4https://github.com/huggingface/pytorch-pretrained- BERT texts were tokenized using wordpieces, and were chopped to spans no longer than 512 tokens. 
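As a concrete sketch of the per-task objective described above: for a task with a teacher, the batch loss averages the hard-target cross-entropy of Equation 2 with the soft-target cross-entropy of Equation 4. This is our own illustrative code, not the released implementation; the function name and the explicit equal weighting are ours (the paper reports no significant difference across weightings).

```python
# Sketch of the distillation objective (names ours, not from the mt-dnn repo):
# average of the hard-target loss (Eq. 2) and the soft-target loss (Eq. 4).
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_probs: torch.Tensor,
                      labels: torch.Tensor) -> torch.Tensor:
    # Eq. 2: cross entropy against the correct (hard) targets.
    hard = F.cross_entropy(student_logits, labels)
    # Eq. 4: -sum_c Q(c|X) log Pr(c|X), with the teacher's averaged class
    # probabilities Q as soft targets.
    log_probs = F.log_softmax(student_logits, dim=-1)
    soft = -(teacher_probs * log_probs).sum(dim=-1).mean()
    # Equal weighting (the average of the two objectives).
    return 0.5 * (hard + soft)


# Toy usage: a 3-class task, batch of 2.
student_logits = torch.randn(2, 3, requires_grad=True)
teacher_probs = torch.tensor([[0.7, 0.2, 0.1], [0.1, 0.1, 0.8]])
labels = torch.tensor([0, 2])
distillation_loss(student_logits, teacher_probs, labels).backward()
```

For tasks without a teacher, only the hard-target term is used, so the multi-task training loop of Algorithm 1 is otherwise unchanged.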
To obtain a set of diverse single models to form ensemble models (teachers), we first trained 6 sin- gle MT-DNNs, initialized using Cased/Uncased BERT models as (Hancock et al., 2019) with a dif- ferent dropout rate, ranged in {0.1, 0.2, 0.3}, on the shared layers, while keeping other training hy- perparameters the same as aforementioned. Then, we selected top 3 best models according to the re- sults on the MNLI and RTE development datasets. Finally, we fine-tuned the 3 models on each of the MNLI, QQP, RTE and QNLI tasks to form four task-specific ensembles (teachers), each consist- ing of 3 single MT-DNNs fine-tuned for the task. The teachers are used to generate soft targets for the four tasks as Equation 3, described in Section 3. We only pick four out of nine GLUE tasks to train teachers to investigate the generalization ability of MT-DNNKD, i.e., its performance on the tasks with and without teachers. # 4.2 GLUE Main Results We compare MT-DNNKD with a list of state-of- the-art models that have been submitted to the GLUE leaderboard. BERTLARGE This is the large BERT model re- leased by Devlin et al. (2018), which we used as a baseline. We used single-task fine-tuning to pro- duce the best result for each GLUE task according to the development set. MT-DNN This is the model described in Section 2 and Liu et al. (2019). We used the pre-trained BERTLARGE model to initialize its shared layers, refined the shared layers via MTL on all GLUE tasks, and then perform a fine-tune for each GLUE task using the task-specific data. MT-DNNKD This is the MT-DNN model trained using knowledge distillation as described in Sec- tion 3. MT-DNNKD uses the same model architec- ture as that of MT-DNN. But the former is trained with the help from four task-specific ensembles (teachers). MT-DNNKD is optimized for the multi- task objectives that are based on the hard correct targets, as well as the soft targets produced by the teachers if available. After knowledge distillation based MTL, MT-DNNKD is further fine-tuned for each task using task-specific data to produce the final predictions for each GLUE task on blind test data for evaluation. Corpus Task #Label Single-Sentence Classification (GLUE) #Train #Dev #Test Metrics CoLA SST-2 Acceptability Sentiment 8.5k 67k 1k 872 1k 1.8k 2 2 Matthews corr Accuracy Pairwise Text Classification (GLUE) MNLI RTE WNLI QQP MRPC STS-B QNLI NLI NLI NLI Paraphrase Paraphrase Similarity QA/NLI 20k 393k 276 2.5k 71 634 40k 364k 3.7k 408 Text Similarity (GLUE) 1.5k 20k 3k 146 391k 1.7k 3 2 2 2 2 7k 1.4k Relevance Ranking (GLUE) 5.7k 1 108k 5.7k 2 Accuracy Accuracy Accuracy Accuracy/F1 Accuracy/F1 Pearson/Spearman corr Accuracy Table 1: Summary of the GLUE benchmark. 
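The teacher construction just described (three fine-tuned MT-DNNs per task whose class probabilities are averaged, Equation 3) amounts to a single offline pass over each task's training set. The sketch below is illustrative only; `models`, `dataloader`, and the assumption that each model maps a batch to class logits are ours, not part of the released code.

```python
# Illustrative sketch of Equation 3: average the class probabilities of an
# ensemble of fine-tuned models (the task-specific "teacher") and cache the
# resulting soft targets for every training example of that task.
import torch
import torch.nn.functional as F


@torch.no_grad()
def generate_soft_targets(models, dataloader):
    """models: list of fine-tuned task models (e.g., 3 MT-DNNs for MNLI).
    dataloader: yields batches of already-tokenized inputs for this task.
    Returns a tensor of averaged class probabilities, one row per example."""
    all_soft = []
    for batch in dataloader:
        # Q_k: class probabilities from the k-th ensemble member.
        probs = [F.softmax(m(batch), dim=-1) for m in models]
        # Q = avg([Q_1, ..., Q_K]) as in Equation 3.
        all_soft.append(torch.stack(probs, dim=0).mean(dim=0))
    return torch.cat(all_soft, dim=0)
```

The cached soft targets can then be read alongside the hard labels during the multi-task training of the student, so the expensive ensemble is never needed at serving time.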
Model QQP MNLI-m/mm QNLI RTE WNLI AX Score 364k CoLA SST-2 MRPC STS-B 8.5k BiLSTM+ELMo+Attn 1 36.0 Singletask Pretrain Transformer 2 GPT on STILTs 3 BERTLARGE MT-DNN5 Snorkel MeTaL 6 ALICE ∗ MT-DNNKD Human Performance 108k 79.8 393k 76.4/76.1 2.5k 56.8 3.7k 7k 67k 90.4 84.9/77.9 75.1/73.3 64.8/84.7 634 65.1 26.5 53.4 45.4 29.8 82.1/81.4 87.4 91.3 82.3/75.7 82.0/80.0 70.3/88.5 56.0 65.1 65.1 65.1 65.1 65.1 65.1 95.9 80.8/80.6 86.7/85.9 86.7/86.0 87.6/87.2 87.9/87.4 87.5/86.7 92.0/92.8 - 92.7 - 93.9 95.7 96.0 91.2 93.1 87.7/83.7 85.3/84.8 70.1/88.1 94.9 89.3/85.4 87.6/86.5 72.1/89.3 95.6 90.0/86.7 88.3/87.7 72.4/89.6 96.2 91.5/88.5 90.1/89.7 73.1/89.9 95.2 91.8/89.0 89.8/88.8 74.0/90.4 95.6 91.1/88.2 89.6/89.0 72.7/89.6 97.8 86.3/80.8 92.7/92.6 59.5/80.4 47.2 60.5 61.5 63.8 63.5 65.4 66.4 69.1 70.1 75.5 80.9 80.9 85.1 93.6 29.4 39.6 40.3 39.9 40.7 42.8 - 4 70.0 72.8 76.9 80.5 82.2 83.2 83.3 83.7 87.1 Table 2: GLUE test set results scored using the GLUE evaluation server. The number below each task denotes the number of training examples. The state-of-the-art results are in bold. MT-DNNKD uses BERTLARGE to initialize its shared layers. All the results are obtained from https://gluebenchmark.com/leaderboard on April 1, 2019. Note that Snorkel MeTaL is an ensemble model. - denotes the missed result of the latest GLUE version. ∗ denotes the unpublished work, thus not knowing whether it is a single model or an ensemble model. For QNLI, we treat it as two tasks, pair-wise ranking and classification task on v1 and v2 training datasets, respectively, and then merge results on the test set. Model references: 1:(Wang et al., 2019) ; 2:(Radford et al., 2018); 3: (Phang et al., 2018); 4:(Devlin et al., 2018); 5: (Liu et al., 2019); 6: (Hancock et al., 2019). The main results on the official test datasets of GLUE are reported in Table 2. Compared to other recent submissions on the GLUE leader- board, MT-DNNKD is the best performer, creating a new state-of-the-art result of 83.7%. The margin between MT-DNNKD and the second-best model ALICE is 0.5%, larger than the margin of 0.1% between the second and the third (and the fourth) places. It is worth noting that MT-DNNKD is a sin- gle model while Snorkel MetaL (Hancock et al., 2019) is an ensemble model. The description of ALICE is not disclosed yet. Table 2 also shows that MT-DNNKD signifi- cantly outperforms MT-DNN not only in overall score but on 7 out of 9 GLUE tasks, including the tasks without a teacher. Since MT-DNNKD and MT-DNN use the same network architecture, and are trained with the same initialization and on the same datasets, the improvement of MT-DNNKD is solely attributed to the use of knowledge distilla- tion in MTL. We note that the most significant per-task im- provements are from CoLA (65.4% vs. 61.5%) and RTE (85.1% vs. 75.5%). Both tasks have rela- tively small amounts of in-domain data. Similarly, for the same type of tasks, the improvements of MT-DNNKD over MT-DNN are much more sub- stantial for the tasks with less in-domain training MNLI-m/mm QQP RTE QNLI(v2) MRPC CoLa SST-2 93.5 94.3 94.3 94.7 86.3/86.2 87.1/86.7 87.3/87.3 88.1/87.9 91.1/88.0 71.1 91.9/89.2 83.4 91.9/89.4 88.6 92.5/90.1 86.7 92.4 92.9 93.2 93.5 89.5/85.8 61.8 91.0/87.5 63.5 93.3/90.7 64.5 93.4/91.0 64.5 STS-B 89.6/89.3 90.7/90.6 91.0/90.8 92.1/91.6 Table 3: GLUE dev set results. The best result on each task produced by a single model is in bold. MT-DNN uses BERTLARGE as their initial shared layers. 
MT-DNNKD is the MT-DNN trained using the proposed knowledge distillation based MTL. MT-DNN-ensemble denotes the results of the ensemble models described in Section 4.1. The ensemble models on MNLI, QQP, RTE and QNLI are used as teachers in the knowledge distillation based MTL, while the other ensemble modes, whose results are in blue and italic, are not used as teachers. data e.g., for the two NLI tasks, the improvement in RTE is much larger than that in MNLI; for the two paraphrase tasks, the improvement in MRPC is larger than that in QQP. These results suggest that knowledge distillation based MTL is effective at improving model performance for not only tasks with teachers but also ones without teachers, and more so for tasks with fewer in-domain labels. mance on the tasks where no teacher is used. On MRPC, CoLA, and STS-B, the performance of MT-DNNKD is much better than MT-DNN and is close to the ensemble models although the latter are not used as teachers in MTL. # 5 Conclusion # 4.3 Ablation Study We perform an ablation study to investigate how effective it can distill knowledge from the ensem- ble models (teachers) to a single MT-DNN (stu- dent). To this end, we compare the performance of the ensemble models with the corresponding stu- dent model. In this work, we have extended knowledge distilla- tion to MTL in training a MT-DNN for natural lan- guage understanding. We have shown that distil- lation works very well for transferring knowledge from a set of ensemble models (teachers) into a single, distilled MT-DNN (student). On the GLUE datasets, the distilled MT-DNN creates new state of the art result on 7 out of 9 NLU tasks, includ- ing the tasks where there is no teacher, pushing the GLUE benchmark (single model) to 83.7%. The results on dev sets are shown in Table 3, where MT-DNN-ensemble are the task-specific ensemble models trained using the process de- scribed in Section 3. We only use four ensem- ble models (i.e., the models for MNLI, QQP, RTE, QNLI) as teachers. The results of the other ensem- ble models (i.e., MRPC, CoLa, SST-2, STS-B) are reported to show the effectiveness of the knowl- edge distillation based MTL at improving the per- formance on tasks without a teacher. We can draw several conclusions from the re- sults in Table 3. First, MT-DNNKD significantly outperforms MT-DNN and BERTLARGE across multiple GLUE tasks on the dev sets, which is consistent with what we observe on test sets in Table 2. Second, comparing MT-DNNKD with MT-DNN-ensemble, we see that the MT-DNNKD successfully distills knowledge from the teachers. Although the distilled model is simpler than the teachers, it retains nearly all of the improvement that is achieved by the ensemble models. More in- terestingly, we find that incorporating knowledge distillation into MTL improves the model perfor- We show that the distilled MT-DNN retains nearly all of the improvements achieved by ensem- ble models, while keeping the model size the same as the vanilla MT-DNN model. There are several research areas for future ex- ploration. First, we will seek better ways of com- bining the soft targets and hard correct targets for multi-task learning. Second, the teachers might be used to produce the soft targets for large amounts of unlabeled data, which in turn can be used to train a better student model in a way conceptually similar to semi-supervised learning. 
Third, instead of compressing a complicated model to a simpler one, knowledge distillation can also be used to improve the model performance regardless of model complexity, in machine learning scenarios such as self-learning in which both the student and teacher are the same model.

# Acknowledgments

We thank Asli Celikyilmaz, Xuedong Huang, Moontae Lee, Chunyuan Li, Xiujun Li, and Michael Patterson for helpful discussions and comments.

# References

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.

Anoop Korattikara Balan, Vivek Rathod, Kevin P Murphy, and Max Welling. 2015. Bayesian dark knowledge. In Advances in Neural Information Processing Systems, pages 3438–3446.

Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 535–541. ACM.

Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75.

Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. 2015. Net2Net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Jianfeng Gao, Michel Galley, and Lihong Li. 2019. Neural approaches to conversational AI. Foundations and Trends® in Information Retrieval, 13(2-3):127–298.

Braden Hancock, Ines Chami, Vincent Chen, Jared Dunnmon, Sen Wu, Paroma Varma, Max Lam, and Christopher Ré. 2019. Massive multi-task learning with Snorkel MeTaL: Bringing more supervision to bear. https://dawn.cs.stanford.edu/2019/03/22/glue/.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.

Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2017. FusionNet: Fusing via fully-aware attention with application to machine comprehension. arXiv preprint arXiv:1711.07341.

Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 912–921.

Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504.

Xiaodong Liu, Yelong Shen, Kevin Duh, and Jianfeng Gao. 2018. Stochastic answer networks for machine reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics.

Jason Phang, Thibault Févry, and Samuel R Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Xu Tan, Yi Ren, Di He, Tao Qin, and Tie-Yan Liu. 2019.
Multilingual neural machine translation with knowledge distillation. In International Conference on Learning Representations.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.

Yichong Xu, Xiaodong Liu, Yelong Shen, Jingjing Liu, and Jianfeng Gao. 2018. Multi-task learning for machine reading comprehension. arXiv preprint arXiv:1809.06963.

Yu Zhang and Qiang Yang. 2017. A survey on multi-task learning. arXiv preprint arXiv:1707.08114.
{ "id": "1811.01088" }
1904.09223
ERNIE: Enhanced Representation through Knowledge Integration
We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the masking strategy of BERT, ERNIE is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. Entity-level strategy masks entities which are usually composed of multiple words.Phrase-level strategy masks the whole phrase which is composed of several words standing together as a conceptual unit.Experimental results show that ERNIE outperforms other baseline methods, achieving new state-of-the-art results on five Chinese natural language processing tasks including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering. We also demonstrate that ERNIE has more powerful knowledge inference capacity on a cloze test.
http://arxiv.org/pdf/1904.09223
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu
cs.CL
8 pages
null
cs.CL
20190419
20190419
9 1 0 2 r p A 9 1 ] L C . s c [ 1 v 3 2 2 9 0 . 4 0 9 1 : v i X r a # ERNIE: Enhanced Representation through Knowledge Integration Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, Hua Wu Baidu Inc. {sunyu02,wangshuohuan,liyukun01,fengshikun01,tianhao,wu hua}@baidu.com # Abstract We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowl- Inspired by the mask- edge IntEgration). ing strategy of BERT (Devlin et al., 2018), ERNIE is designed to learn language represen- tation enhanced by knowledge masking strate- gies, which includes entity-level masking and phrase-level masking. Entity-level strategy masks entities which are usually composed of multiple words. Phrase-level strategy masks the whole phrase which is composed of several words standing together as a conceptual unit. Experimental results show that ERNIE outper- forms other baseline methods, achieving new state-of-the-art results on five Chinese natu- ral language processing tasks including nat- ural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering. We also demonstrate that ERNIE has more powerful knowledge in- ference capacity on a cloze test. 2018) improved word representation via different strategies, which has been shown to be more effec- tive for down-stream natural language processing tasks. The vast majority of these studies model the representations by predicting the missing word only through the contexts. These works do not consider the prior knowledge in the sentence. For example, In the sentence ” Harry Potter is a series of fantasy novels written by J. K. Rowling”. Harry Potter is a novel name and J. K. Rowling is the writer. It is easy for the model to predict the miss- ing word of the entity Harry Potter by word collo- cations inside this entity without the help of long contexts. The model cannot predict Harry Pot- ter according to the relationship between Harry Potter and J. K. Rowling. It is intuitive that if the model learns more about prior knowledge, the model can obtain more reliable language represen- tation. # Introduction Language representation pre-training (Mikolov et al., 2013; Devlin et al., 2018) has been shown effective for improving many natural language processing tasks such as named entity recognition, sentiment analysis, and question answering. In order to get reliable word representation, neural language models are designed to learn word co- occurrence and then obtain word embedding with unsupervised learning. The methods in Word2Vec (Mikolov et al., 2013) and Glove (Pennington et al., 2014) represent words as vectors, where similar words have similar word representations. These word representations provide an initializa- tion for the word vectors in other deep learning models. Recently, lots of works such as Cove (Mc- Cann et al., 2017), Elmo (Peters et al., 2018), GPT (Radford et al., 2018) and BERT (Devlin et al., In this paper, we propose a model called ERNIE (enhanced representation through knowledge inte- gration) by using knowledge masking strategies. In addition to basic masking strategy, we use two kinds of knowledge strategies: phrase-level strat- egy and entity-level strategy. We take a phrase or a entity as one unit, which is usually com- posed of several words. All of the words in the same unit are masked during word representa- tion training, instead of only one word or charac- ter being masked. 
In this way, the prior knowl- edge of phrases and entities are implicitly learned during the training procedure. Instead of adding the knowledge embedding directly, ERNIE im- plicitly learned the information about knowledge and longer semantic dependency, such as the re- lationship between entities, the property of a en- tity and the type of a event, to guide word em- bedding learning. This can make the model have better generalization and adaptability. In order to reduce the training cost of the model, ERNIE is pre-trained on heterogeneous Chinese data, and then applied to 5 Chinese NLP tasks. ERNIE advances the state-of-the-art results on all of these tasks. An additional experiment on the cloze test shows that ERNIE has better knowl- edge inference capacity over other strong baseline methods. Our Contribution are as follows: (1) We introduce a new learning processing of language model which masking the units such as phrases and entities in order to implicitly learn both syntactic and semantic information from these units. (2) ERNIE significantly outperforms the previ- ous state-of-the art methods on various Chinese natural language processing tasks. (3) We released the codes of ERNIE and pre-trained models, which are available in https://github.com/PaddlePaddle/ LARK/tree/develop/ERNIE . # 2 Related Work # 2.1 Context-independent Representation Representation of words as continuous vectors has a long history. A very popular model architec- ture for estimating neural network language model (NNLM) was proposed in (Bengio et al., 2003), where a feed forward neural network with a linear projection layer and a non-linear hidden layer was used to learn the word vector representation. It is effective to learn general language repre- sentation by using a large number of unlabeled data to pretrain a language model. Traditional methods focused on context-independent word embedding. Methods such as Word2Vec (Mikolov et al., 2013) and Glove (Pennington et al., 2014) take a large corpus of text as inputs and produces a word vectors, typically in several hundred dimen- sions. They generate a single word embedding representation for each word in the vocabulary. # 2.2 Context-aware Representation However, a word can have completely different senses or meanings in the contexts. Skip-thought (Kiros et al., 2015) proposed a approach for un- supervised learning of a generic, distributed sen- tence encoder. Cove (McCann et al., 2017) show that adding these context vectors improves per- formance over using only unsupervised word and character vectors on a wide variety of common NLP tasks. ULMFit (Howard and Ruder, 2018) proposed an effective transfer learning method that can be applied to any task in NLP. ELMo (Peters et al., 2018) generalizes traditional word embedding research along a different dimension. They propose to extract context-sensitive features from a language model. The GPT (Radford et al., 2018) enhanced the context-sensitive embedding by adapting the Transformer. BERT (Devlin et al., 2018) uses two different pretraining tasks for language modeling. BERT randomly masks a certain percentage of words in the sentences and learn to predict those masked words. Moreover, BERT learn to predict whether two sentences are adjacent. This task tries to the relationship between two sentences model which is not captured by traditional language models. 
Consequently, this particular pretraining scheme helps BERT to outperform state-of-the-art techniques by a large margin on various key NLP datasets such as GLUE (Wang et al., 2018) and SQUAD (Rajpurkar et al., 2016) and so on. Some other researchers try to add more infor- mation based on these models. MT-DNN (Liu et al., 2019) combine pre-training learning and multi-task learning to improve the performances over several different tasks in GLUE (Wang et al., 2018). GPT-2 (Radford et al., 2019) adds task in- formation into the pre-training process and adapt their model to zero-shot tasks. XLM (Lample and Conneau, 2019) adds language embedding to the pre-training process which achieved better results in cross-lingual tasks. # 2.3 Heterogeneous Data Semantic encoder pre-trained on heterogeneous unsupervised data can improve the transfer learn- ing performance. Universal sentence encoder (Cer et al., 2018) adopts heterogeneous training data drawn from Wikipedia, web news, web QA pages and discussion forum. Sentence encoder (Yang et al., 2018) based on response prediction ben- efits from query-response pair data drawn from Reddit conversation. XLM (Lample and Conneau, 2019) introduce parallel corpus to BERT, which is trained jointly with masked language model task. With transformer model pre-trained on heteroge- neous data, XLM shows great performance gain on supervise/unsupervised MT task and classifica- tion task. BERT Transformer ss rs # ERNIE Transformer oe EEE ~ i » a # oy Figure 1: The different masking strategy between BERT and ERNIE # 3 Methods # 3.2.1 Basic-Level Masking We introduce ERNIE and its detailed implementa- tion in this section. We first describe the model’s transformer encoder,and then introduce the knowl- edge integration method in Section 3.2. The com- parisons between BERT and ERNIE are shown vi- sually in Figure 1. # 3.1 Transformer Encoder ERNIE use multi-layer Transformer (Vaswani et al., 2017) as basic encoder like previous pre- traning model such as GPT, BERT and XLM. The Transformer can capture the contextual in- formation for each token in the sentence via self- attention, and generates a sequence of contextual embeddings. For Chinese corpus, we add spaces around ev- ery character in the CJK Unicode range and use the WordPiece (Wu et al., 2016) to tokenize Chi- nese sentences. For a given token, its input rep- resentation is constructed by summing the cor- responding token, segment and position embed- dings. The first token of every sequence is the spe- cial classification embedding([CLS]). # 3.2 Knowledge Integration we use prior knowledge to enhance our pretrained language model. Instead of adding the knowledge embedding directly, we proposed a multi-stage knowledge masking strategy to integrate phrase and entity level knowledge into the Language rep- resentation. The different masking level of a sen- tence is described in Figure 2. The first learning stage is to use basic level mask- ing, It treat a sentence as a sequence of basic Language unit, for English, the basic language unit is word, and for Chinese, the basic language unit is Chinese Character. In the training process, We randomly mask 15 percents of basic language units, and using other basic units in the sentence as inputs, and train a transformer to predict the mask units. Based on basic level mask, we can obtain a basic word representation. Because it is trained on a random mask of basic semantic units, high level semantic knowledge is hard to be fully modeled. 
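The sketch below illustrates this masking step. With single-token spans it is exactly the basic-level strategy just described; passing multi-token spans anticipates the phrase-level and entity-level stages introduced in the next two subsections, where every unit inside a span is masked together. This is a toy sketch assuming a [MASK] placeholder and the 15% rate mentioned above, not the released ERNIE preprocessing.

```python
import random

MASK = "[MASK]"

def mask_spans(tokens, spans, mask_rate=0.15, seed=0):
    """Mask whole spans of a tokenized sentence and return (inputs, targets).

    tokens: list of basic units (characters for Chinese, words for English).
    spans:  list of (start, end) index pairs; (i, i + 1) spans give basic-level
            masking, longer spans correspond to phrases or named entities.
    """
    rng = random.Random(seed)
    inputs, targets = list(tokens), {}
    for start, end in spans:
        if rng.random() < mask_rate:
            for i in range(start, end):
                targets[i] = tokens[i]   # the model must predict these units
                inputs[i] = MASK         # every unit in the span is masked
    return inputs, targets

# Basic-level masking: each token is its own span.
sent = "Harry Potter is a series of fantasy novels".split()
basic = [(i, i + 1) for i in range(len(sent))]
print(mask_spans(sent, basic))

# Entity-level masking: mask the whole entity "Harry Potter" as one unit.
print(mask_spans(sent, [(0, 2)], mask_rate=1.0))
```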
# 3.2.2 Phrase-Level Masking The second stage is to employ phrase-level mask- ing. Phrase is a small group of words or characters together acting as a conceptual unit. For English, we use lexical analysis and chunking tools to get the boundary of phrases in the sentences, and use some language dependent segmentation tools to get the word/phrase information in other language such as Chinese. In phrase-level mask stage, we also use basic language units as training input, un- like random basic units mask, this time we ran- domly select a few phrases in the sentence, mask and predict all the basic units in the same phrase. At this stage, phrase information is encoded into the word embedding. # 3.2.3 Entity-Level Masking The third stage is entity-level masking. Name entities contain persons, locations, organizations, products, etc., which can be denoted with a proper Basic-level Masking mask] Potter is Harry Potter is Entity-level Masking Phrase-level Masking Foxe f= Poe) of Por] ones fice y | a series [mask] fantasy novels [mask] a series [mask] fantasy novels [mask] by British author [mask] [mask] [mask] Harry Potter is [mask] (mask] {mask fantasy novels |{mask]) by British author (mask) (mask] | [mask ) by British author J. mask] Rowling Figure 2: Different masking level of a sentence name. It can be abstract or have a physical exis- tence. Usually entities contain important informa- tion in the sentences. As in the phrase masking stage, we first analyze the named entities in a sen- tence, and then mask and predict all slots in the entities. After three stage learninga word repre- sentation enhanced by richer semantic information is obtained. # 4 Experiments ERNIE was chosen to have the same model size as BERT-base for comparison purposes. ERNIE uses 12 encoder layers, 768 hidden units and 12 attention heads. # 4.1 Heterogeneous Corpus Pre-training ERNIE adopts Heterogeneous corpus for pre- training. Following (Cer et al., 2018), we draw the mixed corpus Chinese Wikepedia, Baidu Baike, Baidu news and Baidu Tieba. The number of sen- tences are 21M, 51M, 47M, 54M. respectively. Baidu Baike contains encyclopedia articles writ- ten in formal languages, which is used as a strong basis for language modeling. Baidu news provides the latest information about movie names, actor names, football team names, etc. Baidu Tieba is an open discussion forum like Reddits, where each post can be regarded as a dialogue thread. Tieba corpus is used in our DLM task, which will be dis- cussed in the next section. We perform traditional-to-simplified conversion on the Chinese characters, and upper-to-lower conversion on English letters. We use a shared vocabulary of 17,964 unicode characters for our model. is different from that of universal sentence en- coder (Cer et al., 2018). ERNIE’s Dialogue em- bedding plays the same roles as token type em- bedding in BERT, except that ERNIE can also rep- resent multi-turn conversations (e.g. QRQ, QRR, QQR, where Q and R stands for ”Query” and ”Re- sponse” respectively). Like MLM in BERT, masks are applied to enforce the model to predict missing words conditioned on both query and response. What’s more, we generate fake samples by replac- ing the query or the response with a randomly se- lected sentence. The model is designed to judge whether the multi-turn conversation is real or fake. The DLM task helps ERNIE to learn the im- plicit relationship in dialogues, which also en- hances the model’s ability to learn semantic rep- resentation. 
The model architecture of DLM task is compatible with that of the MLM task, thus it is pre-trained alternatively with the MLM task. # 4.3 Experiments on Chinese NLP Tasks ERNIE is applied to 5 Chinese NLP tasks, includ- ing natural language inference, semantic similar- ity, named entity recognition, sentiment analysis, and question answering. 4.3.1 Natural Language Inference The Cross-lingual Natural Language Inference (XNLI) corpus (Liu et al., 2019) is a crowd- sourced collection for the MultiNLI corpus. The pairs are annotated with textual entailment and translated into 14 languages including Chinese. The labels contains contradiction, neutral and en- tailment. We follow the Chinese experiments in BERT(Devlin et al., 2018). # 4.2 DLM Dialogue data is important for semantic represen- tation, since the corresponding query semantics of the same replies are often similar. ERNIE mod- els the Query-Response dialogue structure on the DLM (Dialogue Language Model) task. As shown in figure 3, our method introduces dialogue em- bedding to identify the roles in the dialogue, which 4.3.2 Semantic Similarity The Large-scale Chinese Question Matching Cor- pus (LCQMC) (Liu et al., 2018) aims at identify- ing whether two sentences have the same inten- tion. Each pair of sentences in the dataset is as- sociated with a binary label indicating whether the two sentences share the same intention, and the task can be formalized as predicting a binary la- bel. _ Dialogue Response Loss 2 | Embedding Embedding Embedding eogage as Transformer Beate hometown esac Q Q Q # Token # Position # Dialogue Figure 3: Dialogue Language Model. Source sentence: [cls] How [mask] are you [sep] 8 . [sep] Where is your [mask] ? [sep]. Target sentence (words the predict): old, 8, hometown) 4.3.3 Name Entity Recognition The MSRA-NER dataset is designed for named entity recognition, which is published by Mi- crosoft Research Asia. The entities contains sev- eral types including person name, place name, or- ganization name and so on. This task can be seen as a sequence labeling task. 4.3.4 Sentiment Analysis ChnSentiCorp (Song-bo) is a dataset which aims at judging the sentiment of a sentence. It in- cludes comments in several domains such as ho- tels, books and electronic computers. the goal of this task is to judge whether the sentence is posi- tive or negative. section. # 4.5.1 Effect of Knowledge Masking Strategies We sample 10% training data from the whole cor- pus to verify the effectiveness of the knowledge masking strategy. Results are presented in Table 2. We can see that adding phrase-level mask to the baseline word-level mask can improve the per- formance of the model. Based on this, we add the entity-level masking strategythe performance of the model is further improved. In addition. The results also show that with 10 times larger size of the pre-training dataset, 0.8% performance gain is achieved on XNLI test set. 4.3.5 Retrieval Question Answering The goal of NLPCC-DBQA dataset ( http: //tcci.ccf.org.cn/conference/ 2016/dldoc/evagline2.pdf) is to select answers of the corresponding questions. The evaluation methods on this dataset include MRR (Voorhees, 2001) and F1 score. # 4.4 Experiment results The test results on 5 Chinese NLP tasks are pre- sented in Table 1. It can be seen that ERNIE out- performs BERT on all tasks, creating new state- of-the-art results on these Chinese NLP tasks. For the XNLI, MSRA-NER, ChnSentiCorp and nlpcc- dbqa tasks, ERNIE obtains more than 1% abso- lute accuracy improvement over BERT. 
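Figure 3 shows the DLM input format. The sketch below assembles such an example: utterances are concatenated with [CLS]/[SEP], each token receives a dialogue role id (Q or R) for the dialogue embedding, MLM-style masks are applied, and a fake sample is optionally created by swapping one turn for a random sentence so the model can be trained to judge real versus fake dialogues. This is a schematic illustration with assumed special tokens and probabilities, not the actual ERNIE preprocessing.

```python
import random

CLS, SEP, MASK = "[CLS]", "[SEP]", "[MASK]"

def build_dlm_example(turns, roles, corpus, fake_prob=0.5, seed=0):
    """Build one DLM training example.

    turns:  list of tokenized utterances, e.g. a QRQ or QRR conversation.
    roles:  list of 'Q' / 'R' labels, one per utterance, used for the
            dialogue embedding (analogous to BERT's token type embedding).
    corpus: pool of tokenized sentences used to create fake samples.
    Returns (tokens, role_ids, is_real) where is_real is the DLM label.
    """
    rng = random.Random(seed)
    turns = [list(t) for t in turns]
    is_real = 1
    if rng.random() < fake_prob:
        # Replace one randomly chosen turn with a random sentence -> fake dialogue.
        turns[rng.randrange(len(turns))] = list(rng.choice(corpus))
        is_real = 0

    tokens, role_ids = [CLS], ["Q"]        # assigning role 'Q' to [CLS] is an assumption
    for utt, role in zip(turns, roles):
        tokens += utt + [SEP]
        role_ids += [role] * (len(utt) + 1)

    # MLM-style masking on top of the dialogue structure.
    for i in range(1, len(tokens)):
        if tokens[i] != SEP and rng.random() < 0.15:
            tokens[i] = MASK
    return tokens, role_ids, is_real

query = "How old are you".split()
response = ["8", "."]
query2 = "Where is your hometown".split()
pool = ["I like football".split(), "It rains today".split()]
print(build_dlm_example([query, response, query2], ["Q", "R", "Q"], pool))
```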
The gain of ERNIE is attributed to its knowledge integra- tion strategy. # 4.5 Ablation Studies 4.5.2 Effect of DLM Ablation study is also performed on the DLM task. we use 10% of all training corpus with different proportions to illustrate the contributions of DLM task on XNLI develop set. we pre-train ERNIE from scratch on these datasets, and report aver- age result on XNLI task from 5 random restart of fine-tuning. Detail experiment setting and develop set result is presented in Table 3, We can see that 0.7%/1.0% of improvement in develop/test accu- racy is achieved on this DLM task. # 4.6 Cloze Test To verify ERNIE’s knowledge learning ability, We use several Cloze test samples (Taylor, 1953) to examine the model. In the experiment, the name entity is removed from the paragraphs and the model need to infer what it is. Some cases are show in Figure 4. We compared the predictions of BERT and ERNIE. To better understand ERNIE, we perform ablation experiments over every strategy of ERNIE in this In case 1, BERT try to copy the name appeared in the context while ERNIE remembers the knowl- Table 1: Results on 5 major Chinese NLP tasks Task XNLI LCQMC MSRA-NER ChnSentiCorp nlpcc-dbqa Metrics accuracy accuracy F1 accuracy mrr F1 Bert dev 78.1 88.8 94.0 94.6 94.7 80.7 test 77.2 87.0 92.6 94.3 94.6 80.8 ERNIE dev 79.9 (+1.8) 89.7 (+0.9) 95.0 (+1.0) 95.2 (+0.6) 95.0 (+0.3) 82.3 (+1.6) test 78.4 (+1.2) 87.4 (+0.4) 93.8 (+1.2) 95.4 (+1.1) 95.1 (+0.5) 82.7 (+1.9) Table 2: XNLI performance with different masking strategy and dataset size dev Accuracy word-level(chinese character) word-level&phrase-level word-level&phrase-leve&entity-level word-level&phrase-level&entity-level 77.7% 78.3% 78.7% 79.9 % 76.8% 77.3% 77.6% 78.4% # test Accuracy Table 3: XNLI finetuning performance with DLM dev Accuracy 76.5% 77.0% 77.7% 75.9% 75.8% 76.8% Text 20064F9A , SAS, BABE AS ILT—AJLFLucasifitix $F, \JLF Quintus | In September 2006, married Cecilia Cheung. They had two sons, the older one is Zhenxuan Xie and the younger one is Zhennan Xie. RABE, RRB AS, 2. . RAB SERA LBD ABET BI ROL RR The Reform Movement of 1898, also known as the Hundred-Day Reform, was a bourgeois reform carried out by the reformists such as ___ and Qichao Liang through Emperor Guangxu. BMBVShF_ _ UR RAS AIR, NAAR SE. RRA HFEN SI, SAMAR, HAEAR BL DE, mB. HARE. WR. Hyperglycemia is caused by defective PAE secretion or impaired biological function, or both. Long-term hyperglycemia in diabetes leads to chronic damage and dysfunction of various tissues, especially eyes, kidneys, heart, blood vessels and nerves. RAANLA-THERANAAENER, BBA. (FAM ERATE RENARASRH 2ABHK, SREBARFP ROR, RAS HT HOBSRE—MBR. Australia is a highly developed capitalist country with __ its capital. As the most developed country in the Southern as Hemisphere, the 12th largest economy in the world and the fourth largest exporter of agricultural products in the world, the world's largest exporter of various minerals. ___ PRR MBS, ANT ARKR RBS ime, 5 (SBR) OKT) (LES) HHRAPHARAAAS. is a classic novel of Chinese gods and demons, which reaching the peak of ancient Romantic novels. It is also known as the four classical works of China with Romance of the Three Kingdoms, Water Margin and Dream of Red Mansions. MCAATN SMS AA, $B lls. 
it is also Relativity is a theory about space-time and gravity, which was founded by Predict by ERNIE Tingfeng Xie RAA Youwei Kang RSE Insulin Bra Melbourne wiFic The Journey to the West RAM Einstein Predict by BERT itt Zhenxuan Xie Phtts Shichang Sun BEA (Not a word in Chinese) BEA (Not a city name) qh) (Not a word in Chinese) ERAT (Not a word in Chinese) Answer WER Tingfeng Xie RBA Youwei Kang REE Insulin RBSH Canberra (the capital of Australia) cepatey The Journey to the West Bam Einstein Figure 4: Cloze test edge about relationship mentioned in the article. In cases 2 and Case 5, BERT can successfully learn the patterns according to the contexts, there- fore correctly predicting the named entity type but failing to fill in the slot with the correct entity. on the contrary, ERNIE can fill in the slots with the correct entities. In cases 3, 4, 6, BERT fills in the slots with several characters related to sen- tences, but it is hard to predict the semantic con- cept. ERNIE predicts correct entities except case 4. Although ERNIE predicts the wrong entity in Case 4, it can correctly predict the semantic type and fills in the slot with one of an Australian city. In summary, these cases show that ERNIE per- forms better in context-based knowledge reason- ing. # 5 Conclusion In this paper, we presents a novel method to inte- grate knowledge into pre-training language model. Experiments on 5 Chinese language processing tasks show that our method outperforms BERT over all of these tasks. We also confirmed that both the knowledge integration and pre-training on het- erogeneous data enable the model to obtain better language representation. In future we will integrate other types of knowl- edge into semantic representation models, such as using syntactic parsing or weak supervised signals from other tasks. In addition We will also validate this idea in other languages. # References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3(Feb):1137–1155. Daniel Cer, Yinfei Yang, Sheng Yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, and Chris Tar. 2018. Universal sentence encoder. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, In and Sanja Fidler. 2015. Skip-thought vectors. Advances in neural information processing systems, pages 3294–3302. Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. Lcqmc: A large-scale chinese question matching In Proceedings of the 27th International corpus. Conference on Computational Linguistics, pages 1952–1962. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems, pages 6294–6305. 
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.

Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/languageunsupervised/language understanding paper.pdf.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.

TAN Song-bo. ChnSentiCorp.

Wilson L Taylor. 1953. Cloze procedure: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Ellen M Voorhees. 2001. Overview of the TREC 2001 question answering track. In TREC, pages 42–51.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.

Yinfei Yang, Steve Yuan, Daniel Cer, Sheng Yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun Hsuan Sung, and Brian Strope. 2018. Learning semantic textual similarity from conversations.
{ "id": "1810.04805" }
1904.09107
Code-Switching for Enhancing NMT with Pre-Specified Translation
Leveraging user-provided translation to constrain NMT has practical significance. Existing methods can be classified into two main categories, namely the use of placeholder tags for lexicon words and the use of hard constraints during decoding. Both methods can hurt translation fidelity for various reasons. We investigate a data augmentation method, making code-switched training data by replacing source phrases with their target translations. Our method does not change the MNT model or decoding algorithm, allowing the model to learn lexicon translations by copying source-side target words. Extensive experiments show that our method achieves consistent improvements over existing approaches, improving translation of constrained words without hurting unconstrained words.
http://arxiv.org/pdf/1904.09107
Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, Min Zhang
cs.CL
null
null
cs.CL
20190419
20190516
9 1 0 2 y a M 6 1 ] L C . s c [ 4 v 7 0 1 9 0 . 4 0 9 1 : v i X r a # Code-Switching for Enhancing NMT with Pre-Specified Translation Kai Song1,2, Yue Zhang3, Heng Yu2, Weihua Luo2, Kun Wang1, Min Zhang1 1 Soochow University, Suzhou, China 2 Alibaba DAMO Academy, Hangzhou, China 3 School of Engineering, Westlake University, Hangzhou, China [email protected], [email protected] {yuheng.yh,weihua.luowh}@alibaba-inc.com [email protected], [email protected] # Abstract Leveraging user-provided translation to con- strain NMT has practical significance. Exist- ing methods can be classified into two main categories, namely the use of placeholder tags for lexicon words and the use of hard con- straints during decoding. Both methods can hurt translation fidelity for various reasons. We investigate a data augmentation method, making code-switched training data by replac- ing source phrases with their target transla- tions. Our method does not change the NMT model or decoding algorithm, allowing the model to learn lexicon translations by copy- ing source-side target words. Extensive exper- iments show that our method achieves consis- tent improvements over existing approaches, improving translation of constrained words without hurting unconstrained words. # Introduction One important research question in domain- specific machine translation (Luong and Manning, 2015) is how to impose translation constraints (Crego et al., 2016; Hokamp and Liu, 2017; Post and Vilar, 2018). As shown in Figure 1 (a), the word “breadboard” can be translated into “切面 包板 (a wooden board that is used to cut bread on)” in the food domain, but “电 è·¯ 板 (a con- struction base for prototyping of electronics)” in the electronic domain. To enhance translation quality, a lexicon can be leveraged for domain- specific or user-provided words (Arthur et al., 2016; Hasler et al., 2018). We investigate the method of leveraging pre-specified translation for NMT using such a lexicon. user-provided or domain-specific dictionary: breadboard —> set Input: I want a breadboard A Input: I want a breadboard Output: $& A —7 VI AZ ‘Code-switched: I want a !1) {iz 1 want a Constrained: $& ABE —7 Hii TP want a (a) Constrained NMT Output: we ABE PB want a (b) Constrained NMT: Our Method Figure 1: Constrained NMT 2014) on both the source and target sides during training, so that a model can translate such words by learning to translate placeholder tags. For ex- ample, the i-th named entity in the source sentence is replaced with “tagi”, as well as its correspond- ing translation in the target side. Placeholder tags in the output are replaced with pre-specified trans- lation as a post-processing step. One disadvantage of this approach, however, is that the meaning of the original words in the pre-specified translation is not fully retained, which can be harmful to both adequacy and fluency of the output. Another approach (Hokamp and Liu, 2017; Post and Vilar, 2018) imposes pre-specified transla- tion via lexical constraints, making sure such con- straints are satisfied by modifying NMT decod- ing. This method ensures that pre-specified trans- lations appear in the output. A problem of this method is that it does not explicitly explore the correlation between pre-specified translations and their corresponding source words during decod- ing, and thus can hurt translation fidelity (Hasler et al., 2018). There is not a mechanism that allows the model to learn constraint translations during training, which the placeholder method allows. 
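To make the placeholder pipeline discussed above concrete, the short sketch below rewrites source phrases that have a pre-specified translation to indexed tags before translation and substitutes the tags in the NMT output back in a post-processing step. The NMT call itself is faked with a hand-written output, and the single-token matching is a simplification; this illustrates the existing baseline, not the proposed method, and the Chinese output tokens are only an illustration.

```python
def to_placeholders(source_tokens, lexicon):
    """Replace source words that have a pre-specified translation with indexed tags."""
    tags, out = {}, []
    for tok in source_tokens:
        if tok in lexicon:
            tag = f"_tag{len(tags)}_"
            tags[tag] = lexicon[tok]      # remember the target-side translation
            out.append(tag)
        else:
            out.append(tok)
    return out, tags

def post_process(output_tokens, tags):
    """Substitute each tag in the NMT output with its pre-specified translation."""
    return [tags.get(tok, tok) for tok in output_tokens]

lexicon = {"breadboard": "电路板"}       # user-provided entry as in Figure 1
src, tags = to_placeholders("I want a breadboard".split(), lexicon)
# The real system would translate `src` with an NMT model; here we fake its output.
fake_nmt_output = ["我", "想要", "一个", "_tag0_"]
print(post_process(fake_nmt_output, tags))
```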
For leveraging pre-specified translation, one ex- isting approach uses placeholder tags to substitute named entities (Crego et al., 2016; Li et al., 2016; Wang et al., 2017b) or rare words (Luong et al., We investigate a novel method based on data augmentation, which combines the advantages of both methods above. The idea is to construct syn- thetic parallel sentences from the original paral- lel training data. The synthetic sentence pairs re- semble code-switched source sentences and their translations, where certain source words are re- placed with their corresponding target transla- tions. The motivation is to make the model learn to “translate” embedded pre-specified translations by copying them from the modified source. During decoding, the source is similarly modified as a pre- processing step. As shown in Figure 1 (b), trans- lation is executed over the code-switched source, without further constraints or post-processing. to the placeholder method, our method keeps lexical semantic information (i.e. target words v.s. placeholder tags) in the source, which can lead to more adequate translations. Compared with the lexical constraint method, pre- specified translation is learned because such in- formation is available both in training and de- coding. As a data augmentation method, it can In addi- be used on any NMT architecture. tion, our method enables the model to translate code-switched source sentences, and preserve its strength in translating un-replaced sentences. To further strengthen copying, we propose two model-level adjustments: First, we share target- side embeddings with source-side target words, so that target vocabulary words have a unique embed- ding in the NMT system. Second, we integrate pointer network (Vinyals et al., 2015; Gulcehre et al., 2016; Gu et al., 2016; See et al., 2017) into the decoder. The copy mechanism was firstly pro- posed to copy source words. In our method, it is further used to copy source-side target words. Results on large scale English-to-Russian (En- Ru) and Chinese-to-English (Ch-En) tasks show that our method outperforms both placeholder and lexical constraint methods over a state-of-the-art Transformer (Vaswani et al., 2017) model on var- ious test sets across different domains. We also show that shared embedding and pointer network can lead to more successful applications of the copying mechanism. We release four high-quality En-Ru e-commerce test sets translated by Russian language experts, totalling 7169 sentences with an average length of 211. # 2 Related Work Using placeholders. Luong et al. (2014) use an- notated unk tags to present the unk symbols in 1To best of our knowledge, this is the first public e- commerce test set. training corpora, where the correspondence be- tween source and target unk symbols are obtained from word alignment (Brown et al., 1993). Output unk tags are replaced through a post-processing stage by looking up a pre-specified dictionary or copying the corresponding source word. Crego (2016) extended unk tags symbol to spe- et al. cific symbols that can present name entities. Wang et al. (2017b) and Li et al. (2016) use a similar method. This method is limited when constrain NMT with pre-specified translations consisting of more general words, due to the loss of word mean- ing when representing them with placeholder tags. In contrast to their work, word meaning is fully kept in modified source in our work. Lexical constraints. 
Hokamp and Liu (2017) propose an altered beam search algorithm, namely grid beam search, which takes target-side pre- specified translations as lexical constraints during beam search. A potential problem of this method is that translation fidelity is not specifically con- sidered, since there is no indication of a match- ing source of each pre-specific translation. In addition, decoding speed is significantly reduced (Post and Vilar, 2018). Hasler et al. (2018) use alignment to gain target-side constraints’ corre- sponding source words, simultaneously use finite- state machines and multi-stack (Anderson et al., 2016) decoding to guide beam search. Post and Vilar (2018) give a fast version of Hokamp and Liu (2017), which limits the decoding complex- ity linearly by altering the beam search algorithm through dynamic beam allocation. In contrast to their methods, our method does not make changes to the decoder, and therefore decoding speed remains unchanged. Translation fidelity of pre-specified source words is achieved through a combination of training and decod- ing procedure, where replaced source-side words still contain their target-side meaning. As a soft method of inserting pre-specified translation, our method does not guarantee that all lexical con- straints are satisfied during decoding, but has bet- ter overall translation quality compared to their method. Using probabilistic lexicons. Aiming at mak- ing use of one-to-many phrasal translations, the following work is remotely related to our work. Tang et al. (2016) use a phrase memory to pro- vide extra information for their NMT encoder, dy- namically switching between word generation and phrase generation during decoding. Wang et al. (2017a) use SMT to recommend prediction for NMT, which contains not only translation opera- tions of a SMT phrase table, but also alignment in- formation and coverage information. Arthur et al. (2016) incorporate discrete lexicons by converting lexicon probabilities into predictive probabilities and linearly interpolating them with NMT proba- bility distributions. Our method is similar in the sense that external translations of source phrases are leveraged. How- ever, their tasks are different. In particular, these methods regard one-to-many translation lexicons as a suggestion. In contrast, our task aims to con- strain NMT translation through one-to-one pre- specified translations. Lexical translations can be used to generate code-switched source sentences during training, but we do not modify NMT mod- els by integrating translation lexicons. In addition, our data augmentation method is more flexible, because it is model-free. (2018) simulate a dictionary- guided translation task to evaluate NMT’s align- ment extraction. A one-to-one word translation dictionary is used to guide NMT decoding. In their method, a dictionary entry is limited to only one word on both the source and target sides. In addi- tion, a pre-specified translation can come into ef- fect only if the corresponding source-side word is successfully aligned during decoding. On translating named entities, Currey et al. (2017) augment the training data by copying target-side sentences to the source-side, resulting in augmented training corpora where the source and the target sides contain identical sentences. The augmented data is shown to improve transla- tion performance, especially for proper nouns and other words that are identical in the source and tar- get languages. # 3 Data augmentation Our method is based on data augmentation. 
Dur- ing training, augmented data are generated by re- placing source words or phrases directly with their corresponding target translations. The motivation is to sample as many code-switched translation pairs as possible. During decoding, given pre- specified translations, the source sentence is mod- ified by replacing phrases with their pre-specified translations, so that the trained model can directly copy embedded target translations in the output. # 3.1 Training Given a bilingual training corpus, we sample aug- mented sentence pairs by leveraging a SMT phrase table, which can be trained over the same bilin- gual corpus or a different large corpus. We extract source-target phrase pairs2 from the phrase table, replacing source-side phrases of source sentences using the following sampling steps: 1. Indexing between source-target phrase pairs and training sentences: (a) For each source- target phrase pair, we record all the match- ing bilingual sentences that contain both the source and target. Word alignment can be used to ensure the phrase pairs that are mu- tual translation. (b) We also sample bilin- gual sentences that match two source-target phrase pairs. In particular, given a combina- tion of two phrase pairs, we index bilingual sentences that match both simultaneously. 2. Sampling: (a) For each source-target phrase pair, we keep at most k1 randomly selected matching sentences. The source-side phrase is replaced with its target-side translation. (b) For each combination of two source-target phrase pairs, we randomly sample at most k2 matching sentences. Both source-side matching phrases are replaced with their tar- get translations.3 The sampled training data is added to the origi- nal training data to form a final set of training sen- tences. # 3.2 Decoding We impose target-side pre-specified translations to the source by replacing source phrases with their translations. Lexicons are defined in the form of one-to-one source-target phrase pairs. Different from training, the number of replaced phrases in a source sentence is not necessarily restricted to one or two, which will be discussed in Section 5.5. In practice, pre-specified translations can be provided by customers or through user feedback, which contains one identified translation for spec- ified source segment. # 4 Model Transformer (Vaswani et al., 2017) uses self- attention network for both encoding and decod- 2Source-side phrase is at most trigram. 3We set k1 = 100, k2 = 30 empirically. ing. The encoder is composed of n stacked neu- ral layers. For time step i in layer j, the hidden state hi,j is calculated by employing self-attention over the hidden states in layer j − 1, which are {h1,j−1, h2,j−1, ..., hm,j−1}, where m is the num- ber of source-side words. In particular, h;,; is calculated as follows: First, a self-attention sub-layer is employed to encode the context. Then attention weights are computed as scaled dot product between the current query hij-1 and all keys {hi j—1, haji, hm ji}, normalized with a softmax function. 
Af- ter that, the context vector is represented as weighted sum of the values projected from hid- den states in the previous layer, which are {hij-1, ho j-1, wey hmg-1}- The hidden state in the previous layer and the context vector are then connected by residual connection, followed by a layer normalization function (Ba et al., 2016), to produce a candidate hidden state hy, ;- Finally, an- other sub-layer including a feed-forward network (FEN) layer, followed by another residual connec- tion and layer normalization, are used to obtain the hidden state h; ;. In consideration of translation quality, multi- head attention is used instead of single-head at- tention as mentioned above, positional encoding is also used to compensate the missing of position information in this model. The decoder is also composed of n stacked layers. For time step ¢ in layer j, a self- attention sub-layer of hidden state s;; is calcu- lated by employing self-attention mechanism over hidden states in previous target layer, which are {51j-1, $2j-1, --, 5:-1-1}, resulting in candi- date hidden state Shy Then, a second target-to- source sub-layer of hidden state s;; is inserted above the target self-attention sub-layer. In par- ticular, the queries(Q) are projected from 8), j» and the keys(/v) and values(V) are projected from the source hidden states in the last layer of encoder, which are {h1.n, h2,n,---;tm.n}. The output state is another candidate hidden state Sip Finally, a last feed-forward sub-layer of hidden state s; ; is calculated by employing self-attention over S. i A softmax layer based on decoder’s last layer st,n is used to gain a probability distribution Ppredict over target-side vocabulary. p(yt|y1, ..., yt−1, x) = softmax(st,n ∗ W), where W is the weight matrix which is learned, x (1) probability distribution over source-side words and target-side vocabulary Linear & Softmax wee | wee ee mito TUR b want a i = 8 yrea)* Pay | ] Srvc * Prredie Pry : target-to-source attention weights Pies aR Vocabulary probability astbution n= ae | aes 7,4 Ty ry 7 a} Pore ws “Encoder leeelleeellecelleee] i want aH ] le want HBR Source Embeddings ‘Target Embeddings Figure 2: Shared embeddings and pointer network represent the source sentence, {y1, y2, ..., yt} rep- resent target words. # 4.1 Shared Target Embeddings Shared target embeddings enforces the correspon- dence between source-side and target-side expres- sions on the embedding level. As shown in Fig- ure 2, during encoding, source-side target word embeddings are identical to their embeddings in the target-side vocabulary embedding matrix. This makes it easier for the model to copy source-side target words to the output. # 4.2 Pointer Network To strengthen copying through locating source- side target words, we integrate pointer network (Gulcehre et al., 2016) into the decoder, as shown in Figure 2. At each decoding time step t, the target-to-source attention weights αt,1, ..., αt,m are utilized as a probability distribution Pcopy, which models the probability of copying a word from the i-th source-side position. The i-th source-side position may represent a source-side word or a source-side target word. Pcopy is added to Ppredict, the probability distribution over target- side vocabulary, to gain a new distribution over both the source and the target side vocabulary4: P = (1 − gpred) ∗ Pcopy + gpred ∗ Ppredict , where gpred is used to control the contribution of two probability distributions. 
For time step t, gpred is calculated from the context vector ct and the current hidden state of the decoder’s last layer st,n: 4For the words which belong to the source-side vocab- ulary but are not appeared in the source-side sentence, the probabilities are set to 0. gpred = σ(ct ∗ Wp + st,n ∗ Wq + br), (3) where W,, W,, and b; are parameters trained and a is the sigmoid function. In addition, the context vector c; is calculated as c, = yin ati * hin where a;,; is attention weight mentioned earlier. {hin, hans; hm} are the source-side hidden states of the encoder’s last layer. # 5 Experiments We compare our method with strong baselines on large-scale En-Ru and Ch-En tasks on var- ious test sets across different domains, using a strongly optimized Transformer (Vaswani et al., 2017). BLEU (Papineni et al., 2002) is used for evaluation. # 5.1 Data Our WMT2018 news translation task. training corpora are taken from the En-Ru. We use 13.88M sentences as base- line training data, containing both a real bilin- gual corpus and a synthetic back-translation cor- pus (Sennrich et al., 2015a). The synthetic corpus is translated from “NewsCommonCrawl”, which The can be obtained from the WMT task. news domain contains four different test sets published by WMT2018 over the recent years, namely “news2015”, “news2016”, “news2017”, and “news2018”, respectively, each having one reference. The e-commerce domain contains four files totalling 7169 sentences, namely “sub- ject17”, “desc17”, “subject18”, and “desc18”, re- spectively, each having one reference. The sen- tences are extracted from e-commerce websites, in which “subject”s are the goods names shown on a listing page. “desc”s refer to information in a commodity’s description page. “subject17” and “desc17” are released5. Our development set is “news2015”. Ch-En. We use 7.42M sentences as our base- line training data, containing both real bilingual corpus and synthetic back-translation corpus (Sen- nrich et al., 2015a). We use seven public devel- opment and test data sets, four in the news do- main, namely “NIST02”, “NIST03”, “NIST04”, “NIST05”, respectively, each with four references, and three in the spoken language domain, namely 5https://github.com/batman2013/ e-commerce_test_sets “CSTAR03”, “IWSLT2004”, “IWLST2005”, re- spectively, each with 16 references. “NIST03” is used for development. # 5.2 Experimental Settings We use six self-attention layers for both the en- coder and the decoder. The embedding size and the hidden size are set to 512. Eight heads are used for self-attention. A feed-forward layer with 2048 cells and Swish (Ramachandran et al., 2018) is used as the activation function. Adam (Kingma and Ba, 2014) is used for training; warmup step is 16000; the learning rate is 0.0003. We use label smoothing (Junczys-Dowmunt et al., 2016) with a confidence score of 0.9, and all the drop-out (Gal and Ghahramani, 2016) probabilities are set to 0.1. We extract a SMT phrase table on the bilin- gual training corpus by using moses (Koehn et al., 2007) with default setting, which is used for matching sentence pairs to generate augmented training data. We apply count-based pruning (Zens et al., 2012) to the phrase table, the thresh- old is set to 10. During decoding, to Hasler et al. (2018), Alkhouli et al. (2018) and Post and Vi- lar (2018), we make use of references to obtain gold constraints. Following previous work, pre- specified translations for each source sentence are sampled from references and used by all systems for fair comparison. 
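The augmented data described above, and the decoding-time pre-processing of Section 3.2, both come down to splicing target-side translations into the source sentence. The sketch below is a minimal, simplified version of that substitution: exact token-sequence matching against a small list of phrase pairs, capped at one or two replacements per sentence as in training. It is an illustration rather than the authors' pipeline, and the example lexicon entry is the breadboard example from Figure 1.

```python
import random

def code_switch(source_tokens, phrase_pairs, max_replacements=2, seed=0):
    """Replace up to `max_replacements` source phrases with their target translations.

    phrase_pairs: (source_phrase, target_phrase) token-list pairs taken from an
                  SMT phrase table (training) or a user lexicon (decoding).
    """
    rng = random.Random(seed)
    tokens = list(source_tokens)
    candidates = list(phrase_pairs)
    rng.shuffle(candidates)                 # sample which phrases get replaced
    done = 0
    for src_phrase, tgt_phrase in candidates:
        if done >= max_replacements:        # one or two replacements during training
            break
        n = len(src_phrase)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == list(src_phrase):
                tokens[i:i + n] = list(tgt_phrase)   # splice in the target phrase
                done += 1
                break
    return tokens

# Figure 1 example: "I want a breadboard" with the pre-specified translation
# breadboard -> 电路板 gives the code-switched source "I want a 电路板".
pairs = [(["breadboard"], ["电路板"])]
print(code_switch("I want a breadboard".split(), pairs))
```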
In all the baseline systems, the vocabulary size is set to 50K on both sides. For “Data augmenta- tion”, to allow the source-side dictionary to cover target-side words, the target- and source-side vo- cabularies are merged for a new source vocabu- lary. For “Shared embeddings”, the source vo- cabulary remains the same as the baselines, where the source-side target words use embeddings from target-side vocabulary. # 5.3 System Configurations We use an in-house reimplementation of Trans- former, similar to Google’s Tensor2Tensor. For the baselines, we reimplement Crego et al. (2016), as well as Post and Vilar (2018). BPE (Sennrich et al., 2015b) is used for all experiments, the oper- ation is set to 50K. Our test sets cover news and e- commerce domains on En-Ru, and news and spo- ken language domains on Ch-En. Baseline 1: Using Placeholder. We combine Luong et al. (2014) and Crego et al. (2016). For news15 news16 news17 news18 A |subject!7 desc17 subjectl8 descl8 A Marian 33.27 31.91 36.18 32.11 -0.15; 803 23.21 11.02 27.94 -0.46 Transformer | 33.29 31.95 36.57 32.27 - 8.56 23.53 11.95 27.90. - + Placeholder | 33.14 32.07 36.24 32.03 -0.15| 9.81 24.04 13.84 29.34 +1.27 + Lexi. Cons. | 33.50 32.62 36.65 32.88 +0.39} 9.24 23.67 13.1 29.83 +0.98 Data Aug. 34.71 33.69 38.43 33.51 +1.57} 10.63 25.56 14.26 30.92 +2.36 + Share 35.28 34.37 39.02 34.44 +2.26; 10.82 25.84 15.20 30.97 +2.72 + Share&Point) 36.44 35.31 40.23 35.43 +3.33) 11.58 26.53 16.08 32.17 +3.61 Table 1: Results on En-Ru, one or two source phrases of each sentence have pre-specified translation. “Trans- former” is our in-house vanilla Transformer baseline. “Marian” is the implementation of Transformer by Junczys- Dowmunt et al. (2018), which is used as a reference of our Transformer implementation. | CSTARO3 IWSLT04 IWSLTOS A | NISTO2 NISTO3 NISTO4 NISTOS A Transformer 53.03 56.52 64.72 - 40.52 37.85 40.12 39.26 - + Placeholder | 52.51 56.15 64.44 -0.39| 40.01 37.16 39.96 38.87 -0.44 + Lexi. Cons. | 53.30 56.95 65.63 +0.54| 40.36 38.02 40.44 39.72 +0.20 Data Aug. 53.82 57.28 65.54 +0.79| 40.85 38.41 40.81 40.29 +0.65 +Share 53.90 57.67 65.59 +0.96| 41.06 38.57 41.22 40.38 +0.87 +Share&Point|} 53.79 57.29 65.65 +0.82| 41.11 38.7 41.3 40.4 +0.94 Table 2: Results on Ch-En, one or two source phrases of each sentence have pre-specified translation. generating placeholder tags during training, fol- lowing Crego et al. (2016), we use a named en- tity translation dictionary which is extracted from Wikidata6. The dictionary is released together with e-commerce test sets, which is mentioned be- fore. For Ch-En, the dictionary contains 285K per- son names, 746K location names and 1.6K orga- nization names. For En-Ru, the dictionary con- tains 471K person names, 254K location names and 1.5K organization names. Additionally, we manually corrected a dictionary which contains 142K brand names and product names translation for En-Ru. By further leveraging word alignment in the same way as Luong et al. (2014), the place- holder tags are annotated with indices. We use FastAlign (Dyer et al., 2013) to generate word alignment. The amount of sentences containing placeholder tags is controlled to a ratio of 5% of the corpus. During decoding, pre-specified trans- lations described in Section 5.2 are used. Baseline 2: Lexical Constraints. We re- implement Post and Vilar (2018), integrating their algorithm into our Transformer. Target-side words or phrases of pre-specified translations mentioned in Section 5.2 are used as lexical constraints. Our System. 
Our System. During training, we use the method described in Section 3.1 to obtain the augmented training data. The SMT phrase table mentioned in Section 5.2 is used for "Indexing" and "Sampling". During decoding, the pre-specified translations mentioned in Section 5.2 are used. The augmented data contain sampled sentences with one or two replacements on the source side. By applying the two sampling steps described in Section 3.1, about 10M augmented Ch-En sentences and 6M augmented En-Ru sentences are generated. The final training corpus consists of both the augmented training data and the original training data.

# 5.4 Results

Comparison with Baselines. Our Transformer implementation gives performance comparable to state-of-the-art NMT (Junczys-Dowmunt et al., 2018); see "Transformer" and "Marian" in Table 1, which also shows a comparison of different methods on En-Ru. The lexical constraint method gives improvements on both the news and the e-commerce domains compared with the Transformer baseline. The placeholder method also gives an improvement on the e-commerce domain.

Pre-Specified Translation: 计划生育 → planned parenthood
Source: [Chinese source sentence; garbled in the extraction]
Transformer: last month , the democratic party of the republic rejected a proposal by the republican party to grant $ 1.1 billion to fight against the zika but contained provisions prohibiting the family planning association from providing virus - related contraceptive conditions and the zika virus could be transmitted sexually .
Baseline1 (Placeholder): last month , the party rejected a proposal from the republican party that would require $ 1.1 billion to fight zika , but would include provisions that prohibit program planned parenthood from providing virus - related contraceptive conditions , and zika could be transmitted sexually .
Baseline2 (Lexi. Cons.): last month , the democratic party of the republic rejected a proposal by the republican party to grant $ 1.1 billion to fight against the zika but contained provisions that prohibit the family planning association from providing contraceptive conditions related to the virus and the zika virus could be transmitted through a sexual route . [remainder garbled in the extraction]
Ours: last month , the democratic party rejected a republican proposal calling for $ 1.1 billion to fight zika , which contains provisions that prohibit the association of planned parenthood from providing virus - related contraceptive conditions , and the zika virus could be transmitted sexually .
Reference: last month , democrats blocked consideration of a republican measure that would have allocated $ 1.1 billion to fight zika but included provisions that would have banned funding for planned parenthood to provide contraception related to the virus , which can be sexually transmitted .
Figure 3: Sample outputs.

The average improvement is calculated over all the test set results in each domain. In the news domain, the average improvement of our method is 3.48 BLEU higher compared with placeholder, and 2.94 over lexical constraints. In the e-commerce domain, the average improvement of our method is 2.34 BLEU compared with placeholder, and 2.63 with lexical constraints. Both shared embeddings and the pointer network are effective. Table 2 shows the same comparison on Ch-En.
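As a sanity check, these averages follow directly from the Δ columns of Table 1: in the news domain, 3.33 − (−0.15) = 3.48 BLEU over the placeholder method and 3.33 − 0.39 = 2.94 over lexical constraints; in the e-commerce domain, 3.61 − 1.27 = 2.34 over the placeholder method and 3.61 − 0.98 = 2.63 over lexical constraints.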
In the spoken language domain, the average improvement is 1.35 BLEU compared with placeholder, and 0.42 with lexical constraints. In the news domain, the average improvement is 1.38 BLEU compared with placeholder, and 0.74 with lexical constraints.

We find that the placeholder method can only bring improvements on the En-Ru e-commerce test sets, since the pre-specified translations of the four e-commerce test sets are mostly entities, such as brand names or product names. Using placeholder tags to represent these entities leads to relatively little loss of word meaning. But on many of the other test sets, the pre-specified translations are mostly vocabulary words. The placeholder tags fail to keep their word meaning during translation, leading to lower results.

The speed contrast between unconstrained NMT, the lexical constraint method, and our method is shown in Table 3. The decoding speed of our method is equal to that of unconstrained NMT, and faster than the lexical constraint method, which confirms our intuition introduced earlier.

Beam Size              5     10    20    30
Unconstrained & Ours   416   312   199   146
Lexical Constraint     50    102   108   74

Table 3: Decoding speed (words/sec), Ch-En dev set.

Sample Outputs. Figure 3 gives a comparison of different systems' translations. Given a Chinese source sentence, the baseline system fails to translate "计划生育" adequately, as "family planning" is not a correct translation of "计划生育". In the pre-specified methods, the correct translation ("计划生育" to "planned parenthood") is achieved in different ways.

For the placeholder method, the source phrase "计划生育" is replaced with the placeholder tag "tag1" during pre-processing. After translation, the output "tag1" is replaced with "planned parenthood" as a post-processing step. However, the underlined word "program" is generated before "planned parenthood", which has no relationship with any source-side word. The source-side word "协会", which means "association", is omitted in the translation. Deeper analysis shows that the specific phrase "program tag1" occurs frequently in the training data. During decoding, using the hard tag leads to the loss of the source phrase's original meaning. As a result, the word "program" is incorrectly generated along with "tag1".

The lexical constraints method regards the target side of the pre-specified translation as a lexical constraint. Here the altered beam search algorithm fails to predict the constraint "planned parenthood" during previous decoding steps. Although the constraint finally comes into effect, over-translation occurs, which is highlighted by the underlined words. This is because the method enforces hard constraints, preventing decoding from stopping until all constraints are met.

Our method makes use of the pre-specified translation by replacing the source-side phrase "计划生育" with the target-side translation "planned parenthood", copying the desired phrase to the output along with the decoding procedure. The translation "association of planned parenthood from providing" is the exact translation of the source-side phrase "计划(planned) 生育(parenthood) 协会(association) 提供(providing)", and agrees with the reference, "planned parenthood to provide".
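Operationally, applying a pre-specified translation at test time in our method is just a source-side string replacement before the unmodified decoder is run. The sketch below is a minimal illustration of that step; the helper name and data layout are our own, not from this work.

```python
def code_switch_source(src_tokens, prespecified):
    """Replace each pre-specified source phrase with its target-side translation.

    prespecified: list of (source_phrase_tokens, target_phrase_tokens) pairs,
                  e.g. [(["计划", "生育"], ["planned", "parenthood"])].
    """
    out, i = [], 0
    while i < len(src_tokens):
        for src_phrase, tgt_phrase in prespecified:
            if src_tokens[i:i + len(src_phrase)] == src_phrase:
                out.extend(tgt_phrase)          # copy the target phrase into the source
                i += len(src_phrase)
                break
        else:
            out.append(src_tokens[i])
            i += 1
    return out

# The code-switched source is then decoded by the standard beam search, with no
# change to the search algorithm, which is why decoding speed matches the
# unconstrained baseline (Table 3).
```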
# 5.5 Analysis

Effect of Using More Pre-specified Translations. Even though the augmented training data have only one or two replacements on the source side, the model can translate a source sentence with up to five replacements. Figure 4 shows that, compared with the unconstrained Transformer, the translation quality of our method keeps increasing as the number of replacements increases, since more pre-specified translations are used.

Figure 4: Increased BLEU on Ch-En test sets.

We additionally measure the effect on the Ch-En WMT test sets, namely "newsdev2017", "newstest2017", and "newstest2018", each having only one reference instead of four. The baseline BLEU scores on these three test sets are 18.49, 20.01 and 19.05, respectively. Our method gives BLEU scores of 20.56, 22.3 and 21.08, respectively, when using one or two pre-specified translations for each sentence. The increased BLEU when utilizing different numbers of pre-specified translations is shown in Figure 4. We found that the improvements on the WMT test sets are more significant than on NIST, since pre-specified translations are sampled from one reference only, enforcing the output to match this reference. The placeholder method does not give consistent improvements on the news test sets, due to the same reason as mentioned earlier.

As shown in Figure 5, the copy success rate of our method does not decrease significantly when the number of replacements grows. Here, a copy success refers to a pre-specified target translation that occurs in the output. The placeholder method achieves a higher copy success rate than ours when the number of replacements is 1, but its copy success rate decreases when using more pre-specified translations. The copy success rate of the lexical constraint method is always 100%, since it imposes hard constraints rather than soft constraints. However, as discussed earlier, overall translation quality can be harmed as a cost of satisfying decoding constraints with their method.

Figure 5: Copy success rate on Ch-En test sets.

In the presented experiment results, the highest copy success rate of our method is 90.54%, which means a number of source-side target words or phrases are not successfully copied to the translation output. This may be caused by the lack of training samples for certain target-side words or phrases. In En-Ru, we additionally train a model with augmented data that is obtained by matching an SMT phrase table without any pruning strategy. The copy success rate can then reach 98%, even without using the "shared embedding" and "pointer network" methods.

                NIST02   NIST03   NIST04   NIST05
Data Aug.       83.89%   85.71%   86.71%   87.45%
+ Share&Point   87.72%   88.31%   89.18%   90.54%

Table 4: Copy success rate on Ch-En test sets.

            news15   news16   news17   news18
Baseline    33.29    31.95    36.57    32.27
Ours        33.53    32.29    36.54    32.47

Table 5: BLEU scores of non code-switched (original) input on En-Ru test sets.

Effect of Shared Embeddings and Pointer Network. The gains of shared embeddings and the pointer network are reflected in both the copy success rate and translation quality. As shown in Table 4, when using one pre-specified translation for each source sentence, the copy success rate improves on various test sets by integrating shared embeddings and the pointer network, demonstrating that more pre-specified translations come into effect.
Table 1 and Table 2 earlier show the improve- ment of translation quality. Translating non Code-Switched Sentences. Our method preserves its strength on translating non code-switched sentences. As shown in Ta- ble 5, the model trained on the augmented cor- pus has comparable strength on translating un- replaced sentences as the model trained on the original corpus. In addition, on some test sets, our method is slightly better than the baseline when translating non code-switched source sentences. This can be explained from two aspects: First, the augmented data make the model more robust to perturbed inputs; Second, the pointer network makes the model better by copying certain source- side words (Gulcehre et al., 2016), such as non- transliterated named entities. # 6 Conclusion We investigated a data augmentation method for constraining NMT with pre-specified translations, utilizing code-switched source sentences and their translations as augmented training data. Our method allows the model to learn to translate source-side target phrases by “copying” them to improvements the output, achieving consistent over previous lexical constraint methods on large NMT test sets. To the best of our knowledge, we are the first to leverage code switching for NMT with pre-specified translations. # 7 Future Work In the future, we will study how the copy suc- cess rate and the BLEU scores interact when dif- ferent sampling strategies are taken to obtain aug- mented training corpus and when the amount of augmented data grows. Another direction is to validate the performance when applying this ap- proach to language pairs that contain a number of identical letters in their alphabets, such as English to French and English to Italian. # Acknowledgments We thank the anonymous reviewers for their de- tailed and constructed comments. Yue Zhang is the corresponding author. The research work is supported by the National Natural Science Foun- dation of China (61525205). Thanks for Shao- hui Kuang, Qian Cao, Zhongqiang Huang and Fei Huang for their useful discussion. # References Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multi-head attention-based neural machine translation. arXiv preprint arXiv:1809.03985. Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. Guided open vocabulary image captioning with constrained beam search. CoRR, abs/1612.00576. Philip Arthur, Graham Neubig, and Satoshi Naka- mura. 2016. Incorporating discrete translation lexi- cons into neural machine translation. arXiv preprint arXiv:1606.02006. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hin- arXiv preprint ton. 2016. Layer normalization. arXiv:1607.06450. Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathemat- ics of statistical machine translation: Parameter esti- mation. Computational linguistics, 19(2):263–311. Jungi Kim, Guillaume Klein, An- abel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, et al. 2016. Systran’s pure neu- arXiv preprint ral machine translation systems. arXiv:1610.05540. Anna Currey, Antonio Valerio Miceli Barone, and Ken- neth Heafield. 2017. Copied monolingual data im- proves low-resource neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 148–156. Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameteriza- In Proceedings of the 2013 tion of ibm model 2. 
Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648. Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019–1027. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Incorporating copying mechanism in arXiv preprint Li. 2016. sequence-to-sequence learning. arXiv:1603.06393. Caglar Gulcehre, Sungjin Ahn, Ramesh Nallap- ati, Bowen Zhou, and Yoshua Bengio. 2016. arXiv preprint Pointing the unknown words. arXiv:1603.08148. Eva Hasler, Adri`a De Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation de- coding with terminology constraints. arXiv preprint arXiv:1805.03750. Chris Hokamp and Qun Liu. 2017. Lexically con- strained decoding for sequence generation using grid beam search. arXiv preprint arXiv:1704.07138. Marcin Junczys-Dowmunt, Tomasz Dwojak, and Rico Sennrich. 2016. The amu-uedin submission to the wmt16 news translation task: Attention-based nmt models as feature functions in phrase-based smt. arXiv preprint arXiv:1605.04809. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116– 121, Melbourne, Australia. Association for Compu- tational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source In Pro- toolkit for statistical machine translation. ceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177–180. Association for Computational Linguis- tics. Xiaoqing Li, Jiajun Zhang, and Chengqing Zong. 2016. Neural name translation improves neural machine translation. arXiv preprint arXiv:1607.01856. Minh-Thang Luong and Christopher D Manning. 2015. Stanford neural machine translation systems for spo- In Proceedings of the In- ken language domains. ternational Workshop on Spoken Language Transla- tion, pages 76–79. Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2014. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proc. ACL, pages 311–318, Philadelphia, Pennsylvania, USA. lexically constrained decoding with dynamic beam alloca- tion for neural machine translation. arXiv preprint arXiv:1804.06609. Prajit Ramachandran, Barret Zoph, and Quoc V Le. 2018. Searching for activation functions. Abigail See, Peter J Liu, and Christopher D Man- to the point: Summarization arXiv preprint ning. 2017. Get with pointer-generator networks. arXiv:1704.04368. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation mod- els with monolingual data. Computer Science. Rico Sennrich, Barry Haddow, and Alexandra Birch. rare arXiv preprint 2015b. words with subword units. arXiv:1508.07909. 
Neural machine translation of Yaohua Tang, Fandong Meng, Zhengdong Lu, Hang Li, and Philip LH Yu. 2016. Neural machine transla- tion with external phrase memory. arXiv preprint arXiv:1606.01792. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural In- formation Processing Systems, pages 2692–2700. Xing Wang, Zhengdong Lu, Zhaopeng Tu, Hang Li, Deyi Xiong, and Min Zhang. 2017a. Neural machine translation advised by statistical machine translation. In AAAI, pages 3330–3336. Yuguang Wang, Shanbo Cheng, Liyang Jiang, Jia- jun Yang, Wei Chen, Muze Li, Lin Shi, Yanfeng Wang, and Hongtao Yang. 2017b. Sogou neural ma- In Proceed- chine translation systems for wmt17. ings of the Second Conference on Machine Transla- tion, pages 410–415. Richard Zens, Daisy Stanton, and Peng Xu. 2012. A systematic comparison of phrase table pruning tech- In Proceedings of the 2012 Joint Confer- niques. ence on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 972–983. Association for Compu- tational Linguistics.
{ "id": "1606.02006" }
1904.09080
Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process
We consider networks, trained via stochastic gradient descent to minimize $\ell_2$ loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error, in terms of an implicit regularization term corresponding to the sum over the data points, of the squared $\ell_2$ norm of the gradient of the model with respect to the parameter vector, evaluated at each data point. This holds for networks of any connectivity, width, depth, and choice of activation function. We interpret this implicit regularization term for three simple settings: matrix sensing, two layer ReLU networks trained on one-dimensional data, and two layer networks with sigmoid activations trained on a single datapoint. For these settings, we show why this new and general implicit regularization effect drives the networks towards "simple" models.
http://arxiv.org/pdf/1904.09080
Guy Blanc, Neha Gupta, Gregory Valiant, Paul Valiant
cs.LG, stat.ML
null
null
cs.LG
20190419
20200722
# Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process

Guy Blanc∗1, Neha Gupta†1, Gregory Valiant‡1, and Paul Valiant§2

# 1Stanford University 2Purdue University; Institute for Advanced Study

February 3, 2022

# Abstract

We consider networks, trained via stochastic gradient descent to minimize $\ell_2$ loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error, in terms of an implicit regularization term corresponding to the sum over the data points, of the squared $\ell_2$ norm of the gradient of the model with respect to the parameter vector, evaluated at each data point. This holds for networks of any connectivity, width, depth, and choice of activation function. We interpret this implicit regularization term for three simple settings: matrix sensing, two layer ReLU networks trained on one-dimensional data, and two layer networks with sigmoid activations trained on a single datapoint. For these settings, we show why this new and general implicit regularization effect drives the networks towards "simple" models.

# 1 Introduction

This work is motivated by the grand challenge of explaining—in a rigorous way—why deep learning performs as well as it does. Despite the explosion of interest in deep learning, driven by many practical successes across numerous domains, there are many basic mysteries regarding why it works so well. Why do networks with orders of magnitude more parameters than the dataset size, trained via stochastic gradient descent (SGD), often yield trained networks with small generalization error, despite the fact that such networks and training procedures are capable of fitting even randomly labeled training points [19]? Why do deeper networks tend to generalize better, as opposed to worse, as one might expect given their increased expressivity? Why does the test performance of deep networks often continue to improve after their training loss plateaus or reaches zero?

In this paper, we introduce a framework that sheds light on the above questions. Our analysis focuses on deep networks, trained via SGD, but where the gradient updates are computed with respect to noisy training labels. Specifically, for a stochastic gradient descent update for training data point x and corresponding label y, the gradient is computed for the point (x, y + Z) for some zero-mean, bounded random variable Z, chosen independently at each step of SGD. We analyze this specific form of SGD with independent label noise because such training dynamics seem to reliably produce "simple" models, independent of network initialization, even when trained on a small number of data points. This is not true for SGD without label noise, which has perhaps hindered attempts to rigorously formalize the sense in which training dynamics leads to "simple" models. In Section 1.2, however, we discuss the possibility that a variant of our analysis might apply to SGD without label noise, provided the training set is sufficiently large and complex that the randomness of SGD mimics the effects of the explicit label noise that we consider.

∗[email protected] †[email protected] ‡[email protected] §[email protected]
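For concreteness, the training procedure just described amounts to ordinary SGD in which the sampled label is freshly perturbed at every step. The following is a toy sketch of that loop under our own naming conventions; it is an illustration of the setup, not the code used for the experiments in this paper.

```python
import numpy as np

def sgd_with_label_noise(grad_loss, theta0, X, y, eta=1e-3, noise=0.1,
                         steps=100000, seed=0):
    """SGD on squared loss, with the sampled label independently perturbed each step.

    grad_loss(theta, x, y) should return the gradient of (h(x, theta) - y)**2
    with respect to theta, for whatever model h one has in mind.
    """
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    n = len(X)
    for _ in range(steps):
        i = rng.integers(n)                     # uniformly random training point
        z = noise * rng.choice([-1.0, 1.0])     # zero-mean, bounded noise Z
        theta -= eta * grad_loss(theta, X[i], y[i] + z)
    return theta
```

With Z = ±δ, this loop is equivalent to running standard SGD over a dataset in which each point (x, y) is duplicated as (x, y + δ) and (x, y − δ), a view returned to in Section 1.2.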
Our main result, summarized below, characterizes the zero-training-error attractive fixed points of the dynamics of SGD with label noise and @, loss, in terms of the local optima of an implicit regularization term. Theorem 1 (informal). Given training data (x1,y1),---,(@n, Yn), consider a model h(x,6) with bounded derivatives up to 3" order, and consider the dynamics of SGD, with independent bounded label noise of constant variance, and l2 loss function OE (h(a, 0) — y;)?.. A parameter vector & with 0 training error will be an attractive fixed point of the dynamics if and only if 6* is a local minimizer of the “implicit regularizer” i ; reg(@) = ~» || Voh(as, )|I5, (1) i=l when restricted to the manifold of 0 training error. While the exact dynamics are hard to rigorously describe, the results here are all consistent with the following nonrigorous caricature: SGD with label noise proceeds as though it is optimizing not the loss function, but rather the loss function plus the implicit regularizer times the product of the learning rate (η) and the standard deviation of the label noise. Thus, for small learning rate, SGD with label noise proceeds in an initial phase where the training loss is optimized to 0, followed by a second phase where the implicit regularizer is optimized within the manifold of training error 0. # Implications and Interpretations We illustrate the implications of our general characterization in three basic settings for which the implicit regularization term can be analyzed: matrix sensing as in [11], two-layer ReLU networks trained on one-dimensional data, and 2-layer networks with logistic or tanh activations trained on a single labeled datapoint. In all three cases, empirically, training via SGD with label noise yields “simple” models, where training without label noise results in models that are not simple and that depend on the initialization. The intuitive explanation for this second point is clear: there is a large space of models that result in zero training error. Once optimization nears its first 0-training- error hypothesis, optimization halts, and the resulting model depends significantly on the network initialization. In the following three examples, we argue that the combination of zero training error and being at a local optima of the implicit regularizer reduces the set of models to only “simple” ones. Matrix sensing: Convergence to ground truth from any initialization. [11] consider the problem of “matrix sensing”: given a set of linear “measurements” of an unknown matrix X ∗— namely, inner products with randomly chosen matrices Ai, can one find the lowest-rank matrix X consistent with the data? They found, quite surprisingly, that gradient descent when initialized to an overcomplete orthogonal matrix of small Frobenius norm implicitly regularizes by, essentially, 2 the rank of X, so that the lowest-rank X consistent with the data will be recovered provided the optimization is not allowed to run for too many steps. Intriguingly, our implicit regularizer allows SGD with label noise to reproduce this behavior for arbitrary initialization, and further, without eventually overfitting the data. We outline the main ingredients here and illustrate with empirical results. As in [II], we take our objective function to be minimizing the squared distance between each label y; and the (Frobenius) inner product between the data matrix A; and the symmetrized hypothesis X = UU': min)? 
(vi — (Aj, vut))’ In our notation, this corresponds to having a hypothesis—as a function of the parameters U and the data A;—of h(Ai,U) = (4;,UU'). Thus, taking data drawn from the iid. normal distribution, the implicit regularizer, Equation [I] is seen to be reg) = 4, Brg plll(A+ AUF (2) Because A has mean 0 and covariance equal to the identity matrix, for d x d matrices this expectation is calculated to be 2(d + 1)||U||%. Further, expressed in terms of the overall matrix X = UU", the squared Frobenius norm of U equals the nuclear norm of X, ||U||% = ||X||«, where he nuclear norm may be alternatively defined as the convex envelope of the rank function on matrices of bounded norm. See e.g. [3] for discussion of conditions under which minimizing the nuclear norm under affine constraints guarantees finding the minimum rank solution to the affine system. In our setting, each data point (A;,Â¥;) induces an affine constraint on UU! that must e satisfied for the training error to be 0, and thus the implicit regularizer will tend to find the minimum of Equation[2|sub ject to the constraints, and hence the minimum rank solution subject to he data, as desired. We note that the above intuitive analysis is only in expectation over the A;’s, and we omit an analysis of the concentration. However, empirical results, in Figure {1} illustrate he success of this regularizing force, in the natural regime where n = 5- rank - dimension. 2-Layer ReLU networks, 1-d data: Convergence to piecewise linear interpolations. Consider a 2-layer ReLU network of arbitrary width, trained on a set of 1-dimensional real-valued datapoints, (x1, y1), . . . , (xn, yn). Such models are not differentiable everywhere, and hence Theo- rem 1 does not directly apply. Nevertheless, we show that, if one treats the derivative at the “kink” in the ReLU function as being 0, then local optima of the implicit regularization term correspond to “simple” functions that have the minimum number of convexity changes necessary to fit the training points. The proof of the following theorem is given in Appendix B. Theorem 2. Consider a dataset of 1-dimensional data, corresponding to (x1, y1), . . . , (xn, yn) with xi < xi+1. Let θ denote the parameters of a 2-layer ReLU network (i.e. with two layers of trainable weights) and where there is an additional constant and linear unit leading to the output. Let θ correspond to a function with zero training error. If the function, restricted to the interval (x1, xn), has more than the minimum number of changes of convexity necessary to fit the data, then there exists an infinitesimal perturbation to θ that 1) will preserve the function value at all training points, and 2) will decrease the implicit regularization term of Equation 1, provided we interpret the derivative of a ReLU at its kink to be 0 (rather than undefined). The above theorem, together with the general characterization of attractive fixed points of the dynamics of training with label noise (Theorem 1), suggest that we should expect this noisy SGD 3 train error (SGD, no noise) ----test error (SGD, no noise) ‘ —train error (SGD with label noise) ----test error (SGD with label noise) 10° 10” 10° 5 Number of iterations 10° 104 10 10° Figure 1: Illustration of the implicit regularization of SGD with label noise in the matrix sensing setting (see [T1]). Here, we are trying to recover a rank r dxd matrix X* = U*U*! 
from n = 5dr linear measurements Aj, (Ai, X*),..., An, (An, X*), via SGD both with and without label noise, with r = 5 and d = 100, and entries of A; chosen i.i.d. from the standard Gaussian. Plots depict the test and training error for training with and without iid. N(0,0.1) label noise, initializing Up = Ig. (Similar results hold when Up is chosen with iid. Gaussian entries.) For both training dynamics, the training error quickly converges to zero. The test error without label noise plateaus with large error, whereas the test error with label noise converges to zero, at a longer timescale, inversely proportional to the square of the learning rate, which is consistent with the theory. training to lead to “simple” interpolations of the datapoints; specifically, for any three co-linear points, the interpolation should be linear. This behavior is supported by the experiments depicted in Figure 2, which also illustrates the fact that training without label noise produces models that are not simple, and that vary significantly depending on the network initialization. Models trained via SGD (without noise) Models trained via SGD, with label noise Figure 2: Both plots depict 2-layer ReLU networks, randomly initialized and trained on the set of 12 points depicted. The left plot shows the final models resulting from training via SGD, for five random initializations. In all cases, the training error is 0, and the models have converged. The right plot shows the models resulting from training via SGD with independent label noise, for 10 random initializations. Theorem 2 explains this behavior as a consequence of our general characterization of the implicit regularization effect that occurs when training via SGD with label noise, given in Theorem 1. Interestingly, this implicit regularization does not occur (either in theory or in practice) for ReLU networks with only a single layer of trainable weights. 2-Layer sigmoid networks, trained on one datapoint: Convergence to sparse models. Finally, we consider the implicit regularizer in the case of a two layer network (with arbitrary width) with logistic or hyperbolic tangent activations, when trained on a dataset that consists of a single labeled d-dimensional point. The proof of this result is given in Appendix C. Theorem 3. Consider a dataset consisting of a single d-dimensional labeled point, (x,y). Let 6 = ({ci},{wi}) denote the parameters of a 2-layer network with arbitrary width, representing the function fo(x) =~", co(wix), where the activation function o is either tanh or the logistic 4 SGD (No Label Noise) 3° 1.00 = o 54 0.756 = = 33 0.50£ c o = al J 2 025 > i“ di 0.00 101 103 10° Training Iteration (Log Scale) SGD with Label Noise g? 1.00 = a %4 0.758 = = 33 0.50£ c @ = al © 2 0.255 > i“ «1 0.00 101 103 10° Training Iteration (Log Scale) SGD (No Label Noise) SGD with Label Noise 3° 1.00 g? 1.00 = o = a 54 0.756 %4 0.758 = = = = 33 0.50£ 33 0.50£ c c o = @ = al J al © 2 025 2 0.255 > > i“ i“ di 0.00 «1 0.00 101 103 10° Training Iteration (Log Scale) 101 103 10° Training Iteration (Log Scale) Figure 3: Plots depicting the training loss (red) and length of the curve corresponding to the trained model (blue) as a function of the number of iterations of training for 2-layer ReLU trained on one-dimensional labeled data. The left plot corresponds to SGD without the addition of label noise, and converges to a trained model with curve length ≈ 5.2. 
The right plot depicts the training dynamics of SGD with independent label noise, illustrating that training first finds a model with close to zero training error, and then—at a much longer timescale—moves within the zero training error manifold to a “simpler” model with significantly smaller curve length of ≈ 4.3. Our analysis of the implicit regularization of these dynamics explains why SGD with label noise favors simpler solutions, as well as why this “simplification” occurs at a longer timescale than the initial loss minimization. activation. If θ corresponds to a model with zero training error for which the implicit regularizer of Equation 1 is at a local minimum in the zero training error manifold, then there exists α1, α2 and β1, β2 such that for each hidden unit i, either ci = α1 and σ(wt ix) = β1, or ci = α2 and σ(wt The above theorem captures the sense that, despite having arbitrarily many hidden units, when trained on an extremely simple dataset consisting of a single training point, the stable parameters under the training dynamics with label noise correspond to simple models that do not leverage the full expressive power of the class of networks of the given size. # 1.2 Future Directions There are a number of tantalizing directions for future research, building off the results of this work. One natural aim is to better understand what types of stochasticity in the training dynamics lead to similar implicit regularization. In our work, we consider SGD with independently perturbed labels. These training dynamics are equivalent to standard SGD, performed over a dataset where each original datapoint (x, y) has two “copies”, corresponding to (x, y + δ) and (x, y − δ). In this setting with two perturbed copies of each data point, the implicit regularization can be viewed as arising from the stochasticity of the choice of datapoint in SGD, together with the fact that no model can perfectly fit the data (since each x-value has two, distinct, y values). Motivated by this view, one natural direction would be to rigorously codify the sense in which implicit regularization arises from performing SGD (without any additional noise) over “difficult-to-fit” data. Figure 2 illustrates the importance of having difficult-to-fit data, in the sense that if the training loss can be driven close to zero too quickly, then training converges before the model has a chance to forget its initialization or “simplify”. One hope would be to show that, on any dataset for which the magnitude of each SGD update remains large for a sufficiently large number of iterations, a similar characterization to the implicit regularization we describe, applies. 5 In a different direction, it seems worthwhile characterizing the implications of Theorem 1 beyond the matrix sensing setting, or the 1-dimensional data, 2-layer ReLU activation setting, or the single datapoint tanh and sigmoid settings we consider. For example, even in the setting of 1-dimensional data, it seems plausible that the characterization of Theorem 1 can yield a result analogous to Theorem 2 for ReLU networks of any depth greater than 2 (and any width), as opposed to just the 2-layer networks we consider (and empirically, the analogous claim seems to hold). For 2-layer networks with tanh or sigmoid activations, it seems likely that our proof could be generalized to argue that: any non-repellent set of parameters for a dataset of at most k points has the property that there are only O(k) classes of activations. 
The question of generalizing the characterization of non-repellent parameters from the 1-dimensional data setting of Theorem 2 to higher dimensional data seems particularly curious. In such a higher dimensional setting, it is not even clear what the right notion of a “simple” function should be. Specifically, the characterization that the trained model has as few changes in convexity as is required to fit the data does not seem to generalize in the most natural way beyond one dimension. Finally, it may also be fruitful to convert an understanding of how “implicit regularization drives generalization” into the development of improved algorithms. Figure 3 and our results suggest that the implicit regulization which drives generalization occurs at a significantly longer time scale than the minimization of the objective function: the training dynamics rapidly approach the zero training error manifold, and then very slowly traverse this manifold to find a simpler model (with better generalization). It seems natural to try to accelerate this second phase, for example, by making the regularization explicit. More speculatively, if we understand why certain implicit (or explicit) regularizations yield models with good generalization, perhaps we can directly leverage a geometric understanding of the properties of such models to directly construct functions that interpolate the training set while having those desirable properties, entirely circumventing SGD and deep learning: for example, Theorem 2 and Figure 2 show a setting where SGD becomes essentially nearest-neighbor linear interpolation of the input data (where the distance metric can be viewed as a kernel analogous to the “neural tangent kernel” of [9] ), a simple model that can be both justified and computed without reference to SGD. # 1.3 Related Work There has been much recent interest in characterizing which aspects of deep learning are associated with robust performance. We largely restrict our discussion to those works with provable results, though the flavor of those results is rather different in each case. An influential paper providing a rigorous example of how gradient descent can be effective despite more trainable parameters than training examples is the work of Li et al. on matrix sensing [11]. In their setting (which is closely related to 2-layer neural networks with 1 out of the 2 layers of weights being trainable), they optimize the coefficients of an n x n matrix, subject to training data that is consistent with a low-rank matrix. What they show is that, for sufficiently small initial data, the coefficients essentially stay within the space of (approximately) low-rank matrices. And thus, while the number of trainable parameters is large (n x n), gradient descent can effectively only access a space of dimension k x n, where k <n is the rank of the training data. This paper marks a key example of provable “algorithmic regularization”: the gradient descent algorithm leads to more felicitous optima than are typical, given the parameterization of the model. 
A few high- level differences between these results and ours include: 1) their results show that the high number of parameters in their setting is essentially an illusion, behind which their model behaves essentially like a low-parameter model, while evolution in our model is a high-dimensional phenomenon; 2) their model is closely related to a neural network with one layer of trainable weights, while we cover 6 much deeper networks, revealing and relying on a type of regularization that cannot occur with only one trainable layer. [12] also proceeds by showing that, when initialized to a parameter vector of small norm, the training dynamics of “simple” data converge to a simple hy- pothesis. They empirically observe that the final function learned by a 2-layer ReLU network on 1-dimensional data, with parameters initialized to have small norm, is a piecewise linear interpola- tion. For the special case when the n datapoints lie in a line, they prove that the resulting trained functions would have at most 2n + 1 changes in the derivative. Several recent papers have shown generalization bounds for neural networks by first describing how different settings lead to an implicit or explicit maximization of the margin separating correct predictions from mispredictions. These papers are in a rather different setting from our current work, where data is typically labeled by discrete categories, and the neural network is trained to rate the correct category highly for each training example, while rating all incorrect categories lower by a margin that should be as large as possible. The paper by [17] showed that, under any of several conditions, when the categories are linearly separable, gradient descent will converge to the max- margin classifier. More generally, [18] showed that optimizing the cross-entropy loss is extremely similar to optimizing the maximum margin, in that, after adding an additional weak regularization term, the global optimum of their loss function provably maximizes the margin. This line of work both leverages and expands the many recent results providing generalization bounds in terms of the margin. We caution, however, that the margin is still essentially a loss function on the training data, and so this alone cannot defend against over-parameterization and the often related problems of overfitting. (Our regularizer, by contrast, depends on a derivative of the hypothesis, and thus unlike the margin, can discriminate between parameter vectors expressing identical functions on the training data.) There are also quite different efforts to establish provable generalization, for example [8, 10], which argue that if networks are trained for few epochs, then the final model is “stable” in that it does not depend significantly on any single data point, and hence it generalizes. Such analyses seem unlikely to extend to the realistic regimes in which networks are trained for large numbers of iterations over the training set. There is also the very recent work tightening this connection between stable algorithms and generalization [6]. In a different vein, recent work [2] establishes generalizability under strong (separability) assumptions on the data for overcomplete networks; this analysis, however, only trains one of the layers of weights (while keeping the other fixed to a carefully crafted initialization). There has been a long line of work, since the late 1980’s, studying the dynamics of neural network training in the presence of different types of noise (see, e.g. 
[1, 5, 7, 13, 15, 16]). This line of work has considered many types of noise, including adding noise to the inputs, adding noise to the labels (outputs), adding noise to the gradient updates (“Langevin” noise), and computing gradients based on perturbed parameters. Most closely related to our work is the paper of [1], which explicitly analyzes label noise, but did not analyze it in enough detail to notice the subtle 2nd-order regularization effect we study here, and thus also did not consider its consequences. There have also been several efforts to rigorously analyze the apparent ability of adding noise in the training to avoid bad local optima. For example, in [20], they consider Langevin noise—noise added to the gradient updates themselves—and show that the addition of this noise (provably) results in the model escaping from local optima of the empirical risk that do not correspond to local optima of the population-risk. In a slightly different direction, there is also a significant effort to understand the type of noise induced by the stochasticity of SGD itself. This includes the recent work [4] which empirically observes a peculiar non-stationary behavior induced by SGD, and [21] which describes how this stochasticity allows the model to tend towards more “flat” local minima. 7 # 2 Formal statement of general characterization Our general result, Theorem |1| applies to any network structure—any width, any depth, any set of (smooth) activation functions. The characterization establishes a simple condition for whether the training dynamics of SGD with label noise, trained under the @2 loss, will drive the parameters away from a given zero training error solution 6. Our characterization is in terms of an implicit regularization term, proportional to the sum over the data points, of the squared @2 norm of the gradient of the model with respect to the parameter vector, evaluated at each data point. Specifically, letting h(x;,@) denote the prediction at point x; corresponding to parameters 6, the implicit regularizer is defined as lJ . reg(@) = —S)||Voh(ai,6)||3- (3) i=l We show that for a zero training error set of parameters, θ0, if there is a data point xi and a direction (within the subspace that, to first order, preserves zero training error) where the regularizer has a nonzero gradient, then for any sufficiently small learning rate η, if the network is initialized near θ0 (or passes near θ0 during the training dynamics), then with probability 1 − exp(−1/poly(η)), the dynamics will drive the parameters at least distance Dη from θ0 after some time Tη, and the value of the implicit regularization term will decrease by at least Θ(poly(η)). On the other hand, if θ0 has zero gradient of the regularizer in these directions, then with probability 1 − exp(−1/poly(η)), when initialized near θ0, the network will stay within distance dη = o(Dη) up through time Tη. This characterization corresponds to saying that the training dynamics will be expected to stay in the vicinity of a zero training error point, θ0, only if θ0 has zero gradient of the implicit regularizer within the zero training error manifold about θ0; for the particular time window Tη, this characterization is strengthened to “if and only if”. 
To quantify the above characterization, we begin by formally defining the sense in which training dynamics are “repelled” from points, θ, which are not local optima of the implicit regularizer within the manifold of zero training error, and defining the sense in which the dynamics are not repelled in the case where the implicit regularizer has zero gradient in these directions. Definition 1. Let 6(t) denote the set of parameters of a learning model, trained via t steps of SGD under squared lz loss with independent label noise of unit variance. We say that 0*, is a “strongly-repellent” point, if there is a constant c > 0 such that for any sufficiently small learning rate, n > 0, for a network initialized to 0(0) satisfying \|0(0) — 6*|| < °°, then with probability at least 1— exp(—1/poly(n)), for t = n7'* : e |/A(t) — 4*|| > cn?4, namely the training dynamics lead away from 6*. • reg(θ(0)) − reg(θ(t)) = cη0.4 + o(η0.4), namely, the value of the implicit regularization term decreases significantly. Definition 2. Given the setup above, we say that 0*, is a “non-repellent” point, if, for any suffi- ciently small learning rate, n > 0, for a network initialized to 0(0) satisfying ||9(0)—6*|| < °°, then with probability at least 1 — exp(—1/poly(n)), for any t <n, it holds that ||O(t) — 6*|| < 1°4. The following theorem quantifies the sense in which the implicit regularizer characterizes the dynamics of training, in the vicinity of parameters with zero training error. 8 Theorem Consider the dynamics of the parameters, 0, of a deep network, trained via SGD to minimize 2 loss, with independent bounded label noise of unit variance. Let parameters 6* correspond to a model f(0*,x) with zero training error, namely f(6*,x;) = y; for alli =1,...,n. If the implicit regularizer has zero gradient in the span of directions where f(0*,x;) has zero gradient, for alli, then 0* is “non-repellent” in the sense of Definition|], (meaning the dynamics will remain near 0* with high probability for a sufficiently long time). Otherwise, if the implicit regularizer has non-zero gradient in the directions spanned by the zero error manifold about 6*, then 0* is “strongly-repellent” in the sense of Definition [i] (implying that with high probability, the dynamics will lead away from 6* and the value of the implicit regularizer will decrease significantly). # Intuition of the implicit regularizer, via an Ornstein-Uhlenbeck like analysis The intuition for the implicit regularizer arises from viewing the SGD with label noise updates as an Ornstein-Uhlenbeck like process. To explain this intuition, we begin by defining the notation and setup that will be used throughout the proof of Theorem 1, given in Section A. # 3.1 Preliminaries and Notation We consider training a model under stochastic gradient descent, with a quadratic loss function. Explicitly, we fit a parameter vector @ given training data consisting of pairs (2;,y;) where 2; is the i*® input and y; € R is the corresponding label; a hypothesis function h(2;,) describes our hypothesis at the i*” training point. The resulting objective function, under @, loss, is (h(xi, θ) − yi)2 (4) # i For convenience, we define the error on the ith training point to be ei(xi, θ) = h(xi, θ) − yi. 
We consider stochastic gradient descent on the objective function expressed by Equation 4, with training rate η, yielding the following update rule, evaluated on a randomly chosen data point i: θ ← θ − η∇θ(ei(xi, θ)2) (5) Our analysis will examine a power series expansion of this SGD update rule with respect to 6, centered around some point of interest, 6*. Without loss of generality, and to simplify notation, we will assume 6* = 0 and hence the power series expansions we consider will be centered at the origin. For notational convenience, we use h; to denote h(x;,0) and e; to denote e(x;,0). To denote derivatives along coordinate directions, we use superscript letters, separated by commas for multiple derivatives: h} denotes the derivative of h; with respect to changing the j*" parameter of 8, anc nik represents the analogous 2nd derivative along coordinates j and k. (All derivatives in this paper are with respect to 0, the second argument of h, since the input data, {x;} never changes.) As a final notational convenience for derivatives, we represent a directional derivative in the direction of vector v with a superscript v, so thus h? = > j vjhi , where v; denotes the jth coordinate of ) v; analogously, h;” is a 3rd derivative along directions v, v, and coordinate j, defined to equal Vee opel In our proof of Theorem || we will only ever be considering directional derivatives in the direction of parameter vector 6. Our proof of Theorem 1 will rely on an expansion of the SGD update rule (Equation 5) expanded to 3rd order about the origin. Explicitly, the jth coordinate of θj updates according to this equation 9 by η times the derivative in the jth direction of ei(xi, θ)2. The kth order term in the power series expansion of this expression will additionally have a kth order directional derivative in the direction θ, and a factor of 1 k! . Thus the kth order term will have one j derivative and k θ derivatives distributed across two copies of ei; since ei(xi, θ) = h(xi, θ) − yi and yi has no θ dependence, any derivatives of ei will show up as a corresponding derivative of hi. Combining these observations yields the 3rd order expansion of the gradient descent update rule: i ) − η(hj where the final big-O term bounds all terms of 4th order and higher. Throughout, we consider the asymptotics in terms of only the learning rate, η < 1, and hence regard θ, the number and dimension of the datapoints, the size of the network, and all derivatives of h at the origin as being bounded by Oη(1). We are concerned with the setting where the label error, ei, has an i.i.d. random component, and assume that this error is also bounded by O(1). Additionally, since we are restricting our attention to the neighborhood of a point with zero training error, we have that for each i, the expectation of ei is 0. # 3.2 Diagonalizing the exponential decay term The 2nd term after 6; on the right hand side of the update rule in Equation (g is —2nhohd = —2n>, Ophk hd . Ignoring the —2n multiplier, this expression equals the vector product of the 6 vector with the j* column of the (symmetric) positive semidefinite matrix whose (j,k) or (k,j) entry equals hi nk. The expectation of this term, over a random choice of 7 and the randomness of the label noise, can be expressed as the positive semidefinite matrix E,(h! hk], which will show up repeatedly in our analysis. Since this matrix is positive semidefinite, we choose an orthonormal coordinate system whose axes diagonalize this matrix. 
Namely, without loss of generality, we take E,[h{h*] to be a diagonal matrix. We will denote the diagonal entries of this matrix as W= E,[h! hi ] = 0. Thus, this term of the update rule for 6; reduces to —2n7;6; in expectation, and hence this terms corresponds to an exponential decay towards 0 with time constant 1/(27);). And for directions with y; = 0, there is no decay. Combined with the 1st term after θj on the right hand side of the update rule in Equation 6, namely −2ηeihj i , whose main effect when ei has expectation near 0 is to add noise to the updates, we have what is essentially an Ornstein-Uhlenbeck process; the 2nd term, analyzed in the previous paragraph, plays the role of mean-reversion. However, because of the additional terms in the update rule, we cannot simply apply standard results, but must be rather more careful with our bounds. However the (multi-dimensional) Ornstein-Uhlenbeck process can provide valuable intuition for the evolution of θ. # Intuition behind the implicit regularizer Recall that we defined the implicit regularizer of Equation 3 to be the square of the length of the gradient of the hypothesis with respect to the parameter vector, summed over the training data. Hence, in the above notation, it is proportional to: Ei[hk i hk i ] k (7) The claim is that stochastic gradient descent with label noise will act to minimize this quantity once the optimization has reached the training error 0 regime. The mechanism that induces this 10 implicit regularization is subtle, and apparently novel. As discussed in Section 3.2, the combination of the first 2 terms of the θj update in Equation 6 acts similarly to a multidimensional Ornstein- Uhlenbeck process, where the noise added by the first term is countered by the exponential decay of the second term, converging to a Gaussian distribution of fixed radius. The singular values γj (defined in Section 3.2) control this process in dimension j, where—ignoring the remaining update terms for the sake of intuition—the Ornstein-Uhlenbeck process will converge to a Gaussian of radius Θ( η) in each dimension for which γj > 0. Crucially, this limiting Gaussian is isotropic! The variance in direction j depends only on the variance of the label noise and does not depend on γj, and the different dimensions become uncorrelated. The convergence time, for the dynamics in the jth direction to converge to a Gaussian, however, is Θ( # η Crucially, once sufficient time has passed for our quasi-Ornstein-Uhlenbeck_process to appro- priately converge, the expectation of the 5th term of the update in Equation E[—2h?*h?] = —2nE[>>), ¢ O,0ch?* hf], takes on a very special form. Assuming for the sake of intuition that con- vergence occurs as described in the previous paragraph, we expect each dimension of @ to be uncorrelated, and thus the sum should consist only of those terms where k = ¢, in which case E(62] should converge to a constant (proportional to the amount of label noise) times 7. Namely, the expected value of the 5th term of the Equation 6] upda e for 0; should be proportional to the aver- age over data points i of —2n? Ye abn’, and this expression is seen to be exactly —7? times the j derivative of the claimed regularizer of Equation [7] n short, subject to the Ornstein-Uhlenbeck intuition, the 5th term of the update for 0; behaves, in expectation, as though it is performing gradient descent on the regularizer, though with a training rate an additional 7 times slower than ae rate of the overall optimization. 
To complete the intuition, note that the rank of the matrix E_i[h_i^j h_i^k] is at most the number of datapoints, and hence for any over-parameterized network, there will be a large number of directions, j, for which γ_j = 0. For sufficiently small η (any value that is significantly smaller than the smallest nonzero γ_j), the update dynamics will look roughly as follows: after ≫ 1/η updates, for any directions k and ℓ with γ_k, γ_ℓ > 0, we have E[θ_k^2] ≈ E[θ_ℓ^2] = O(η), and for k ≠ ℓ, we have E[θ_k θ_ℓ] ≈ 0. The update term responsible for the regularization, −2η h_i^θ h_i^{j,θ}, will not have a significant effect for the directions, j, for which γ_j > 0, as these directions have a significant damping/mean-reversion force and behave roughly as in the Ornstein-Uhlenbeck process, as argued above. However, for a direction j with γ_j = 0, there is no restoring force, and the effects of this term will add up, driving θ consistently in the direction that decreases the implicit regularizer, restricted to the span of the dimensions, j, for which γ_j = 0. The full proof of Theorem 1 is stated in Appendix A.

# Acknowledgements

We would like to thank Hongyang Zhang for suggesting the matrix sensing experiment. The contributions of Guy, Neha, and Gregory were supported by NSF awards AF-1813049 and CCF-1704417, an ONR Young Investigator Award, and DOE Award DE-SC0019205. Paul Valiant is partially supported by NSF award IIS-1562657.

# References

[1] Guozhong An. The effects of adding noise during backpropagation training on a generalization performance. Neural Computation, 8(3):643–674, 1996.

[2] Alon Brutzkus, Amir Globerson, Eran Malach, and Shai Shalev-Shwartz. SGD learns over-parameterized networks that provably generalize on linearly separable data. arXiv preprint arXiv:1710.10174, 2017.

[3] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717, 2009.

[4] Pratik Chaudhari and Stefano Soatto. Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks. arXiv preprint arXiv:1710.11029, 2017.

[5] Reed D Clay and Carlo H Sequin. Fault tolerance training improves generalization and robustness. In Neural Networks, 1992. IJCNN., International Joint Conference on, volume 1, pages 769–774. IEEE, 1992.

[6] Vitaly Feldman and Jan Vondrák. High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. CoRR, abs/1902.10710, 2019.

[7] Stephen José Hanson. A stochastic version of the delta rule. Physica D: Nonlinear Phenomena, 42(1-3):265–272, 1990.

[8] Moritz Hardt, Benjamin Recht, and Yoram Singer. Train faster, generalize better: Stability of stochastic gradient descent. In International Conference on Machine Learning (ICML), 2016.

[9] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, pages 8571–8580, 2018.

[10] Ilja Kuzborskij and Christoph Lampert. Data-dependent stability of stochastic gradient descent. arXiv preprint arXiv:1703.01678, 2017.

[11] Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. In Conference on Learning Theory (COLT), 2018.

[12] Hartmut Maennel, Olivier Bousquet, and Sylvain Gelly. Gradient descent quantizes ReLU network features. arXiv preprint arXiv:1803.08367, 2018.

[13] Alan F Murray and Peter J Edwards. Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training. IEEE Transactions on Neural Networks, 5(5):792–802, 1994.
[14] Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.

[15] Salah Rifai, Xavier Glorot, Yoshua Bengio, and Pascal Vincent. Adding noise to the input of a model trained with a regularized objective. arXiv preprint arXiv:1104.3250, 2011.

[16] Jocelyn Sietsma and Robert JF Dow. Neural net pruning-why and how. In IEEE International Conference on Neural Networks, volume 1, pages 325–333. IEEE San Diego, 1988.

[17] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 19(1):2822–2878, 2018.

[18] Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. On the margin theory of feedforward neural networks. arXiv preprint arXiv:1810.05369, 2018.

[19] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

[20] Yuchen Zhang, Percy Liang, and Moses Charikar. A hitting time analysis of stochastic gradient langevin dynamics. arXiv preprint arXiv:1702.05575, 2017.

[21] Zhanxing Zhu, Jingfeng Wu, Lei Wu, Jinwen Ma, and Bing Yu. The regularization effects of anisotropic noise in stochastic gradient descent. arXiv preprint arXiv:1803.00195, 2018.

# A Proof of Theorem 1

This section contains the proofs of our general characterization of stable neighborhoods of points with zero training error, under the training dynamics of SGD with label noise.

See Section 3 for notation, intuition, and preliminaries. In particular, the evolution of the parameters θ is governed by the updates of Equation 6, which is a 3rd-order power series expansion about the origin. We analyze the regime where optimization has already yielded a parameter vector θ that is close to a parameter vector with 0 training error (in expectation, setting aside the mean-0 label noise). Without loss of generality and for ease of notation, we take this 0-error parameter vector to be located at the origin.

Our first lemma may be viewed as a "bootstrapping" lemma, saying that, if the parameter vector θ has remained loosely bounded in all directions, then it must be rather tightly bounded in those directions with γ_j > 0. Each of the following lemmas applies in the setting where θ evolves according to stochastic gradient descent with bounded i.i.d. label noise.

Lemma 4. Given constant ε > 0 and T > 0, if it is the case that |θ| ≤ η^{1/4+ε} for all t ≤ T, then for any j s.t. γ_j > 0, it holds that with probability at least 1 − exp(−poly(1/η)) at time T, |θ_j| ≤ ||θ(0)|| e^{−2ηγ_j T} + η^{1/2−ε}, where θ(0) denotes the value of θ at time t = 0.

Proof. For convenience, we restate the update formula for θ, given in Equation 6, where i is the randomly chosen training data index for the current SGD update:

θ_j ← θ_j − 2η e_i h_i^j − 2η h_i^θ h_i^j − 2η e_i h_i^{j,θ} − η h_i^{θ,θ} h_i^j − 2η h_i^θ h_i^{j,θ} − η e_i h_i^{j,θ,θ} + O(η θ^3). (8)

We will reexpress the update of θ_j as

θ_j(t) = (1 − 2ηγ_j) θ_j(t − 1) + z_{t−1} + w_{t−1}, (9)

where z_{t−1} will be a mean zero random variable (conditioned on θ(t − 1)), whose magnitude is bounded by O(η), and w_{t−1} is an error term that depends deterministically on θ(t − 1).
; To this end, consider the expectation of the third term of the update in Equations} E,(h?hi] = Yo, Ei (O,.nen! ] = 6;7j, as we chose our basis such that E,[hÂ¥hi] is either 0 if k A j and 4; if k = j. Hence this third term, together with the first term, gives rise to the (1 — 277 ;)0;(t — 1) portion of the update, in addition to a contribution to z,_1 reflecting the deviation of this term from its expectation. For |6| = O(1), this zero mean term will trivially be bounded in magnitude by O(n). The remaining contributions in the update to z_1 all consist of a factor of 7 multiplied by some combination of e;, powers of 0, and derivatives of h, each of which are constant, yielding an overall bound of 2-1 = O(n). Finally, we bound the magnitude of the error term, wy—1 n hi nee + oni?) + O(n). The first two terms are trivially bounded by O(7|6|?), and hence since |@(t — 1)| < n'/4+¢, we have that \wr-1] = O(n- i/2+26) _ O(n?/2+2¢), Given the form of the update of Equation 9, we can express θ(T ) as a weighted sum of the θ(0), z0, . . . , zT −1, and w0, . . . , wT −1. Namely letting α = (1 − 2ηγj), we have T-1 6(T) = a7 (0) + al 14 + w). t=0 We begin by bounding the contribution of the error term, T-1 1 1 gyou. atl (w,) = Os, = max |we|) = ome) = O(nt/? +26), t=0 14 # where the 1 1−α term is due to the geometrically decaying coefficients in the sum. To bound the contribution of the portion of the sum involving the 2 ’s we apply a basic martingale concentration bound. Specifically, note that by defining Z; = wit i=0 1 Qt-i- 1z,, we have that {Z;} is a martingale with respect to the sequence {0(t)}, since the expectation of z% is 0, conditioned on @(t). We now apply the Azuma-Hoeffding martingale tail bound that asserts that, = 2 = 2 22+e # t c2 provided |Z; — Z-1| < cz, Pr[|Zr| > ane < ye 22+e . In our setting, ce = a? —*"|%_1|, and hence do? = O(1/(1 — a?) max, |z?|) = = O(n). Hence for any c > 0, by taking \ = en'/? we have that Pr{|Zr| > cn'/2] < 2e° > ne "king c = 1/n*, our proof is concluded. # A.1 Analysis of concentration of the time average of θjθk in the γj > 0 directions The following lemma shows that, at time scales >> 1/1, the average value of the empirical covariance, 9,0, concentrates for directions k, ¢ satisfying 7; +7 > 0. The proof of this lemma can be viewed as rigorously establishing the high-level intuition described in Section [3] that for each directions k with 7, > 0, the behavior of 6, is as one would expect in an Ornstein-Uhlenbeck process with time constant @(1/7). Lemma 5. Let T > 1/n!?> denote some time horizon, and assume that, at allt < T, we have that (t)| < R& 7°58, and for every direction j for which yj; > 0, we have |;(t)| < Ryso = °°-*, for some constants 4 >B>e>0. Then for any pair of directions 7 # k such that at least one of Yj OT Ye ws positive, we have that 1 T Tr > 96% (t) t=0 > press =O(e"). Similarly for any direction j with γj > 0, we have that 7 n Vartei| FR rye 67 (t) > aaa = Ole"). ). Proof. Given the update described by Equation 8, we derive the following update for the evolution of the second moments of θ : θjθk ← θjθk − θkη(2eihj + O(ηθ(η + θ2)). i hj i + eihj,θ i + 2(hθ i )) − θjη(2eihk i + eihk,θ i + 2(hθ i hk i )) + 4η2e2 i hj As in the proof of Lemma 4, we will reexpress this as the sum of three terms: a mean-reversion term, a term with zero expectation (conditioned on the previous value of θ), and an error term. To this end we analyze each of the above terms. 
Each term that has an ei but not an e2 i will have expectation 0, and the magnitude of these terms is trivially bounded by O(η|θ|). The first nontrivial term is 2ynhoh). Splitting this into a mean zero portion, and its expectation, we see that E; [nO,h9 hd ] = 10, E;[h?h}] = O.Ei [doe Oohéhi ]. Since E;[hfh?] is 0 unless 0 = 7, we simplify the above expression to 7;,E;[9; ht h? 4] = 0;,0;7;. Hence this term contributes —2n6),0;7; to the “mean reversion” term, and [2.nh?h!| = = O(n|6|?) to the bound on the zero mean term. An analogous argument holds for the symmetric term, 20; inh? hk, which together account for the full mean reversion portion of the update: 0;6, <— (1 — only; + YK) )OjO% +... Other than the zero expectation terms and the final big “O” term, the only remaining term in i term. Since the error ei is i.i.d. mean 0 label noise, we have that the 15 # i hk i expectation of this term is Ein2e2h2 h*] = 17°Varlei]y;, if j =k, and 0 if j Ak. The magnitude of this term is trivially bounded by O(7). Summarizing, we have the following expression for the update of the variance in the case that j#k: θjθk(t) = (1 − 2η(γj + γk))θjθk(t − 1) + zt−1 + wt−1, (10) and in the case that j = k, we have the following update: j (t) = (1 − 4ηγj)θ2 θ2 j (t − 1) + 4η2γjVar[ei] + zt−1 + wt−1, (11) where the stochastic term zt−1 given θ(t−1), has expectation 0 and magnitude bounded by |zt−1| = O(η|θ(t − 1)| + η2), and the deterministic term wt−1 has magnitude bounded by |wt−1| = O(η|θ(t − 1)|3 + η2|θ(t − 1)|). We now turn to showing the concentration in the average value of these covariance terms. The argument will leverage the martingale concentration of the Doob martingale corresponding to this time average, as the values of @ are revealed. A naive application, however, will not suffice, as we will not be able to bound the martingale differences sufficiently tightly. We get around this obstacle by considering the martingale corresponding to revealing entire batches of S >> 1/n!** updates at once. Hence each step of the martingale will correspond to S updates of the actual dynamics. The utility of this is that the mean-reversion of the updates operates on a timescale of roughly 1/7—namely after O(1/7) timesteps, the updates have mostly “forgotten” the initial value 6(0). The martingale differences corresponding to these large batches will be fairly modest, due to this mean reversion, and hence we will be able to successfully apply an Azuma-Hoeffding bound to this more granular martingale. Given some time horizon T > S > 0, we consider the Doob martingale Z0, Z1, . . . , ZT /S defined by # T T Z, =E| Â¥~ 0;04(t)|0(0), 0(1),---, 00- 5) . t=0 In words, Z; is the expected average value of 6;0, over the first T’ steps, conditioned on having already seen iS updates of the dynamics. To analyze the martingale differences for this Doob martingale, it will be helpful to understand what Equations[10]and {imply about the expectation of 6;0,(t"), given the value of 6(t) at some t < t’. Letting a denote the mean reversion strength, namely a := 2n(7j +x), or a := 417; in the case that we are considering 65, we have the following expressions for the expectations respectively: E(0;0;(t')|0(¢)] = (0), (¢)) (A - a)’ ++0 (ince —t, ~) -(n/o|> + v?()) . 1-(1—a)" E(07(t')|0(t)] = (0 (t)) (La)! "+ (49? 
Varlei)) A 1 +O (inc —t, » - (nlal> + Pd) For any constant ¢ > 0 and t/ > t+ 1/n'**, and any pair of directions, j, k where Vi + 7K > 0, assuming that |@| < R until time t’, we have that the the above two equations simplify to: BU, 0u(U)|0(0] =O (Lenn +1?R)) (12) E(6}(¢')/4(0)] any Varie +0 (. (nR? + PR) (18) 16 . Equipped with the above expressions for the conditional expectations, we now bound the mar- tingale differences of our Doob martingale {Z;}. Revealing the values of @ at times t = 1+7-S to t = (i+1)-S, affects the value of Z;4; in three ways: 1) This pins down the exact contributions of 6 at these timesteps to the sum 4 + > 9j0,, namely it fixes r Lynne 6;9.(¢); 2) This alters the Leis O48 0;0),(@); and 3) it alters 1+(i+1)S 79 the expected contribution of the remaining terms, 7 bo a1(i42)8 60x (L). We now bound the contribution to the martingale differences of each of these three effects of revealing 0(1+iS),...,0((1+%)S). Assuming that, until time T we have |@;| < Ryso for any j with 7; > 0, and |6;,| < R for every direction, k, we can trivially bound the contribution of 1) and 2) towards the martingale differences by O(SRys0R/T), and O(SRo 59 /T), in the respective cases where we are considering 6;6; where both y; and 7 are positive, and the case where exactly one of them is nonzero. This is because at each of the 2S timesteps that cases 1) and 2) are considering, expected contribution of the next batch of S terms, namely 4 a > t= R2 γ>0 T and Rγ>0R each of the terms in the sum is absolutely bounded by in the respective cases. For the third case, we leverage Equations 12 and 13, which reflect the fact that, conditioning on θ((i + 1)S) has relatively little effect on θ(t) for t ≥ (i + 2)S. Namely, the total effect over these at most T terms is at most O(T 1 T T 1 α (ηR3 + η2R)) = O(R3 + ηR). Hence the overall martingale differences for {Zi} are bounded by SR? o( eR ran) 0 (Spe 5 18 +9), depending on whether we are considering a term 0;0, corresponding to 7j,y > 0, or not (and note that the martingale difference does not include a contribution from the variance term in Equation [13] since this term has no dependence on @). Hence, as our martingale has T/S updates, by standard martingale concentration, letting d denote a bound on the martingale differences, for any c > 0, the probability that 0;6;,(T) deviates from its expectation by more than O (ca/T/ 5) decreases inverse exponentially with c?. In the case of 0% for a direction j with 7; > 0, we have SR? that the differences d = O (Se + R34 uk). Hence for R = 7°°-*, and Ryso = 7°>-*, we have d/T/S=O (n'2\/S/T + 79-38 T/S) . Equating the two terms inside the big “O” results in choosing S$ such that \/$/T = n025-L58+¢ in which case martingale bounds yield d/T/S=O (n'2\/S/T + 79-38 T/S) . Equating the two terms inside the big choosing S$ such that \/$/T = n025-L58+¢ in which case martingale bounds yield TIBI] Pe [aris ~ Zo = P2199] Pe [lenyg ~ Zal > MY TIBI] < 2000" — O10") # (Se In the case of 6;6; where either 7; or 7, is nonzero, we have that the differences d = O (Se + Hence for R = 7°°-8, and Ryso = n°°~*, we have d\/T/S = O (n-<4 /3/T + 75-38 T/S) . Equating these two terms results in choosing S such that \/$/T = 725—8+0.5€ in which case Hence for R = 7°°-8, and Ryso = n°°~*, we have d\/T/S = O (n-<4 /3/T + 75-38 Equating these two terms results in choosing S such that \/$/T = 725—8+0.5€ in which case Pr [|Zrys — Zo] = 0-54) < Pr || Zpy3 — Zol 2 7 *- A(aVT/S)] < Ole") # Pr ). 
To conclude the proof of the lemma, note that Equations [12] implies that, in the case of 0;4,, Zy = E[; ran 0;0;,(t)] = O(R® + nR) = o(n 5), and hence provided at least one of 0; or Ox is positive, we have: 1 T 7 do 4) (¢) t=0 > arerss < O(e"). 17 ). + R34 nR) . Similarly in the case of 03, Equation [13] implies that Zo = E[# > 7R) = nVarlei] + o(n!°), and hence P 02(t) i= 97 (t)] = nVarle] + OCR? + Pr T 1 nVar{e;] — T > t=0 aaa < Ole"). # A.2 Proof of Theorem 1 The Proof of Theorem [I] will follow easily from the following lemma, which characterizes the evolu- tion of 0; for directions 7 for which y; = 0. This evolution crucially leverages the characterization of the average value of 0;0¢ given in Lemma [5] Given this characterization of the evolution of 6;, Lemma [4] shows that the directions j, for which y; > 0, will stay bounded by ~ the proof. V7, completing √ Lemma 6. [f ||4|| = O(,/7) at time 0, then least 1 — O(exp(—1/poly(n))), after T < n7*6 for each direction j with y; = 0, with probability at updates, we have 0;(T) = 0;(0) — 2T 7? Varfe;] S2 B{h?*nk] | +O). k:y,>0 In the case that T = η−1.6, this expression becomes 4;(0) — 2n°* Varfe;] S2 By{h?*nk} | +O). k:y,>0 Proof. The proof will proceed by induction on time t, in steps of size 7 constant strictly between 0 and dos: Assume |Ox(t)| < 7°-4-€ < 5+¢ for all t < to. Hence, Yr > 0, we have the tighter bound |6;(t)| < 7 Consider advancing to some time t; € [t, to 4 cannot have moved far, even using very weak all derivatives are bounded by O(1), for direc steps of SGD we have |6;(t1)| < |(to)| + O( —01 Let € be an arbitrary hat, up to some time to, for all directions k, we have by Lemma || for all t < to, for any direction k with 1/2-€ with all but inverse exponential probability. + nO], Since only < n~-! time steps have passed, @ bounds on the @ update. Explicitly, by assumption, ions k with 7, > 0 and thus after < 7~°! additional n- no) < Ql2—«, with all but inverse exponential probability. Analogously, by our assumption, we also have that |9;(t1)| < 27°4~¢, for every direction Jj, including those with y; = 0. We now analyze the evolution of 6; from time 0 throug leveraging the above bounds on |6;| across al 1 time fy, dimensions, k, to bootstrap even tighter bounds. We consider the update given in Equation which hi as = 0 for all 7, there is no mean reversion term. Let rj,¢ := E; [ni® h: 4. Note that with 7 = 0, rg,¢ = 0. We can reexpress the expectation of the corresponding portion of t. S| [8] and again, since we are considering a direction for for any ¢ 1e update By 7h8] = So r40 60. ke Analogously with the martingale analysis in Equations 9, 10, or 11, we express the updates of θj as: A(t) = 0;(t — 1) )— 2nd ot 1)O¢(t (14) L)rpe + 24-1 + Wi-1, 18 where E[zt−1|θ(t − 1)] = 0 is a mean zero term, defined as t-1= ein (- 2hi At-1) a 2n (i? ACEI), 80-1) Ey) . 4 Hence |%—1| = O(7|6|) = 7''4-©. The error term satisfies |w,_1| = O (n|@(¢ — 1)|°) = O(@' 8-4-9), where the above analysis follows from inspection of the update rule in Equation[8} simplifying using the fact that, in our context, h? = 0 for all i. From our bound on |%|, the Azuma-Hoeffding martingale concentration bounds give that, Pr|| are al > t/a] < 2e-°”"**. Additionally, vi Tw, = O(tin?2-*). Tf th < 7}, Lemma |5| does not apply, but we have that 6;(t)0¢(t) = O(n j/2=e+0.4— ©) as long as either 7, > 0 or ye > 0, and hence 7 iho ren (EOe(t) = O(trnh®*9) = O(n 154192) = O(n!/?), since ree = O(1). 
If t1 < n71?5, the martingale concentration also gives a bound of | Tito #| = O(7'/?) with probability 1 — O(exp(—1/poly(n))). Hence if t; < n~!?, then with probability 1 — O(exp(—1/poly(n))), at all times t < ti, we have that |0;| < 7°4~*. Thus inductively applying this argument, and taking a union bound over these poly(1/7) steps, yields that this conclusion holds up through time t; = 77!?>. We now consider the case when t € [77 ]. In this case, we may apply Lemma |5| with 8 =0.1+.€, which guarantees that for a direction k with Yr > 0, with all but exp(—1/poly(n)) probability, 1.25 nh 6 ty-1 pect 1 5— A > O2(t) = nVarle:] + O(n") and 7 Lt )Oc(t = O(n 1.05— fe) (15) SL) % =_O(n'4-?*V&) From above, we have that SL) % =_O(n'4-?*V&) = O(n°®*) = O(n), and iL, 9M = O(ti7??-**) = O(n 2). From Equation [14] we plug in the two bounds from Equation “taltiptied by 7 to conclude that 6; (t1) = 0;(0) — 2n*t1 Var[ei| > rep + Ot?) + O(n/?). k Note that the “cross terms,” Vee rre do not explicitly appear in the previous sum, and instead contribute to the first big “O” term, due to our bound on the time average of 6,6, from Lemma|5] Applying the above conclusions Where. (as we did in the first half of the proof for the case ty < 77} 5) yields that, with all but exp(—1/poly(n)) probability, |0;(¢)| < 7°47 at all times t<n1, and at time T < 77", we have that 0;(T) = 0;(0)—2n?TVar[ei] >, ek +O(T 49) + O(n?) = 6;(0) — 2T'n?Varfe;] x, rkk + O(n? io wdc) yielding the lemma, as desired. # B Proof of Theorem 2 Before proving Theorem 2, we formalize the notation that will be used throughout this section. We consider a network with two layers of trainable weights, with an additional linear and bias unit leading to the output. The network takes as input a one dimensional datapoint, x, and a constant, which we can assume wlog to be 1. For the ith neuron in the middle layer, there are three associated parameters: ai, the weight to input x, bi the weight to the constant input, and ci, the weight from neuron i to the output. Hence the parameters θ = ({ai}, {bi}, {ci}, a, b) represent the following function: fθ(x) = ciσ(aix + bi) + ax + b i 19 where σ is the ReLU non-linearity i.e. σ(x) = max(0, x). The implicit regularization term for a dataset (x1, y1), . . . , (xn, yn), evaluated at parameters θ, simplifies as follows: RO) = S>||Vofo(as)Ib j = SV gay foes 3 + 1V 0.3 fo D3 + IV fe.3 fo@a)I12 + [IVa.ofo(ws)|I2) j = > (Tletaas aon? + lassen? + (o(aya; 4 m0) | x; + 1 j a # R(θ) Defining the contribution of the ith ReLU neuron and jth datapoint to be Ri,j(θ) := (σ(aixj + bi))2 + c2 j )Iaixj +bi>0, e(1+ 5) ; Rij(@) +L; the regularization expression simplifies to R(0) = 37; ; Rij(@) +L; 1 +23, where the last sum does not depend on @, thus has no @ gradient, and thus does not contribute to regularization. Definition 3. The ith ReLU unit fi(x) = ciσ(aix + bi) has an intercept at location x = − bi , and ai we say this unit is convex if ci > 0 and is concave if ci < 0. If ci = 0, then fi(x) = 0 and the unit has no effect on the function. Proof of Theorem 2. The proof will proceed by contradiction, considering a set of parameters, θ, and set of consecutive datapoints (xi, yi), (xi+1, yi+1), (xi+2, yi+2) that violates the claim, and then exhibiting a direction in which θ could be perturbed that preserves the values of the hypothesis function at the data points, but decreases the implicit regularizer proportionately to the magnitude of the perturbation. 
Assume, that the piecewise linear interpolation of (xi, yi), (xi+1, yi+1), (xi+2, yi+2) is convex (i.e. concave up). An analogous argument will apply to the case where it is convex down. If f (θ, x) fits the three points, but is not also convex, then it must have a change of convexity, and hence there must be at least two “kinks” in the interval (xi, xi+2), each corresponding to a ReLU unit whose intercept lies in this interval, and with one of the units corresponding to a “convex” unit (with c > 0) and the other a “concave” unit (with c < 0). We will consider the case where the intercept of the concave unit, k1 is less than the intercept of the convex unit, k2 and the argument in the alternate case is analogous. There are now three cases to consider: 1) the point xi+1 lies between the intercepts, xi+1 ∈ (k1, k2); 2) xi+1 = k1 or k2; and 3) there is no point in the interval [k1, k2]. In each case, we will exhibit a perturbation of the two units in question that simultaneously preserves the function values at all data points {xi}, while decreasing the implicit regularizer. The proof in the first case will trivially also apply to the third case. We begin with the first case, when xi+1 ∈ (k1, k2). For notational convenience, we will henceforth use x0 to denote xi+1. Let a1, b1, c1 denote the parameters of the first unit, and a2, b2, c2 denote the parameters of the second unit in question. Figure 4 depicts the setting where x0 ∈ (k1, k2), along with the four possible configurations of the units, according to the four possible configurations of the signs of a1 and a2. In each case, the dotted lines in the figure indicate the direction of perturbation of these two units which 1) preserves the function value at all data points, and 2) decreases the value of the implicit regularizer. We note that in several of the cases, the bias unit, b, and and linear unit, a, which are directly connected to the output, must also be adjusted to accomplish this. We will never perturb the weights, c1, c2, leading to the output neuron. 20 (16) Riki zo Kaka iuky 20 hokey inky 20 kak, kyki 20 koky iyky 70 kak, © e<0,0,>0 Case 1: a> 0, a,>0 Case 2: a;>0,a,<0 | Case3:a,<0,a,>0 | Case 4: a,<0,a,<0 before, and after perturbation Figure 4: The leftmost figure depicts the case where the middle datapoint lies between the intercepts of the ReLU units with opposing convexities. The solid line depicts the original function, and the dotted line depicts the function after the perturbation, which preserves the function values at all datapoints and decreases the regularization expression. The rightmost four plots depict the four possible types of ReLU units that could give rise to the function depicted in the left pane, together with the perturbations that realize the effect depicted in the left pane. For cases 2 and 3, the linear and bias units must also be adjusted to preserve the function values at the datapoints. Let a, by and a2, bo be the parameters of the two perturbed ReLU units and ky and ko be the new location of the corresponding intercepts. The perturbations will be in terms of an arbitrarily small quantity « > 0, and hence we will assume that, for all 7 Ai+1, x; ¢ (ka, ka]. Let Rij, Rj denote the contributions to the regularization expression for units 1 and 2 corresponding to the jth datapoint, after the perturbation. Case 1 (a1 > 0, c1 < 0, a2 > 0, c2 > 0): We first give an intuitive argument of how the perturbation is chosen to preserve the function values while decreasing the regularization. 
As depicted in the second pane of Figure 4, we change the parameters of the first ReLU unit a1 and b1 such that the intercept k1 moves towards the left to a position ˜k1 and the slope a1 decreases. The changes in a1 and b1 are chosen such that the value at the point x0 remains the same. The second ReLU unit’s parameters are perturbed such that for all datapoints xj ≥ ˜k2, the change in the function values due to the changes in the parameters of the first ReLU unit are balanced by them. Hence, the function values are preserved for all datapoints. To see that the regularization decreases by the same order of magnitude as the perturbation, recall that the regularization term for a ReLU unit i and datapoint j is proportional to (σ(aixj + bi))2 if the value of ci is kept unchanged. From Figure 4, the value of (σ(aixj + bi))2 for both units remains the same for all datapoints xj ≤ x0 and strictly decreases (proportionately to the magnitude of the perturbation) for all datapoints xj ≥ ˜k2. This realizes the intuition that the implicit regularizer promotes small activations in the network. A nearly identical argument applies in the other three cases depicted in Figure 4, with the slight modification in cases 2 and 3 that we need to perturb the linear and bias units to preserve the function values, and the regularization term is independent of the values of those parameters. Now, we explicitly describe the case analysis mentioned above, and explicitly state the perturba- tions, and compute the improvement in the regularizer for all four cases, and the cases corresponding to the setting where the data point x0 lies at one of the intercepts, k1 or k2 are analogous. For clarity, Figure 5 depicts the function before the perturbation, and after, for both the case when x0 lies between the intercepts k1, k2, and when x0 = k1. We begin by computing the perturbations for each of the four cases depicted in Figure 4. When the values of linear and bias units a, b are not 21 (a) When the datapoint is between the kinks. (b) When the datapoint is on one of the kinks. kik, 0 kok, zo ky koko Figure 5: The plots show the change such that the function values at the datapoints are preserved and the regularization term strictly decreases. mentioned, we assume there is no change in them. Case 1 (a1 > 0, c1 < 0, a2 > 0, c2 > 0) : ay ay(1 €) by by + xoaie c ~ cys a2 = ag — (a — a1) bo = bo — + (b, — bi) C2 c2 First, observe that the intercept for ReLU 1 moves to the left since i. k by f by €(a129 + bi) 1 1 ay ay a,(1—€) <0 The last inequality follows since 0 < « < 1 and aj > O and ajxo + bi > O since x > ky and ayk, + b} = 0. Similarly, the intercept for ReLU 2 moves to the right ko — ko bo + bz _ aycye(azxo + b2) G2 ag.— Ag (Ca + ca) The last inequality follows because ci < 0, a1,a2 > 0, agv%o + be < 0 and cga2 + c1aie > O for sufficiently small «. Now, we will verify that f(#j) = f(aj) V xj,j € [n] and the total regularization term R decreases by O(¢). We will analyze the three cases separately where 7; < ki, #j = xo and t= ko. Since both the units were not active for xj ≤ ˜k1 and are not active after the change, xj ≤ ˜k1: there is no change in the function value. Similarly, since the units were not active before the change and did not become active after the change, the regularization term for xj ≤ ˜k1 does not change. 
First, calculating the value of ˜a1x0 + b1, we get that G19 + by = a1 (1 — €)ap +b) + ayexo = A129 + bo (17) 22 The function value for x0 does not change since the contribution of the first unit does not change by (17) and the second unit remains off before and after the change. This is by design as we decreased the slope a1 and moved the intercept k1 to the left such that function value at point x0 is preserved. ˜f (x) − f (x) = c1σ(˜a1x0 + ˜b1) + c2σ(˜a2x0 + ˜b2) − c1σ(a1x0 + b1) − c2σ(a2x0 + b2) = 0 Calculating the change in regularization value with the perturbed parameters, we see there is no change since ˜a1x0 + ˜b0 = a1x0 + b0 by (17) and c does not change. ˜R10 − R10 = (σ(˜a1x0 + ˜b1))2 + c2 0)I˜a1x0+˜b1>0 − (σ(˜a1x0 + ˜b1))2 − c2 1(1 + x2 1(1 + x2 0)I˜a1x0+˜b1>0 = 0 Since the second unit remains off for x0 before and after the change, the regularization value does not change. ˜R20 − R20 = (σ(˜a2x0 + ˜b2))2 + c2 2(1 + x2 0)I˜a2x0+˜b2>0 − (σ(a2x0 + b2))2 − c2 2(1 + x2 0)Ia2x0+b2>0 = 0 Thus, we see that both the function value and the regularization term do not change for x0. rj ke: Now for this case, both the units are active before and after the change. So, we need to look at the how the total contribution changes to both the output value and the regularization for both the units. First, calculating a,x; + by - (ax; + b;), we see that it is strictly negative since €>0,a1 > 0 and aj > ke > ko > x0. aaj + by — (ara; + b1) = ai (1 — ©)aj + by + aexo — aa; — by = €a1 (x9 — 2;) < 0 Similarly, calculating ˜a2xj + ˜b2 − (a2xj + b2), we see that it is also strictly negative since c1 < 0 and c2 > 0. ~ ~ Cy ~ c G20; +b — (aga; +b2) = (a2 —az)x; + b2—be ((@1 — a1) a; +61 — bi) + cay (xo xj) (19) c2 c2 This can also be readily seen from the figure 4. Now, calculating the change in function value due to the perturbed parameters, we get ˜f (xj) − f (xj) = c1σ(˜a1xj + ˜b1) + c2σ(˜a2xj + ˜b2) − c1σ(a1xj + b1) − c2σ(a2xj + b2) = c1((˜a1 − a1)xj + ˜b1 − b1) + c2((˜a2 − a2)xj + ˜b2 − b2) Now, substituting the changes computed in equation (18) and equation (19), we get that f (aj) — f(@;) = crare(xo — #5) 4 a( canto *))) 0 Hence, we see that the function values are preserved for datapoints in this range. This is because the changes in the parameters a2 and b2 were chosen in such a way so that the change in function value introduced due to the change in parameters of unit 1 can be balanced. Calculating the change in regularization value with the perturbed parameters, we get that the regularization term strictly decreases since 0 < ˜a1xj + ˜b1 < a1xj + b1 by (18) which we have already argued before. Rij Rij (o(a12; + b1))? G+ @7)Ig 245,50 (o(a12; bi))? ai t 5 )Taxe;-+b1>0 < —O(e) Similarly, since 0 ≤ ˜a2xj + ˜b2 < a2xj + b2 by equation (19), the regularization value for unit 2 strictly decreases for this range of datapoints. Roj — Ro = (a (aaj +b2))? +3 (1 +23 )Tayn;45,50 (o(aza;4 be))? -— (14 ©} )Tasee;-+ba>0 < —O(e). 23 (18) Case 2 (a1 > 0, c1 < 0, a2 < 0, c2 > 0) : This case corresponds to the third pane in Figure 4. a =a\(1—e) b, =b, 4 Taye & = a + 2 (a — a) by = by + (5, — by) C2 c2 a ci (a1 — a1) b ci(bi bi) Similarly to the previous case, we can argue that the function value at the datapoints remain same and regularization decreases by O(c). 
Case 3 (a1 < 0, c1 < 0, a2 > 0, c2 > 0) : Figure 4: This case corresponds to the fourth pane in ay a,(1 €) by by Taye @2 =ag+ a (a a1) by bg “1 (by bi) C2 C2 a c1(@,—a,) b= —cy(b, — by) Similarly to the previous case, we can argue that the function value at the datapoints remain same and regularization decreases by O(c). Case 4 (a1 < 0, c1 < 0, a2 < 0, c2 > 0) : This case corresponds to the right pane in Figure 4: Cc . C2 + a, =a, — = (ay — az) by = by — = (by — bg) CL C1 Gg = ag(1—e) by = by 4 Ty ag€ Similarly to the previous case, we can argue that the function value at the datapoints remains the same and regularization decreases by O(e). # C Tanh and Logistic Activations (Proof of Theorem 3) Here, we discuss the implications of our characterization of stable points in the dynamics of SGD with label noise, for networks with either hyperbolic tangent activations or logistic activations. In particular, we will consider networks with two layers, of arbitrary width, that are trained on a single d-dimensional data point (x, y). We find that, at “non-repellent” points, the neurons can be partitioned into a constant number of essentially equivalent neurons, and thus the network provably emulates a constant-width network on “simple” data. Throughout this section we denote our single training point by (a, y), where « € R¢ and y € R, and we assume x # 0. Our network is a two layer network, parameterized by a length n vector c and ad x n matrix w, and represents the function n n f(ajc,w) = Yo cio(w} x) i=1 i=1 where c € R” and w},...,Wn are the columns of w € R¢*"_ In Section below, the activation function o will be the logistic function, while in Section|[C.2|we analyze the tanh activation function. Since we are only concerned with the network’s behavior on a single data point (a, y), unlike in the body of the paper where the subscript 7 typically denoted a choice of data point, here we use the subscript i to index the hidden units of the network. We let hj = o(w!x) denote the value of the i*® hidden unit and let 0; = c;h; denote the output (after scaling) of the it” hidden unit. Then, we simply have that f(a;c,h) = 77_, 0. 24 # C.1 “Non-repellent” points for logistic activation We prove the following proposition, establishing the portion of Theorem 3 concerning logistic activation functions: Proposition C.1. Let 6 = (c,w) parameterize a two-layer network with logistic activations. If 6 is “non-repellent” according to Definition [Q for the dynamics of training with a single d-dimensional datapoint (x,y) where x #0, then there exists a1,a2 and 81,82 such that for each hidden unit i, either c; = a, and h; = Py or G = ag and hg = Bo. Proof. First, we derive the implicit regularizer, R, for a two layer network with logistic activations. We compute: ∇cif (x; c, w) = hi ∇wij f (x; c, w) = cihi(1 − hi)xj Thus, , 2 2) 2p2 2 2 R=||Vuef (ase, w)||- = > [h? + cFhF(1 — hi)? ||x||7] i Recall that a choice of parameters with zero-error is “non-repellant” iff the implicit regularizer has zero gradient in the span of directions with zero function gradient. Thus, we want to consider directions that do not change the error, up to first order. Recall that we defined 0; = c;h; and that the networks output is just )>j_, 0;. Any change to the parameters that leaves all the 0; the same must leave the network output the same, and thus the error unchanged as well. First, we investigate for what choices of parameters do there not exist any directions that leave all 0; constant but decrease the regularization term. 
We rewrite the regularization term using 0;: nD R=||Vuef (x36, w)||? = > [h? + 07(1 — hi)? |Jar| (20) a Suppose for some i that the derivative of the above expression with respect to hi is nonzero. Then, we can change wi in the direction that slightly increases hi while also decreasing ci just enough to keep oi constant. That direction would keep the error at 0 but the implicit regularization term would have nonzero directional derivative in it. Thus, for “non-repellent” points, we must have that the following is 0 for all i: ∂ ∂hi R = 2hi + 2(hi − 1)o2 i ||x||2 = 0 We solve the above equation for hi to determine that at all “non-repellent” points: hi = i ||x||2 o2 1 + o2 i ||x||2 (21) We can plug this back into equation 20 to determine that at “non-repellent” points the following must be true: 2 2 2 2 2 2 ollall yo oo oFlel lo) 9 oF ||| R=) (> y+ o7(1 Yell] = * dI 1+ 03||x||2 . 1+ 03 ||2||2 i elle 2 2 ‘ > : For convenience, we define R,(z) = ae Then, we have that at “non-repellent” points, R=) iL, R.(0;). The function Ro, as well as its derivative, is depicted in Figure [6 25 R,(0) R'(o) (o) 0.0 a —0.5 0.00 —5.0 —2.5 0.0 a) to 0.0 oO 5.0 —5.0 fon he R,(0) R,{o) 0.00 —5.0 —2.5 0.0 a) to fon R'(o) (o) 0.0 a —0.5 0.0 oO 5.0 —5.0 he Figure 6: Plots depicting the function R, on the left and its derivative on the right, for ||a|| = 1. From the plots, we see that the equation R/,(0) = a has at most two solutions for any choice of a. Other choices of ||z|| would only stretch the plots, which does not affect that conclusion Next, we consider the effect of changing two units at a time. We claim that if there are units i,j where Ri(o;) # Ri(o;), then we are not at a “non-repellent” point. Consider moving the parameters in a direction that increases 0; by € and decreases 0; by €. That direction will leave the network output constant, and therefore also the error. Furthermore, we can choose the direction so that it additionally modifies h; and h; so that they satisfy equation [J] wit o; and 0;. Altogether, this means that R changes by (R,(0; + €) — Ro(a)) The result is that, after a change by ¢, the new regularization penalty will 1 respect to the modified — (Ro(0; + €) — Ro(0;)). change (up to first-order approximation) by e(R5(0;) — R/(0;)), which is nonzero. Thus, R decreases linearly in the direction we constructed, implying we are not at a “non-repellent” point, yielding t. 1e desired contradiction. Thus, at a “non-repellent” point we must have that R{(0;) is the same for all 0;. Thus the number of different values of 0; is upper bounded by the number of solutions to the equation Ri(o0) = a where a is some scalar. See Figure Gi for a plot illustrating t most 2 solutions. To prove this, we first compute the derivative and set it 1at this equation has at equal to a 2o||ar||? (1 + 0° || ||?) Ri(o) = a(1 + o|||?)? — 2o||x||/? = 0 =a Since ||a|| 4 0, the function a(1 + 0?||x||?)? — 2o||x||? is a strictly convex function of o for a > 0, is strictly concave for a < 0, and a linear function when a = 0, and thus in all cases has at most 2 solutions for ||a|| 4 0. Thus, at a “non-repellent” point, there are at most two distinct values for 01,.--;0,- Furthermore, we have already shown that at “non-repellent” points, h; is a function of o;. It also follows that c; = 0;/h; is a function of 0;. Thus, if 0; oj; then ¢ = cj and h; = hj, so al units with the same output (0;) also share the same value for c; and h;. 
Hence, there are at most two possible values for ¢;,h;, which we can name qj, 3; and ag, 82, proving this proposition. # C.2 “Non-repellent” points for tanh activation The following proposition characterizes the portion of Theorem 3 concerning tanh activations. Proposition C.2. Let 6 = (c,w) parameterize a two-layer network with tanh activations. If 0 is “non-repellent” according to Definition [Q for the dynamics of training with a single d-dimensional datapoint (x,y) where x # 0, then there exists a and 8 such that for each hidden unit i, either a and hy = 8 or G Borqg=h =0. Ci a and hg 26 R,(0) Rilo) 1.00 2 0.75 1 0.50 x (OO 0.25 -1 0.00 —2 Rilo) 2 1 x (OO -1 —2 R,(0) 1.00 0.75 0.50 R,(o) 0.25 0.00 Figure 7: Plots depicting the function R, on the left and its derivative on the right, for ||a|| = 1. From the plots, we see that R/,(0) is injective and undefined at o = 0. Other choices of ||2|| would only stretch the plots, which does not affect that conclusion. The proof of this proposition is mostly the same as the proof of proposition However, instead of proving that every point in the range of R/(o) is attained by at most two points in the domain, we will prove that the function is injective. The other difference is that R/(o) is undefined at o = 0, so in addition to the units that are h; and c¢; (up to sign), there can also be units with 0 output. Due to the highly repetitive logic, we go through this proof at a faster pace than [C1] Proof. For a two layer network with tanh activations, the implicit regularizer is 2. oF R= » [h? + e(1 — h?)?|Ia||"] = » [h? + pt — h?)?|Ia||"] (22) aL aL At “non-repellent” points, we must have that the below derivative is 0 for all i ∂ ∂hi R = 2h4 i + 2h4 i o2 i ||x||2 − 2o2 h3 i i ||x||2 = 0 We solve the above equation for h2 i to determine that at all “non-repellent” points: h2 i = o2 i ||x||2 o2 i ||x||2 + 1 (23) We plug this back into equation 22 and simplify to determine that, at “non-repellent” points, 27 the following must be true: 2 0; R= So [hi + pa — hi)? a7] a a 5 MCU ole 2) + ofa — 2h) h? a 5 ell? + ole 24 = 28) a 1 = 0 20a!" ~ Y) a u oflle||? +1 = Leila D =)2 «|? (o}|la||? + 1) — oF ||x||?)] a —~ We define Ro(oi) = 2[(4/0?||||?(0?||x||? + 1) — 0? ||a||?)]. Recall that we showed in the proof of Proposition [G.1] that if there exists two units, i,j, such that R5(0;) 4 Ri(o;), then we cannot be at a “non-repellent” point. In this case, it turns out that R/,(o) is undefined at o = 0, which means any number of units can have zero output. However, at all other points, R/(0) is injective. This means that all units that don’t have 0 output must share the same output. See Figure |7| for illustrative plots. To show that Rj, is injective, we first take |||] = 1 without loss of generality, since the argument of Ro always appears multiplied by ||a||. Next, we differentiate and simplify to obtain √ P+y-VeA te Veit 2 , Ri(z) = 2z which is easily seen to have the same sign as z (and is undefined when z = 0). Further, the 2nd derivative—ignoring its value at 0—simplifies to the following expression: 2 3 wr 5 Ua _ 72 Ril) = ele 2 which is seen to be negative everywhere. Thus for positive z, we have Rj(z) is positive and decreasing, while for negative z it is negative and decreasing, implying R/,(z) is injective, as desired. We thus conclude that at “non-repellent” points, all units have either the same output (0;) or have output 0. From Equation we know that at “non-repellent” points h? is a function of 0;. Furthermore, lo = o? 
/ h?, so lor s also a function of 0;. Thus, all units that don’t have output 0 must have the same h; and c; (up to sign), and since they have the same 0; = h;c;, the signs must match up as well. This means that, at “non-repellent” points, there is a, so that for each hidden unit 7 where 0; 4 0, either c; = a and h; = 6 or ¢; a and h; B. Finally, we show that at “non-repellent” points, if 0; = 0 then h; = c; = 0. This means that not only does the i” unit not affect the output of the network at this particular choice of 2, but it also does not affect the output of the network for any input. If 0; = 0 then from equation we know hj = 0. Recall that the networks output is )77_, cihi, so if hy = 0, then changing c¢ does not affect the error. Therefore, we could only be at a “non-repellent” point if oR = 0. Taking the 28 derivative of equation 22 we see ∂R ∂ci then hi = ci = 0. = 2ci(1 − h2 i )2||x||2 which is zero only if ci = 0. Thus, if oi = 0 29
{ "id": "1803.08367" }
1904.09286
Unifying Question Answering, Text Classification, and Regression via Span Extraction
Even as pre-trained language encoders such as BERT are shared across many tasks, the output layers of question answering, text classification, and regression models are significantly different. Span decoders are frequently used for question answering, fixed-class, classification layers for text classification, and similarity-scoring layers for regression tasks, We show that this distinction is not necessary and that all three can be unified as span extraction. A unified, span-extraction approach leads to superior or comparable performance in supplementary supervised pre-trained, low-data, and multi-task learning experiments on several question answering, text classification, and regression benchmarks.
http://arxiv.org/pdf/1904.09286
Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, Richard Socher
cs.CL
updating paper to also include regression tasks
null
cs.CL
20190419
20190920
# Unifying Question Answering, Text Classification, and Regression via Span Extraction

# Nitish Shirish Keskar∗ Bryan McCann∗ Caiming Xiong Richard Socher Salesforce Research {nkeskar,bmccann,cxiong,rsocher}@salesforce.com

∗Equal contribution.

# Abstract

Even as pre-trained language encoders such as BERT are shared across many tasks, the output layers of question answering, text classification, and regression models are significantly different. Span decoders are frequently used for question answering, fixed-class classification layers for text classification, and similarity-scoring layers for regression tasks. We show that this distinction is not necessary and that all three can be unified as span extraction. A unified, span-extraction approach leads to superior or comparable performance in supplementary supervised pre-trained, low-data, and multi-task learning experiments on several question answering, text classification, and regression benchmarks.

# 1 Introduction

Pre-trained natural language processing (NLP) systems (Radford et al., 2019; Devlin et al., 2018; Radford et al., 2018; Howard and Ruder, 2018; Peters et al., 2018; McCann et al., 2017) have been shown to transfer remarkably well on downstream tasks including text classification, question answering, machine translation, and summarization (Wang et al., 2018; Rajpurkar et al., 2016; Conneau et al., 2018). Such approaches involve a pre-training phase followed by the addition of task-specific layers and a subsequent re-training or fine-tuning of the conjoined model. Each task-specific layer relies on an inductive bias related to the kind of target task. For question answering, a task-specific span-decoder is often used to extract a span of text verbatim from a portion of the input text (Xiong et al., 2016). For text classification, a task-specific classification layer with fixed classes is typically used instead. For regression, similarity-measuring layers such as least-squares and cosine similarity are employed. These task-specific inductive biases are unnecessary. On several tasks predominantly treated as text classification or regression, we find that reformulating them as span-extraction problems and relying on a span-decoder yields superior performance to using task-specific layers.

For text classification and regression problems, pre-trained NLP systems can benefit from supplementary training on intermediate-labeled tasks (STILTs) (Phang et al., 2018), i.e. supplementary supervised training. We find this is similarly true for question answering, classification, and regression when reformulated as span-extraction. Because we rely only on the span-extractive inductive bias, we are able to further explore previously unconsidered combinations of datasets. By doing this, we find that question answering tasks can benefit from text classification tasks and classification tasks can benefit from question answering ones.

# 1.1 Contributions

Summarily, we demonstrate the following:

1. Span-extraction is an effective approach for unifying question answering, text classification, and regression.

2. Span-extraction benefits as much from intermediate-task training as more traditional text classification and regression methods.
Span-extraction is an effective approach for unifying question answering, text classifica- tion, and regression. ∗Equal contribution. as much from intermediate-task training as more traditional text classification and regression methods. Input Extracted span Output [ squap } Nikola Tesla (10 July 1856 — 7 January 1943) was a Serbian American 10 July 1856 inventor ... [SEP] What year was Tesla born? Positive or negative? [SEP] The movie is slow, very very slow. The new rights are nice enough. Entailment, contradiction or neutral? [SEP] The rights recently put in place are nowhere near enough A woman is riding a horse. 0.0 0.25 0.5 ... 4.75 5.0 [SEP] A man is playing a guitar. Figure 1: Illustration of our proposed approach using the BERT pre-trained sentence encoder. Text classification tasks are posed as those of span extraction by appending the choices to the input. Similarly, regression tasks are posed by appending bucketed values to the input. For question answering, no changes over the BERT approach are necessary. The figure includes four examples from the SQuAD, SST, MNLI, and STS datasets, respectively. 3. Intermediate-task training can be extended to span-extractive question answering. 4. Span-extraction allows for combinations of question answering and text classification datasets in intermediate-task training that outperform using only one or the other. 5. Span-extractive learning but single-task models compared to multi-task yield weaker intermediate-task training. stronger multi-task models, 6. Span-extraction with intermediate-task train- ing proves more robust in the presence of limited training data than the corresponding task-specific versions. # 2 Related Work Transfer Learning. The use of pre-trained en- coders for transfer learning in NLP dates back to (Collobert and Weston, 2008; Collobert et al., 2011) but has had a resurgence in the recent past. BERT (Devlin et al., 2018) employs the recently proposed Transformer layers (Vaswani et al., 2017) in conjunction with a masked lan- guage modeling objective as a pre-trained sen- tence encoder. Prior to BERT, contextualized word vectors (McCann et al., 2017) were pre- trained using machine translation data and trans- ferred to text classification and question answer- ing tasks. ELMO (Peters et al., 2018) improved contextualized word vectors by using a language modeling objective instead of machine transla- tion. ULMFit (Howard and Ruder, 2018) and GPT (Radford et al., 2018) showed how tradi- tional, causal language models could be fine-tuned directly for a specific task, and GPT-2 (Radford et al., 2019) showed that such language models can indirectly learn tasks like machine translation, question answering, and summarization. Intermediate-task and Multi-task Learning. The goal of unifying NLP is not new (Collobert and Weston, 2008; Collobert et al., 2011). In (Phang et al., 2018), the authors explore the ef- ficacy of supplementary training on intermediate tasks, a framework that the authors abbreviate as STILTs. Given a target task T and a pre-trained sentence encoder, they first fine-tune the encoder on an intermediate (preferably related) task I and then finally fine-tune on the task T . The authors showed that such an approach has several bene- fits including improved performance and better ro- bustness to hyperparameters. The authors primar- ily focus on the GLUE benchmark (Wang et al., 2018). Liu et al. (2019) explore the same task and model class (viz., BERT) in the context of multi- tasking. 
Instead of using supplementary training, the authors choose to multi-task on the objectives and, similar to BERT on STILTs, fine-tune on the specific datasets in the second phase. Further improvements can be obtained through heuristics such as knowledge distillation as demonstrated in (Anonymous, 2019). All of these approaches require a different classifier head for each task, e.g., a two-way classifier for SST and a three-way classifier for MNLI. Two recent approaches, decaNLP (McCann et al., 2018) and GPT-2 (Radford et al., 2019), propose the unification of NLP as question answering and language modeling, respectively. As investigated in this work, the task description is provided in natural language instead of fixing the classifier a-priori.

| Task | Dataset | Source Text | Auxiliary Text |
|---|---|---|---|
| Sentence Classification | SST | positive or negative? | it's slow – very, very slow |
| Sentence Pair Classification | MNLI | I don't know a lot about camping. entailment, contradiction, or neutral? | I know exactly. |
| Sentence Pair Classification | RTE | The capital of Slovenia is Ljubljana, with 270,000 inhabitants. entailment or not? | Slovenia has 270,000 inhabitants. |
| Sentence Pair Regression | STS-B | A woman is riding a horse. 0.0 0.25 0.5 0.75 1.0 · · · 5.0 | A man is playing a guitar. |
| Question Answering | SQuAD | Nikola Tesla (10 July 1856 – 7 January 1943) was a Serbian American inventor ... | When was Tesla born? |

Table 1: Treating different examples as forms of span-extraction problems. For sentence pair classification datasets, one sentence is present in each of the source text and auxiliary text. The possible classification labels are appended to the source text. For single sentence classification datasets, the source text only contains the possible classification labels. For question answering datasets, no changes to the BERT formulation are required; the context is presented as source text and the question as auxiliary text.

# 3 Methods

We propose treating question answering, text classification, and regression as span-extractive tasks. Each input is split into two segments: a source text which contains the span to be extracted and an auxiliary text that is used to guide extraction. Question answering often fits naturally into this framework by providing both a question and a context document that contains the answer to that question. When treated as span-extraction, the question is the auxiliary text and the context document is the source text from which the span is extracted. Text classification input text most often does not contain a natural language description of the correct class. When it is more natural to consider the input text as one whole, we treat it as the auxiliary text and use a list of natural language descriptions of all possible classification labels as source text. When the input text contains two clearly delimited segments, one is treated as auxiliary text and the other as source text with appended natural language descriptions of possible classification labels. For regression, we employ a process similar to classification; instead of predicting a floating-point number, we bucket the possible range and classify the text instead.

Our proposal is agnostic to the details of most common preprocessing and tokenization schemes for the tasks under consideration, so for ease of exposition we assume three phases: preprocessing, encoding, and decoding. Preprocessing includes any manipulation of raw input text; this includes tokenization. An encoder is used to extract features from the input text, and an output layer is used to decode the output from the extracted features. Encoders often include a conversion of tokens to distributed representations followed by application of several layers of LSTM, Transformer, convolutional neural network, attention, or pooling operations. In order to properly use these extracted features, the output layers often contain more inductive bias related to the specific task. For many question answering tasks, a span-decoder uses the extracted features to select a start and end token in the source document. For text classification, a linear layer and softmax allow for classification of the extracted features. Similarly, for regression, a linear layer and a similarity-scoring objective such as cosine distance or least-squares is employed. We propose to use span-decoders as the output layers for text classification and regression in place of the more standard combination of a linear layer with task-specific objectives.
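The reformulation is purely a matter of input formatting. As a concrete illustration (our own sketch, not the authors' released code; the helper names are ours, while the label phrasings and the 0.25 bucket step for STS-B follow Table 1 and Figure 1), the source and auxiliary texts can be built as follows:

```python
# Illustrative sketch: building the (source text, auxiliary text) pair for span
# extraction, mirroring the examples in Table 1. Function names are our own.

def label_prompt(choices):
    """Render the label options as a natural-language question."""
    if len(choices) == 2:
        return f"{choices[0]} or {choices[1]}?"
    return ", ".join(choices[:-1]) + f", or {choices[-1]}?"

def classification_example(choices, sentence_a, sentence_b=None):
    """Single-sentence tasks: the prompt alone is the source text, the sentence is
    auxiliary. Sentence-pair tasks: the prompt is appended to the first sentence."""
    if sentence_b is None:
        return label_prompt(choices), sentence_a
    return f"{sentence_a} {label_prompt(choices)}", sentence_b

def regression_example(sentence_a, sentence_b, low=0.0, high=5.0, step=0.25):
    """Bucket the target range and append the bucket values to the first sentence."""
    n_buckets = int(round((high - low) / step)) + 1
    buckets = " ".join(f"{low + k * step:g}" for k in range(n_buckets))
    return f"{sentence_a} {buckets}", sentence_b

print(classification_example(["positive", "negative"], "it's slow -- very, very slow."))
print(classification_example(["entailment", "contradiction", "neutral"],
                             "I don't know a lot about camping.", "I know exactly."))
print(regression_example("A woman is riding a horse.", "A man is playing a guitar."))
```

The resulting pairs are then tokenized and concatenated exactly as in the BERT input format described next.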
An encoder is used to extract fea- tures from the input text, and an output layer is used to decode the output from the extracted fea- tures. Encoders often include a conversion of to- kens to distributed representation followed by ap- plication of several layers of LSTM, Transformer, convolutional neural network, attention, or pool- ing operations. In order to properly use these ex- tracted features, the output layers often contain more inductive bias related to the specific task. For many question answering tasks, a span-decoder uses the extracted features to select a start and end token in the source document. For text classifi- cation, a linear layer and softmax allow for clas- sification of the extracted features. Similarly, for regression, a linear layer and a similarity-scoring objective such as cosine distance or least-squares is employed. We propose to use span-decoders as the output layers for text classification and regres- sion in place of the more standard combination of linear layer with task-specific objectives. # 3.1 Span-Extractive BERT (SpEx-BERT) In our experiments, we start with a pre-trained BERT as the encoder with preprocessing as de- scribed in Devlin et al. (2018). This preprocess- ing takes in the source text and auxiliary text and outputs a sequence of p = m + n + 2 tokens: a special CLS token, the m tokens of the source text, a separator token SEP, and the n auxiliary tokens. The encoder begins by converting this se- quence of tokens into a sequence of p vectors in Rd. Each of these vectors is the sum of a to- ken embedding, a positional embedding that repre- sents the position of the token in the sequence, and a segment embedding that represents whether the token is in the source text or the auxiliary text as described in Devlin et al. (2018). This sequence is stacked into a matrix X0 ∈ Rp×d so that it can be processed by several Transformer layers (Vaswani et al., 2017). The ith layer first computes αp(Xi) by applying self-attention with k heads over the previous layer’s outputs: a®(X;) = [hi;+++ she]Wo qd) where hy = a(X;W;, XiW7, XiW}') T XY a(X,Y, Z) = softmax ( vd \z (2) A residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) merge information from the input and the multi-head attention: # Hi = LayerNorm(αp(Xi) + Xi) This is followed by a feedforward network with ReLU activation (Nair and Hinton, 2010; Vaswani et al., 2017), another residual connection, and a final layer normalization. With parameters U ∈ Rd×f and V ∈ Rf ×d: Xi+1 = LayerNorm(max(0, HiU )V + Hi)) (4) Let Xsf ∈ Rm×d represent the final output of these Transformer layers. At this point, a task- specific head usually uses some part of Xsf to classify, regress, or extract spans. Our proposal is to use a span-decoder limited to Xsf whenever a classification or similarity-scoring layer is typi- cally used. In this case, we add only two trainable parameter vectors dstart and dend following De- vlin et al. (2018), and we compute start and end distributions over possible spans by multiplying these vectors with Hf and applying a softmax function: pstart = softmax(Xsf dstart) pend = softmax(Xsf dend) (5) (6) During training, we are given the ground truth answer span (a∗, b∗) as a pair of indices into the source text. 
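Concretely, Equations 5 and 6 reduce to two trainable vectors that score every source-text position, followed by a softmax over positions. The PyTorch-style sketch below is illustrative rather than the released code (tensor and parameter names are ours), and it includes the cross-entropy objective that the next paragraph states formally.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpanDecoder(nn.Module):
    """Minimal span-decoding head: score start/end positions of the source text."""
    def __init__(self, hidden_size):
        super().__init__()
        self.d_start = nn.Parameter(torch.randn(hidden_size) * 0.02)
        self.d_end = nn.Parameter(torch.randn(hidden_size) * 0.02)

    def forward(self, x, start_idx=None, end_idx=None):
        # x: encoder outputs restricted to the m source-text tokens, shape (batch, m, d)
        start_logits = x @ self.d_start          # (batch, m); softmax gives p_start
        end_logits = x @ self.d_end              # (batch, m); softmax gives p_end
        if start_idx is None:                    # inference: argmax over positions
            return start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)
        # training: cross-entropy against the gold start/end indices (a*, b*)
        return F.cross_entropy(start_logits, start_idx) + F.cross_entropy(end_logits, end_idx)
```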
The summation of cross-entropy losses over the start and end distributions then gives an overall loss for a training example: Lstart = − I{a∗ = i} log pstart(i) (7) # a Lend = − I{b∗ = i} log pend(i) (8) # i L = Lstart + Lend (9) (3) At inference, we extract a span (a, b) as a = arg max pstart(i) i (10) b = arg max pend(i) i (11) # 4 Experimental Setup # 4.1 Tasks, Datasets and Metrics We divide our experiments into three categories: classification, regression, and question answer- ing. For classification and regression, we evalu- ate on all the GLUE tasks (Wang et al., 2018). This includes the Stanford Sentiment Treebank (SST) (Socher et al., 2013), MSR Paraphrase Cor- pus (MRPC) (Dolan and Brockett, 2005), Quora Question Pairs (QQP), Multi-genre Natural Lan- guage Inference (MNLI) (Williams et al., 2017), Recognizing Textual Entailment (RTE) (Dagan et al., 2010; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), Question- answering as NLI (QNLI) (Rajpurkar et al., 2016), and Semantic Textual Similarity (STS-B) (Cer et al., 2017). The Winograd schemas challenge as NLI (WNLI) (Levesque et al., 2012) was ex- cluded during training because of known issues with the dataset. As with most other models on the GLUE leaderboard, we report the majority class label for all instances. With the exception of STS- B, which is a regression dataset, all other datasets are classification datasets. For question answer- ing, we employ 6 popular datasets: the Stan- ford Question Answering Dataset (SQuAD) (Ra- jpurkar et al., 2016), QA Zero-shot Relationship Extraction (ZRE; we use the 0th split and append the token unanswerable to all examples so it can be extracted as a span) (Levy et al., 2017), QA Semantic Role Labeling (SRL) (He et al., 2015), Commonsense Question Answering (CQA; we use version 1.0) (Talmor et al., 2018) and the two versions (Web and Wiki) of TriviaQA (Joshi et al., 2017). Unless specified otherwise, all scores are on development sets. Concrete examples for several datasets can be found in Table 1. # 4.2 Training Details For training the models, we closely follow the original BERT setup (Devlin et al., 2018) and (Phang et al., 2018). We refer to the 12-layer model as BERTBASE and the 24-layer model as BERTLARGE. Unless otherwise specified, we train all models with a batch size of 20 for 5 epochs. For the SQuAD and QQP datasets, we train for 2 epochs. We coarsely tune the learning rate but beyond this, do not carry out any significant hy- perparameter tuning. For STILTs experiments, we re-initialize the Adam optimizer with the in- troduction of each intermediate task. For smaller datasets, BERT (especially BERTLARGE) is known to exhibit high variance across random initializa- In these cases, we repeat the experiment tions. 20 times and report the best score as is common in prior work (Phang et al., 2018; Devlin et al., 2018). The model architecture, including the final layers, stay the same across all tasks and datasets – no task-specific classifier heads or adaptations are necessary. # 4.3 Models and Code Pre-trained models and code can be found at MASKED. We rely on the BERT training library1 available in PyTorch (Paszke et al., 2017). # 5 Results Next, we present numerical experiments to but- tress the claims presented in Section 1.1. Span-extraction is similar or superior to task- specific heads (classification or regression). 
Table 2 shows our results comparing BERT (with and without STILTs) with the corresponding vari- ant of SpEx-BERT on the GLUE tasks (Wang et al., 2018). For almost all datasets, the perfor- mance for SpEx-BERT is better than that of BERT, which is perhaps especially surprising for the re- gression task (STS-B). One can reasonably expect model performance to improve by converting such problems into a span-extraction problem over nat- ural language class descriptions. SpEx-BERT improves on STILTs. As in the case of Phang et al. (2018), we find that using supplementary tasks for pre-training improves the performance on the target tasks. We follow the setup of Phang et al. (2018) and carry out a two- stage training process. First, we fine-tune the BERT model with a span-extraction head on an intermediate task. Next, we fine-tune this model on the target task with a fresh instance of the op- timizer. Note that Phang et al. (2018) require a new classifier head when switching between tasks that have different numbers of classes or task, but 1https://github.com/huggingface/ pytorch-pretrained-BERT/ no such modifications are necessary when SpEx- BERT is applied. SpEx-BERT also allows for seamless switching between question answering, text classification, and regression tasks. In Table 7, we present the results for SpEx- BERT on STILTs. In a majority of cases, the performance of SpEx-BERT matches or out- performs that of BERT. This is especially pro- nounced for datasets with limited training data, such as MRPC and RTE with SpEx-BERTLARGE and BERTLARGE: 85.2 vs 83.4 for RTE, and 90.4 vs 89.5 for MRPC). We hypothesize that this in- crease is due to the fact that the class choices are provided to the model in natural language, which better utilizes the pre-trained representations of a large language model like BERT. Finally, we note, perhaps surprisingly, that question answer- ing datasets (SQuAD and TriviaQA) improve per- formance of some of the classification tasks. No- table examples include SST (pre-trained from the Wiki version of TriviaQA) and RTE (pre-trained from any of the three datasets). STILTs improves question answering as well. Table 3 shows similar experiments on popular question answering datasets. The transferabil- ity of question answering datasets is well-known. Datasets such as TriviaQA, SQuAD and ZRE have been known to improve each other’s scores and have improved robustness to certain kinds of queries (Devlin et al., 2018; McCann et al., 2018). We further discover that through the formulation of SpEx-BERT, classification datasets also help question answering datasets. In particular, MNLI improves the scores of almost all datasets over their baselines. For SQuAD, the benefit of STILTs with the classification dataset MNLI is almost as much as the question answering dataset TriviaQA. STILTs can be chained. Pre-training models using intermediate tasks with labeled data has been shown to be useful in improving perfor- mance. (Phang et al., 2018) explored the possi- bility of using one intermediate task to demon- strate this improvement. We explore the possibil- ity of chaining multiple intermediate tasks in Ta- ble 3. Conceptually, if improved performance on SQuAD during the first stage of fine-tuning leads to improved performance for the target task of CQA, improving performance of SQuAD through in turn pre-training it on MNLI would improve the eventual goal of CQA. Indeed, our experiments # Train Ex. 
SST MRPC QQP MNLI RTE QNLI CoLA STS 7k 67k 3.7k 364k 393k 2.5k 105k 8.5k GLUE Leaderboard Score Development Set Scores BERTLARGE →MNLI →SNLI 92.5 93.2 92.7 89.0 89.5 88.5 91.5 91.4 90.8 86.2 86.2 86.1 92.3 70.0 92.3 83.4 80.1 — 62.1 59.8 57.0 90.2 90.9 90.7 — — — 93.7 SpEx-BERTLARGE →SQuAD 93.7 →TriviaQA (Web) 93.3 →TriviaQA (Wiki) 94.4 →MNLI 94.4 →MNLI→SQuAD 93.7 88.9 86.5 85.0 86.5 90.4 89.5 91.0 90.9 90.5 90.6 91.3 91.1 86.4 86.0 85.7 85.6 86.4 86.4 69.8 74.7 73.6 71.5 85.2 84.1 91.8 91.8 91.7 91.6 92.0 92.3 64.8 57.8 60.2 59.9 60.6 60.5 89.5 90.1 89.9 90.1 90.9 90.2 — — — — — — Test Set Scores (both on STILTs) BERTLARGE SpEx-BERTLARGE 94.3 94.5 86.6 87.6 89.4 89.5 86.0 86.2 80.1 79.8 92.7 92.4 62.1 63.2 88.5 89.3 82.0 82.3 Table 2: Performance metrics on the GLUE tasks. We use Matthew’s correlation for CoLA, an average of the Pearson and Spearman correlation for STS, and exact match accuracy for all others. Bold marks the best perfor- mance for a task in a section delimited by double horizontal lines. Scores for MNLI are averages of matched and mismatched scores. (→ A) indicates that a model was fine-tuned on A as an intermediate task before fine-tuning on a target task (the task header for any particular column). In cases where A and the target task are the same, no additional fine-tuning is done. The phrase on STILTs indicates that test set scores on the target task are the result of testing with the best (→ A) according to development scores. suggest the efficacy of chaining intermediate tasks in this way. CQA obtains a score of 63.8 when fine-tuned from a SQuAD model (of score 84.0) and obtains a score of 65.7 when fine-tuned on a SQuAD model that was itself fine-tuned using MNLI (of score 84.5) as an intermediate task. Multi-task STILTs yields stronger multi-task models, but weaker single-task models. We also experiment with multi-task learning during intermediate-task training. We present the results for such intermediate-multi-task training on RTE in Table 5. In intermediate-multi-task training, we cycle through one batch for each of the tasks un- til the maximum number of iterations is reached. No special consideration is made for the optimizer or weighing of objectives. The results show that intermediate-multi-task training improves perfor- mance over the baseline for RTE, but this improve- ment is less than when only MNLI is used for intermediate-task training. Though not desirable if RTE is the only target task, such intermediate- multi-task training yields a better multi-task model that performs well on both datasets: the joint (sin- gle) model achieved 75.0 on RTE and 86.2 on MNLI, both of which are better than their single- task baselines. In some cases, the increased per- formance for one task (MNLI) might be preferable to that on another (RTE). We note that this obser- vation is similar to the one of Phang et al. (2018). # Training Examples SQuAD ZRE SRL CQA 9.5k 87.6k 840k 6.4k SpEx-BERTLARGE → MNLI → ZRE → SQuAD → TriviaQA (Web) → TriviaQA (Wiki) → MNLI → SQuAD 84.0 84.5 84.0 84.0 84.5 84.3 84.5 69.1 71.6 69.1 82.5 75.3 74.2 80.1 90.3 90.7 90.8 91.7 91.3 91.4 91.5 60.3 56.7 61.3 63.8 63.8 64.4 65.7 Table 3: Exact match scores on the development set for a set of question answering tasks. Bold marks the best performance for a task. Note that SpEx-BERT and BERT are equivalent for the question answering task. SpEx-BERT on STILTs is more robust than BERT on STILTs when training data is lim- ited. 
In Table 4, we present results for the same models (BERT and SpEx-BERT) being fine-tuned with sub-sampled versions of the dataset. For this experiment, we follow (Phang et al., 2018) and subsample 1000 data points at random without re- placement and choose the best development set ac- curacy across several random restarts. The rest of the experimental setup remains unchanged. When used in conjunction with STILTs, the performance improves as expected and, in a majority of cases, significantly exceeds that of the corresponding baseline that does not use span-extraction. SST MRPC RTE At most 1k training examples BERTLARGE →MNLI 91.1 90.5 83.8 85.5 69.0 82.7 SpEx-BERTLARGE →MNLI 91.3 91.2 82.5 86.5 67.1 82.7 Table 4: Development set accuracy scores on three of the GLUE tasks when fine-tuned only on a constrained subset of examples. Bold indicates best score for a task. Model RTE BERTLARGE → RTE BERTLARGE → MNLI → RTE 70.0 83.4 SpEx-BERTLARGE → RTE 69.8 SpEx-BERTLARGE → MNLI → RTE 85.2 SpEx-BERTLARGE → {MNLI, RTE} 75.0 SpEx-BERTLARGE → {MNLI, RTE} → RTE 75.8 Table 5: Development set accuracy on the RTE dataset with STILTs and multi-tasking. We denote the process of multi-tasking on datasets A and B by {A, B}. For each progression (represented by →), we reset the opti- mizer but retain model weights from the previous stage. # 6 Discussion # 6.1 Phrasing the question As described in Section 3, when converting any of the classification or regression problems into a span-extraction one, the possible classes or buck- eted values need to be presented in natural lan- guage as part of the input text. This leaves room for experimentation. We found that separation of naturally delimited parts of the input text into source and auxiliary text was crucial for best per- formance. Recall that for question answering, the natural delimitation is to assign the given context document as the source text and the question as the auxiliary text. This allows the span-decoder to extract a span from the context document, as expected. For single-sentence problems, there is no need for delimitation and the correct span is typically not found in the given sentence, so it is treated as auxiliary text. Natural language descrip- tions of the classes or allowable regression values are provided as source text for span extraction. For two-sentence problems, the natural delimita- tion suggests treating one sentence as source text and the other as auxiliary. The classification or re- gression choices must be in the source text, but it was also the case that one of the sentences must Natural language description MNLI Proposed Approach - segmentation of input text - terse class descriptions 84.7 83.2 84.4 Table 6: Development set accuracy using the SpEx- BERT approach on three versions of the MNLI dataset: (a) with the hypothesis and premise separated across source and auxiliary text (see Section 3 for details) and terse class descriptions; (b) with both hypothesis and premise treated entirely as auxiliary text; and (c) with segmented input but including a one-sentence descrip- tion of the classes (entailment, contradiction, neutral) based on definitions and common synonyms. also be in the source text. Simply concatenating both sentences and assigning them as the source text was detrimental for tasks like MNLI. For the case of classification, when experi- menting with various levels of brevity, we found that simpler is better. 
Being terse eases training since the softmax operation over possible start and end locations is over a relatively smaller window. While more detailed explanations might elabo- rate on what the classes mean or otherwise pro- vide additional context for the classes, these po- tential benefits were outstripped by increasing the length of the source text. We present these results on the development set of the MNLI dataset with BERTBASE in Table 6. For regression, there exists a trade-off between brevity and granularity of the regression. We found that dividing the range into 10 – 20 buckets did not appreciably change the re- sulting correlation score for STS-B. # 6.2 A fully joint model without task-specific parameters Unlike similar approaches using task-specific heads (Liu et al., 2019), SpEx-BERT allows for a single model across a broader set of tasks. This makes possible a single, joint model with all pa- rameters shared. We present the results of this ex- periment in Table 7; we multi-task over all datasets considered so far. Multi-task performance exceeds single-task performance for many of the question answering datasets (ZRE, SRL, CQA) as well as the classification dataset RTE. In some cases, these improvements are drastic (over 9% accuracy). Un- fortunately, the opposite is true for the two tasks that are the greatest source of transfer, MNLI and SQuAD, and the remaining GLUE tasks. Under- standing why such vampiric relationships amongst SST MRPC QQP MNLI RTE QNLI CoLA STS SQuAD ZRE SRL CQA Individual Models BERTLARGE SpEx-BERTLARGE 92.5 93.7 89.0 88.9 91.5 91.0 86.2 86.3 70.0 69.8 92.3 91.8 62.1 64.8 90.9 89.5 84.0 84.0 69.1 69.1 90.3 90.3 60.3 60.3 Multi-task Models (best joint single model) 92.4 SpEx-BERTLARGE →MNLI 93.2 →SQuAD 92.2 →MNLI→SQuAD 92.3 87.5 87.0 87.0 90.9 90.9 90.9 91.0 90.8 85.0 85.6 85.3 85.2 71.1 81.2 80.9 84.1 91.3 91.3 91.2 90.9 58.8 57.9 52.0 52.1 89.2 90.1 90.1 90.2 80.4 80.5 80.6 80.6 75.0 76.6 78.8 75.3 97.7 97.7 97.7 97.8 61.0 61.5 63.4 61.5 Multi-task Models (maximum individual score for each dataset during the course of training) 93.0 SpEx-BERTLARGE →MNLI 93.2 →SQuAD 92.9 →MNLI→SQuAD 92.7 88.5 89.7 89.2 91.4 91.0 90.8 91.1 90.8 85.2 85.7 85.4 85.4 73.3 84.1 84.1 85.2 91.4 91.6 91.4 91.2 59.8 59.9 56.1 57.5 88.9 89.8 90.1 90.2 81.9 81.4 82.8 83.2 77.8 78.2 79.6 77.5 97.7 97.7 97.8 97.8 64.7 63.3 65.3 64.8 Table 7: Development set performance metrics on a single (joint) model obtained by multi-tasking on all included datasets. We include best single-task performances (without STILTs), labeled as individual models, for the sake of easier comparison. We divide the remaining into two parts – in the first, the scores indicate the performance on a single snapshot during training and not individual maximum scores across the training trajectory. In the second, we include the best score for every dataset through the training; note that this involves inference on multiple model snapshots. For the models trained with STILTs, the SpEx-BERT model is first fine-tuned on the intermediate task by itself after which the model is trained in multi-tasking fashion. Bold implies best in each column (i.e., task). datasets manifest, why any particular dataset ap- pears beneficial, neutral, or detrimental to the per- formance of others, and why question answering tasks appear more amenable to the fully-joint set- ting remain open questions. 
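For reference, the joint training used here amounts to cycling through one batch per dataset with a single shared span-extraction model and optimizer. The sketch below is a simplified illustration with hypothetical loader names, not the exact training script.

```python
# Simplified round-robin multi-task loop (loader, model and optimizer objects
# are placeholders). Every task uses the same span-extraction loss, so no
# task-specific heads or loss weighting are required.

def train_joint(model, optimizer, loaders, max_steps):
    iterators = {name: iter(dl) for name, dl in loaders.items()}
    step = 0
    while step < max_steps:
        for name, dl in loaders.items():
            try:
                batch = next(iterators[name])
            except StopIteration:                # restart an exhausted dataset
                iterators[name] = iter(dl)
                batch = next(iterators[name])
            loss = model(**batch)                # same objective for every task
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if step >= max_steps:
                break
```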
Nonetheless, a purely span-extractive approach has allowed us to ob- serve such relationships more directly than in set- tings that use multiple task-specific heads or fine- tune separately on each task. Because some tasks benefit and others suffer, these results present a trade-off. Depending on which tasks and datasets are more pertinent, multi-task learning might be the right choice, especially given the ease of de- ploying a single architecture that does not require any task-specific modifications. Joint models for NLP have already been stud- ied (Collobert et al., 2011; McCann et al., 2018; Radford et al., 2019) with a broad set of tasks that may require text generation and more general architectures. These approaches have yet to per- form as well as task-specific models on common benchmarks, but they have demonstrated that large amounts of unsupervised data, curriculum learn- ing, and task sampling strategies can help mitigate the negative influence multitasking tends to have on datasets that are especially good for transfer learning. This work represents a connection be- tween those works and work that focuses on task- specific fine-tuning of pre-trained architectures. # 7 Conclusion With the successful training of supervised and un- supervised systems that rely on increasingly large amounts of data, more of the natural variation in language is captured during pre-training. This suggests that less inductive bias in the design of task-specific architectures might be required when approaching NLP tasks. We have proposed that the inductive bias that motivates the use task- specific layers is no longer necessary. Instead, a span-extractive approach, common to question answering, should be extended to text classifi- cation and regression problems as well. Exper- iments comparing the traditional approach with BERT to SpEx-BERT have shown that the span- extractive approach often yields stronger perfor- mance as measured by scores on the GLUE bench- mark. This reduces the need for architectural mod- ifications across datasets or tasks, and opens the way for applying methods like STILTs to question answering or a combination of text classification, regression, and question answering datasets to fur- ther improve performance. Experiments have fur- ther shown that span-extraction proves more ro- bust in the presence of limited training data. We hope that these findings will promote further ex- ploration into the design of unified architectures for a broader set of tasks. # References Anonymous. 2019. Bam ! born-again multitask net- works for natural language understanding. In Open- Review Anonymous Preprint. Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450. Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising tex- tual entailment challenge. In Proceedings of the sec- ond PASCAL challenges workshop on recognising textual entailment, volume 6, pages 6–4. Venice. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In TAC. Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and arXiv preprint cross-lingual focused evaluation. arXiv:1708.00055. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep In Pro- neural networks with multitask learning. 
ceedings of the 25th international conference on Machine learning, pages 160–167. ACM. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from Journal of machine learning research, scratch. 12(Aug):2493–2537. Alexis Conneau, Guillaume Lample, Ruty Rinott, Ad- ina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross- arXiv preprint lingual sentence representations. arXiv:1809.05053. Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2010. Recognizing textual entailment: Ra- tional, evaluation and approaches–erratum. Natural Language Engineering, 16(1):105–105. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1–9. Association for Computa- tional Linguistics. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- In Proceedings of the IEEE conference on nition. computer vision and pattern recognition, pages 770– 778. Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2015. Question-answer driven semantic role label- ing: Using natural language to annotate natural lan- guage. In EMNLP. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. arXiv preprint arXiv:1705.03551. Hector Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Princi- ples of Knowledge Representation and Reasoning. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke relation extrac- arXiv preprint Zettlemoyer. 2017. Zero-shot tion via reading comprehension. arXiv:1706.04115. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems, pages 6294–6305. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language de- cathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814. Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365. 
Jason Phang, Thibault Févry, and Samuel R Bowman. 2018. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/languageunsupervised/language understanding paper.pdf.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. URL https://d4mucfpksywv.cloudfront.net/better-language-models/language models are unsupervised multitask learners.pdf.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2018. CommonsenseQA: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.

Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.

Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604.
{ "id": "1811.01088" }
1904.08783
Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Gender bias is highly impacting natural language processing applications. Word embeddings have clearly been proven both to keep and amplify gender biases that are present in current data sources. Recently, contextualized word embeddings have enhanced previous word embedding techniques by computing word vector representations dependent on the sentence they appear in. In this paper, we study the impact of this conceptual change in the word embedding computation in relation with gender bias. Our analysis includes different measures previously applied in the literature to standard word embeddings. Our findings suggest that contextualized word embeddings are less biased than standard ones even when the latter are debiased.
http://arxiv.org/pdf/1904.08783
Christine Basta, Marta R. Costa-jussà, Noe Casas
cs.CL, cs.LG
null
null
cs.CL
20190418
20190418
9 1 0 2 r p A 8 1 ] L C . s c [ 1 v 3 8 7 8 0 . 4 0 9 1 : v i X r a # Evaluating the Underlying Gender Bias in Contextualized Word Embeddings # Christine Basta Marta R. Costa-juss`a Noe Casas # Universitat Polit`ecnica de Catalunya {christine.raouf.saad.basta,marta.ruiz,noe.casas}@upc.edu # Abstract Gender bias is highly impacting natural lan- guage processing applications. Word embed- dings have clearly been proven both to keep and amplify gender biases that are present in current data sources. Recently, contextual- ized word embeddings have enhanced previ- ous word embedding techniques by computing word vector representations dependent on the sentence they appear in. In this paper, we study the impact of this con- ceptual change in the word embedding compu- tation in relation with gender bias. Our analy- sis includes different measures previously ap- plied in the literature to standard word em- beddings. Our findings suggest that contextu- alized word embeddings are less biased than standard ones even when the latter are debi- ased. # Introduction Social biases in machine learning, in general and in natural language processing (NLP) applications in particular, are raising the alarm of the scien- tific community. Examples of these biases are evidences such that face recognition systems or speech recognition systems works better for white men than for ethnic minorities (Buolamwini and Gebru, 2018). Examples in the area of NLP are the case of machine translation that systems tend to ignore the coreference information in benefit of an stereotype (Font and Costa-juss`a, 2019) or sen- timent analysis where higher sentiment intensity prediction is biased for a particular gender (Kir- itchenko and Mohammad, 2018). woman as king is to queen). Following this prop- erty of finding relations or analogies, one popular example of gender bias is the word association be- tween man to computer programmer as woman to homemaker (Bolukbasi et al., 2016). Pre-trained word embeddings are used in many NLP down- stream tasks, such as natural language inference (NLI), machine translation (MT) or question an- swering (QA). Recent progress in word embed- ding techniques has been achieved with contex- tualized word embeddings (Peters et al., 2018) which provide different vector representations for the same word in different contexts. While gender bias has been studied, detected and partially addressed for standard word embed- dings techniques (Bolukbasi et al., 2016; Zhao et al., 2018a; Gonen and Goldberg, 2019), it is not the case for the latest techniques of contextualized word embeddings. Only just recently, Zhao et al. (2019) present a first analysis on the topic based on the proposed methods in Bolukbasi et al. (2016). In this paper, we further analyse the presence of gender biases in contextualized word embeddings by means of the proposed methods in Gonen and Goldberg (2019). For this, in section 2 we pro- vide an overview of the relevant work on which we build our analysis; in section 3 we state the specific request questions addressed in this work, while in section 4 we describe the experimental framework proposed to address them and in sec- tion 5 we present the obtained and discuss the re- sults; finally, in section 6 we draw the conclusions of our work and propose some further research. In this work we focus on the particular NLP area of word embeddings (Mikolov et al., 2010), which represent words in a numerical vector space. 
Word embeddings representation spaces are known to present geometrical phenomena mimicking rela- tions and analogies between words (e.g. man is to # 2 Background In this section we describe the relevant NLP tech- niques used along the paper, including word em- beddings, their debiased version and contextual- ized word representations. # 2.1 Words Embeddings Word embeddings are distributed representations in a vector space. These vectors are normally learned from large corpora and are then used in downstream tasks like NLI, MT, etc. Several ap- proaches have been proposed to compute those vector representations, with word2vec (Mikolov et al., 2013) being one of the dominant options. Word2vec proposes two variants: continuous bag of words (CBoW) and skipgram, both consisting of a single hidden layer neural network train on predicting a target word from its context words for CBoW, and the opposite for the skipgram variant. The outcome of word2vec is an embedding table, where a numeric vector is associated to each of the words included in the vocabulary. These vector representations, which in the end are computed on co-occurrence statistics, exhibit geometric properties resembling the semantics of the relations between words. This way, subtract- ing the vector representations of two related words and adding the result to a third word, results in a representation that is close to the application of the semantic relationship between the two first words to the third one. This application of analogical re- lationships have been used to showcase the bias present in word embeddings, with the prototypical example that when subtracting the vector repre- sentation of man from that of computer and adding it to woman, we obtain homemaker. # 2.2 Debiased Word Embeddings Human-generated corpora suffer from social bi- Those biases are reflected in the co- ases. occurrence statistics, and therefore learned into word embeddings trained in those corpora, ampli- fying them (Bolukbasi et al., 2016; Caliskan et al., 2017). Bolukbasi et al. (2016) studied from a geomet- rical point of view the presence of gender bias in word embeddings. For this, they compute the sub- space where the gender information concentrates by computing the principal components of the dif- ference of vector representations of male and fe- male gender-defining word pairs. With the gender subspace, the authors identify direct and indirect biases in profession words. Finally, they mitigate the bias by nullifying the information in the gen- der subspace for words that should not be associ- ated to gender, and also equalize their distance to both elements of gender-defining word pairs. Zhao et al. (2018b) proposed an extension to GloVe embeddings (Pennington et al., 2014) where the loss function used to train the embed- dings is enriched with terms that confine the gen- der information to a specific portion of the embed- ded vector. The authors refer to these pieces of information as protected attributes. Once the em- beddings are trained, the gender protected attribute can be simply removed from the vector representa- tion, therefore eliminating any gender bias present in it. The transformations proposed by both Boluk- basi et al. (2016) and Zhao et al. (2018b) are down- stream task-agnostic. This fact is used in the work of Gonen and Goldberg (2019) to showcase that, while apparently the embedding information is re- moved, there is still gender information remaining in the vector representations. 
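The gender-subspace estimation and the neutralize step described above can be summarized in a few lines. The sketch below is a simplified illustration only (the equalize step and several details of Bolukbasi et al. (2016) are omitted, and the variable names are ours).

```python
import numpy as np
from sklearn.decomposition import PCA

# emb maps words to vectors; pairs holds gender-defining pairs such as
# ("she", "he") and ("woman", "man").

def gender_direction(emb, pairs):
    diffs = np.stack([emb[f] - emb[m] for f, m in pairs])
    return PCA(n_components=1).fit(diffs).components_[0]   # dominant direction

def neutralize(vec, g):
    g = g / np.linalg.norm(g)
    return vec - np.dot(vec, g) * g   # remove the component along the gender direction
```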
# 2.3 Contextualized Word Embeddings Pretrained Language Models (LM) like ULMfit (Howard and Ruder, 2018), ELMo (Peters et al., 2018), OpenAI GPT (Radford, 2018; Radford et al., 2019) and BERT (Devlin et al., 2018), pro- posed different neural language model architec- tures and made their pre-trained weights avail- able to ease the application of transfer learning to downstream tasks, where they have pushed the state-of-the-art for several benchmarks including question answering on SQuAD, NLI, cross-lingual NLI and named identity recognition (NER). While some of these pre-trained LMs, like BERT, use subword level tokens, ELMo provides word-level representations. Peters et al. (2019) and Liu et al. (2019) confirmed the viability of using ELMo representations directly as features for downstream tasks without re-training the full model on the target task. Unlike word2vec vector representations, which are constant regardless of their context, ELMo representations depend on the sentence where the word appears, and therefore the full model has to be fed with each whole sentence to get the word representations. The neural architecture proposed in ELMo (Pe- ters et al., 2018) consists of a character-level con- volutional layer processing the characters of each word and creating a word representation that is then fed to a 2-layer bidirectional LSTM (Hochre- iter and Schmidhuber, 1997), trained on language modeling task on a large corpus. # 3 Research questions Given the high impact of contextualized word em- beddings in the area of NLP and the social con- sequences of having biases in such embeddings, in this work we analyse the presence of bias in these contextualized word embeddings. In partic- ular, we focus on gender biases, and specifically on the following questions: • Do contextualized word embeddings exhibit gender bias and how does this bias compare to standard and debiased word embeddings? • Do different evaluation techniques identify bias similarly and what would be the best measure to use for gender bias detection in contextualized embeddings? To address these questions, we adapt and con- trast with the evaluation measures proposed by Bolukbasi et al. (2016) and Gonen and Goldberg (2019). # 4 Experimental Framework As follows, we define the data and resources that we are using for performing our experiments. We also motivate the approach that we are using for contextualized word embeddings. We worked with the English-German news cor- pus from the WMT181. We used the English side with 464,947 lines and 1,004,6125 tokens. To perform our analysis we used a set of lists from previous work (Bolukbasi et al., 2016; Go- nen and Goldberg, 2019). We refer to the list of definitional pairs 2 as ’Definitonal List’ (e.g. she- he, girl-boy). We refer to the list of female and male professions 3 as ’Professional List’ (e.g. ac- countant, surgeon). The ’Biased List’ is the list used in the clustering experiment and it consists of biased male and female words (500 female biased tokens and 500 male biased token). This list is generated by taking the most biased words, where the bias of a word is computed by taking its projec- −→ she) (e.g. breast- tion on the gender direction ( feeding, bridal and diet for female and hero, cigar and teammates for male). 
The ’Extended Biased 1http://data.statmt.org/ wmt18/translation-task/ training-parallel-nc-v13.tgz 2https://github.com/tolga-b/debiaswe/ blob/master/data/definitional_pairs.json 3https://github.com/tolga-b/debiaswe/ blob/master/data/professions.json List’ is the list used in classification experiment , which contains 5000 male and female biased to- kens, 2500 for each gender, generated in the same way of the Biased List4. A note to be considered, is that the lists we used in our experiments (and obtained from Bolukbasi et al. (2016) and Gonen and Goldberg (2019)) may contain words that are missing in our corpus and so we can not obtain contextualized embeddings for them. Among different approaches to contextualized word embeddings (mentioned in section 2), we choose ELMo (Peters et al., 2018) as contextual- ized word embedding approach. The motivation for using ELMo instead of other approaches like BERT (Devlin et al., 2018) is that ELMo provides word-level representations, as opposed to BERT’s subwords. This makes it possible to study the word-level semantic traits directly, without resort- ing to extra steps to compose word-level informa- tion from the subwords that could interfere with our analyses. # 5 Evaluation measures and results There is no standard measure for gender bias, and even less for such the recently proposed contextu- alized word embeddings. In this section, we adapt gender bias measures for word embedding meth- ods from previous work (Bolukbasi et al., 2016) and (Gonen and Goldberg, 2019) to be applicable to contextualized word embeddings. This way, we first compute the gender subspace from the ELMo vector representations of gender- defining words, then identify the presence of direct bias in the contextualized representations. We then proceed to identify gender information by means of clustering and classifications techniques. We compare our results to previous results from debi- ased and non-debiased word embeddings (Boluk- basi et al., 2016) . Detecting the Gender Space Bolukbasi et al. (2016) propose to identify gender bias in word rep- resentations by computing the direction between representations of male and female word pairs −→ she, −−→man-−−−−−→woman) from the Definitional List ( and computing their principal components. In the case of contextualized embeddings, there is not just a single representation for each word, but its representation depends on the sentence it 4Both ’Biased List’ and ’Extended Biased List’ were kindly provided by Hila Gonen to reproduce experiments from her study (Gonen and Goldberg, 2019) he ame re it es Figure 1: (Left) the percentage of variance explained in the PC of definitional vector differences. (Right) The corresponding percentages for random vectors. appears in. This way, in order to compute the gender subspace we take the representation of words by randomly sampling sentences that con- tain words from the Definitional List and, for each of them, we swap the definitional word with its pair-wise equivalent from the opposite gender. We then obtain the ELMo representation of the defin- intional word in each sentence pair, computing their difference. On the set of difference vectors, we compute their principal components to verify the presence of bias. In order to have a reference, we computed the principal components of repre- sentation of random words. Similarly to Bolukbasi et al. 
(2016), figure 1 shows that the first eigenvalue is significantly larger than the rest and that there is also a sin- gle direction describing the majority of variance in these vectors, still the difference between the percentage of variances is less in case of contextu- alized embeddings, which may refer that there is less bias in such embeddings. We can easily note the difference in the case of random, where there is a smooth and gradual decrease in eigenvalues, and hence the variance percentage. sional List. We excluded the sentences that have both a professional token and definitional gender word to avoid the influence of the latter over the presence of bias in the former. We applied the def- inition of direct bias from Bolukbasi et al. (2016) on the ELMo representations of the professional words in these sentences. 1 IN Ss |cos(w, g)| (1) weN where N is the amount of gender neutral words, g the gender direction, and w the word vector of each profession. We got direct bias of 0.03, com- pared to 0.08 from standard word2vec embeddings described in Bolukbasi et al. (2016). This reduc- tion on the direct bias confirms that the substan- tial component along the gender direction that is present in standard word embeddings is less for the contextualized word embeddings. Probably, this reduction comes from the fact that we are using different word embeddings for the same profes- sion depending on the sentence which is a direct consequence and advantage of using contextual- ized embeddings. A similar conclusion was stated in the recent work (Zhao et al., 2019) where the authors ap- plied the same approach, but for gender swapped variants of sentences with professions. They com- puted the difference between the vectors of occu- pation words in corresponding sentences and got a skewed graph where the first component represent the gender information while the second compo- nent groups the male and female related words. Direct Bias Direct Bias is a measure of how close a certain set of words are to the gender vec- tor. To compute it, we extracted from the training data the sentences that contain words in the Profes- Male and female-biased words clustering. In order to study if biased male and female words cluster together when applying contextualized em- beddings, we used k-means to generate 2 clusters of the embeddings of tokens from the Biased list. Note that we can not use several representations for each word, since it would not make any sense to cluster one word as male and female at the same time. Therefore, in order to make use of the ad- vantages of the contextualized embeddings, we re- peated 10 independent experiments, each with a different random sentence of each word from the list of biased male and female words. Figure 2: K-means clustering, the yellow color repre- sents the female and the violet represents the male Among these 10 experiments, we got a min- imum accuracy of 69.1% and a maximum of 71.3%, with average accuracy of 70.1%, much lower than in the case of biased and debiased word embeddings which were 99.9 and 92.5, respec- tively, as stated in Gonen and Goldberg (2019). Based on this criterion, even if there is still bias in- formation to be removed from contextualized em- beddings, it is much less than in case of standard word embeddings, even if debiased. The clusters (for one particular experiment out of the 10 of them) are shown in Figure 2 after applying UMAP (McInnes et al., 2018; McInnes et al., 2018) to the contextualized embeddings. 
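A simplified sketch of the direct bias measure of Equation 1 and of a single clustering run is given below. The embedding lookups and word lists are placeholders for the contextualized vectors described above, and cluster assignments are scored against the better of the two possible labelings since k-means cluster ids are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def direct_bias(profession_vectors, g):
    # mean absolute cosine similarity between profession vectors and the gender direction
    g = g / np.linalg.norm(g)
    return np.mean([abs(np.dot(w / np.linalg.norm(w), g)) for w in profession_vectors])

def cluster_alignment(biased_word_vectors, gender_labels):
    # biased_word_vectors: one contextualized vector per biased word, sampled from
    # a random sentence; gender_labels: 0 for male-biased, 1 for female-biased.
    preds = KMeans(n_clusters=2, n_init=10).fit_predict(np.asarray(biased_word_vectors))
    acc = (preds == np.asarray(gender_labels)).mean()
    return max(acc, 1.0 - acc)
```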
Classification Approach In order to study if contextualized embeddings learn to generalize bias, we trained a Radial Basis Function-kernel Support Vector Machine classifier on the embed- dings of random 1000 biased words from the Ex- tended Biased List. After that, we evaluated the generalization on the other random 4000 biased to- kens. Again, we performed 10 independent exper- iments, to guarantee randomization of word repre- sentations. Among these 10 experiments, we got a minimum accuracy of 83.33% and a maximum of 88.43%, with average accuracy of 85.56%. This number shows that the bias is learned in these em- beddings with high rate. However, it learns in less rate than the normal embeddings, whose classifi- cation reached 88.88% and 98.25% for biased and debiased versions, respectively. K-Nearest Neighbor Approach To understand more about the bias in contextualized embeddings, it is important to analyze the bias in the profes- sions. The question is whether these embeddings eenager 13.0 @eceptionist @athematician oun @airdresser ny ws “philosopher ervandmanager gartender rehitect nystigioran @aid om gbrarian gomposer qrinister 12.0 boss @hysicist qousewife 1s @ocialite gtudent -5 -4 3 -2 -1 0 1 2 Figure 3: Visualization of contextualized embeddings of professions. stereotype the professions as the normal embed- dings. This can be shown by the nearest neighbors of the female and male stereotyped professions, for example ’receptionist’ and ’librarian’ for fe- male and ’architect’ and ’philosopher’ for male. We applied the k nearest neighbors on the Profes- sional List, to get the nearest k neighbor to each profession. We used a random representation for each token of the profession list, after applying the k nearest neighbor algorithm on each profes- sion, we computed the percentage of female and male stereotyped professions among the k nearest neighbor of each profession token. Afterwards, we computed the Pearson correlation of this per- centage with the original bias of each profession. Once again, to assure randomization of tokens, we performed 10 experiments, each with differ- ent random sentences for each profession, there- fore with different word representations. The min- imum Pearson correlation is 0.801 and the maxi- mum is 0.961, with average of 0.89. All these cor- relations are significant with p-values smaller than 1 × 10−40. This experiment showed the highest influence of bias compared to 0.606 for debiased embeddings and 0.774 for non-debiased. Figure 3 demonstrates this influence of bias by showing that female biased words (e.g. nanny) has higher percent of female words than male ones and vice- versa for male biased words (e.g. philosopher). # 6 Conclusions and further work While our study can not draw clear conclusions on whether contextualized word embeddings aug- ment or reduce the gender bias, our results show more insights of which aspects of the final con- textualized word vectors get affected by such phe- nomena, with a tendency more towards reducing the gender bias rather than the contrary. Contextualized word embeddings mitigate gen- der bias when measuring in the following aspects: 1. Gender space, which is capturing the gender direction from word vectors, is reduced for gender specific contextualized word vectors compared to standard word vectors. 2. Direct bias, which is measuring how close set of words are to the gender vector, is lower for contextualized word embeddings than for standard ones. 3. 
Male/female clustering, which is produced between words with strong gender bias, is less strong than in debiased and non-debiased standard word embeddings. However, contextualized word embeddings pre- serve and even amplify gender bias when taking into account other aspects: 1. The implicit gender of words can be pre- dicted with accuracies higher than 80% based on contextualized word vectors which is only a slightly lower accuracy than when using vectors from debiased and non-debiased stan- dard word embeddings. 2. The stereotyped words group with implicit- gender words of the same gender more than in the case of debiased and non-debiased standard word embeddings. While all measures that we present exhibit cer- tain gender bias, when evaluating future debias- ing methods for contextualized word embeddings it would be worth it putting emphasis on the lat- ter two evaluation measures that show higher bias than the first three. Hopefully, our analysis will provide a grain of sand towards defining standard evaluation meth- ods for gender bias, proposing effective debiasing methods or even directly designing equitable algo- rithms which automatically learn to ignore biased data. As further work, we plan to extend our study to multiple domains and multiple languages to ana- lyze and measure the impact of gender bias present in contextualized embeddings in these different scenarios. # Acknowledgements We want to thank Hila Gonen for her support dur- ing our research. This work is supported in part by the Cata- lan Agency for Management of University and Research Grants (AGAUR) through the FI PhD Scholarship and the Industrial PhD Grant. This work is also supported in part by the Spanish Ministerio de Econom´ıa y Competitividad, the European Regional Development Fund and the Agencia Estatal de Investigaci´on, through the postdoctoral senior grant Ram´on y Cajal, con- tract TEC2015-69266-P (MINECO/FEDER,EU) and contract PCIN-2017-079 (AEI/MINECO). # References Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4349–4357. Curran Associates, Inc. Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in com- In Conference on mercial gender classification. Fairness, Accountability and Transparency, FAT 2018, 23-24 February 2018, New York, NY, USA, pages 77–91. and Arvind Joanna J. Bryson, Narayanan. 2017. Semantics derived automatically from language corpora necessarily contain human biases. Science, 356:183–186. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Joel Escud´e Font and Marta R. Costa-juss`a. 2019. Equalizing gender biases in neural machine trans- lation with word embeddings techniques. CoRR, abs/1901.03116. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. CoRR, abs/1903.03862. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Aus- tralia. Association for Computational Linguistics. Svetlana Kiritchenko and Saif Mohammad. 2018. Ex- amining gender and race bias in two hundred sen- In Proceedings of the timent analysis systems. Seventh Joint Conference on Lexical and Com- putational Semantics, pages 43–53, New Orleans, Louisiana. Association for Computational Linguis- tics. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies. Leland McInnes, John Healy, and James Melville. 2018. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv e- prints. Leland McInnes, John Healy, Nathaniel Saul, and Lukas Grossberger. 2018. Umap: Uniform mani- fold approximation and projection. The Journal of Open Source Software, 3(29):861. Tomas Mikolov, Martin Karafit, Luks Burget, Jan Cer- nock, and Sanjeev Khudanpur. 2010. Recurrent In INTER- neural network based language model. SPEECH, pages 1045–1048. ISCA. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word In Proceedings of the 2014 Con- representation. ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- In Proceedings of the 2018 Confer- resentations. ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew Peters, Sebastian Ruder, and Noah A Smith. 2019. To tune or not to tune? adapting pretrained arXiv preprint representations to diverse tasks. arXiv:1903.05987. Alec Radford. 2018. Improving language understand- ing by generative pre-training. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Forthcoming in NAACL. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning gender-neutral word embeddings. arXiv preprint arXiv:1809.01496.
{ "id": "1809.01496" }
1904.08375
Document Expansion by Query Prediction
One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, this might comprise questions the document can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions, using a vanilla sequence-to-sequence model trained on datasets consisting of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without re-ranking) approach the effectiveness of more computationally expensive neural re-rankers but are much faster.
http://arxiv.org/pdf/1904.08375
Rodrigo Nogueira, Wei Yang, Jimmy Lin, Kyunghyun Cho
cs.IR, cs.LG
null
null
cs.IR
20190417
20190925
# Document Expansion by Query Prediction

Rodrigo Nogueira,1 Wei Yang,2 Jimmy Lin,2 and Kyunghyun Cho3,4,5,6
1 Tandon School of Engineering, New York University
2 David R. Cheriton School of Computer Science, University of Waterloo
3 Courant Institute of Mathematical Sciences, New York University
4 Center for Data Science, New York University
5 Facebook AI Research
6 CIFAR Azrieli Global Scholar

# Abstract

One technique to improve the retrieval effectiveness of a search engine is to expand documents with terms that are related or representative of the documents' content. From the perspective of a question answering system, this might comprise questions the document can potentially answer. Following this observation, we propose a simple method that predicts which queries will be issued for a given document and then expands it with those predictions, using a vanilla sequence-to-sequence model trained on datasets consisting of pairs of query and relevant documents. By combining our method with a highly-effective re-ranking component, we achieve the state of the art in two retrieval tasks. In a latency-critical regime, retrieval results alone (without re-ranking) approach the effectiveness of more computationally expensive neural re-rankers but are much faster.

Code to reproduce experiments and trained models can be found at https://github.com/nyu-dl/dl4ir-doc2query.

# Introduction

The "vocabulary mismatch" problem, where users use query terms that differ from those used in relevant documents, is one of the central challenges in information retrieval. Prior to the advent of neural retrieval models, this problem has most often been tackled using query expansion techniques, where an initial round of retrieval can provide useful terms to augment the original query. Continuous vector space representations and neural networks, however, no longer depend on discrete one-hot representations, and thus offer an exciting new approach to tackling this challenge.

Despite the potential of neural models to match documents at the semantic level for improved ranking, most scalable search engines use exact term match between queries and documents to perform initial retrieval. Query expansion is about enriching the query representation while holding the document representation static. In this paper, we explore an alternative approach based on enriching the document representation (prior to indexing). Focusing on question answering, we train a sequence-to-sequence model that, given a document, generates possible questions that the document might answer. An overview of the proposed method is shown in Figure 1.

Figure 1: Given a document, our Doc2query model predicts a query, which is appended to the document. Expansion is applied to all documents in the corpus, which are then indexed and searched as before.

We view this work as having several contributions: This is the first successful application of document expansion using neural networks that we are aware of.
On the recent MS MARCO dataset (Bajaj et al., 2016), our approach is com- petitive with the best results on the official leader- board, and we report the best-known results on TREC CAR (Dietz et al., 2017). We further show that document expansion is more effective than query expansion on these two datasets. We ac- complish this with relatively simple models using existing open-source toolkits, which allows easy replication of our results. Document expansion 1 also presents another major advantage, since the enrichment is performed prior to indexing: Al- though retrieved output can be further re-ranked using a neural model to greatly enhance effective- ness, the output can also be returned as-is. These results already yield a noticeable improvement in effectiveness over a “bag of words” baseline with- out the need to apply expensive and slow neural network inference at retrieval time. # 2 Related Work Prior to the advent of continuous vector space representations and neural ranking models, in- formation retrieval techniques were mostly lim- ited to keyword matching (i.e., “one-hot” repre- sentations). Alternatives such as latent semantic indexing (Deerwester et al., 1990) and its vari- ous successors never really gained significant trac- tion. Approaches to tackling the vocabulary mis- match problem within these constraints include relevance feedback (Rocchio, 1971), query expan- sion (Voorhees, 1994; Xu and Croft, 2000), and modeling term relationships using statistical trans- lation (Berger and Lafferty, 1999). These tech- niques share in their focus on enhancing query representations to better match documents. In this work, we adopt the alternative approach of enriching document representations (Tao et al., 2006; Pickens et al., 2010; Efron et al., 2012), which works particularly well for speech (Sing- hal and Pereira, 1999) and multi-lingual retrieval, where terms are noisy. Document expansion tech- niques have been less popular with IR researchers because they are less amenable to rapid experi- mentation. The corpus needs to be re-indexed ev- ery time the expansion technique changes (typi- cally, a costly process); in contrast, manipulations to query representations can happen at retrieval time (and hence are much faster). The success of document expansion has also been mixed; for ex- ample, Billerbeck and Zobel (2005) explore both query expansion and document expansion in the same framework and conclude that the former is consistently more effective. A new generation of neural ranking models of- fer solutions to the vocabulary mismatch problem based on continuous word representations and the ability to learn highly non-linear models of rele- vance; see recent overviews by Onal et al. (2018) and Mitra and Craswell (2019a). However, due to the size of most corpora and the impractical- 2 ity of applying inference over every document in response to a query, nearly all implementations today deploy neural networks as re-rankers over initial candidate sets retrieved using standard in- verted indexes and a term-based ranking model such as BM25 (Robertson et al., 1994). Our work fits into this broad approach, where we take ad- vantage of neural networks to augment document representations prior to indexing; term-based re- trieval then happens exactly as before. Of course, retrieved results can still be re-ranked by a state- of-the-art neural model (Nogueira and Cho, 2019), but the output of term-based ranking already ap- pears to be quite good. 
In other words, our document expansion approach can leverage neural networks without their high inference-time costs.

# 3 Method: Doc2query

Our method, which we call "Doc2query", proceeds as follows: For each document, the task is to predict a set of queries for which that document will be relevant. Given a dataset of (query, relevant document) pairs, we use a transformer sequence-to-sequence model (Vaswani et al., 2017) that takes the document terms as input and produces a query. The document and target query are segmented using BPE (Sennrich et al., 2015) after being tokenized with the Moses tokenizer.1 To avoid excessive memory usage, we truncate each document to 400 tokens and queries to 100 tokens. Architecture and training details of our transformer model are described in Appendix A.

1 http://www.statmt.org/moses/

Once the model is trained, we predict 10 queries using top-k random sampling (Fan et al., 2018) and append them to each document in the corpus. We do not put any special markup to distinguish the original document text from the predicted queries. The expanded documents are indexed, and we retrieve a ranked list of documents for each query using BM25 (Robertson et al., 1994). We optionally re-rank these retrieved documents using BERT (Devlin et al., 2018) as described by Nogueira and Cho (2019).

| Method | TREC-CAR MAP (Test) | MS MARCO MRR@10 (Test) | MS MARCO MRR@10 (Dev) | Retrieval Time (ms/query) |
|---|---|---|---|---|
| Single Duet v2 (Mitra and Craswell, 2019b) | - | 24.5 | 24.3 | 650* |
| Co-PACRR† (MacAvaney et al., 2017) | 14.8 | - | - | - |
| BM25 | 15.3 | 18.6 | 18.4 | 50 |
| BM25 + RM3 | 12.7 | - | 16.7 | 250 |
| BM25 + Doc2query (Ours) | 18.3 | 21.8 | 21.5 | 90 |
| BM25 + Doc2query + RM3 (Ours) | 15.5 | - | 20.0 | 350 |
| BM25 + BERT (Nogueira and Cho, 2019) | 34.8 | 35.9 | 36.5 | 3400‡ |
| BM25 + Doc2query + BERT (Ours) | 36.5 | 36.8 | 37.5 | 3500‡ |

Table 1: Main results on TREC-CAR and MS MARCO datasets. * Our measurement, in which Duet v2 takes 600ms per query, and BM25 retrieval takes 50ms. † Best submission of TREC-CAR 2017. ‡ We use Google's TPUs to re-rank with BERT.

# 4 Experimental Setup

To train and evaluate the models, we use the following two datasets:

MS MARCO (Bajaj et al., 2016) is a passage re-ranking dataset with 8.8M passages2 obtained from the top-10 results retrieved by the Bing search engine (from 1M queries). The training set contains approximately 500k pairs of query and relevant documents. Each query has one relevant passage, on average. The development and test sets contain approximately 6,900 queries each, but relevance labels are made public only for the development set.

TREC-CAR (Dietz et al., 2017) is a dataset where the input query is the concatenation of a Wikipedia article title with the title of one of its sections. The ground-truth documents are the paragraphs within that section. The corpus consists of all English Wikipedia paragraphs except the abstracts. The released dataset has five predefined folds, and we use the first four as a training set (approx. 3M queries) and the remaining as a validation set (approx. 700k queries). The test set is the same as the one used to evaluate submissions to TREC-CAR 2017 (approx. 2,250 queries).

RM3: To compare document expansion with query expansion, we applied the RM3 query expansion technique (Abdul-Jaleel et al., 2004). We apply query expansion to both unexpanded documents (BM25 + RM3) as well as the expanded documents (BM25 + Doc2query + RM3).

BM25 + BERT: We index and retrieve documents as in the BM25 condition and further re-rank the documents with BERT as described in Nogueira and Cho (2019).
BM25 + Doc2query + BERT: We expand, in- dex, and retrieve documents as in the BM25 + Doc2query condition and further re-rank the doc- uments with BERT. To evaluate the effectiveness of the methods on MS MARCO, we use its official metric, mean reciprocal rank of the top-10 documents (MRR@10). For TREC-CAR, we use mean av- erage precision (MAP). # 5 Results We evaluate the following ranking methods: BM25: We use the Anserini open-source IR toolkit (Yang et al., 2017, 2018)3 to index the orig- inal (non-expanded) documents and BM25 to rank the passages. During evaluation, we use the top- 1000 re-ranked passages. BM25 + Doc2query: We first expand the docu- ments using the proposed Doc2query method. We then index and rank the expanded documents ex- actly as in the BM25 method above. Results on both datasets are shown in Table 1. BM25 is the baseline. Document expansion with our method (BM25 + Doc2query) improves re- trieval effectiveness by ∼15% for both datasets. When we combine document expansion with a state-of-the-art re-ranker (BM25 + Doc2query + BERT), we achieve the best-known results to date on TREC CAR; for MS MARCO, we are near the state of the art.4 Our full re-ranking condi- tion (BM25 + Doc2query + BERT) beats BM25 + BERT alone, which verifies that the contribution 2 https://github.com/dfcf93/MSMARCO/ tree/master/Ranking 3 http://anserini.io/ 4The top leaderboard entries do not come with system de- scriptions, and so it is not possible to compare our approach with theirs. 3 Input Document: July is the hottest month in Washington DC with an average temperature of 27C (80F) and the coldest is January at 4C (38F) with the most daily sunshine hours at 9 in July. The wettest month is May with an average of 100mm of rain. Predicted Query: weather in washington dc Target query: what is the temperature in washington Input Document: The Delaware River flows through Philadelphia into the Delaware Bay. It flows through and aqueduct in the Roundout Reservoir and then flows through Philadelphia and New Jersey before emptying into the Delaware Bay. Predicted Query: what river flows through delaware Target Query: where does the delaware river start and end Input Document: sex chromosome - (genetics) a chromosome that determines the sex of an individual; mammals normally have two sex chromosomes chromosome - a threadlike strand of DNA in the cell nucleus that carries the genes in a linear order; humans have 22 chromosome pairs plus two sex chromosomes. Predicted Query: what is the relationship between genes and chromosomes Target Query: which chromosome controls sex characteristics Table 2: Examples of query predictions on MS MARCO compared to real user queries. of Doc2query is indeed orthogonal to that from post-indexing re-ranking. top of the ranked list, thus improving the overall MRR. Where exactly are these better scores coming from? We show in Table 2 examples of queries produced by our Doc2query model trained on MS MARCO. We notice that the model tends to copy some words from the input document (e.g., Wash- ington DC, River, chromosome), meaning that it can effectively perform term re-weighting (i.e., in- creasing the importance of key terms). Neverthe- less, the model also produces words not present in the input document (e.g., weather, relationship), which can be characterized as expansion by syn- onyms and other related terms. 
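To make the expansion pipeline concrete, the following is a minimal sketch of the Doc2query procedure using off-the-shelf components rather than the OpenNMT and Anserini setup described in the paper: a generic HuggingFace sequence-to-sequence checkpoint (the name is a placeholder, not a released model) generates ten queries per document with top-k sampling, and the rank_bm25 package stands in for the Lucene-based index.

```python
# Minimal sketch of the Doc2query expansion pipeline (illustrative only).
# Assumptions: a seq2seq checkpoint fine-tuned on (document, query) pairs is
# available under MODEL_NAME (placeholder); rank_bm25 stands in for Anserini.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from rank_bm25 import BM25Okapi

MODEL_NAME = "path/to/doc2query-checkpoint"  # placeholder, not the released model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME).eval()

def expand_document(doc_text: str, n_queries: int = 10) -> str:
    """Append n_queries sampled queries to the document text."""
    inputs = tokenizer(doc_text, truncation=True, max_length=400, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_length=100,            # queries are truncated to 100 tokens
            do_sample=True,            # top-k random sampling, as in the paper
            top_k=10,
            num_return_sequences=n_queries,
        )
    queries = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    # No special markup separates the original text from the predicted queries.
    return doc_text + " " + " ".join(queries)

corpus = ["July is the hottest month in Washington DC ...",
          "The Delaware River flows through Philadelphia ..."]
expanded = [expand_document(d) for d in corpus]

# Index the expanded documents and retrieve with BM25.
tokenized = [d.lower().split() for d in expanded]
bm25 = BM25Okapi(tokenized)
scores = bm25.get_scores("weather in washington dc".split())
ranked = sorted(range(len(corpus)), key=lambda i: -scores[i])
print(ranked)
```

The structure mirrors the method of Section 3; in the paper's experiments the expanded corpus is indexed with Anserini and the retrieved passages are optionally re-ranked with BERT.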
As a contrastive condition, we find that query expansion with RM3 hurts in both datasets, whether applied to the unexpanded corpus (BM25 + RM3) or the expanded version (BM25 + Doc2query + RM3). This is a somewhat surpris- ing result because query expansion usually im- proves effectiveness in document retrieval, but this can likely be explained by the fact that both MS MARCO and CAR are precision oriented. This re- sult shows that document expansion can be more effective than query expansion, most likely be- cause there are more signals to exploit as docu- ments are much longer. To quantify this analysis, we measured the pro- portion of predicted words that exist (copied) vs. not-exist (new) in the original document. Exclud- ing stop words, which corresponds to 51% of the predicted query words, we found that 31% are new while the rest (69%) are copied. If we expand MS MARCO documents using only new words and re- trieve the development set queries with BM25, we obtain an MRR@10 of 18.8 (as opposed to 18.4 when indexing with original documents). Expand- ing with copied words gives an MRR@10 of 19.7. We achieve a higher MRR@10 of 21.5 when doc- uments are expanded with both types of words, showing that they are complementary. la- Finally, for production retrieval systems, tency is often an important factor. Our method without a re-ranker (BM25 + Doc2query) adds a small latency increase over baseline BM25 (50 ms vs. 90 ms) but is approximately seven times faster than a neural re-ranker that has a three points higher MRR@10 (Single Duet v2, which is pre- sented as a baseline in MS MARCO by the orga- nizers). For certain operating scenarios, this trade- off in quality for speed might be worthwhile. # 6 Conclusion Further analyses show that one source of im- provement comes from having more relevant doc- uments for the re-ranker to consider. We find that the Recall@1000 of the MS MARCO devel- opment set increased from 85.3 (BM25) to 89.3 (BM25 + Doc2query). Results show that BERT is indeed able to identify these correct answers from the improved candidate pool and bring them to the We present the first successful use of document ex- pansion based on neural networks. Document ex- pansion holds substantial promise for neural mod- els because documents are much longer and thus contain richer input signals. Furthermore, the gen- eral approach allows developers to shift the com- putational costs of neural network inference from retrieval to indexing. Our implementation is based on integrating 4 three open-source toolkits: OpenNMT (Klein et al., 2017), Anserini, and TensorFlow BERT. The relative simplicity of our approach aids in the re- producibility of our results and paves the way for further improvements in document expansion. # Acknowledgments KC thanks support by NVIDIA and CIFAR and was partly supported by Samsung Advanced Insti- tute of Technology (Next Generation Deep Learn- ing: from pattern recognition to AI) and Samsung Electronics (Improving Deep Learning using La- tent Structure). JL thanks support by the Nat- ural Sciences and Engineering Research Council (NSERC) of Canada. # References Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Don- ald Metzler, Mark D. Smucker, Trevor Strohman, Howard Turtle, and Courtney Wade. 2004. UMass at TREC 2004: Novelty and HARD. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004). Gaithersburg, Maryland. 
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Ti- wary, and Tong Wang. 2016. MS MARCO: A Hu- man Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268 (2016). Adam Berger and John Lafferty. 1999. Information Re- trieval as Statistical Translation. In Proceedings of the 22nd Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval (SIGIR 1999). 222–229. Bodo Billerbeck and Justin Zobel. 2005. Document Expansion versus Query Expansion for Ad-hoc Re- trieval. In Proceedings of the 10th Australasian Doc- ument Computing Symposium. 34–41. Scott Deerwester, Susan T. Dumais, George W. Fur- nas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by Latent Semantic Analysis. Jour- nal of the Association for Information Science 41, 6 (1990), 391–407. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. arXiv:1810.04805 (2018). Laura Dietz, Manisha Verma, Filip Radlinski, and Nick Craswell. 2017. TREC Complex Answer Retrieval Overview. In Proceedings of the Twenty-Sixth Text REtrieval Conference (TREC 2017). 5 Miles Efron, Peter Organisciak, and Katrina Fenlon. 2012. Improving Retrieval of Short Texts Through Document Expansion. In Proceedings of the 35th in- ternational ACM SIGIR conference on Research and development in information retrieval (SIGIR 2012). 911–920. Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical Neural Story Generation. 2018. arXiv:1805.04833 (2018). 2014. and Adam: A Method for Stochastic Optimization. arXiv:1412.6980 (2014). Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Open- NMT: Open-Source Toolkit for Neural Machine Translation. In Proc. ACL. https://doi.org/ 10.18653/v1/P17-4012 Sean MacAvaney, Andrew Yates, and Kai Hui. 2017. Contextualized PACRR for complex answer re- trieval. In Proceedings of TREC. Bhaskar Mitra and Nick Craswell. 2019a. An Introduc- tion to Neural Information Retrieval. Foundations and Trends in Information Retrieval 13, 1 (2019), 1–126. Bhaskar Mitra and Nick Craswell. 2019b. An for Passage Re-ranking. Updated Duet Model arXiv:1903.07666 (2019). Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. arXiv:1901.04085 (2019). Kezban Dilek Onal, Ye Zhang, Ismail Sengor Al- tingovde, Md Mustafizur Rahman, Pinar Karagoz, Alex Braylan, Brandon Dang, Heng-Lu Chang, Henna Kim, Quinten Mcnamara, Aaron Angert, Edward Banner, Vivek Khetan, Tyler Mcdonnell, An Thanh Nguyen, Dan Xu, Byron C. Wallace, Maarten Rijke, and Matthew Lease. 2018. Neu- ral Information Retrieval: At the End of the Early Information Retrieval 21, 2-3 (June 2018), Years. 111–182. and Gene Golovchinsky. 2010. Reverted Indexing for Feedback and Expansion. In Proceedings of the 19th ACM International Conference on Informa- tion and Knowledge Management (CIKM 2010). 1049–1058. Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at TREC-3. In Proceedings of the 3rd Text REtrieval Conference (TREC-3). Gaithersburg, Maryland, 109–126. Joseph John Rocchio. 1971. Relevance Feedback in In The SMART Retrieval Information Retrieval. System—Experiments in Automatic Document Pro- cessing, Gerard Salton (Ed.). Prentice-Hall, Engle- wood Cliffs, New Jersey, 313–323. 
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural Machine Translation of Rare Words with Subword Units. arXiv:1508.07909 (2015). Amit Singhal and Fernando Pereira. 1999. Document Expansion for Speech Retrieval. In Proceedings of the 22nd Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval (SIGIR 1999). 34–41. Tao Tao, Xuanhui Wang, Qiaozhu Mei, and ChengX- iang Zhai. 2006. Language Model Information Re- trieval with Document Expansion. In Proceedings of the main conference on Human Language Technol- ogy Conference of the North American Chapter of the Association of Computational Linguistics. 407– 414. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Pro- cessing Systems. 5998–6008. Ellen M. Voorhees. 1994. Query Expansion Using Lexical-Semantic Relations. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Re- trieval (SIGIR 1994). 61–69. Improving the Effectiveness of Information Retrieval with Local Context Analysis. ACM Transactions on Informa- tion Systems 18, 1 (2000), 79–112. Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the Use of Lucene for Information Re- trieval Research. In Proceedings of the 40th Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval (SIGIR 2017). 1253–1256. Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality 10, 4 (2018), Article 16. 6 22 0 1 @ R R M 21.5 21 Beam Search Top-k Random Sampling 20.5 1 10 # of queries produced (beam size) 5 20 Figure 2: Retrieval effectiveness on the development set of MS MARCO when using different decoding methods to produce queries. On the x-axis, we vary the number of predicted queries that are appended to the original documents. # Appendix A Architecture and Training Details The architecture of our transformer model is iden- tical to the base model described in Vaswani et al. (2017), which has 6 layers for both encoder and decoder, 512 hidden units in each layer, 8 at- tention heads and 2048 hidden units in the feed- forward layers. We train with a batch size of 4096 tokens for a maximum of 30 epochs. We use Adam (Kingma and Ba, 2014) with a learn- ing rate of 10−3, β1 = 0.9, β2 = 0.998, L2 weight decay of 0.01, learning rate warmup over the first 8,000 steps, and linear decay of the learning rate. We use a dropout probability of 0.1 in all layers. Our implementation uses the OpenNMT frame- work (Klein et al., 2017); training takes place on four V100 GPUs. To avoid overfitting, we moni- tor the BLEU scores of the training and develop- ment sets and stop training when their difference is larger than four points. # Appendix B Evaluating Various Decoding Schemes Here we investigate how different decoding schemes used to produce queries affect the re- trieval effectiveness. We experiment with two de- coding methods: beam search and top-k random sampling with different beam sizes (number of generated hypotheses). Results are shown in Fig- ure 2. Top-k random sampling is slightly better than beam search across all beam sizes, and we observed a peak in the retrieval effectiveness when 10 queries are appended to the document. 
We conjecture that this peak occurs because too few queries yield insufficient diversity (fewer semantic matches) while too many queries introduce noise and reduce the contributions of the original text to the document representation.
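As a rough illustration of the decoding comparison in Appendix B, the snippet below shows how beam search and top-k random sampling would be swapped when generating expansion queries; it assumes the model and tokenizer objects from the earlier sketch and is not the OpenNMT configuration used in the paper.

```python
# Sketch of the two decoding schemes compared in Appendix B, assuming the
# `model` and `tokenizer` objects from the earlier expansion sketch.
def generate_queries(doc_text, n_queries=10, strategy="top_k"):
    inputs = tokenizer(doc_text, truncation=True, max_length=400, return_tensors="pt")
    if strategy == "beam":
        # Beam search: keep the n_queries highest-scoring hypotheses.
        outputs = model.generate(**inputs, max_length=100, num_beams=n_queries,
                                 num_return_sequences=n_queries, do_sample=False)
    else:
        # Top-k random sampling, which gave slightly better MRR@10 in the paper.
        outputs = model.generate(**inputs, max_length=100, do_sample=True,
                                 top_k=10, num_return_sequences=n_queries)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```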
{ "id": "1810.04805" }
1904.07531
Understanding the Behaviors of BERT in Ranking
This paper studies the performances and behaviors of BERT in ranking tasks. We explore several different ways to leverage the pre-trained BERT and fine-tune it on two ranking tasks: MS MARCO passage reranking and TREC Web Track ad hoc document ranking. Experimental results on MS MARCO demonstrate the strong effectiveness of BERT in question-answering focused passage ranking tasks, as well as the fact that BERT is a strong interaction-based seq2seq matching model. Experimental results on TREC show the gaps between the BERT pre-trained on surrounding contexts and the needs of ad hoc document ranking. Analyses illustrate how BERT allocates its attentions between query-document tokens in its Transformer layers, how it prefers semantic matches between paraphrase tokens, and how that differs with the soft match patterns learned by a click-trained neural ranker.
http://arxiv.org/pdf/1904.07531
Yifan Qiao, Chenyan Xiong, Zhenghao Liu, Zhiyuan Liu
cs.IR, cs.CL
There is an error in Table 1 and we will update it with the correct results. Please refer to the MS MARCO Leaderboard for the actual evaluation results
null
cs.IR
20190416
20190426
9 1 0 2 r p A 6 2 ] R I . s c [ 4 v 1 3 5 7 0 . 4 0 9 1 : v i X r a # Understanding the Behaviors of BERT in Ranking Yifan Qiao Tsinghua University [email protected] Chenyan Xiong Microsoft Research [email protected] # Zhenghao Liu Tsinghua University [email protected] Zhiyuan Liu Tsinghua University [email protected] ABSTRACT This paper studies the performances and behaviors of BERT in ranking tasks. We explore several different ways to leverage the pre-trained BERT and fine-tune it on two ranking tasks: MS MARCO passage reranking and TREC Web Track ad hoc document ranking. Experimental results on MS MARCO demonstrate the strong effec- tiveness of BERT in question-answering focused passage ranking tasks, as well as the fact that BERT is a strong interaction-based seq2seq matching model. Experimental results on TREC show the gaps between the BERT pre-trained on surrounding contexts and the needs of ad hoc document ranking. Analyses illustrate how BERT allocates its attentions between query-document tokens in its Transformer layers, how it prefers semantic matches between paraphrase tokens, and how that differs with the soft match patterns learned by a click-trained neural ranker. Our experiments observed rather different performances of BERT- based rankers on the two benchmarks. On MS MARCO, fine-tuning BERT significantly outperforms previous state-of-the-art Neu-IR models, and the effectiveness mostly comes from its strong cross question-passage interactions. However, on TREC ad hoc ranking, BERT-based rankers, even further pre-trained on MS MARCO rank- ing labels, perform worse than feature-based learning to rank and a Neu-IR model pre-trained on user clicks in Bing log. We further study the behavior of BERT through its learned at- tentions and term matches. We illustrate that BERT uses its deep Transformer architecture to propagate information more globally on the text sequences through its attention mechanism, compared to interaction-based neural rankers which operate more individu- ally on term pairs. Further studies reveal that BERT focuses more on document terms that directly match the query. It is similar to the semantic matching behaviors of previous surrounding context- based seq2seq models, but different from the relevance matches neural rankers learned from user clicks. 1 INTRODUCTION In the past several years, neural information retrieval (Neu-IR) research has developed several effective ways to improve rank- ing accuracy. Interaction-based neural rankers soft match query- documents using their term interactions [3]; Representation-based embeddings capture relevance signals using distributed represen- tations [7, 8]; large capacity networks learn relevance patterns using large scale ranking labels [1, 5, 7]. These techniques lead to promising performances on various ranking benchmarks [1, 3, 5, 7]. Recently, BERT, the pre-trained deep bidirectional Transformer, has shown strong performances on many language processing tasks [2]. BERT is a very deep language model that is pre-trained on the surrounding context signals in large corpora. Fine-tuning its pre-trained deep network works well on many downstream se- quence to sequence (seq2seq) learning tasks. Different from seq2seq learning, previous Neu-IR research considers such surrounding- context-trained neural models not as effective in search as relevance modeling [7, 8]. 
However, on the MS MARCO passage ranking task, fine-tuning BERT and treating ranking as a classification problem outperforms existing Neu-IR models by large margins [4]. This paper studies the performances and properties of BERT in ad hoc ranking tasks. We explore several ways to use BERT in ranking, as representation-based and interaction-based neural rankers, as in combination with standard neural ranking layers. We study the behavior of these BERT-based rankers on two benchmarks: the MS MARCO passage ranking task, which ranks answer passages for questions, and TREC Web Track ad hoc task, which ranks ClueWeb documents for keyword queries. 2 BERT BASED RANKERS This section describes the notable properties of BERT and how it is used in ranking. 2.1 Notable Properties of BERT We refer readers to the BERT and Transformer papers for their details [2, 6]. Here we mainly discuss its notable properties that influence its usage in ranking. Large Capacity. BERT uses standard Transformer architecture— multi-head attentions between all term pairs in the text sequence— but makes it very deep. Its main version, BERT-Large, includes 24 Transformer layers, each with 1024 hidden dimensions and 16 attention heads. It in total has 340 million learned parameters, much bigger than typical Neu-IR models. Pretraining. BERT learns from the surrounding context signals in Google News and Wikipedia corpora. It is learned using two tasks: the first predicts random missing words (15%) using the rest of the sentence (Mask-LM); the second predicts whether two sentences appear next to each other. In the second task, the two sentences are concatenated to one sequence; a special token “[SEP]” marks the sequence boundaries. Its deep network is very resource consuming in training: BERT-Large takes four days to train on 64 TPUs and easily takes months on typical GPUs clusters. Fine Tuning. End-to-end training BERT is unfeasible in most academic groups due to resource constraints. It is suggested to use the pre-trained BERT as a fine-tuning method [2]. BERT provides a “[CLS]” token at the start of the sequence, whose embeddings are treated as the representation of the text sequence(s), and suggests to add task-specific layers on the “[CLS]” embedding in fine-tuning. 2.2 Ranking with BERT We experiment with four BERT based ranking models: BERT (Rep), BERT (Last-Int), BERT (Mult-Int), and BERT (Term-Trans). All four methods use the pre-trained BERT to obtain the representa- tion of the query q, the document d, or the concatenation of the two qd. In the concatenation sequence qd, the query and document are concatenated to one sequence with boundary marked by a marker token (“[SEP]”). The rest of this section uses subscript i, j, or cls to denote the tokens in q, d, or qd, and superscript k to denote the layer of BERT’s Transformer network: k = 1 is the first layer upon word embedding > k and k = 24 or “last” is the last layer. For example, qd,), is the embedding of the “[CLS]” token, in the k-th layer of BERT on the concatenation sequence qd. BERT (Rep) uses BERT to represent g and d: BERT (Rep)(q.d) = cos(qltst, d'ast), BERT (Rep)(q.d) = cos(qltst, d'ast), (1) It first uses the last layers’ “[CLS]” embeddings as the query and document representations, and then calculates the ranking score via their cosine similarity (cos). Thus it is a representation-based ranker. BERT (Last-Int) applies BERT on the concatenated qd sequence: last BERT (Last-Int)(q,d) = w! 
qd,),.- (2) It uses the last layer’s “[CLS]” as the matching features and com- bines them linearly with weight w. It is the recommended way to use BERT [2] and is first applied to MARCO passage ranking by Nogueira and Cho [4]. The ranking score from BERT (Last-Int) includes all term pair interactions between the query and docu- ment via its Transformer’s cross-match attentions [6]. Thus it is an interaction-based ranker. BERT (Mult-Int) is defined as: ok BERT (Mult-Int)(q.d)= >) (why) dders. 3) 1<k<24 =k It extends BERT (Last-Int) by adding the matching features qd,), from all BERT’s layers, to study whether different layers of BERT provide different information. BERT (Term-Trans) adds a neural ranking network upon BERT, to study the performance of their combinations: sk (q,d) = Meanj, ; (cos(relu(P* a), BERT (Term-Trans)(q,d) = # a), relu(Pkdk ))) )\Whranss*(qd). (4) k (5) It first constructs the translation matrix between query and docu- ment, using the cosine similarities between the projections of their contextual embeddings. Then it combines the translation matrices from all layers using mean-pooling and linear combination. All four BERT based rankers are fine-tuned from the pre-trained BERT-Large model released by Google. The fine-tuning uses clas- sification loss, i.e., to classify whether a query-document pair is relevant or not, following the prior research [4]. We experimented with pairwise ranking loss but did not observe any difference. 2 3 EXPERIMENTAL METHODOLOGIES Datasets. Our experiments are conducted on MS MARCO passage reranking task and TREC Web Track ad hoc tasks with ClueWeb documents. MS MARCO includes question-alike queries sampled from Bing search log and the task is to rank candidate passages based on whether the passage contains the answer for the question1. It in- cludes 1,010,916 training queries and a million expert annotated answer passage relevance labels. We follow the official train/develop split, and use the given “Train Triples Small” to fine-tune BERT. ClueWeb includes documents from ClueWeb09-B and queries from TREC Web Track ad hoc retrieval task 2009-2012. In total, 200 queries with relevance judgements are provided by TREC. Our experiments follow the same set up in prior research and use the processed data shared by their authors [1]: the same 10-fold cross validation, same data pre-processing, and same top 100 candidate documents from Galago SDM to re-rank. We found that the TREC labels alone are not sufficient to fine- tune BERT nor train other neural rankers to outperform SDM. Thus we decided to first pre-train all neural methods on MS MARCO and then fine-tune them on ClueWeb. Evaluation Metrics. MS MARCO uses MRR@10 as the official evaluation. Results on the Develop set re-rank top 100 from BM25 in our implementation. Results on Evaluations set are obtained from the organizers and re-rank top 1000 from their BM25 im- plementation. ClueWeb results are evaluated by NDCG@20 and ERR@20, the official evaluation metrics of TREC Web Track. Statistical significance is tested by permutation tests with p < 0.05, except on MS MARCO Eval where per query scores are not returned by the leader board. Compared Methods. The BERT based rankers are compared with the following baselines: • Base is the base retrieval model that provides candidate documents to re-rank. It is BM25 on MS MARCO and Galago- SDM on ClueWeb. • LeToR is the feature-based learning to rank. It is RankSVM on MS MARCO and Coordinate Ascent on ClueWeb. 
K-NRM is the kernel-based interaction-based neural ranker [7]. • Conv-KNRM is the n-gram version of K-NRM. K-NRM and Conv-KNRM results on ClueWeb are obtained by our implementations and pre-trained on MS MARCO. We also include Conv-KNRM (Bing) which is the same Conv-KNRM model but pre-trained on Bing clicks by prior research [1]. The rest baselines reuse the existing results from prior research. Keeping experimental setups consistent makes all results directly comparable. Implementation Details. All BERT rankers are trained using Adam optimizer and learning rate 3e-6, except Term-Trans which trains the projection layer with learning rate 0.002. On one typical GPU, the batch size is 1 at most; fine-tuning takes on average one day to converge. Convergence is determined by the loss on a small sample of validation data (MS MARCO) or the validation fold (ClueWeb). In comparison, K-NRM and Conv-KNRM take about 12 hours to converge on MS MARCO and one hour on ClueWeb. On MS MARCO all rankers take about 5% training triples to converge. 1http://msmarco.org Table 1: Ranking performances. Relative performances in percentages are compared to LeToR, the feature-based learning to rank. Statistically significant improvements are marked by † (over Base), ‡ (over LeToR), § (over K-NRM), and ¶ (over Conv-KNRM). Neural methods on ClueWeb are pre-trained on MS MARCO, except Conv-KNRM (Bing) which is trained on user clicks. Method Base LeToR K-NRM Conv-KNRM Conv-KNRM (Bing) BERT (Rep) BERT (Last-Int) BERT (Mult-Int) BERT (Term-Trans) MS MARCO Passage Ranking ClueWeb09-B Ad hoc Ranking ERR@20 MRR@10 (Dev) MRR@10 (Eval) 0.1649 +13.44% 0.2496§ −9.45% 0.2681 0.1905 – – +4.04% +7.92% 0.1590 0.1982 +27.15% 0.2472 +29.76% 0.2118§ NDCG@20 −6.89% – −14.25% – −28.26% −10.78% 0.1814†‡§ ¶ +12.18% −34.05% +2.00% +3.64% +2.81% 0.1762 0.1946 0.2100†‡ 0.2474†‡§ n.a. 0.0432 0.3367†‡§ ¶ +73.03% 0.3590 +88.45% 0.2407§ ¶ 0.3060†‡§ ¶ +57.26% 0.3287 +72.55% 0.2407§ ¶ +70.10% 0.3561 +86.93% 0.2339§ ¶ 0.3310†§ ¶ 0.1387 0.1617 −40.68% 0.1160 −20.98% 0.1443§ +7.12% −44.82% 0.1066 −10.22% 0.1649†§ ¶ −10.23% 0.1676†§ ¶ −12.76% 0.1663†§ ¶ 0.2872†‡§ ¶ n.a. n.a. n.a. −77.79% 0.0153 −91.97% 0.1479 # -*Markers . . More than Average Majority sop words w 24 a 10 Regular Words § 16 CP T*N 5 0.1 am a 7 ef > NE Aes NS bd .! 1 3 5 7 9 11 13 15 17 19 21 23 BERT Layer 1 3 5 7 9 11 13 15 17 19 21 23 BERT Layer Figure 1: The attentions to Markers, Stopwords, and Regular Words in BERT (Last-Int). X-axes mark layer levels from shallow (1) to deep (24). Y-axes are the number of tokens sending More than Average or Majority attentions to each group. 4 EVALUATIONS AND ANALYSES This section evaluates the performances of BERT-based rankers and studies their behaviors. 4.1 Overall Performances Table 1 lists the evaluation results on MS MARCO (left) and ClueWeb (right). BERT-based rankers are very effective on MS MARCO: All interaction-based BERT rankers improved Conv-KNRM, a previous state-of-the-art, by 30%-50%. The advantage of BERT in MS MARCO lies in the cross query-document attentions from the Transformers: BERT (Rep) applies BERT on the query and document individually and discard these cross sequence interactions, and its performance is close to random. BERT is an interaction-based matching model and is not suggested to be used as a representation model. The more complex architectures in Multi-Int and Term-Trans perform worse than the simplest BERT (Last-Int), even with a lot of MARCO labels to fine-tune. 
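As a minimal sketch of the BERT (Last-Int) ranker discussed here: the query and passage are concatenated into one sequence and a single linear layer on the final-layer "[CLS]" embedding produces the relevance score. The bert-large-uncased checkpoint, the example pair, and the tokenization settings below are stand-ins, and the fine-tuning loop with classification loss is omitted.

```python
# Minimal sketch of BERT (Last-Int): score = w^T [CLS]_last computed on the
# concatenated query/document sequence. Fine-tuning details are omitted.
import torch
from transformers import BertTokenizer, BertModel

class BertLastInt(torch.nn.Module):
    def __init__(self, name="bert-large-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.score = torch.nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls_last = out.last_hidden_state[:, 0]    # "[CLS]" from the last layer
        return self.score(cls_last).squeeze(-1)   # linear combination of its dimensions

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
enc = tokenizer("what is a macchiato coffee drink",
                "a macchiato is an espresso drink with a small amount of milk ...",
                truncation=True, max_length=256, return_tensors="pt")
model = BertLastInt()
with torch.no_grad():
    print(model(**enc))  # unnormalized relevance score for the (query, passage) pair
```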
It is hard to modify the pre-trained BERT dramatically in fine-tuning. End-to-end training may make modifying pre-trained BERT more effective, but that would require more future research in how to make BERT trainable in accessible computing environments. BERT-based rankers behave rather differently on ClueWeb. Al- though pre-trained on large corpora and then on MARCO rank- ing labels, none of BERT models significantly outperforms LeToR on ClueWeb. In comparison, Conv-KNRM (Bing), the same Conv- KNRM model but pre-trained on Bing user clicks [1], performs the best on ClueWeb, and much better than Conv-KNRM pretrained on MARCO labels. These results demonstrate that MARCO passage ranking is closer to seq2seq task because of its question-answering focus, and BERT’s surrounding context based pre-training excels in this setting. In comparison, TREC ad hoc tasks require different signals other than surrounding context: pre-training on user clicks is more effective than on surrounding context based signals. 4.2 Learned Attentions This experiment illustrates the learned attention in BERT, which is the main component of its Transformer architecture. Our studies focus on MS MARCO and BERT (Last-Int), the best performing combination in our experiments, and randomly sampled 100 queries from MS MARCO Dev. We group the terms in the candidate passages into three groups: Markers (“[CLS]” and “[SEP]”), Stopwords, and Regular Words. The attentions allocated to each group is shown in Figure 1. The markers received most attention. Removing these markers decreases the MRR by 15%: BERT uses them to distinguish the two text sequences. Surprisingly, the stopwords received as much attention as non-stop words, but removing them has no effect in MRR performances. BERT learned these stopwords not useful and dumps redundant attention weights on them. As the network goes deeper, less tokens received the majority of other tokens attention: the attention spreads more on the whole sequence and the embeddings are contextualized. However, this does not necessarily lead to more global matching decisions, as studied in the next experiment. 3 0.75 0.5 0.25 i ie 0 a&: ee 0 025 05 0.75 1 BERT 0.5 ie} -0.5 4 J . -1 -0.5 i) 0.5 1 Conv-KNRM 0.75 0.5 0.5 ie} 0.25 -0.5 i ie 0 a&: ee 4 J . 0 025 05 0.75 1 -1 -0.5 i) 0.5 1 BERT Conv-KNRM Figure 2: Influences of removing regular terms in BERT (Last-Int) and Conv-KNRM on MS MARCO. Each point cor- responds to one query-passage pair with a random regular term removed from the passage. X-axes mark the original ranking scores and Y-axes are the scores after term removal. 4.3 Learned Term Matches This experiment studies the learned matching patterns in BERT (Last-Int) and compares it to Conv-KNRM. The same MS MARCO Dev sample from last experiment is used. We first study the influence of a term by comparing the ranking score of a document with and without the term. For each query- passage pair, we randomly remove a non-stop word, calculate the ranking score using BERT (Last-Int) or Conv-KNRM, and plot it w.r.t the original ranking score in Figure 2. Figure 2 illustrates two interesting behaviors of BERT. First, it as- signs more extreme ranking scores: most pairs receive either close to 1 or 0 ranking scores in BERT, while the ranking scores in Conv- KNRM are more uniformly distributed. 
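A rough sketch of this attention measurement is given below: with a HuggingFace BERT loaded with output_attentions=True, one can sum, layer by layer, how much attention flows to marker tokens, stopwords, and regular words. It approximates the analysis above (the paper counts tokens that send "more than average" or "majority" attention to each group); the stopword list, checkpoint, and example pair are stand-ins.

```python
# Sketch: per-layer attention mass received by markers, stopwords, and regular
# words for one query/passage pair (an approximation of the paper's analysis).
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

STOPWORDS = {"the", "a", "an", "of", "in", "is", "was", "to", "and", "on"}  # stand-in list

enc = tokenizer("what shows was sinbad on",
                "sinbad starred in the cosby show spin-off a different world ...",
                return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])

def group_of(tok):
    if tok in ("[CLS]", "[SEP]"):
        return "marker"
    return "stopword" if tok in STOPWORDS else "regular"

with torch.no_grad():
    attentions = model(**enc).attentions  # tuple: one (1, heads, len, len) tensor per layer

for layer, att in enumerate(attentions, start=1):
    # Average over heads, then sum the attention each token receives from all senders.
    received = att[0].mean(dim=0).sum(dim=0)
    totals = {"marker": 0.0, "stopword": 0.0, "regular": 0.0}
    for tok, val in zip(tokens, received.tolist()):
        totals[group_of(tok)] += val
    print(layer, {k: round(v, 2) for k, v in totals.items()})
```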
Second, there are a few terms in each document that determine the majority of BERT’s ranking scores; removing them significantly changes the rank- ing score—drop from 1 to near 0, while removing the majority of terms does not matter much in BERT—most points are grouped in the corners. It indicates that BERT is well-trained from the large scale pre-training. In comparison, terms contribute more evenly in Conv-KNRM; removing single term often varies the ranking scores of Conv-KNRM by some degree, shown by the wider band near the diagonal in Figure 2, but not as dramatically as in BERT. We manually examined those most influential terms in BERT (Last-Int) and Conv-KNRM. Some examples of those terms are listed in Table 2. The exact match terms play an important role in BERT (Last-Int); we found many of the influential terms in BERT are those appear in the question or close paraphrases. Conv-KNRM, on the other hand, prefers terms that are more loosely related to the query in search [1]. For example, on MS MARCO, it focuses more on the terms that are the role of milk in macchiato (“visible mark”), the show and the role Sinbad played (“Cosby” and “Coach Walter”), and the task related to Personal Meeting ID (“schedule”). These observations suggest that, BERT’s pre-training on sur- rounding contexts favors text sequence pairs that are closer in their semantic meanings. It is consistent with previous observations in Neu-IR research, that such surrounding context trained models are not as effective in TREC-style ad hoc document ranking for keyword queries [1, 7, 8]. 4 # Table 2: Example of most influential terms in MS MARCO passages in BERT and Conv-KNRM. Query: “What is a macchiato coffee drink” BERT (Last-Int) macchiato, coffee Conv-KNRM visible mark Query: “What shows was Sinbad on” BERT (Last-Int) Conv-KNRM Query: “What is a PMI id” BERT (Last-Int) Conv-KNRM 5 CONCLUSIONS AND FUTURE DIRECTION This paper studies the performances and behaviors of BERT in MS MARCO passage ranking and TREC Web Track ad hoc ranking tasks. Experiments show that BERT is an interaction-based seq2seq model that effectively matches text sequences. BERT based rankers perform well on MS MARCO passage ranking task which is focused on question-answering, but not as well on TREC ad hoc document ranking. These results demonstrate that MS MARCO, with its QA focus, is closer to the seq2seq matching tasks where BERT’s sur- rounding context based pre-training fits well, while on TREC ad hoc document ranking tasks, user clicks are better pre-training signals than BERT’s surrounding contexts. Our analyses show that BERT is a strong matching model with globally distributed attentions over the entire contexts. It also as- signs extreme matching scores to query-document pairs; most pairs get either one or zero ranking scores, showing it is well tuned by pre-training on large corpora. At the same time, pre-trained on surrounding contexts, BERT prefers text pairs that are semantically close. This observation helps explain BERT’s lack of effectiveness on TREC-style ad hoc ranking which is considered to prefer pre- training from user clicks than surrounding contexts. Our results suggest the need of training deeper networks on user clicks signals. In the future, it will be interesting to study how a much deeper model—as big as BERT—behaves compared to both shallower neural rankers when trained on relevance-based labels. REFERENCES [1] Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. 
Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. In Proceedings of WSDM 2018. ACM, 126–134. [2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint (2018). [3] Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In Proceedings of CIKM 2016. ACM, 55–64. [4] Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085 (2019). [5] Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, and Xueqi Cheng. 2017. Deeprank: A new deep architecture for relevance ranking in information retrieval. In Proceedings of CIKM 2017. ACM, 257–266. [6] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeuIPS 2017. 5998–6008. [7] Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of SIGIR 2017. ACM, 55–64. [8] Hamed Zamani and W Bruce Croft. 2017. Relevance-based word embedding. In Proceedings of SIGIR 2017. ACM, 505–514.
{ "id": "1901.04085" }
1904.07734
Three scenarios for continual learning
Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and--in case it is not--whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario.
http://arxiv.org/pdf/1904.07734
Gido M. van de Ven, Andreas S. Tolias
cs.LG, cs.AI, cs.CV, stat.ML
Extended version of work presented at the NeurIPS Continual Learning workshop (2018); 18 pages, 5 figures, 6 tables. Related to arXiv:1809.10635
null
cs.LG
20190415
20190415
9 1 0 2 r p A 5 1 ] G L . s c [ 1 v 4 3 7 7 0 . 4 0 9 1 : v i X r a # Three scenarios for continual learning Gido M. van de Ven1,2 & Andreas S. Tolias1,3 1 Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston 2 Computational and Biological Learning Lab, University of Cambridge, Cambridge 3 Department of Electrical and Computer Engineering, Rice University, Houston {ven,astolias}@bcm.edu # Abstract Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and—in case it is not—whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario. # Introduction Current state-of-the-art deep neural networks can be trained to impressive performance on a wide variety of individual tasks. Learning multiple tasks in sequence, however, remains a substantial challenge for deep learning. When trained on a new task, standard neural networks forget most of the information related to previously learned tasks, a phenomenon referred to as “catastrophic forgetting”. In recent years, numerous methods for alleviating catastrophic forgetting have been proposed. How- ever, due to the wide variety of experimental protocols used to evaluate them, many of these methods claim “state-of-the-art” performance [e.g., 1, 2, 3, 4, 5, 6]. To obscure things further, some methods shown to perform well in some experimental settings are reported to dramatically fail in others: compare the performance of elastic weight consolidation in Kirkpatrick et al. [1] and Zenke et al. [7] with that in Kemker et al. [8] and Kamra et al. [9]. To enable more structured comparisons of methods for reducing catastrophic forgetting, this report describes three distinct continual learning scenarios of increasing difficulty. These scenarios are distinguished by whether at test time task identity is provided and, if it is not, whether task identity must be inferred. We show that there are substantial differences between these three scenarios in terms of their difficulty and in terms of how effective different continual learning methods are on them. Moreover, using the split and permuted MNIST task protocols, we illustrate that any continual learning problem consisting of a series of clearly separated tasks can be performed according to all three scenarios. This is an extended version of work presented at the NeurIPS Continual Learning workshop (2018). As a second contribution, for each of the three scenarios we then provide an extensive comparison of recently proposed methods. 
These experiments reveal that even for experimental protocols involving the relatively simple classification of MNIST-digits, regularization-based approaches (e.g., elastic weight consolidation) completely fail when task identity needs to be inferred. We find that currently only replay-based approaches have the potential to perform well on all three scenarios. Well-documented and easy-to-adapt code for all compared methods is made available: https://github.com/GMvandeVen/continual-learning. # 2 Three Continual Learning Scenarios We focus on the continual learning problem in which a single neural network model needs to sequentially learn a series of tasks. During training, only data from the current task is available and the tasks are assumed to be clearly separated. This problem has been actively studied in recent years and many methods for alleviating catastrophic forgetting have been proposed. However, because of differences in the experimental protocols used for their evaluation, comparing methods’ performances can be difficult. In particular, one difference between experimental protocols we found to be very influential for the level of difficulty is whether at test time information about the task identity is available and—if it is not—whether the model is also required to explicitly identify the identity of the task it has to solve. Importantly, this means that even when studies use exactly the same sequence of tasks to be learned (i.e., the same task protocol), results are not necessarily comparable. In the hope to standardize evaluation and to enable more meaningful comparisons across papers, we describe three distinct scenarios for continual learning of increasing difficulty (Table 1). This categorization scheme was first introduced in a previous paper by us [10] and has since been adopted by several other studies [11, 12, 13]; the purpose of the current report is to provide a more in-depth treatment. Table 1: Overview of the three continual learning scenarios. Scenario Required at test time Task-IL Solve tasks so far, task-ID provided Domain-IL Solve tasks so far, task-ID not provided Class-IL Solve tasks so far and infer task-ID In the first scenario, models are always informed about which task needs to be performed. This is the easiest continual learning scenario, and we refer to it as task-incremental learning (Task-IL). Since task identity is always provided, in this scenario it is possible to train models with task-specific components. A typical network architecture used in this scenario has a “multi-headed” output layer, meaning that each task has its own output units but the rest of the network is (potentially) shared between tasks. In the second scenario, which we refer to as domain-incremental learning (Domain-IL), task identity is not available at test time. Models however only need to solve the task at hand; they are not required to infer which task it is. Typical examples of this scenario are protocols whereby the structure of the tasks is always the same, but the input-distribution is changing. A relevant real-world example is an agent who needs to learn to survive in different environments, without the need to explicitly identify the environment it is confronted with. Finally, in the third scenario, models must be able to both solve each task seen so far and infer which task they are presented with. We refer to this scenario as class-incremental learning (Class-IL), as it includes the common real-world problem of incrementally learning new classes of objects. 
# 2.1 Comparison with Single-Headed vs Multi-Headed Categorization Scheme

In another recent attempt to structure the continual learning literature, a distinction is highlighted between methods being evaluated using a "multi-headed" or a "single-headed" layout [14, 15]. This distinction relates to the scenarios we describe here in the sense that a multi-headed layout requires task identity to be known, while a single-headed layout does not. Our proposed categorization however differs in two important ways.

[Figure 1 shows the five two-digit tasks, each with a "first class" and a "second class".]
Figure 1: Schematic of split MNIST task protocol.

Table 2: Split MNIST according to each scenario.

Task-IL    With task given, is it the 1st or 2nd class? (e.g., 0 or 1)
Domain-IL  With task unknown, is it a 1st or 2nd class? (e.g., in [0, 2, 4, 6, 8] or in [1, 3, 5, 7, 9])
Class-IL   With task unknown, which digit is it? (i.e., choice from 0 to 9)

Firstly, the multi-headed vs single-headed distinction is tied to the architectural layout of a network's output layer, while our scenarios more generally reflect the conditions under which a model is evaluated. Although in the continual learning literature a multi-headed layout (i.e., using a separate output layer for each task) is the most common way to use task identity information, it is not the only way. Similarly, while a single-headed layout (i.e., using the same output layer for every task) might by itself not require task identity to be known, it is still possible for the model to use task identity in other ways (e.g., in its hidden layers, as in [4]).

Secondly, our categorization scheme extends upon the multi-headed vs single-headed split by recognizing that when task identity is not provided, there is a further distinction depending on whether the network is explicitly required to infer task identity. Importantly, we will show that the two scenarios resulting from this additional split substantially differ in difficulty (see section 5).

# 2.2 Example Task Protocols

To demonstrate the difference between the three continual learning scenarios, and to illustrate that any task protocol can be performed according to each scenario, we will perform two different task protocols for all three scenarios. The first task protocol is sequentially learning to classify MNIST-digits ('split MNIST' [7]; Figure 1). In the recent literature, this task protocol has been performed under the Task-IL scenario (in which case it is sometimes referred to as 'multi-headed split MNIST') and under the Class-IL scenario (in which case it is referred to as 'single-headed split MNIST'), but it could also be performed under the Domain-IL scenario (Table 2). The second task protocol is 'permuted MNIST' [16], in which each task involves classifying all ten MNIST-digits but with a different permutation applied to the pixels for every new task (Figure 2). Although permuted MNIST is most naturally performed according to the Domain-IL scenario, it can be performed according to the other scenarios too (Table 3).
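As an illustration of how these two task protocols can be constructed, a short sketch using PyTorch and torchvision is given below. It follows the description in section 4.1 (e.g., zero-padding to 32x32 before permuting), but it is not the authors' code and the helper names are ours.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms

def split_mnist_task_indices(dataset, classes_per_task=2):
    """Index sets for the five split MNIST tasks (digits (0,1), (2,3), ...)."""
    labels = dataset.targets.numpy()
    return [np.where(np.isin(labels, list(range(t * classes_per_task,
                                                 (t + 1) * classes_per_task))))[0]
            for t in range(10 // classes_per_task)]

def pixel_permutation(seed):
    """Transform that zero-pads a [1, 28, 28] image to 32x32 and applies a fixed pixel permutation."""
    perm = torch.from_numpy(np.random.RandomState(seed).permutation(32 * 32))
    def apply(img):
        img = F.pad(img, (2, 2, 2, 2))           # -> [1, 32, 32]
        return img.view(-1)[perm].view(1, 32, 32)
    return apply

mnist = datasets.MNIST("./data", train=True, download=True, transform=transforms.ToTensor())
split_tasks = split_mnist_task_indices(mnist)     # 5 tasks, two digits each

# One permuted MNIST "task" per random permutation (10 tasks in total).
permuted_tasks = [datasets.MNIST("./data", train=True, download=True,
                                 transform=transforms.Compose([transforms.ToTensor(),
                                                               pixel_permutation(seed)]))
                  for seed in range(10)]
```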
# 2.3 Task Boundaries

The scenarios described in this report assume that during training there are clear and well-defined boundaries between the tasks to be learned. If there are no such boundaries between tasks—for example because transitions between tasks are gradual or continuous—the scenarios we describe here no longer apply, and the continual learning problem becomes less structured and potentially a lot harder. Among others, training with randomly-sampled minibatches and multiple passes over each task's training data are no longer possible. We refer to [12] for a recent insightful treatment of the paradigm without well-defined task-boundaries.

[Figure 2 shows example inputs for Task 1 (permutation 1), Task 2 (permutation 2) through Task 10 (permutation 10).]
Figure 2: Schematic of permuted MNIST task protocol.

Table 3: Permuted MNIST according to each scenario.

Task-IL    Given permutation X, which digit?
Domain-IL  With permutation unknown, which digit?
Class-IL   Which digit and which permutation?

# 3 Strategies for Continual Learning

# 3.1 Task-specific Components

A simple explanation for catastrophic forgetting is that after a neural network is trained on a new task, its parameters are optimized for the new task and no longer for the previous one(s). This suggests that not optimizing the entire network on each task could be one strategy for alleviating catastrophic forgetting. A straightforward way to do this is to explicitly define a different sub-network per task. Several recent papers use this strategy, with different approaches for selecting the parts of the network for each task. A simple approach is to randomly assign which nodes participate in each task (Context-dependent Gating [XdG; 4]). Other approaches use evolutionary algorithms [17] or gradient descent [18] to learn which units to employ for each task. By design, these approaches are limited to the Task-IL scenario, as task identity is required to select the correct task-specific components.

# 3.2 Regularized Optimization

When task identity information is not available at test time, an alternative strategy is to still preferentially train a different part of the network for each task, but to always use the entire network for execution. One way to do this is by differently regularizing the network's parameters during training on each new task, which is the approach of Elastic Weight Consolidation [EWC; 1] and Synaptic Intelligence [SI; 7]. Both methods estimate for all parameters of the network how important they are for the previously learned tasks and penalize future changes to them accordingly (i.e., learning is slowed down for parts of the network important for previous tasks).

# 3.3 Modifying Training Data

An alternative strategy for alleviating catastrophic forgetting is to complement the training data for each new task to be learned with "pseudo-data" representative of the previous tasks. This strategy is referred to as replay. One option is to take the input data of the current task, label them using the model trained on the previous tasks, and use the resulting input-target pairs as pseudo-data. This is the approach of Learning without Forgetting [LwF; 19]. An important aspect of this method is that instead of labeling the replayed inputs as the most likely category according to the previous tasks' model (i.e., "hard targets"), it pairs them with the predicted probabilities for all target classes (i.e., "soft targets"). The objective for the replayed data is to match the probabilities predicted by the model being trained to these target probabilities. The approach of matching predicted probabilities of one network to those of another network had previously been used to compress (or "distill") information from one (large) network to another (smaller) network [20]. An alternative is to generate the input data to be replayed.
For this, besides the main model for task performance (e.g., classification), a separate generative model is sequentially trained on all tasks 4 to generate samples from their input data distributions. For the first application of this approach, which was called Deep Generative Replay (DGR), the generated input samples were paired with “hard targets” provided by the main model [21]. We note that it is possible to combine DGR with distillation by replaying input samples from a generative model and pairing them with soft targets [see also 6, 22]. We include this hybrid method in our comparison under the name DGR+distill. A final option is to store data from previous tasks and replay those. Such “exact replay” has been used in various forms to successfully boost continual learning performance in classification settings [e.g., 2, 3, 5]. A disadvantage of this approach is that it is not always possible, for example due to privacy concerns or memory constraints. # 3.4 Using Exemplars If it is possible to store data from previous tasks, another strategy for alleviating catastrophic forgetting is to use stored data as “exemplars” during execution. A recent method that successfully used this strategy is iCaRL [2]. This method uses a neural network for feature extraction and performs classification based on a nearest-class-mean rule [23] in that feature space, whereby the class means are calculated from the stored data. To protect the feature extractor network from becoming unsuitable for previously learned tasks, iCaRL also replays the stored data—as well as the current task inputs with a special form of distillation—during training of the feature extractor. # 4 Experimental Details In order to both explore the differences between the three continual learning scenarios and to comprehensively compare the performances of the above discussed approaches, we evaluated various recently proposed methods according to each scenario on both the split and permuted MNIST task protocols. # 4.1 Task Protocols For split MNIST, the original MNIST-dataset was split into five tasks, where each task was a two-way classification. The original 28x28 pixel grey-scale images were used without pre-processing. The standard training/test-split was used resulting in 60,000 training images (∼6000 per digit) and 10,000 test images (∼1000 per digit). For permuted MNIST, a sequence of ten tasks was used. Each task is now a ten-way classification. To generate the permutated images, the original images were first zero-padded to 32x32 pixels. For each task, a random permutation was then generated and applied to these 1024 pixels. No other pre-processing was performed. Again the standard training/test-split was used. # 4.2 Methods For a fair comparison, the same neural network architecture was used for all methods. This was a multi-layer perceptron with 2 hidden layers of 400 (split MNIST) or 1000 (permuted MNIST) nodes each. ReLU non-linearities were used in all hidden layers. Except for iCaRL, the final layer was a softmax output layer. In the Task-IL scenario, all methods used a multi-headed output layer, meaning that each task had its own output units and always only the output units of the task under consideration—i.e., either the current task or the replayed task—were active. In the Domain-IL scenario, all methods were implemented with a single-headed output layer, meaning that each task used the same output units (with each unit corresponding to one class in every task). 
In the Class-IL scenario, each class had its own output unit and always all units of the classes seen so far were active (see also sections A.1.1 and A.1.2 in the Appendix). We compared the following methods:

- XdG: Following Masse et al. [4], for each task a random subset of X% of the units in each hidden layer was fully gated (i.e., their activations set to zero), with X a hyperparameter whose value was set by a grid search (see section D in the Appendix). As this method requires availability of task identity at test time, it can only be used in the Task-IL scenario.

- EWC / Online EWC / SI: For these methods a regularization term was added to the loss, with regularization strength controlled by a hyperparameter: Ltotal = Lcurrent + λLregularization. The value of this hyperparameter was again set by a grid search. The way the regularization terms of these methods are calculated differs ([1, 24, 7]; see section A.2 in the Appendix), but they all aim to penalize changes to parameters estimated to be important for previously learned tasks.

- LwF / DGR / DGR+distill: For these methods a loss-term for replayed data was added to the loss of the current task. In this case a hyperparameter could be avoided, as the loss for the current and replayed data was weighted according to how many tasks the model had been trained on so far:

$L_{total} = \frac{1}{N_{tasks\ so\ far}} L_{current} + \left(1 - \frac{1}{N_{tasks\ so\ far}}\right) L_{replay}$

• For LwF, images of the current task were replayed with soft targets provided by a copy of the model stored after finishing training on the previous task ([19]; see also section A.1.2 in the Appendix).
• For DGR, a separate generative model (see below) was trained to generate the images to be replayed. Following Shin et al. [21], the replayed images were labeled with the most likely category predicted by a copy of the main model stored after training on the previous task (i.e., hard targets).
• For DGR+distill, also a separate generative model was trained to generate the images to be replayed, but these were then paired with soft targets (as in LwF) instead of hard targets (as in DGR).

- iCaRL: This method was implemented following Rebuffi et al. [2]; see section A.4 in the Appendix for details. For the results in Tables 4 and 5, a memory budget of 2000 was used. Due to the way iCaRL is set up with distillation of current task data on the classes of all previous tasks using binary classification loss, it can only be applied in the Class-IL scenario. However, two components of iCaRL—the use of exemplars for classification and the replay of stored data during training—are suitable for all scenarios. Both these components are explored in section C in the Appendix.

We included the following two baselines:

- None: The model was sequentially trained on all tasks in the standard way. This is also called fine-tuning, and can be seen as a lower bound.
- Offline: The model was always trained using the data of all tasks so far. This is also called joint training, and was included as it can be seen as an upper bound.

All methods except for iCaRL used the standard multi-class cross entropy classification loss for the model's predictions on the current task data (Lcurrent = Lclassification). For the split MNIST protocol, all models were trained for 2000 iterations per task using the ADAM-optimizer (β1 = 0.9, β2 = 0.999; [25]) with learning rate 0.001. The same optimizer was used for the permuted MNIST protocol, but with 5000 iterations and learning rate 0.0001.
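To make explicit how these loss terms are combined in each training iteration, the following is a schematic PyTorch-style sketch; it is ours, not the released implementation. Here reg_term stands for whichever regularization penalty a method defines, and y_replay for the hard targets used by DGR, while LwF and DGR+distill would replace the replayed cross-entropy with the distillation loss of section A.1.2.

```python
import torch.nn.functional as F

def training_loss(model, x_cur, y_cur, tasks_so_far,
                  x_replay=None, y_replay=None, reg_term=None, reg_strength=0.0):
    """Per-iteration loss: current-task loss, optionally combined with a replay
    loss (weighted by 1 / number of tasks so far) and/or a regularization term."""
    loss_current = F.cross_entropy(model(x_cur), y_cur)
    loss = loss_current
    if x_replay is not None:
        # Replay-based methods (hard targets shown here for simplicity).
        loss_replay = F.cross_entropy(model(x_replay), y_replay)
        loss = (1.0 / tasks_so_far) * loss_current + (1.0 - 1.0 / tasks_so_far) * loss_replay
    if reg_term is not None:
        # Regularization-based methods: L_total = L_current + lambda * L_regularization.
        loss = loss + reg_strength * reg_term
    return loss
```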
For each iteration, Lcurrent (and Lregularization) was calculated as average over 128 samples from the current task. If replay was used, in each iteration also 128 replayed samples were used to calculate Lreplay. For DGR and DGR+distill, a separate generative model was sequentially trained on all tasks. A symmetric variational autoencoder [VAE; 26] was used as generative model, with 2 fully connected hidden layers of 400 (split MNIST) or 1000 (permuted MNIST) units and a stochastic latent variable layer of size 100. A standard normal distribution was used as prior. See section A.3 in the Appendix for more details. Training of the generative model was also done with generative replay (provided by its own copy stored after finishing training on the previous task) and with the same hyperparameters (i.e., learning rate, optimizer, iterations, batch sizes) as for the main model. # 5 Results For the split MNIST task protocol, we found a clear difference in difficulty between the three continual learning scenarios (Table 4). All of the tested methods performed well in the Task-IL scenario, but LwF and especially the regularization-based methods (EWC, Online EWC and SI) struggled in the Domain-IL scenario and completely failed in the Class-IL scenario. Importantly, only methods using replay (DGR, DGR+distill and iCaRL) obtained good performance (above 90%) in the Domain-IL 6 Table 4: Average test accuracy (over all tasks) on the split MNIST task protocol. Each experiment was performed 20 times with different random seeds, reported is the mean (± SEM) over these runs. Approach Method Task-IL Domain-IL Class-IL Baselines None – lower bound Offline – upper bound 87.19 (± 0.94) 99.66 (± 0.02) 59.21 (± 2.04) 98.42 (± 0.06) 19.90 (± 0.02) 97.94 (± 0.03) Task-specific XdG 99.10 (± 0.08) - - Regularization EWC Online EWC SI 98.64 (± 0.22) 99.12 (± 0.11) 99.09 (± 0.15) 63.95 (± 1.90) 64.32 (± 1.90) 65.36 (± 1.57) 20.01 (± 0.06) 19.96 (± 0.07) 19.99 (± 0.06) Replay LwF DGR DGR+distill 99.57 (± 0.02) 99.50 (± 0.03) 99.61 (± 0.02) 71.50 (± 1.63) 95.72 (± 0.25) 96.83 (± 0.20) 23.85 (± 0.44) 90.79 (± 0.41) 91.79 (± 0.32) Replay + Exemplars iCaRL (budget = 2000) - - 94.57 (± 0.11) Table 5: Idem as Table 4, except on the permuted MNIST task protocol. Approach Method Task-IL Domain-IL Class-IL Baselines None – lower bound Offline – upper bound 81.79 (± 0.48) 97.68 (± 0.01) 78.51 (± 0.24) 97.59 (± 0.01) 17.26 (± 0.19) 97.59 (± 0.02) Task-specific XdG 91.40 (± 0.23) - - Regularization EWC Online EWC SI 94.74 (± 0.05) 95.96 (± 0.06) 94.75 (± 0.14) 94.31 (± 0.11) 94.42 (± 0.13) 95.33 (± 0.11) 25.04 (± 0.50) 33.88 (± 0.49) 29.31 (± 0.62) Replay LwF DGR DGR+distill 69.84 (± 0.46) 92.52 (± 0.08) 97.51 (± 0.01) 72.64 (± 0.52) 95.09 (± 0.04) 97.35 (± 0.02) 22.64 (± 0.23) 92.19 (± 0.09) 96.38 (± 0.03) Replay + Exemplars iCaRL (budget = 2000) - - 94.85 (± 0.03) and Class-IL scenarios. Somewhat strikingly, we found that in all scenarios, replaying images from the current task (LwF; e.g., replaying ‘2’s and ‘3’s in order not to forget how to recognize ‘0’s and ‘1’s), prevented the forgetting of previous tasks better than any of the regularization-based methods. We further note that in contrast to several recent reports [e.g., 3, 10, 11], we obtained competitive performance of EWC (and Online EWC) on the task-IL scenario variant of split MNIST (i.e., ‘multi- headed split MNIST’). 
Reason for this difference is that we explored a much wider hyperparameter range; our selected values were several orders of magnitude larger than those typically considered (see section D in the Appendix).1 For the permuted MNIST protocol, all methods except LwF performed well in both the Task-IL and the Domain-IL scenario (Table 5). In the Class-IL, however, the regularization-based methods failed again and only replay-based methods obtained good performance. The difference between the Task-IL and Domain-IL scenarios was only small for this task protocol, but this might be because—except in XdG—task identity information was only used in the output layer, while information about the applied permutation would likely be more useful in the network’s lower layers. Confirming this hypothesis, we found that each method’s performance in the Task-IL scenario could be improved by combining it with XdG (i.e., using task-identity in the hidden layers; see section B in the Appendix). Finally, while LwF had some success with the split MNIST protocol, this method did not work with the permuted MNIST protocol, presumably because now the inputs of the different tasks were uncorrelated due to the random permutations. 1An explanation for these extreme hyperparameter values is that the individual tasks of the split MNIST protocol (i.e., distinguishing between two digits) are relatively easy, making that after finishing training on each task the gradients—and thus the Fisher Information on which EWC is based—are very small. 7 # 6 Discussion Catastrophic forgetting is a major obstacle to the development of artificial intelligence applications capable of true lifelong learning [27, 28], and enabling neural networks to sequentially learn multiple tasks has become a topic of intense research. Yet, despite its scope, this research field is relatively unstructured: even though the same datasets tend to be used, direct comparisons between published methods are difficult. We demonstrate that an important difference between currently used experi- mental protocols is whether task identity is provided and—if it is not—whether it must be inferred. These two distinctions led us to identify three scenarios for continual learning of varying difficulty. It is hoped that these scenarios will help to provide structure to the continual learning field and make comparisons between studies easier. For each scenario, we performed a comprehensive comparison of recently proposed methods. An important conclusion is that for the class-incremental learning scenario (i.e., when task identity must be inferred), currently only replay-based methods are capable of producing acceptable results. In this scenario, even for relatively simple task protocols involving the classification of MNIST-digits, regularization-based methods such as EWC and SI completely fail. On the split MNIST task protocol, regularization-based methods also struggle in the domain-incremental learning scenario (i.e., when task identity does not need to be inferred but is also not provided). These results highlight that for the more challenging, ethological-relevant scenarios where task identity is not provided, replay might be an unavoidable tool. It should be stressed that a limitation of the current study is that MNIST-images are relatively easy to generate. It therefore remains an open question whether generative replay will still be so successful for task protocols with more complicated input distributions. 
However, promising for generative replay is that the capabilities of generative models are rapidly improving [e.g., 29, 30, 31]. Moreover, the good performance of LwF (i.e., replaying inputs from the current task) on the split MNIST task protocol suggests that even if the quality of replayed samples is not perfect, they could still be very helpful. For a further discussion of the scalability of generative replay we refer to [10]. Finally, as illustrated by iCaRL, an alternative / complement to replaying generated samples could be to store examples from previous tasks and replay those (see section C in the Appendix for a further discussion). # Acknowledgments We thank Mengye Ren, Zhe Li and anonymous reviewers for comments on various parts of this work. This research project has been supported by an IBRO-ISN Research Fellowship, by the Lifelong Learning Machines (L2M) program of the Defence Advanced Research Projects Agency (DARPA) via contract number HR0011-18-2-0025 and by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, IARPA, DoI/IBC, or the U.S. Government. # References [1] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, page 201611835, 2017. [2] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In Proc. CVPR, 2017. [3] Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning. arXiv preprint arXiv:1710.10628, 2017. [4] Nicolas Y Masse, Gregory D Grant, and David J Freedman. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. Proceedings of the national academy of sciences, pages E10467–E10475, 2018. 8 [5] Ronald Kemker and Christopher Kanan. Fearnet: Brain-inspired model for incremental learning. In International Conference on Learning Representations, 2018. URL https://openreview. net/forum?id=SJ1Xmf-Rb. [6] Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, Zhengyou Zhang, and Yun Fu. Incremental classifier learning with generative adversarial networks. arXiv preprint arXiv:1802.00853, 2018. [7] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In Proceedings of the 34th International Conference on Machine Learning, pages 3987–3995, 2017. [8] Ronald Kemker, Angelina Abitino, Marc McClure, and Christopher Kanan. Measuring catas- trophic forgetting in neural networks. arXiv preprint arXiv:1708.02072, 2017. [9] Nitin Kamra, Umang Gupta, and Yan Liu. Deep generative dual memory network for continual learning. arXiv preprint arXiv:1710.10368, 2017. [10] Gido M van de Ven and Andreas S Tolias. Generative replay with feedback connections as a general strategy for continual learning. arXiv preprint arXiv:1809.10635, 2018. 
[11] Yen-Chang Hsu, Yen-Cheng Liu, and Zsolt Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv:1810.12488, 2018. [12] Chen Zeno, Itay Golan, Elad Hoffer, and Daniel Soudry. Task agnostic continual learning using online variational bayes. arXiv preprint arXiv:1803.10123v3, 2019. [13] Kibok Lee, Kimin Lee, Jinwoo Shin, and Honglak Lee. Incremental learning with unlabeled data in the wild. arXiv preprint arXiv:1903.12648, 2019. [14] Sebastian Farquhar and Yarin Gal. Towards robust evaluations of continual learning. arXiv preprint arXiv:1805.09733, 2018. [15] Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. arXiv preprint arXiv:1801.10112, 2018. [16] Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013. [17] Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. Pathnet: Evolution channels gradient descent in super neural networks. arXiv preprint arXiv:1701.08734, 2017. [18] Joan Serrà, Dídac Surís, Marius Miron, and Alexandros Karatzoglou. Overcoming catastrophic forgetting with hard attention to the task. arXiv preprint arXiv:1801.01423, 2018. [19] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. [20] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. [21] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pages 2994–3003, 2017. [22] Ragav Venkatesan, Hemanth Venkateswara, Sethuraman Panchanathan, and Baoxin Li. A strategy for an uncompromising incremental learner. arXiv preprint arXiv:1705.00744, 2017. [23] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In Computer Vision–ECCV 2012, pages 488–501. Springer, 2012. 9 [24] Jonathan Schwarz, Jelena Luketina, Wojciech M Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. arXiv preprint arXiv:1805.06370, 2018. [25] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [26] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. [27] Dharshan Kumaran, Demis Hassabis, and James L McClelland. What learning systems do intelligent agents need? complementary learning systems theory updated. Trends in cognitive sciences, 20(7):512–534, 2016. [28] German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. Continual lifelong learning with neural networks: A review. arXiv preprint arXiv:1802.07569, 2018. [29] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014. [30] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. 
arXiv preprint arXiv:1601.06759, 2016. [31] Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015. [32] James Martens. New insights and perspectives on the natural gradient method. arXiv preprint arXiv:1412.1193, 2014. [33] Ferenc Huszár. Note on the quadratic penalties in elastic weight consolidation. Proceedings of the National Academy of Sciences, 115(11):E2496–E2497, 2018. [34] Carl Doersch. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908, 2016. 10 # A Additional experimental details PyTorch-code to perform the experiments described in this report is available online: https:// github.com/GMvandeVen/continual-learning. # A.1 Loss functions # A.1.1 Classification The standard per-sample cross entropy loss function for an input x labeled with a hard target y is given by: Lclassification (x, y; θ) = − log pθ (Y = y|x) (1) where pθ is the conditional probability distribution defined by the neural network whose trainable bias and weight parameters are collected in θ. It is important to note that in this report this probability distribution is not always defined over all output nodes of the network, but only over the “active nodes”. This means that the normalization performed by the final softmax layer only takes into account these active nodes, and that learning is thus restricted to those nodes. For experiments performed according to the Task-IL scenario, for which we use a “multi-headed” softmax layer, always only the nodes of the task under consideration are active. Typically this is the current task, but for replayed data it is the task that is (intended to be) replayed. For the Domain-IL scenario always all nodes are active. For the Class-IL scenario, the nodes of all tasks seen so far are active, both when training on current and on replayed data. For the method DGR, there are also some subtle differences between the continual learning scenarios when generating hard targets for the inputs to be replayed. With the Task-IL scenario, only the classes of the task that is intended to be replayed can be predicted (in each iteration the available replays are equally divided over the previous tasks). With the Domain-IL scenario always all classes can be predicted. With the Class-IL scenario only classes from up to the previous task can be predicted. # A.1.2 Distillation The methods LwF and DGR+distill use distillation loss for their replayed data. For this, each input x to be replayed is labeled with a “soft target”, which is a vector containing a probability for each active class. This target probability vector is obtained using a copy of the main model stored after finishing training on the most recent task, and the training objective is to match the probabilities predicted by the model being trained to these target probabilities (by minimizing the cross entropy between them). Moreover, as is common for distillation, these two probability distributions that we want to match are made softer by temporary raising the temperature T of their models’ softmax layers.2 This means that before the softmax normalization is performed on the logits, these logits are first divided by T . 
For an input x to be replayed during training of task K, the soft targets are given by the vector $\tilde{y}$ whose cth element is given by:

$\tilde{y}_c = p^{T}_{\hat{\theta}^{(K-1)}}(Y = c \,|\, x)$    (2)

whereby $\hat{\theta}^{(K-1)}$ is the vector with parameter values at the end of training on task K − 1, and $p^{T}_{\theta}$ is the conditional probability distribution defined by the neural network with parameters $\theta$ and with the temperature of its softmax layer raised to T. The distillation loss function for an input x labeled with a soft target vector $\tilde{y}$ is then given by:

$L_{distillation}(x, \tilde{y}; \theta) = -T^2 \sum_{c=1}^{N_{classes}} \tilde{y}_c \log p^{T}_{\theta}(Y = c \,|\, x)$    (3)

where the scaling by $T^2$ is included to ensure that the relative contribution of this objective matches that of a comparable objective with hard targets [20].

When generating soft targets for the inputs to be replayed, there are again subtle differences between the three continual learning scenarios. With the Task-IL scenario, the soft target probability distribution is defined only over the classes of the task intended to be replayed. With the Domain-IL scenario this distribution is always over all classes. With the Class-IL scenario, the soft target probability distribution is first generated only over the classes from up to the previous task and then zero probabilities are added for all classes in the current task.

2 The same temperature should be used for calculating the target probabilities and for calculating the probabilities to be matched during training; but during testing the temperature should be set back to 1. A typical value for this temperature is 2, which is the value used in this report.

# A.2 Regularization terms

# A.2.1 EWC

The regularization term of elastic weight consolidation [EWC; 1] consists of a quadratic penalty term for each previously learned task, whereby each task's term penalizes the parameters for how different they are compared to their value directly after finishing training on that task. The strength of each parameter's penalty depends for every task on how important that parameter was estimated to be for that task, with higher penalties for more important parameters. For EWC, a parameter's importance is estimated for each task by the parameter's corresponding diagonal element of that task's Fisher Information matrix, evaluated at the optimal parameter values after finishing training on that task. The EWC regularization term for task K > 1 is given by:

$L^{(K)}_{regularization_{EWC}}(\theta) = \sum_{k=1}^{K-1} \frac{1}{2} \sum_{i=1}^{N_{params}} F^{(k)}_{ii} \left( \theta_i - \hat{\theta}^{(k)}_i \right)^2$    (4)

whereby $\hat{\theta}^{(k)}_i$ is the ith element of $\hat{\theta}^{(k)}$, which is the vector with parameter values at the end of training of task k, and $F^{(k)}_{ii}$ is the ith diagonal element of $F^{(k)}$, which is the Fisher Information matrix of task k evaluated at $\hat{\theta}^{(k)}$. Following the definitions and notation in Martens [32], the ith diagonal element of $F^{(k)}$ is defined as:

$F^{(k)}_{ii} = E_{x \sim Q^{(k)}_{x}} \left[ E_{y \sim p_{\theta}(y|x)} \left( \frac{\partial \log p_{\theta}(Y = y \,|\, x)}{\partial \theta_i} \right)^2 \right]_{\theta = \hat{\theta}^{(k)}}$    (5)

whereby $Q^{(k)}_{x}$ is the (theoretical) input distribution of task k and $p_{\theta}$ is the conditional distribution defined by the neural network with parameters $\theta$. Note that in Kirkpatrick et al. [1] it is not specified exactly how these $F^{(k)}_{ii}$ are calculated (except that it is said to be "easy"); but we have been made aware that they are calculated as the diagonal elements of the "true Fisher Information":

$F^{(k)}_{ii} = \frac{1}{|S^{(k)}|} \sum_{x \in S^{(k)}} \left( \frac{\partial \log p_{\theta}(Y = \hat{y}^{(k)}_{x} \,|\, x)}{\partial \theta_i} \right)^2 \Bigg|_{\theta = \hat{\theta}^{(k)}}$    (6)

whereby $S^{(k)}$ is the training data of task k and $\hat{y}^{(k)}_{x}$ is the label predicted by the model with parameters $\hat{\theta}^{(k)}$ given x.3 The calculation of the Fisher Information is time-consuming, especially if tasks have a lot of training data. In practice it might therefore sometimes be beneficial to trade accuracy for speed by using only a subset of a task's training data for this calculation (e.g., by introducing another hyperparameter $N_{Fisher}$ that sets the maximum number of samples to be used in equation 6).

3 An alternative way to calculate $F^{(k)}_{ii}$ would be, instead of taking for each training input x only the most likely label predicted by model $p_{\hat{\theta}^{(k)}}$, to sample for each x multiple labels from the entire conditional distribution defined by this model (i.e., to approximate the inner expectation of equation 5 for each training sample x with Monte Carlo sampling from $p_{\hat{\theta}^{(k)}}(\cdot|x)$). Another option is to use the "empirical Fisher Information", by replacing in equation 6 the predicted label $\hat{y}^{(k)}_{x}$ by the observed label y. The results reported in Tables 4 and 5 do not depend much on the choice of how $F^{(k)}_{ii}$ is calculated.
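As an illustration of how equations 4 and 6 translate into code, the following is a minimal PyTorch-style sketch; it is ours, not the implementation released with this report, and the argument n_samples plays the role of the optional N_Fisher mentioned above.

```python
import torch
import torch.nn.functional as F

def estimate_diag_fisher(model, data_loader, n_samples=None, device="cpu"):
    """Diagonal of the 'true' Fisher Information (equation 6) at the current parameters."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    count = 0
    for x, _ in data_loader:
        x = x.to(device)
        for i in range(x.size(0)):
            if n_samples is not None and count >= n_samples:
                break
            model.zero_grad()
            log_probs = F.log_softmax(model(x[i:i + 1]), dim=1)
            label = log_probs.argmax(dim=1).item()   # the model's own most likely label
            log_probs[0, label].backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
            count += 1
        if n_samples is not None and count >= n_samples:
            break
    return {n: f / max(count, 1) for n, f in fisher.items()}

def ewc_penalty(model, old_params, fisher_diags):
    """Sum over previous tasks of 0.5 * F_ii * (theta_i - theta_hat_i)^2 (equation 4).
    old_params / fisher_diags: one dict per previous task, keyed by parameter name."""
    penalty = 0.0
    for params_k, fisher_k in zip(old_params, fisher_diags):
        for n, p in model.named_parameters():
            penalty = penalty + 0.5 * (fisher_k[n] * (p - params_k[n]) ** 2).sum()
    return penalty
```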
# A.2.2 Online EWC

A disadvantage of the original formulation of EWC is that the number of quadratic terms in its regularization term grows linearly with the number of tasks. This is an important limitation, as for a method to be applicable in a true lifelong learning setting its computational cost should not increase with the number of tasks seen so far. It was pointed out by Huszár [33] that a slightly stricter adherence to the approximate Bayesian treatment of continual learning, which had been used as motivation for EWC, actually results in only a single quadratic penalty term on the parameters that is anchored at the optimal parameters after the most recent task and with the weight of the parameters' penalties determined by a running sum of the previous tasks' Fisher Information matrices. This insight was adopted by Schwarz et al. [24], who proposed a modification to EWC called online EWC. The regularization term of online EWC when training on task K > 1 is given by:

$L^{(K)}_{regularization_{online\ EWC}}(\theta) = \sum_{i=1}^{N_{params}} \frac{1}{2} \tilde{F}^{(K-1)}_{ii} \left( \theta_i - \hat{\theta}^{(K-1)}_i \right)^2$    (7)

whereby $\tilde{F}^{(K-1)}_{ii}$ is a running sum of the ith diagonal elements of the Fisher Information matrices of the first K − 1 tasks, with a hyperparameter γ ≤ 1 that governs a gradual decay of each previous task's contribution. That is: $\tilde{F}^{(k)}_{ii} = \gamma \tilde{F}^{(k-1)}_{ii} + F^{(k)}_{ii}$ (with $\tilde{F}^{(1)}_{ii} = F^{(1)}_{ii}$), whereby $F^{(k)}_{ii}$ is the ith diagonal element of the Fisher Information matrix of task k calculated according to equation 6.

# A.2.3 SI

Similar as for online EWC, the regularization term of synaptic intelligence [SI; 7] consists of only one quadratic term that penalizes changes to parameters away from their values after finishing training on the previous task, with the strength of each parameter's penalty depending on how important that parameter is thought to be for the tasks learned so far. To estimate parameters' importance, for every new task k a per-parameter contribution to the change of the loss is first calculated for each parameter i as follows:

$\omega^{(k)}_i = \sum_{t=1}^{N_{iters}} \left( \theta_i[t^{(k)}] - \theta_i[(t-1)^{(k)}] \right) \left( - \frac{\delta L_{total}[t^{(k)}]}{\delta \theta_i} \right)$    (8)

with $N_{iters}$ the total number of iterations per task, $\theta_i[t^{(k)}]$ the value of the ith parameter after the tth training iteration on task k and $\frac{\delta L_{total}[t^{(k)}]}{\delta \theta_i}$ the gradient of the loss with respect to the ith parameter during the tth training iteration on task k. For every task, these per-parameter contributions are normalized by the square of the total change of that parameter during training on that task plus a small dampening term ξ (set to 0.1, to bound the resulting normalized contributions when a parameter's total change goes to zero), after which they are summed over all tasks so far. The estimated importance of parameter i for the first K − 1 tasks is thus given by:

$\Omega^{(K-1)}_i = \sum_{k=1}^{K-1} \frac{\omega^{(k)}_i}{\left( \Delta^{(k)}_i \right)^2 + \xi}$    (9)

with $\Delta^{(k)}_i$ the total change of parameter i during training on task k (i.e., the difference between the value $\hat{\theta}^{(k)}_i$ of parameter i after finishing training on task k and the value it was initialized with when starting training on task k). The regularization term of SI to be used during training on task K is then given by:

$L^{(K)}_{regularization_{SI}}(\theta) = \sum_{i=1}^{N_{params}} \Omega^{(K-1)}_i \left( \theta_i - \hat{\theta}^{(K-1)}_i \right)^2$    (10)
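The following is a similar illustrative sketch for SI (equations 8 to 10); class and variable names are ours, and the bookkeeping of pre-update parameter values and gradients is left to the training loop.

```python
import torch

class SITracker:
    """Illustrative helper for the SI importance estimate; not the released code."""

    def __init__(self, model, damping=0.1):
        self.damping = damping
        self.w = {n: torch.zeros_like(p) for n, p in model.named_parameters()}      # omega for current task
        self.omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}  # accumulated Omega
        self.prev_params = {n: p.detach().clone() for n, p in model.named_parameters()}

    def accumulate(self, model, params_before_step, grads):
        # Call after every parameter update:
        # w_i += (theta_i[t] - theta_i[t-1]) * (-gradient_i[t])   (equation 8)
        for n, p in model.named_parameters():
            self.w[n] += (p.detach() - params_before_step[n]) * (-grads[n])

    def consolidate(self, model):
        # Call after finishing a task:
        # Omega_i += w_i / (Delta_i^2 + xi), then reset the per-task quantities (equation 9).
        for n, p in model.named_parameters():
            delta = p.detach() - self.prev_params[n]
            self.omega[n] += self.w[n] / (delta ** 2 + self.damping)
            self.w[n].zero_()
            self.prev_params[n] = p.detach().clone()

    def penalty(self, model):
        # L_reg = sum_i Omega_i * (theta_i - theta_hat_i)^2   (equation 10)
        return sum((self.omega[n] * (p - self.prev_params[n]) ** 2).sum()
                   for n, p in model.named_parameters())
```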
# A.3 Generative Model

The separate generative model that is used for DGR and DGR+distill is a variational autoencoder [VAE; 26], of which both the encoder network $q_{\phi}$ and the decoder network $p_{\psi}$ are multi-layer perceptrons with 2 hidden layers containing 400 (split MNIST) or 1000 (permuted MNIST) units with ReLU non-linearity. The stochastic latent variable layer z has 100 units and the prior over them is the standard normal distribution. Following Kingma and Welling [26], the "latent variable regularization term" of this VAE is given by:

$L_{latent}(x; \phi) = \frac{1}{2} \sum_{j=1}^{100} \left( 1 + \log\!\left( (\sigma^{(x)}_j)^2 \right) - (\mu^{(x)}_j)^2 - (\sigma^{(x)}_j)^2 \right)$    (11)

whereby $\mu^{(x)}_j$ and $\sigma^{(x)}_j$ are the jth elements of respectively $\mu^{(x)}$ and $\sigma^{(x)}$, which are the outputs of the encoder network $q_{\phi}$ given input x. Following Doersch [34], the output layer of the decoder network $p_{\psi}$ has a sigmoid non-linearity and the "reconstruction term" is given by the binary cross entropy between the original and decoded pixel values:

$L_{recon}(x; \phi, \psi) = \sum_{p=1}^{N_{pixels}} x_p \log(\tilde{x}_p) + (1 - x_p) \log(1 - \tilde{x}_p)$    (12)

whereby $x_p$ is the value of the pth pixel of the original input image x and $\tilde{x}_p$ is the value of the pth pixel of the decoded image $\tilde{x} = p_{\psi}(z^{(x)})$, with $z^{(x)} = \mu^{(x)} + \sigma^{(x)} \cdot \epsilon$, whereby $\epsilon$ is sampled from $N(0, I_{100})$. The per-sample loss function for an input x is then given by [26]:

$L_{generative}(x; \phi, \psi) = L_{recon}(x; \phi, \psi) + L_{latent}(x; \phi)$    (13)

Similar to the main model, the generative model is trained with replay generated by its own copy stored after finishing training on the previous task.
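For concreteness, a compact PyTorch sketch of a VAE of the kind described above is given below. It follows the layer sizes of this section but is only illustrative, and its loss method returns the negative of equations 11 to 13 (i.e., the quantity that is minimized during training).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReplayVAE(nn.Module):
    def __init__(self, image_size=28 * 28, hidden=400, z_dim=100):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(image_size, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, image_size), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

    def loss(self, x, x_rec, mu, logvar):
        # Negative of L_recon + L_latent (equations 11-13), averaged over the batch.
        recon = F.binary_cross_entropy(x_rec, x.flatten(1), reduction="sum") / x.size(0)
        latent = -0.5 * torch.sum(1 + logvar - mu ** 2 - logvar.exp()) / x.size(0)
        return recon + latent

    @torch.no_grad()
    def sample(self, n):
        # Draw n replay images by decoding samples from the standard normal prior.
        return self.dec(torch.randn(n, self.mu.out_features))
```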
# A.4 iCaRL

# A.4.1 Feature Extractor Network Architecture

For our implementation of iCaRL [2], we used a feature extractor with the same architecture as the neural network used as classifier with the other methods. The only difference is that the softmax output layer was removed. We denote this feature extractor by $\psi_{\phi}(.)$, with its trainable parameters contained in the vector $\phi$. These parameters were trained based on binary classification / distillation loss (see below). For this—during training only!—a sigmoid output layer was appended to $\psi_{\phi}$. The resulting extended network outputs for any class c ∈ {1, ..., $N_{classes\ so\ far}$} a binary probability whether input x belongs to it:

$p^{c}_{\theta}(x) = \frac{1}{1 + e^{-w^{T}_{c} \psi_{\phi}(x)}}$    (14)

with $\theta = (\phi, w_1, ..., w_{N_{classes\ so\ far}})$ a vector containing all iCaRL's trainable parameters. Whenever a new class c was encountered, new parameters $w_c$ were added to $\theta$.

Training On each task, the parameters in $\theta$ were trained on an extended dataset containing the current task's training data as well as all stored data from previous tasks (see section A.4.2). When training on task K, each input x with hard target y in this extended dataset is paired with a new target-vector $\bar{y}$ whose cth element is given by:

$\bar{y}_c = \begin{cases} p^{c}_{\hat{\theta}^{(K-1)}}(x) & \text{if class } c \text{ in task } 1, ..., K-1 \\ \delta_{y=c} & \text{if class } c \text{ in task } K \end{cases}$    (15)

whereby $\hat{\theta}^{(K-1)}$ is the vector with parameter values at the end of training of task K − 1. The per-sample loss function for an input x labeled with such an "old-task-soft-target / new-task-hard-target" vector $\bar{y}$ is then given by:

$L_{icarl}(x, \bar{y}; \theta) = - \sum_{c=1}^{N_{classes\ so\ far}} \left[ \bar{y}_c \log p^{c}_{\theta}(x) + (1 - \bar{y}_c) \log\!\left( 1 - p^{c}_{\theta}(x) \right) \right]$    (16)

# A.4.2 Selection of Stored Data

The assumption under which iCaRL operates is that up to B data-points (referred to as 'exemplars') are allowed to be stored in memory. The available memory budget is evenly distributed over the classes seen so far, resulting in $m = \lfloor B / N_{classes\ so\ far} \rfloor$ stored exemplars per class. After training on a task is finished, the selection of data stored in memory is updated as follows:

Create exemplar-sets for new classes For each new class c, iteratively m exemplars are selected based on their extracted feature vectors according to a procedure referred to as 'herding'. In each iteration, a new example from class c is selected such that the average feature vector over all selected examples is as close as possible to the average feature vector over all available examples of class c. Let $X^c = \{x_1, ..., x_{N_c}\}$ be the set of all available examples of class c and let $\mu^c = \frac{1}{N_c} \sum_{x \in X^c} \psi_{\phi}(x)$ be the average feature vector over set $X^c$. The nth exemplar (for n = 1, ..., m) to be selected for class c is then given by:

$p^{c}_{n} = \underset{x \in X^c}{\arg\min} \left\| \mu^c - \frac{1}{n} \left[ \psi_{\phi}(x) + \sum_{j=1}^{n-1} \psi_{\phi}(p^{c}_{j}) \right] \right\|$    (17)

This results in ordered exemplar-sets $P^c = \{p^{c}_{1}, ..., p^{c}_{m}\}$ for each new class c.

Reduce exemplar-sets for old classes If the existing exemplar-sets for the old classes contain more than m exemplars each, for each old class the last selected exemplars are discarded until only m exemplars per class are left.

# A.4.3 Nearest-Class-Mean Classification

Classification by iCaRL is performed according to a nearest-class-mean rule in feature space based on the stored exemplars. For this, let $\mu_c = \frac{1}{|P^c|} \sum_{p \in P^c} \psi_{\phi}(p)$ for c = 1, ..., $N_{classes\ so\ far}$. The label $y^{*}$ predicted for a new input x is then given by:

$y^{*} = \underset{c=1,...,N_{classes\ so\ far}}{\arg\min} \left\| \psi_{\phi}(x) - \mu_c \right\|$    (18)
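The herding selection of equation 17 and the nearest-class-mean rule of equation 18 can be sketched as follows. Here feature_fn stands in for the feature extractor $\psi_{\phi}$, the names are ours, and the explicit guard against re-selecting the same example is an implementation choice rather than part of equation 17.

```python
import torch

def herd_exemplars(features, m):
    """Greedy 'herding' (cf. equation 17): pick m exemplar indices whose running
    feature mean stays as close as possible to the class mean."""
    class_mean = features.mean(dim=0)
    selected, running_sum = [], torch.zeros_like(class_mean)
    for n in range(1, m + 1):
        # Mean of the selected features if each remaining candidate were added next.
        candidate_means = (running_sum.unsqueeze(0) + features) / n
        dists = torch.norm(class_mean.unsqueeze(0) - candidate_means, dim=1)
        if selected:                              # do not select the same example twice
            dists[selected] = float("inf")
        idx = int(torch.argmin(dists))
        selected.append(idx)
        running_sum += features[idx]
    return selected

@torch.no_grad()
def ncm_classify(feature_fn, x, exemplar_sets):
    """Nearest-class-mean rule (cf. equation 18); exemplar_sets is a list with one
    tensor of stored exemplars per class, in class order."""
    feats = feature_fn(x)                                                        # [batch, d]
    class_means = torch.stack([feature_fn(P).mean(dim=0) for P in exemplar_sets])  # [n_classes, d]
    return torch.cdist(feats, class_means).argmin(dim=1)
```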
# B Using Task Identity in Hidden Layers

For the permuted MNIST protocol, there were only small differences in Table 5 between the results of the Task-IL and Domain-IL scenario. This suggests that for this protocol, it is actually not so important whether task identity information is available at test time. However, in the main text it was hypothesized that this was the case because for most of the methods—only exception being XdG—task identity was only used in the network's output layer, while information about which permutation was applied is likely more useful in the lower layers. To test this, we performed all methods again on the Task-IL scenario of the permuted MNIST protocol, this time instead using task identity information in the network's hidden layers by combining each method with XdG. This significantly increased the performance of every method (Table B.1), thereby demonstrating that also for the permuted MNIST protocol, the Task-IL scenario is indeed easier than the Domain-IL scenario.

Table B.1: Comparing two ways of using task identity information in the Task-IL scenario of the permuted MNIST task protocol. In the first column, each method uses a separate output layer for each task (i.e., multi-headed output layer; same as in Table 5). In the second column, each method is instead combined with XdG.

Method        + task-ID in output layer   + task-ID in hidden layers
None          81.79 (± 0.48)              90.41 (± 0.32)
EWC           94.74 (± 0.05)              96.94 (± 0.02)
Online EWC    95.96 (± 0.06)              96.89 (± 0.03)
SI            94.75 (± 0.14)              96.53 (± 0.04)
LwF           69.84 (± 0.46)              84.21 (± 0.48)
DGR           92.52 (± 0.08)              95.31 (± 0.04)
DGR+distill   97.51 (± 0.01)              97.67 (± 0.01)

# C Replay of Stored Data

As an alternative to generative replay, when data from previous tasks can be stored, it is possible to instead replay that data during training on new tasks. Another way in which stored data could be used is during execution: for example, as in iCaRL, instead of using the softmax classifier, classification can be done using a nearest-class-mean rule in feature space with class means calculated based on the stored data. Stored data can also be used during both training and execution. We evaluated the performance of these different variants of exact replay as a function of the total number of examples allowed to be stored in memory (Figure C.1), whereby the data to be stored was always selected using the same 'herding'-algorithm as used in iCaRL (see section A.4.2).

[Figure C.1 plots average test accuracy against total memory budget (log-scale) for the Task-IL, Domain-IL and Class-IL scenarios, on split MNIST (A) and permuted MNIST (B), comparing the exact-replay variants ("replay exemplars", "classify with exemplars", "replay + classify with exemplars") against the other methods.]
Figure C.1: Average test accuracy (over all tasks) of different ways of using stored data on split MNIST (A) and on permuted MNIST (B) as a function of the total number of examples allowed to be stored in memory. For comparison, the accuracy of the other methods is indicated on the left or as horizontal lines. Displayed are the means over 20 repetitions, shaded areas are ± 1 SEM.

In the Class-IL scenario, we found that for both task protocols even storing one example per class was enough for any of the exact replay methods to outperform all regularization-based methods. However, to match the performance of generative replay, it was required to store substantially more data. In particular, for permuted MNIST, even with 50,000 stored examples exact replay variants were consistently outperformed by DGR+distill.

# D Hyperparameters

Several of the continual learning methods compared in this report have one or more hyperparameters. The typical way of setting the value of hyperparameters is by training models on the training set for a range of hyperparameter-values, and selecting those that result in the best performance on a separate validation set. This strategy has been adapted to the continual learning setting as training models on the full protocol with different hyperparameter-values using only every task's training data, and comparing their overall performances using separate validation sets (or sometimes the test sets) for each task [e.g., see 16, 1, 8, 24].
However, here we would like to stress that this means that these hyperparameters are set (or learned) based on an evaluation using data from all tasks, which violates the continual learning principle of only being allowed to visit each task once and in sequence.

[Figure D.1 shows, for task-incremental, domain-incremental and class-incremental learning, the average accuracy obtained for the tested values of EWC's and Online EWC's lambda, SI's c (both on log-scales) and XdG's percentage of gated nodes.]
Figure D.1: Grid searches for the split MNIST task protocol. Shown are the average test set accuracies (over all 5 tasks) for the (combination of) hyperparameter-values tested for each method.

[Figure D.2 shows the corresponding grid-search panels for the permuted MNIST task protocol.]
Figure D.2: Grid searches for the permuted MNIST task protocol. Shown are the average test set accuracies (over all 10 tasks) for the (combination of) hyperparameter-values tested for each method.

Although it is tempting to think that it is acceptable to relax this principle for tasks' validation data, we argue here that it is not. A clear example of how using each task's validation data continuously throughout an incremental training protocol can lead to what is, in our opinion, an unfair advantage is provided by Wu et al. [6], in which after finishing training on each task a "bias-removal parameter" is set that optimizes performance on the validation sets of all tasks seen so far (see their section 3.3). Although the hyperparameters of the methods compared here are much less influential than those in the above report, we believe that it is important to realize this issue associated with traditional grid searches in a continual learning setting and that at a minimum influential hyperparameters should be avoided in methods for continual learning.

Nevertheless, to give the competing methods of generative replay the best possible chance—and to explore how influential their hyperparameters are—we do perform grid searches to set the values of their hyperparameters (see Figures D.1 and D.2). Given the issue discussed above we do not see much value in using validation sets for this, and we evaluate the performances of all hyperparameter(-combination)s using the tasks' test sets. For this grid search each experiment is run once, after which 20 new runs are executed using the selected hyperparameter-values to obtain the results in Tables 4 and 5 in the main text.
{ "id": "1802.07569" }
1904.06652
Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering
Recently, a simple combination of passage retrieval using off-the-shelf IR techniques and a BERT reader was found to be very effective for question answering directly on Wikipedia, yielding a large improvement over the previous state of the art on a standard benchmark dataset. In this paper, we present a data augmentation technique using distant supervision that exploits positive as well as negative examples. We apply a stage-wise approach to fine tuning BERT on multiple datasets, starting with data that is "furthest" from the test data and ending with the "closest". Experimental results show large gains in effectiveness over previous approaches on English QA datasets, and we establish new baselines on two recent Chinese QA datasets.
http://arxiv.org/pdf/1904.06652
Wei Yang, Yuqing Xie, Luchen Tan, Kun Xiong, Ming Li, Jimmy Lin
cs.CL, cs.IR
null
null
cs.CL
20190414
20190414
9 1 0 2 r p A 4 1 ] L C . s c [ 1 v 2 5 6 6 0 . 4 0 9 1 : v i X r a # Data Augmentation for BERT Fine-Tuning in Open-Domain Question Answering # Wei Yang,1,2∗ Yuqing Xie,1,2∗ Luchen Tan,2 Kun Xiong,2 Ming Li,1,2 and Jimmy Lin1,2 1 David R. Cheriton School of Computer Science, University of Waterloo 2 RSVP.ai # Abstract Recently, a simple combination of passage re- trieval using off-the-shelf IR techniques and a BERT reader was found to be very effective for question answering directly on Wikipedia, yielding a large improvement over the previ- ous state of the art on a standard benchmark dataset. In this paper, we present a data aug- mentation technique using distant supervision that exploits positive as well as negative ex- amples. We apply a stage-wise approach to fine tuning BERT on multiple datasets, starting with data that is “furthest” from the test data and ending with the “closest”. Experimental results show large gains in effectiveness over previous approaches on English QA datasets, and we establish new baselines on two recent Chinese QA datasets. # 1 Introduction BERT (Devlin et al., 2018) represents the latest re- finement in a series of neural models that take advantage of pretraining on a language modeling task (Peters et al., 2018; Radford et al., 2018). Re- searchers have demonstrated impressive gains in a broad range of NLP tasks, from sentence classifi- cation to sequence labeling. Recently, Yang et al. (2019) showed that combining a BERT-based reader with passage retrieval using the Anserini IR toolkit yields a large improvement in question answering directly from a Wikipedia corpus, mea- sured in terms of exact match on a standard bench- mark (Chen et al., 2017). Interestingly, the approach of Yang et al. (2019) represents a simple method to combining BERT with off-the-shelf IR. In this paper, we build on these initial successes to explore how much fur- ther we can push this simple architecture by data augmentation, taking advantage of distant supervi- sion techniques to gather more and higher-quality training data to fine tune BERT. Experiments show that, using the same reader model as Yang et al. (2019), our simple data-augmentation techniques yield additional large improvements. To illustrate the robustness of our methods, we also demon- strate consistent gains on another English QA dataset and present baselines for two additional Chinese QA datasets (which have not to date been evaluated in an “end-to-end” manner). In addition to achieving state-of-the-art results, we contribute important lessons on how to lever- age BERT effectively for question answering. First, most previous work on distant supervision focuses on generating positive examples, but we show that using existing datasets to identify neg- ative training examples is beneficial as well. Sec- ond, we propose an approach to fine-tuning BERT with disparate datasets that works well in practice: our heuristic is to proceed in a stage-wise manner, beginning with the dataset that is “furthest” from the test data and ending with the “closest”. # 2 Background and Related Work In this paper, we tackle the “end-to-end” vari- ant of the question answering problem, where the system is only provided a large corpus of arti- cles. 
This stands in contrast to reading compre- hension datasets such as SQuAD (Rajpurkar et al., 2016), where the system works with a sin- gle pre-determined document, or most QA benchmarks today such as TrecQA (Yao et al., 2013), WikiQA (Yang et al., 2015), and MS- MARCO (Bajaj et al., 2016), where the system is provided a list of candidate passages to choose from. This task definition, which combines a strong element of information retrieval, traces back to the Text Retrieval Conferences (TRECs) in the late 1990s (Voorhees and Tice, 1999), but there is a recent resurgence of interest in this for- mulation (Chen et al., 2017). # ∗ equal contribution The roots of the distant supervision techniques we use trace back to at least the 1990s (Yarowsky, 1995; Riloff, 1996), although the term had not yet been coined. Such techniques have recently be- come commonplace, especially as a way to gather large amounts of labeled examples for data-hungry neural networks and other machine learning algo- rithms. Specific recent applications in question an- swering include Bordes et al. (2015), Chen et al. (2017), Lin et al. (2018), as well as Joshi et al. (2017) for building benchmark test collections. # 3 Approach In this work, we fix the underlying model and fo- cus on data augmentation techniques to explore how to best fine-tune BERT. We use the same exact setup as the “paragraph” variant of BERT- serini (Yang et al., 2019), where the input corpus is pre-segmented into paragraphs at index time, each of which is treated as a “document” for re- trieval purposes. The question is used as a “bag of words” query to retrieve the top k candidate paragraphs using BM25 ranking. Each paragraph is then fed into the BERT reader along with the original natural language question for inference. Our reader is built using Google’s reference imple- mentation, but with a small tweak: to allow com- parison and aggregation of results from different segments, we remove the final softmax layer over different answer spans; cf. (Clark and Gardner, 2018). For each candidate paragraph, we apply inference over the entire paragraph, and the reader selects the best text span and provides a score. We then combine the reader score with the retriever score via linear interpolation: S = (1 − µ) · SAnserini + µ · SBERT, where µ ∈ [0, 1] is a hy- perparameter (tuned on a training sample). is One major shortcoming with BERTserini that Yang et al. (2019) only fine tune on SQuAD, which means that the BERT reader is exposed to an impoverished set of examples; all SQuAD data come from a total of only 442 documents. This contrasts with the diversity of paragraphs that the model will likely encounter at inference time, since they are selected from potentially millions of articles. The solution to this problem, of course, is to fine tune BERT with the types of paragraphs it is likely to see at inference time. Unfortunately, such data does not exist for modern QA test collections. Distant supervision can provide a bridge. Starting from a source dataset comprising question–answer pairs (for example, SQuAD), we can create training data for a specific corpus by us- ing passage retrieval to fetch paragraphs from that corpus (with the question as the query) and then searching (i.e., matching) for answer instances in those paragraphs. A hyperparameter here is n, the number of candidates we examine from passage retrieval. 
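To make this procedure concrete, the following Python sketch illustrates the generation of positive training examples by distant supervision. The retrieve function is a stand-in for BM25 paragraph retrieval (e.g., with Anserini); its interface, like the other names here, is assumed rather than taken from the paper's code, and the simple lowercase string match is only one possible way to locate answer spans.

```python
def make_positive_examples(qa_pairs, retrieve, n=10):
    """qa_pairs: iterable of (question, answer) strings from the source dataset.
    retrieve(question, k): assumed callable returning the top-k paragraphs as strings."""
    examples = []
    for question, answer in qa_pairs:
        for paragraph in retrieve(question, k=n):        # top-n paragraphs by BM25
            start = paragraph.lower().find(answer.lower())
            if start != -1:                               # distant supervision: answer string occurs
                examples.append({
                    "question": question,
                    "paragraph": paragraph,
                    "answer_start": start,
                    "answer_end": start + len(answer),
                })
    return examples
```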
Larger values of n will lead to more training examples, but as n increases, so does the chance that a paragraph will spuriously match the answer without actually answering the question. The above technique allows us to extract pos- itive training examples, but previous work has shown the value of negative examples, specifically for QA (Zhang et al., 2017). To extract negative examples, we sample the top n candidates from passage retrieval for paragraphs that do not contain the answer, with a ratio of d:1. That is, for every positive example we find, we sample d negative examples, where d is also a hyperparameter. Note that these negative examples are also noisy, since they may in fact contain an alternate correct (or acceptable) answer to the question, one that dif- fers from the answer given in the source dataset. Thus, given a corpus, we can create using dis- tant supervision a new dataset that is specifically adapted to a particular passage retrieval method. For convenience, we refer to training data gath- ered using this technique that only contain positive examples as DS(+) and use DS(±) to refer to the additional inclusion of negative examples. Next, we have a design decision regarding how to fine tune BERT using the source QA pairs (SRC) and the augmented dataset using distant su- pervision (DS). There are three possibilities: SRC + DS: Fine tune BERT with all data, grouped together. In practice, this means that the source and augmented data are shuffled together. DS → SRC: Fine tune the reader on the aug- mented data and then the source dataset. SRC → DS: Fine tune the reader on the source dataset and then the augmented data. Experiment results show that of the three choices above, the third option is the most effective. More generally, when faced with multiple, qualitatively- different datasets, we advocate a stage-wise fine- tuning strategy that starts with the dataset “fur- thest” to the task at hand and ending with the dataset “closest”. Another way to think about using different datasets is in terms of a very simple form of trans- SQuAD TriviaQA CMRC DRCD Train Test 87,599 10,570 87,622 11,313 10,321 3,351 26,936 3,524 DS(+) DS(±) 118,406 710,338 264,192 789,089 10,223 71,536 41,792 246,604 Table 1: Number of examples in each dataset. A exam- ple means a paragraph-question pair. fer learning. The stage-wise fine-tuning strategy is in essence trying to transfer knowledge from la- beled data that is not drawn from the same distri- bution as the test instances. We wish to take ad- vantage of transfer effects, but limit the scope of erroneous parameterizations. Thus it makes sense not to intermingle qualitatively different datasets, but to fine tune the model in distinct stages. # 4 Experimental Setup To show the generalizability of our data aug- mentation technique, we conduct experiments on two English datasets: SQuAD (v1.1) and Trivia- QA (Joshi et al., 2017). For both, we use the 2016-12-21 dump of English Wikipedia, follow- ing Chen et al. (2017). We also examine two Chinese datasets: CMRC (Cui et al., 2018) and DRCD (Shao et al., 2018). For these, we use the 2018-12-01 dump of Chinese Wikipedia, to- kenized with Lucene’s CJKAnalyzer into over- lapping bigrams. We apply hanziconv1 to trans- form the corpus into simplified characters for CMRC and traditional characters for DRCD. Following Yang et al. (2019), to evaluate an- swers in an end-to-end setup, we disregard the paragraph context from the original datasets and use only the answer spans. 
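Complementing the sketch above, the d:1 negative sampling described earlier in this section can be drafted as follows. Again, this is an illustrative sketch rather than the exact implementation; the sampled negatives remain noisy, since a paragraph that lacks the reference answer string may still answer the question.

```python
import random
from typing import List, Tuple

def build_ds_examples(
    question: str,
    retrieved: List[str],   # top-n paragraphs from passage retrieval
    answers: List[str],
    d: int = 1,             # negatives sampled per positive
    seed: int = 13,
) -> Tuple[List[Tuple[str, str, str]], List[Tuple[str, str, None]]]:
    """Split retrieved paragraphs into DS positives and sampled DS negatives."""
    positives, candidates = [], []
    for paragraph in retrieved:
        matched = next((a for a in answers if a in paragraph), None)
        if matched is not None:
            positives.append((question, paragraph, matched))
        else:
            candidates.append(paragraph)
    rng = random.Random(seed)
    k = min(d * len(positives), len(candidates))
    negatives = [(question, p, None) for p in rng.sample(candidates, k)]
    return positives, negatives
```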
As in previous work, exact match (EM) score and F1 score (at the token level) serve as the two primary evaluation metrics. In addition, we compute recall (R), the fraction of questions for which the correct answer appears in any retrieved paragraph; to make our results com- parable to Yang et al. (2019), Anserini returns the top k = 100 paragraphs to feed into the BERT reader. Note that this recall is not the same as the token-level recall component in the F1 score. Statistics for the datasets are shown in Table 4.2 1https://pypi.org/project/hanziconv/0.2.1/ 2Note the possibly confusing terminology: for SQuAD (as well as the other datasets), what we use for test is actually the publicly-available development set (same as previous work). For data augmentation, based on preliminary experiments, we find that examining n = 10 can- didates from passage retrieval works well, and we further discover that effectiveness is insensitive to the amount of negative samples. Thus, we elim- inate the need to tune d by simply using all pas- sages that do not contain the answer as negative examples. The second block of Table 4 shows the sizes of the augmented datasets constructed using our distant supervision techniques: DS(+) contains positive examples only, while DS(±) in- cludes both positive and negative examples. There are two additional characteristics to note about our data augmentation techniques: The most salient characteristic is that SQuAD, CMRC, and DRCD all have source answers drawn from Wikipedia (English or Chinese), while TriviaQA includes web pages as well as Wikipedia. There- fore, for the first three collections, the source and augmented datasets share the same docu- ment genre—the primary difference is that data augmentation increases the amount and diver- sity of answer passages seen by the model dur- ing training. For TriviaQA, however, we con- sider the source and augmented datasets as coming from different genres (noisy web text vs. higher the quality Wikipedia articles). TriviaQA augmented dataset is also much larger— suggesting that those questions are qualitatively different (e.g., in the manner they were gathered). These differences appear to have a substantial im- pact, as experiment results show that TriviaQA be- haves differently than the other three collections. For model training, we begin with the BERT- Base model (uncased, 12-layer, 768-hidden, 12- heads, 110M parameters), which is then fine-tuned using the various conditions described in the pre- vious section. All inputs to the model are padded to 384 tokens; the learning rate is set to 3 × 10−5 and all other defaults settings are used. # 5 Results Our main results on SQuAD are shown in Table 2. The row marked “SRC” indicates fine tuning with SQuAD data only and matches the BERTserini condition of Yang et al. (2019); we report higher scores due to engineering improvements (primar- ily a Lucene version upgrade). As expected, fine tuning with augmented data improves effective- ness, and experiments show that while training with positive examples using DS(+) definitely Model EM F1 R Dr.QA (Chen et al., 2017) Dr.QA + Fine-tune Dr.QA + Multitask R3 (Wang et al., 2017) Kratzwald and Feuerriegel (2018) Par. R. (Lee et al., 2018) Par. R. + Answer Agg. Par. R. + Full Agg. 
MINIMAL (Min et al., 2018) 27.1 28.4 29.8 29.1 29.8 28.5 28.9 30.2 34.7 - - - 37.5 - - - - 42.5 77.8 - - - - 83.1 - - 64.0 BERTserini (Yang et al., 2019) 38.6 46.1 85.9 SRC DS(+) DS(±) SRC + DS(±) DS(±) → SRC SRC → DS(±) 41.8 44.0 48.7 49.5 51.4 56.5 85.9 85.9 85.9 45.7 47.4 50.2 53.5 55.0 58.2 85.9 85.9 85.9 Table 2: Results on SQuAD helps, an even larger boost comes from leverag- ing negative examples using DS(±). In both these cases, we only fine tune BERT with the augmented data, ignoring the source data. What if we use the source data as well? Re- sults show that “lumping” all training data to- gether (both the source and augmented data) to fine tune BERT is not the right approach: in fact, the SRC + DS(±) condition performs worse than just using the augmented data alone. Instead, disparate datasets should be leveraged using the stage-wise fine-tuning approach we propose, ac- cording to our heuristic of starting with the dataset that is “furtherest” away from the test data. That is, we wish to take advantage of all available data, but the last dataset we use to fine tune BERT should be “most like” the test data the model will see at infer- ence time. Indeed, this heuristic is borne out em- pirically, as SRC → DS(±) yields another boost over using DS(±) only. Further confirmation for this heuristic comes from an alternative where we switch the order of the stages, DS(±) → SRC, which yields results worse than DS(±) alone. We note that our best configuration beats BERTserini, the previous state of the art, by over ten points. Note that recall in all our conditions is the same since we are not varying the passage retrieval algo- rithm, and in each case Anserini provides exactly the same candidate passages. Improvements come solely from a better BERT reader. Results on TriviaQA are shown in Table 3. With just fine tuning on the source dataset, we obtain a Model R3 (Wang et al., 2017) DS-QA (Lin et al., 2018) Evidence Agg. (Wang et al., 2018) EM F1 47.3 48.7 50.6 53.7 56.3 57.3 SRC DS(+) DS(±) SRC + DS(±) DS(±) → SRC SRC → DS(±) 51.0 48.2 54.4 56.3 53.6 60.2 53.1 49.8 53.7 58.6 55.9 59.3 R - - - 83.7 83.7 83.7 83.7 83.7 83.7 Table 3: Results on TriviaQA score that is only slightly above the previous state of the art (Wang et al., 2018). Interestingly, using only positive examples leads to worse effective- ness than just using the source dataset. However, fine tuning on both positive and negative examples leads to a three point boost in exact match score, establishing a new high score on this dataset. Experiments on fine tuning with both source and augmented data show the same pattern as with SQuAD: stage-wise tuning is more effective than just combining datasets, and tuning should pro- ceed in the “furthest to closest” sequence we pro- pose. While data augmentation no doubt helps (beats the source-only baseline), for this dataset the highest effectiveness is achieved by disregard- ing the source dataset completely; that is, DS(±) beats SRC → DS(±). We attribute this behav- ior to the difference between TriviaQA and the other datasets discussed in Section 4: it appears that gains from transfer effects are outweighed by genre mismatch. Results on the Chinese datasets are shown in Ta- ble 4. To our knowledge, they have only been eval- uated as reading comprehension tests, not in the “end-to-end” setup that we tackle here (requiring retrieval from a sizeable corpus). Although there is no previous work to compare against, our results provide a strong baseline for future work. 
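Since every condition in these tables shares the same retrieval stage, the final answer is still selected by interpolating retriever and reader scores as described in the approach. A minimal sketch of that selection step, in which read_span is a placeholder for the BERT reader and mu is tuned on a training sample:

```python
def select_answer(question, hits, read_span, mu=0.5):
    """Pick the answer whose combined score S = (1 - mu) * S_Anserini + mu * S_BERT
    is highest over the retrieved paragraphs (e.g., the top k = 100 hits).

    hits:      iterable of (paragraph_text, anserini_score) pairs
    read_span: callable mapping (question, paragraph) -> (answer_text, bert_score)
    """
    best_answer, best_score = None, float("-inf")
    for paragraph, anserini_score in hits:
        answer, bert_score = read_span(question, paragraph)
        score = (1.0 - mu) * anserini_score + mu * bert_score
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer
```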
Experiment results on the two Chinese datasets support the same conclusions as SQuAD: First, we see that data augmentation using distant supervi- sion is effective. Second, including both positive and negative training examples is better than hav- ing positive examples only. Third, when lever- aging multiple datasets, our “furthest to closest” heuristic for stage-wise tuning yields the best re- sults. Since the source datasets also draw from (Chinese) Wikipedia, we benefit from fine tuning with both source and augmented data. Model EM F1 R CMRC 44.5 SRC 45.5 DS(+) DS(±) 48.3 SRC + DS(±) 49.0 DS(±) → SRC 45.6 SRC → DS(±) 49.2 60.9 61.1 63.9 64.6 61.9 65.4 86.5 86.5 86.5 86.5 86.5 86.5 DRCD 50.7 SRC 50.5 DS(+) DS(±) 53.2 SRC + DS (±) 55.4 DS(±) → SRC 53.4 SRC → DS(±) 54.4 65.0 64.3 66.0 67.7 67.1 67.0 81.5 81.5 81.5 81.5 81.5 81.5 Table 4: Results on the two Chinese datasets: CMRC (top) and DRCD (bottom). # 6 Conclusions In this paper, we have further advanced the state of the art in end-to-end open-domain question an- swering using simple BERT models. We focus on data augmentation using distant supervision tech- niques to construct datasets that are closer to the types of paragraphs that the reader will see at in- ference time. Explained this way, it should not come as a surprise that effectiveness improves as a result. This work confirms perhaps something that machine learning practitioners already know too well: quite often, the best way to better results is not better modeling, but better data preparation. # References Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Ti- wary, and Tong Wang. 2016. MS MARCO: A hu- man generated MAchine Reading COmprehension dataset. arXiv:1611.09268. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale sim- ple question answering with memory networks. arXiv:1506.02075. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870– 1879. Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehen- sion. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 845–855. Yiming Cui, Ting Liu, Li Xiao, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, and Guoping Hu. 2018. A span-extraction dataset for Chinese Ma- chine Reading Comprehension. arXiv:1810.07366. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv:1810.04805. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale dis- tantly supervised challenge dataset for reading com- In Proceedings of the 55th Annual prehension. Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1601–1611. Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive document retrieval for deep question an- swering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 576–581. Jinhyuk Lee, Seongjun Yun, Hyunjae Kim, Miyoung Ko, and Jaewoo Kang. 2018. 
Ranking paragraphs for improving answer recall in open-domain question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 565–569. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736–1745. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1725–1735. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Technical report. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Ellen Riloff. 1996. Automatically generating extraction patterns from untagged text. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 1044–1049. Chih Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai. 2018. DRCD: a Chinese machine reading comprehension dataset. arXiv:1806.00920. Ellen M. Voorhees and Dawn M. Tice. 1999. The TREC-8 Question Answering Track evaluation. In Proceedings of the Eighth Text REtrieval Conference (TREC-8), pages 83–106, Gaithersburg, Maryland. Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerald Tesauro, Bowen Zhou, and Jing Jiang. 2017. R3: Reinforced reader-ranker for open-domain question answering. arXiv:1709.00023. Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. 2018. Evidence aggregation for answer re-ranking in open-domain question answering. arXiv:1711.05116. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. arXiv:1902.01718. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2013–2018. Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 858–867. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189–196. Haotian Zhang, Jinfeng Rao, Jimmy Lin, and Mark D. Smucker. 2017. Automatically extracting high-quality negative examples for answer selection in question answering.
In Proceedings of the 40th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017), pages 797–800.
{ "id": "1709.00023" }
1904.05868
Improved training of binary networks for human pose estimation and image recognition
Big neural networks trained on large datasets have advanced the state of the art for a large variety of challenging problems, improving performance by a large margin. However, under low memory and limited computational power constraints, the accuracy on the same problems drops considerably. In this paper, we propose a series of techniques that significantly improve the accuracy of binarized neural networks (i.e., networks where both the features and the weights are binary). We evaluate the proposed improvements on two diverse tasks: fine-grained recognition (human pose estimation) and large-scale image recognition (ImageNet classification). Specifically, we introduce a series of novel methodological changes, including (a) more appropriate activation functions, (b) reverse-order initialization, (c) progressive quantization, and (d) network stacking, and show that these additions improve existing state-of-the-art network binarization techniques significantly. Additionally, for the first time, we also investigate the extent to which network binarization and knowledge distillation can be combined. When tested on the challenging MPII dataset, our method shows a performance improvement of more than 4% in absolute terms. Finally, we further validate our findings by applying the proposed techniques to large-scale object recognition on the ImageNet dataset, on which we report a reduction of error rate by 4%.
http://arxiv.org/pdf/1904.05868
Adrian Bulat, Georgios Tzimiropoulos, Jean Kossaifi, Maja Pantic
cs.CV
null
null
cs.CV
20190411
20190411
9 1 0 2 r p A 1 1 ] V C . s c [ 1 v 8 6 8 5 0 . 4 0 9 1 : v i X r a # Improved training of binary networks for human pose estimation and image recognition Adrian Bulat Georgios Tzimiropoulos Jean Kossaifi Maja Pantic # Samsung AI Center, Cambridge United Kingdom {adrian.bulat, georgios.t, j.kossaifi, maja.pantic}@samsung.com # Abstract Big neural networks trained on large datasets have ad- vanced the state-of-the-art for a large variety of challenging problems, improving performance by a large margin. How- ever, under low memory and limited computational power constraints, the accuracy on the same problems drops con- siderable. In this paper, we propose a series of techniques that significantly improve the accuracy of binarized neu- ral networks (i.e networks where both the features and the weights are binary). We evaluate the proposed improve- ments on two diverse tasks: fine-grained recognition (hu- man pose estimation) and large-scale image recognition (ImageNet classification). Specifically, we introduce a se- ries of novel methodological changes including: (a) more appropriate activation functions, (b) reverse-order initial- ization, (c) progressive quantization, and (d) network stack- ing and show that these additions improve existing state-of- the-art network binarization techniques, significantly. Ad- ditionally, for the first time, we also investigate the extent to which network binarization and knowledge distillation can be combined. When tested on the challenging MPII dataset, our method shows a performance improvement of more than 4% in absolute terms. Finally, we further vali- date our findings by applying the proposed techniques for large-scale object recognition on the Imagenet dataset, on which we report a reduction of error rate by 4%. high-end GPU is available for model training and infer- ence. While it can be assumed that for training both la- belled training data and computational resources are avail- able, from a a practical perspective, in many applications (e.g. object recognition or human sensing on mobile de- vices and robots), it is unreasonable to assume that dedi- cated high-end GPUs are available for inference. The aim of this paper is to enable highly accurate and efficient convo- lutional networks on devices with limited memory, storage and computational power. Under such constraints, the ac- curacy and performance of existing methods rapidly drops, and the problem is considered far from being solved. Perhaps, the most promising method for model compres- sion and efficient model inference is network binarization, especially when both activations and weights are binary [9, 10, 28]. In this case, the binary convolution operation can be efficiently implemented with the bitwise XNOR, re- sulting in speed-up of ∼ 58× on CPU (this speed-up on FPGAs can be even higher) and model compression ratio of ∼ 32× [28]. Although no other technique can achieve such impressive speed-ups and compression rates, this also comes at the cost of reduced accuracy. For example, there is ∼ 18% difference in top-1 accuracy between a real-valued ResNet-18 and its binary counterpart on ImageNet [28], and ∼ 9% difference between a real-valued state-of-the-art net- work for human pose estimation and its binary counterpart on MPII [3]. # 1. Introduction Motivated by the above findings, in this work, we focus on improving the training of binary networks by proposing a series of methodological improvements. 
In particular, we make the following contributions: Recent methods based on Convolutional Neural Net- works (CNNs) have been shown to produce results of high accuracy for a wide range of challenging Computer Vision tasks like image recognition [22, 32, 16], object detection [29], semantic segmentation [24, 14] and human pose es- timation [41, 25]. Two fundamental assumptions made by these methods are that: 1) very large and diverse labelled datasets are available for training, and 2) that at least one • We motivate, provide convincing evidence and de- scribe a series of methodological changes for training binary neural networks including (a) more appropriate non-linear activation functions (Sub-section 3.2), (b) reverse-order initialization (Sub-section 3.3), (c) pro- gressive quantization (Sub-section 3.4), and (d) net- work stacking (Sub-section 3.5) that, individually and 1 combined, are shown to improve existing state-of-the- art network binarization techniques, significantly. (e) We also show to what extent network binarization and knowledge distillation can be combined (Section 3.6). • We show that our improved training of binary networks is task and network agnostic by applying it on two di- verse tasks: fine-grained recognition and, in particular, human pose estimation and classification, specifically ImageNet classification. • Exhaustive experiments conducted on the challenging MPII dataset show that our method offers an improve- ment of more than 4% in absolute terms over the state- of-the-art (Section 4). • On ImageNet we report a reduction of error rate by 4% over the current state-of-the-art (Section 5). # 2. Related work In this section, we review related prior work including network quantization and knowledge distillation for image classification, and methods for efficient human pose estima- tion. # 2.1. Network Quantization Network quantization refers to quantizing the weights and/or the features of a neural network. It is considered the method of choice for model compression and efficient model inference and a very active topic of research. Sem- inal work in this area goes back to [8, 23] who introduced techniques for 16- and 8-bit quantization. The method of [46] proposed a technique which allocates different num- bers of bits (1-2-6) for the network parameters, activations and gradients. For more recent work see [38, 40, 45, 35]. The focus of the first methods proposed in this work is on binarization of both weights and features which is the extreme case aiming to quantizing to {−1, 1}, thus offer- ing the largest possible compression and speed gains. The work of [9] introduced a technique for training a CNN with binary weights. A follow-up work [10] demonstrates how to binarize both parameters and activations. This has the ad- vantage that, during the forward pass, multiplications can be replaced with binary operations. The method of [28] proposes to model the weights with binary numbers mul- tiplied by a scaling factor. Using this simple modification which does not sacrifice the beneficial properties of binary networks, [28] was the first to report good results on a large scale dataset (ImageNet [11]). Our method proposes several extensions to [28], includ- ing more appropriate activation functions, reverse-order ini- tialization, progressive quantization, and network stacking, which are shown to produce large improvements of more than % 4 (in absolute terms) for human pose estimation over the state-of-the art [3]. 
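For orientation, the scaling-factor binarization of [28] that these extensions build on can be sketched as follows. This is a simplified PyTorch-style sketch under common conventions (a per-output-filter scale and straight-through gradients clipped at |x| <= 1), not the exact implementation used in the paper.

```python
import torch
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """sign(x) in the forward pass, straight-through gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()  # clip gradients outside [-1, 1]

def xnor_style_conv2d(x, weight, stride=1, padding=1):
    """Binary convolution scaled by alpha = mean(|W|) per output filter,
    approximating the real-valued convolution in the spirit of XNOR-Net.

    weight: (out_channels, in_channels, kH, kW)
    """
    alpha = weight.abs().mean(dim=(1, 2, 3))  # one real scale per output filter
    out = F.conv2d(BinarizeSTE.apply(x), BinarizeSTE.apply(weight),
                   stride=stride, padding=padding)
    return out * alpha.view(1, -1, 1, 1)
```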
We also report similar improve- ments for large-scale image classification on ImageNet, in particular, we report a reduction of error rate by 4% over the current state-of-the-art [28]. # 2.2. Knowledge Distillation Recent works [18] have shown that, at least for real- valued networks, the performance of a smaller network can be improved by “distilling the knowledge” of another one, where “knowledge distillation” refers to transferring knowl- edge from one CNN (the so-called “teacher”) to another (the so-called “student”). Typically, the teacher is a high- capacity model of great accuracy, while the student is a compact model with much fewer parameters (thus also re- quiring much less computation). Thus, the goal of knowl- edge distillation is to use the teacher to train a compact stu- dent model with similar accuracy to that of the teacher. The term “knowledge” refers to the soft outputs of the teacher. Such soft outputs provide extra supervisory signals of intra- class and inter-class similarities learned by teacher. Further extensions include transferring from intermediate represen- tations of the teacher network [30] and from attention maps [43]. While most of the prior work focuses on distilling real-valued neural networks little to no-work has been done on studying the effectiveness of such approaches for bina- rized neural networks. In this work, we propose to adapt such techniques to binary networks, showing through em- pirical evidence their positive effect on improving accuracy. # 2.3. Human pose estimation A large number of works have been recently proposed for both single-person [25, 39, 2, 20, 34, 7, 42, 6] and multi- person [5, 26, 14, 12] human pose estimation. We note that the primary focus of these works is accuracy (especially for the single-person case) rather than efficient inference un- der low memory and computational constraints which is the main focus of our work. Many of the aforementioned methods use the so-called HourGlass (HG) architecture [25] and its variants. While we also used the HG in our work, our focus was to enhance its efficiency while maintaining as much as possible its high accuracy which makes our work different to all aforemen- tioned works. To our knowledge, the only papers that have similar aims are the works of [3] and [35]. [3] and its exten- sion [4] aim to improve binary neural networks for human pose estimation by introducing a novel residual block. [35] aims to improve quantized neural networks by introducing a new HG architecture. In contrast, in this work, we focus on improving binary networks for human pose estimation by (a) improving the binarization process per se, and (b) combining binarization with knowledge distillation. Our method is more general than the improvements proposed in [3] and [35]. We illustrate this by showing the benefits of the proposed method for also improving ImageNet classifi- cation with binary networks. \ \ «< PN Figure 1: The residual binary block of [4] used in our work. The module has a hierarchical, parallel, multi-scale struc- ture comprised of three 3 × 3 convolutional layers with input-output channel ratio equal to 1:2, 1:4 and 1:4. Each convolution layer is preceded by a BatchNorm and the bina- rization function (sgn(x)) and followed by a non-linearity. See Section 3 for the changes introduced in our work for improving its performance. # 3. Method This section presents the proposed methodological changes for improving the network binarization process. 
Throughout this section, we validated the performance gains offered by our method on the single person human pose estimation dataset, MPII. We note that we chose hu- man pose estimation as the main dataset to report the bulk of our results as the dataset is considerably smaller and train- ing is much faster (compared to ImageNet). Sub-section 3.1 describes the strong baseline used in our work, briefly explaining the binarization process proposed in[28] and [3], while the proposed improvements are de- scribed in Sub-sections 3.2, 3.3, 3.4, 3.5 and 3.6. # 3.1. Baseline All results reported herein are against the state-of-the-art method of [3] which we used as a strong baseline to report the performance improvements introduced by our method. The method of [3] combines the HourGlass (HG) architec- ture of [25] with the a newly proposed residual block that was specifically designed for binary CNNs (see Fig. 1). The network was binarized using the approach described in [28] as follows: T « W & (sgn(T) @ sgn(W))Ka, (1) where T is the input tensor, W is the layer’s weight ten- sor, K a matrix containing the scaling factors for all the sub-tensors of T, and a € R* is a scaling factor for the weights. ® denotes the binary convolution operation which can be efficiently implemented with the bitwise XNOR, re- sulting in speed-up of © 58x and model compression ratio of % 32x [28]. Note than in practice, we follow [3, 28] and drop K since this speed-ups the network at a negligible performance drop. # 3.2. Leaky non-linearities Previous work [28, 3] has shown that adding a non- linearity after each convolutional layer can be used to in- crease the performance of binarized CNNs. In the context of real-valued networks there exists a plethora or works that explore their effect on the overall network accuracy, how- ever, in contrast to this, there is little to no work avaial- ble for binary networks. Herein, we rigorously explore the choice of the non-linearity and its impact on the overall per- formance for the task of human pose estimation showing empirically in the process the negative impact of the pre- viously proposed ReLU. Instead of using a ReLU, we pro- pose to use the recently introduced PReLU [15] function, an adaptation of the leaky ReLU that has a learnable nega- tive slope, which we find it to perform better than both the ReLU and the leaky ReLU. There are two main arguments for justifying our find- ings. Firstly, with the help of the sgn function, the binariza- tion process restricts the possible states of the filters and fea- tures to {−1, 1}. As such, the representational power of the network resides on these two states, and removing one of them during training using a ReLU for each convolutional layer makes the training unstable. See also Fig. 3. Secondly, this instability is further amplified by the fact that the imple- mentation of the sign function is “leaky” at 0, introducing a third unwanted spurious state and the subsequent itera- tions can cause easy jumps between the two states. See also Fig. 2. Note, that despite the fact that the Batch Normalisa- tion [19] layer mitigates some of this effects by re-centering the input distribution, as the experiments show, in practice, the network can achieve significantly better accuracy if the non-linearity function allows negative values to pass. On the other hand, we know that non-linearities should be used to increase the representational power of the network. 
We conclude that a PReLU can be safely used for this purpose removing also the aforementioned instabilities. # 3.3. Reverse-order initialization Initialization of neural networks has been the subject of study of many recent works [13, 15, 31] where it was shown that an appropriate initialization is often required for achiev- ing good performance [33]. The same holds for quantized networks, where most of prior works either use an adap- tation of the above mentioned initialization strategies, or start from a pretrained real-valued neural network. How- ever, while the weight binarization alone can be done with little to no accuracy loss [28], quantizing the features has much higher detrimental effect [28, 46]. In addition, since the output signal from sgn is very different to the output of a ReLU layer, the transition from a fully real-valued network Figure 2: Weight distribution for various layers from a network using PReLU (first row) and ReLU (second row) as we advance in the network (from left to right). The ReLU tends to push the weights closer to 0 making a jump between states more likely, thus causing the observed instabilities. Validation accuracy 90 80 = 70 = Bin+PReLU(n) 3 60 — Bin+PReLU(1) 9 l — Bin 50 — Bin+PReLU+Distil —— Bin+LReLU 40 — Bin+PReLU+GN —— Bin+ReLU 0 20 40 60 80 100 120 Epoch Initialisation method PCK-h (%) Oo oo o 8 6S 6S 6 rs is} —— Proposed init —— From pretrained real 305 20 40 60 80 100 120 Epoch Figure 3: Accuracy evolution on the validation set of MPII during training. Notice the high fluctuations introduced by the ReLU. Best performance is obtained with PReLU. Figure 4: Accuracy evolution on the validation set of MPII during training for different pre-initialization approaches. Our initialization provides a much better starting point. to a binary one causes a catastrophic loss in accuracy often comparable with training from scratch. To alleviate this, we propose the opposite of what is cur- rently considered the standard method to initialize a binary network from a real-valued one: we propose to firstly train a network with real weights and binary features (the features are binarized using the approach presented in Section 3.4) and only after this, it is fully trained to further binarize the weights. By doing so, we effectively split the problem into two sub-problems: weight and feature binarization which we then try to solve from the hardest to the easiest one. Fig. 4 shows the advantage of the proposed initialization method against standard pre-training. of quantized weights [44] leads to decent performance im- provements. While the later is more practical, it requires a careful fine-tuning of the quantization ratio at each step. Instead, in this work, we follow a different route by proposing to approximate the quantization function sgn(x) with a smoother one, in which the estimation error is con- trolled by λ. By gradually increasing λ during training, we achieve a progressive binarization. This allows for a nat- ural and smoother transition in which the selection of the weights to be binarized occurs implicitly and can be easily controlled by varying λ without the need to define a fixed scheduling for increasing the amount of quantized weights as in [44]. # 3.4. 
Smooth progressive quantization Previous works have shown that incrementally quantiz- ing the network either by gradually decreasing the precision or by partitioning and progressively increasing the amount In the following, we present a few options we explored to approximate the sgn(x) function alongside their derivatives (see also Fig. 6): # Sigmoid: ere sen(2) ~ 2(>—) -1 d 2( ere ) De" dx +e (eA + 1)? # d dx # SoftSign: sgn(x) ≈ λx 1 + λ|x| d dx λx 1 + λ|x| = λ (1 + λ|x|)2 (3) # Tanh: sgn(x) ≈ tanh(λx) d dx tanh(λx) = λ(1 − tanh2(λx)) (4) As λ → ∞ the function converges to sgn(x). In a sim- ilar fashion, the derivative of the approximation function converges to the Dirac function δ. In practice, as most of the features are outside of the region with high approxima- tion error (see Fig. 5), we started observing close-to-binary results starting with λ = 25. See Fig. 7. In our tests we found that all the above approximation functions behaved similarly, however the best performance was obtained using the tanh, while the softsign offered slightly lower performance. As such, the final reported re- sults are obtained using the tanh. During training we pro- gressively increased the value of λ starting from 20 to 216. # 3.5. Stacked binary networks As shown in [25], using a stack of HG networks can be used to greatly improve human pose estimation accu- racy, allowing the network to gradually refine its prediction In a similar fashion, in this work we con- at each stage. structed a stack of binary HG networks also incorporating the improvements introduced in the previous subsections. We would like to verify to what extent stacking can fur- ther contribute on top of these improvements. In addition to these improvements, our method differs to [4] in that all the intermediate layers used to join the stacks are also bina- rized. As the results from section 4 show, stacking further improves upon the improvements reported in the previous subsections. # 3.6. Combining binarization with distillation Recent work on knowledge distillation has focused on real-valued networks [18], largely ignoring the quantized, and especially, the binarized case. (2) In this work, and in light of the methods proposed in the previous sub-sections, we also study the effect and ef- fectiveness of knowledge distillation for the case of binary networks, evaluating in the process the following options: (a) using a real-valued teacher and a binary student and (b) using a binary teacher and a binary student with and with- out feature matching. During training, we used the output heatmaps of the teacher network as soft labels for the Bi- nary Cross Entropy Loss. In addition, we found that the best results can be obtained by combining the ground truth and the soft labels with a weight equal to 0.25. # 4. Human pose estimation experiments In this section, we report our results on MPII, one of the most challenging datasets for single person human pose es- timation [1]. MPII contains approximately 25,000 images and more than 40,000 persons annotated with up to 16 land- marks and visibility labels. We use the same split for valida- tion and training as in [37] (3,000 for validation and 22,000 for training). We firstly report the performance improve- ments, using the PCKh metric [1], obtained by applying incrementally the proposed methods in the same order as these methods appear in the paper. We then evaluate the proposed improvements in isolation. # 4.1. 
Results Baseline: The performance of our strong baseline [3] us- ing 1 HG with and without ReLU is shown in the first 2 rows of Table 1. Leaky non-linearities (Section 3.2): The performance improvement obtained by replacing the ReLU with Leaky ReLU and then PReLU as proposed in our work is shown in the 3-rd and 4-th rows of Table 1. We observe a large improvement of 2.5% in terms of absolute error with the highest gains offered by the PReLU function. Note that we obtained similar accuracy between the variant that uses a single scale factor and the one that uses one per each chan- nel for the negative slope. Reverse-order initialization (Section 3.3): We observe an additional improvement of 0.8% by firstly binarizing the features and then the weights, as proposed in our work and as shown in the 5-th row of Table 1. This, alongside the results from Fig. 4 show that the proposed strategy is an efficient way of improving the performance of binary net- works. Progressive binarization (Section 3.4): We observe an additional improvement of 0.4% by the proposed progres- sive binarization as shown in the 6-th row of Table 1. 6000 5000 4000 3000 2000 1000 iL ° 6000 1500 5000 1400 1200 4000 1000 3000 800 2000 400 1000 iL 200 ° o ‘800009 700000 600009 = il °. 1500 1400 1200 1000 800 400 200 o ‘800009 700000 600009 = il °. Figure 5: Input distribution before the sgn function from 3 layers located at the bottom, middle and top of the network. Most values are in a range where the approximation function outputs values close to ±1 allowing the approximator to reach good estimates for relatively low values of λ. tanh(A * x) softsign(A * x) 2* sigmoid(A*x) —1 1.0 05 0.0 StanhiA *x) Esoftsign(A * x) 10 | 5 0 2 1 0 «1 2 A=. A=5 A=25 A= 625 £2 * sigmoid(a * x) — 1 A= 65536 The quantization approximation functions Figure 6: used: sigmoid, softsign and tanh (first row) and their derivatives (second row) for various values of λ = {1, 5, 25, 625, 65536}. HG networks. We observe an additional improvement of 1.5% and 1.9% by using 2-stack and 3-stack HG networks, respectively. While we explored with using both a binary and a real- valued “teacher” given that finding a high performing bi- nary teacher is challenging on its own, we obtained the best results using a real-valued one. However, in both cases the network converged to a satisfactory condition. Activ. | Rev. init. | Prog. bin. | Distill. Method | co-3.2 | Sec.3.3 | Sec.34 | Sec.3.6 | POKA Bl x x x xX | 16.6% BI ReLU x x x | 76.3% Ours | LReLU x x xX | 78.1% Ours | PReLU x x x | 79.1% Ours | PReLU v x x | 79.9% Ours | PReLU v v x — | 80.3% Ours | PReLU v v Y | 80.9% [Real > 85.6% Stacked binary networks (Section 3.5): We observe an additional improvement of 1.5% and 1.9% by using 2-stack and 3-stack HG networks, respectively, as shown in the 4-th column of Table 2. While significant improvements can be observed when going from 1 HG to a stack of 2, the gain in performance diminishes when 1 more binary HG is added to the network. A similar phenomenon is also observed (but to less extent though) for the case of real-valued networks. Table 1: PCK-h on the validation set of MPII for different configurations and methods. Each column points out to the section that presents or proposes the particular method. #stacks 1 2 3 [4] #params 6.2M 76.6% 11.0M 79.9% 17.8M 81.3% Ours w/o distil. Ours w. 
distil 80.3% 81.8% 82.2% 80.9% 82.3% 82.7% Binarization plus distillation As shown in the last row of Table 1, we obtained an improvement of 0.6% via combin- ing binarization and distillation for a binary network with a single HG distilled using a high performing real-valued teacher. Note that the binary network already incorporates the improvements proposed in section 3. Also, the last col- umn of Table 2 shows the improvements obtained by com- bining binarization with distillation for multi-stack binary Table 2: PCK-h on the validation set of MPII for various number of binarized stacked HGs. While the above results illustrate the accuracy gains by incrementally applying our improvements it is also impor- tant to evaluate the performance gains of each proposed improvement in isolation. As the results from Table 3 show, the proposed techniques also yield high improve- 120000: 80000 40000 0 -1.00 0.00 1.00 250000} 200000 150000 100000 50000 0 -1,00 0.00 1.00 100000 500000 400000 300000 200000 0 -1.00 0.00 1.00 500000 400000 300000 200000 100000 0 -1.00 0.00 1.00 (a) λ = 1 (b) λ = 5 (c) λ = 25 (d) λ = 625 Figure 7: Output distribution of tanh(λx) for λ = {1, 5, 25, 625}. Notice that starting with λ = 25, in practice, the function behaves close to sgn(x). ments when applied independently. At the same time, when evaluated in isolation the proposed modification offers a noticeable higher performance increase compared with the case where they are gradually added together (i.e 0.8% vs 1.9% for reverse-order initialization). results produced by our best-performing binary model can be seen in Fig. 9. # 5. Imagenet classification experiments Activ. Rev. init. | Prog. bin. | Distill. Method | 52-32 | Sec.33 | Sec.3.4 | Sec.3.6 | PCKB Bl x x x X | 16.6% B] ReLU x x x | 763% Ours LReLU x x x 78.1% Ours PReLU x x x 79.1% Ours x v x x 78.5% Ours x x v x 78.0% Ours x x x v 77.6% the (a) leaky non-linearities (Sec- proposed improvements: tion 3.2), (b) reverse-order initialization (Section 3.3), (c) smooth progressive quantization (Section 3.4), and (d) knowledge distillation (Section 3.6), in this section, we show that they are largely task-, architecture- and block- independent, by applying them on both of a more tradi- tional architecture (i.e AlexNet [22]) and a resnet-based one (ResNet-18) for the task of Imagenet [11] classification. Table 3: PCK-h on the validation set of MPII that evalu- ates each proposed improvement in isolation. Each column points out to the section that presents the particular method. # 4.2. Training We trained all models for human pose estimation (both real-valued and binary) following the same procedure: they were trained for 120 epochs, using a learning rate of 2.5e—4 that was dropped every 40 epochs by a factor of 10. For data augmentation, we applied random flipping, scale (0.75 x to 1.25 x) jittering and rotation (—30° to 30°). Instead of using the MSE loss, we followed the findings from [3] and used the BCE loss defined as: 1 N W H 1 N W H nay SE SES pi 108 B%; + (1 — ph) log (1 — BF), ©) n=1 i=1 j=1 where p}'; denotes the ground truth confidence map of the n—th part at pixel location (i, 7) and Di is the correspond- ing predicted output at the same location. For distillation, we simply applied a BCE loss using as ground truth the pre- dictions of the teacher network. The models were optimized using RMSProp [36]. We implemented our models using PyTorch [27]. 
Qualitative AlexNet: Similarly to [28, 10], we removed the lo- cal normalization layer preserving the same structure, namely (from input to output): C[3, 96, (11 × 11), 4], C[256, 384, (3 × 3), 1], C[96, 256, (5 × 5), 1], C[384, 256, (3 × 3), 1], C[384, 384, (3 × 3), 1], L(4096, 4096), L(4096, 4096), L(4096, 1000), apply- ing max-pooling after the 1−st, 2−nd and 5−th layers. Similarly to [28], the first and last layer were kept real. ResNet: We kept the original ResNet-18 macro- architecture unchanged, using, as proposed in [28], the basic block from [17] with pre-activation. Similarly to the previous works [28], the first and last layers were kept real. Results: As Table 4 shows, when compared against the state-of-the-art method of [28] and [10], our approach of- fers a large improvement of up to 4% in terms of abso- lute error for both Top-1 and Top-5 error metrics using both AlexNet and ResNet-18 architectures. This further validate the generality of our method. Training: We of AlexNet [22] and ResNet-18 [16] using Adam [21] starting with a learning rate of 1e − 3 that is gradually (a) Top-1 accuracy on ImageNet. (b) Top-5 accuracy on ImagenNet. Top-1 54 52 50 48 46 Acc (%) 42 40 38 Bin+PReLU-+distil. (train) Bin+PReLU+distil. (valid) Bin+PReLU (train) Bin+PReLU (valid) 0 10 20 30 50 60 70 80 40 Epoch Top-5 Bin+PReLU-+ distil. (train) Bin+PReLU+distil. (valid) Bin+PReLU (train) Bin+PReLU (valid) 0 10 20 30 50 60 70 80 40 Epoch Figure 8: ImageNet training and validation accuracy vs epoch for different variants of our binary AlexNet. Classification accuracy (%) Method AlexNet ResNet-18 Top-1 accuracy Top-5 accuracy Top-1 accuracy Top-5 accuracy BNN [10] XNOR-Net [28] Ours –expand-bbl Real valued [22] 41.8% 44.2% 48.6% 56.6% 67.1% 69.2% 72.8% 80.2% 42.2% 51.2% 53.7% 69.3% 69.2% 73.2% 76.8% 89.2% Table 4: Top-1 and Top-5 classification accuracy using binary AlexNet and ResNet-18 on Imagenet. Notice that our method offers consistent improvements across multiple architectures: both traditional ones(AlexNet) and residual ones (ResNet-18). decreased every 25 epochs by 0.1. We simply augment the data by firstly resizing the images to have 256px over the smallest dimension and then randomly cropping them to 227 × 227 for AlexNet and 224 × 224px for ResNet. We believe that further performance gains can be achieved with more aggressive augmentation. At test time, instead of random-crop we center-crop the images. To alleviate problems introduced by the binarization process, and similarly to [28], we trained the network using a large batch size, specifically 400 for AlexNet and 256 for ResNet-18. All models were trained for 80 epochs. Fig. 8 shows the top-1 and top-5 accuracy across training epochs for AlexNet (the network was initialized using the procedure proposed in Section 3.3). # 6. Conclusions Figure 9: Qualitative results produced by our binary net- work on the validation set of MPII. In this work, we proposed a series of novel techniques for highly efficient binarized convolutional neural network. We experimentally validated our results on the challeng- ing problems of human pose estimation and large scale im- age classification. Mainly, we propose (a) more appropriate non-linear activation functions, (b) reverse-order initializa- tion, (c) progressive features quantization, and (d) network stacking that improve existing state-of-the-art network bina- rization techniques. 
Furthermore, we explore the effect and efficiency of knowledge distillation procedures in the con- text of binary networks using a real-valued “teacher” and binary “student”. Overall, our results show that a performance improve- ment of up to 5% in absolute terms is obtained on the chal- lenging human pose estimation dataset of MPII. Finally, we show that our approach is architecture and task-agnostic and can increase the performance of arbitrary networks. In particular, by applying the proposed techniques to Ima- genet classification, we report an absolute performance im- provement of 4% over the current state-of-the-art using both AlexNet and ResNet architectures. # References [1] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2d human pose estimation: New benchmark and state of the art analysis. In CVPR, 2014. 5 [2] A. Bulat and G. Tzimiropoulos. Human pose estimation via convolutional part heatmap regression. In ECCV, 2016. 2 [3] A. Bulat and G. Tzimiropoulos. Binarized convolutional landmark localizers for human pose estimation and face alignment with limited resources. In ICCV, 2017. 1, 2, 3, 5, 6, 7 [4] A. Bulat and Y. Tzimiropoulos. Hierarchical binary cnns for landmark localization with limited resources. IEEE Trans- actions on Pattern Analysis and Machine Intelligence, 2018. 2, 3, 5, 6 [5] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi- person 2d pose estimation using part affinity fields. In CVPR, 2017. 2 [6] Y. Chen, C. Shen, X.-S. Wei, L. Liu, and J. Yang. Adversarial posenet: A structure-aware convolutional network for human pose estimation. CoRR, abs/1705.00389, 2017. 2 [7] X. Chu, W. Yang, W. Ouyang, C. Ma, A. L. Yuille, and X. Wang. Multi-context attention for human pose estima- tion. arXiv preprint arXiv:1702.07432, 2017. 2 [8] M. Courbariaux, Y. Bengio, and J.-P. David. Training deep neural networks with low precision multiplications. arXiv, 2014. 2 [9] M. Courbariaux, Y. Bengio, and J.-P. David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, 2015. 1, 2 [10] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neu- ral networks with weights and activations constrained to+ 1 or-1. arXiv, 2016. 1, 2, 7, 8 [11] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 2, 7 [12] R. Girdhar, G. Gkioxari, L. Torresani, M. Paluri, and D. Tran. Detect-and-track: Efficient pose estimation in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 350–359, 2018. 2 [13] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intel- ligence and statistics, pages 249–256, 2010. 3 [14] K. He, G. Gkioxari, P. Doll´ar, and R. Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Con- ference on, pages 2980–2988. IEEE, 2017. 1, 2 [15] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international con- ference on computer vision, pages 1026–1034, 2015. 3 [16] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 7 [17] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016. 7 [18] G. Hinton, O. Vinyals, and J. Dean. 
{ "id": "1702.03044" }
1904.05342
ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission
Clinical notes contain information about patients that goes beyond structured data like lab values and medications. However, clinical notes have been underused relative to structured data, because notes are high-dimensional and sparse. This work develops and evaluates representations of clinical notes using bidirectional transformers (ClinicalBERT). ClinicalBERT uncovers high-quality relationships between medical concepts as judged by humans. ClinicalBERT outperforms baselines on 30-day hospital readmission prediction using both discharge summaries and the first few days of notes in the intensive care unit. Code and model parameters are available.
http://arxiv.org/pdf/1904.05342
Kexin Huang, Jaan Altosaar, Rajesh Ranganath
cs.CL, cs.LG
CHIL 2020 Workshop
null
cs.CL
20190410
20201129
0 2 0 2 v o N 9 2 ] L C . s c [ 3 v 2 4 3 5 0 . 4 0 9 1 : v i X r a # ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission # Kexin Huang # Jaan Altosaar # Rajesh Ranganath Health Data Science, Harvard T.H. Chan School of Public Health Department of Physics, Princeton University Courant Institute of Mathematical Science, New York University # Abstract Clinical notes contain information about patients beyond structured data such as lab values or medications. However, clinical notes have been underused relative to structured data, because notes are high- dimensional and sparse. We aim to develop and evaluate a continu- ous representation of clinical notes. Given this representation, our goal is to predict 30-day hospital readmission at various timepoints of admission, including early stages and at discharge. We apply bidi- rectional encoder representations from transformers (bert) to clini- cal text. Publicly-released bert parameters are trained on standard corpora such as Wikipedia and BookCorpus, which differ from clini- cal text. We therefore pre-train bert using clinical notes and fine- tune the network for the task of predicting hospital readmission. This defines ClinicalBERT. ClinicalBERT uncovers high-quality relation- ships between medical concepts, as judged by physicians. Clinical- BERT outperforms various baselines on 30-day hospital readmission prediction using both discharge summaries and the first few days of notes in the intensive care unit on various clinically-motivated met- rics. The attention weights of ClinicalBERT can also be used to in- terpret predictions. To facilitate research, we open-source model pa- rameters, and scripts for training and evaluation. ClinicalBERT is a flexible framework to represent clinical notes. It improves on previ- ous clinical text processing methods and with little engineering can be adapted to other clinical predictive tasks. Clinical notes contain significant clinical value [5, 35, 18, 34]. A pa- tient might be associated with hundreds of notes within a stay and over their history of admissions. Compared to structured features, clinical notes provide a richer picture of the patient since they de- scribe symptoms, reasons for diagnoses, radiology results, daily ac- tivities, and patient history. Consider clinicians working in the inten- sive care unit, who need to make decisions under time constraints. Making accurate clinical predictions may require reading a large vol- ume of clinical notes. This can add to a doctor’s workload, so tools that make accurate predictions based on clinical notes might be use- ful in practice. Hospital readmission lowers patients’ quality of life and wastes money [2, 40]. One estimate puts the financial burden of readmission at $17.9 billion and the fraction of avoidable admissions at 76% [4]. Accurately predicting readmission has clinical significance, as it may improve efficiency and reduce the burden on intensive care unit doc- tors. We develop a discharge support model, ClinicalBERT, that pro- cesses patient notes and dynamically assigns a risk score of whether the patient will be readmitted within 30 days (Figure 1). As physi- cians and nurses write notes about a patient, ClinicalBERT processes the notes and updates the risk score of readmission. This score can inform provider decisions, such as whether to intervene. Besides readmission, ClinicalBERT can be adapted to other tasks such as diagnosis prediction, mortality risk estimation, or length-of-stay as- sessment. 
# ACM Reference Format: Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2020. ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission. In CHIL ’20: ACM Conference on Health, Inference, and Learning; Workshop Track. April 02–04, 2020, Toronto, ON. ACM, New York, NY, USA, 9 pages. 1 An electronic health record (ehr) stores patient information; it can save money, time, and lives [21]. Data is added to an ehr daily, so analyses may benefit from machine learning. Machine learning tech- niques leverage structured features in ehr data, such as lab results or electrocardiography measurements, to uncover patterns and improve predictions [30, 36, 37]. However, unstructured, high-dimensional, and sparse information such as clinical notes are difficult to use in clinical machine learning models. Our goal is to create a framework for modeling clinical notes that can uncover clinical insights and make medical predictions. CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON © 2020 Authors. # 1.1 Background Electronic health records are useful for risk prediction [13]. Clinical notes in such electronic health records use abbreviations, jargon, and have an unusual grammatical structure. Building models that learn useful representations of clinical text is a challenge [9]. Bag-of-words assumptions have been used to model clinical text [38], in addition to log-bilinear word embedding models such as Word2Vec [20, 23]. The latter word embedding models learn representations of clinical text using local contexts of words. But clinical notes are long and their words are interdependent [39], so these methods cannot capture the long-range dependencies needed to capture clinical meaning. Natural language processing methods where representations include global, long-range information can yield boosts in performance on clinical tasks [24, 25, 11]. Modeling clinical notes requires captur- ing interactions between distant words. The need to model this long- range structure makes clinical notes suitable for contextual repre- sentations like bidirectional encoder representations from transform- ers (bert) [11]. Lee et al. [17] apply bert to biomedical literature, and [31] use bert to enhance clinical concept extraction. Concurrent to our work, Alsentzer et al. [1] also apply bert to clinical notes; we CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON Kexin Huang, Jaan Altosaar, and Rajesh Ranganath Probability 0.74 0.76 0.89 of Readmission 4 4 4 0.74 aus 0.35 0.34 4 t t ClinicalBERT IN IN IN IN IN Clinical Notes Ys <p mee [> Radiology Nursing Physician Echo Discharge Pharmacy Patient Timeline Report Progress Report Report Summary Note | oon Day of Admission Day 2 Day of Discharge Figure 1: ClinicalBERT learns deep representations of clinical notes that are useful for tasks such as readmission prediction. In this example, care providers add notes to an electronic health record during a patient’s admission, and the model dynamically updates the patient’s risk of being readmitted within a 30-day window. evaluate and adapt ClinicalBERT to the clinical task of readmission and pre-train on longer sequence lengths. weights can be visualized to understand which elements of clinical notes are relevant to a prediction. Methods to evaluate models of clinical notes are also relevant to Clin- icalBERT. Wang et al. [34] and Chiu et al. [10] evaluate the quality of biomedical embeddings by computing correlations between doctor- rated relationships and embedding similarity scores. 
We adopt simi- lar evaluation techniques in our work. Good representations of clinical text require good performance on downstream tasks. We use 30-day hospital readmission prediction as a case study since it is of clinical importance. We refer readers to Fu- toma et al. [12] for comparisons of traditional machine learning meth- ods such as random forests and neural networks on hospital readmis- sion tasks. Work in this area has focused on integrating a multitude of covariates about a patient into a model [7]. Caruana et al. [8] develop an interpretable model for readmission prediction based on general- ized additive models and highlight the need for intelligible clinical predictions. Rajkomar et al. [26] predict readmission using a stan- dard ontology from notes alongside structured information. Much of this previous work uses information at discharge, whereas Clinical- BERT can predict readmission during a patient’s stay. # 1.2 Significance ClinicalBERT is bert [11] specialized to clinical notes. Clinical notes are lengthy and numerous, and the computationally-efficient architecture of bert can model long-term dependencies. Compared to two popular models of clinical text, Word2Vec and FastText, Clin- icalBERT more accurately captures clinical word similarity. We de- scribe one way to scale up ClinicalBERT to handle large collections of clinical notes for clinical prediction tasks. In a case study of hospi- tal readmission prediction, ClinicalBERT outperforms competitive deep language models. We open source ClinicalBERT1 pre-training and readmission model parameters along with scripts to reproduce results and apply the model to new tasks. # 2 Methods ClinicalBERT learns deep representations of clinical text. These representations can uncover clinical insights (such as predictions of disease), find relationships between treatments and outcomes, or create summaries of corpora. ClinicalBERT is an application of the bert model [11] to clinical corpora to address the challenges of clinical text. Representations are learned using medical notes and further processed for clinical tasks; we demonstrate ClinicalBERT on the task of hospital readmission prediction. ClinicalBERT improves readmission prediction over methods that center on discharge summaries. Making a prediction using a dis- charge summary at the end of a stay means that there are fewer op- portunities to reduce the chance of readmission. To build a clinically- relevant model, we define a task of predicting readmission at any timepoint since a patient was admitted. To evaluate models on read- mission prediction, we define a metric motivated by a clinical chal- lenge. Medicine suffers from alarm fatigue [28, 3]. This means useful classification rules for medicine need to have high positive predictive value (precision). We evaluate model performance at a fixed positive predictive value. We show that ClinicalBERT has the highest recall compared to popular methods for representing clinical notes. Clini- calBERT can be readily applied to other tasks such as mortality pre- diction and disease prediction. In addition, ClinicalBERT attention 2.1 BERT Model bert is a deep neural network that uses the transformer encoder ar- chitecture [33] to learn embeddings for text. We omit a detailed de- scription of the architecture; it is described in [33]. The transformer encoder architecture is based on a self-attention mechanism. 
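Equation (1) in Section 2.3 below formalizes this mechanism; as a minimal numerical illustration of scaled dot-product self-attention, the sketch here attends over a toy sequence. The random token embeddings are placeholders, and the learned per-head query/key/value projections of the full transformer are omitted.

```python
# Minimal numpy sketch of scaled dot-product self-attention,
# Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V, as formalized in Eq. (1).
# The token embeddings are random placeholders, and the learned per-head
# query/key/value projections of the full transformer are omitted.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)      # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """X: (seq_len, d) token embeddings; here queries = keys = values = X."""
    d = X.shape[-1]
    weights = softmax(X @ X.T / np.sqrt(d))       # (seq_len, seq_len) attention map
    return weights @ X, weights                   # attended values and the map

tokens = np.random.randn(8, 64)                   # 8 tokens, 64-dim embeddings
attended, attn_map = self_attention(tokens)
print(attended.shape, attn_map.shape)             # (8, 64) (8, 8)
```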
The pre-training objective function for the model is defined by two un- supervised tasks: masked language modeling and next sentence pre- diction. The text embeddings and model parameters are fit using sto- chastic optimization. For downstream tasks, the fine-tuning phase is problem-specific; we describe a fine-tuning task specific to clinical text. 1https://github.com/kexinhuang12345/clinicalBERT ClinicalBERT [CLs] —> —-» [CLS] acute —> Transformer —-> acute Sentence |) MASK] —> Encoders » dia #4st0 —> — Histo his > —> his Sentence 2 sep —> —> sep [MASK] —» —> tisis —> Next Sentence Label Figure 2: ClinicalBERT learns deep representations of clinical text using two unsupervised language modeling tasks: masked language modeling and next sentence prediction. In masked language modeling, a fraction of input tokens are held out for prediction; in next sentence prediction, ClinicalBERT predicts whether two input sentences are consecutive. # 2.2 Clinical Text Embedding A clinical note input to ClinicalBERT is represented as a collection of tokens. These tokens are subword units extracted from text in a preprocessing step [29]. In ClinicalBERT, a token in a clinical note is represented as a sum of the token embedding, a learned segment embedding, and a position embedding. When multiple sequences of tokens are fed to ClinicalBERT, the segment embedding identifies which sequence a token is associated with. The position embedding of a token is a learned set of parameters corresponding to the token’s position in the input sequence (position embeddings are shared across tokens). A classification token [CLS] is inserted in front of every sequence of input tokens for use in classification tasks. # 2.3 Self-Attention Mechanism The attention function is computed on an input sequence using the embeddings associated with the input tokens. The attention function takes as input a set of queries, keys, and values. To construct the queries, keys, and values, every input embedding is multiplied by learned sets of weights (it is called ‘self’ attention because the values are the same as the keys and queries). For a single query, the output of the attention function is a weighted combination of values. The query and a key determine the weight for a value. Denote a set of queries, keys, and values by Q, K, and V. The attention function is Attention(𝑄, 𝐾, 𝑉 ) = softmax( 𝑄𝐾𝑇 √ 𝑑 𝑉 ), (1) where d is the dimensionality of the queries, keys, and values. This function can be computed efficiently and can capture long-range interactions between any two elements of the input sequence [33]. The length and complex patterns in clinical notes makes the transformer architecture with self-attention a good choice. (We later describe how this attention mechanism can allow interpretation of ClinicalBERT predictions.) # 2.4 Pre-training ClinicalBERT The quality of learned representations of text depends on the text the model was trained on. bert is trained on BooksCorpus and Wikipedia. But these datasets are distinct from clinical notes, as jargon and abbreviations prevail: clinical notes have different syntax and grammar than books or encyclopedias. These differences make clinical notes hard to understand without expertise. ClinicalBERT is pre-trained on clinical notes as follows. CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON [ets] > ‘cute > Transformer dia—_> Encoders #4st0 > hows . 
Hi >| W |— P(readmit = 1 | hos) his > sep > isis —> Figure 3: ClinicalBERT models clinical notes and can be read- ily adapted to clinical tasks such as predicting 30-day readmis- sion. The model is fed a patient’s clinical notes, and the patient’s risk of readmission within a 30-day window is predicted using a linear layer applied to the classification representation ℎ [CLS] learned by ClinicalBERT. This fine-tuning task is described in Equation (2) ClinicalBERT uses the same pre-training tasks as [11]. Masked lan- guage modeling means masking some input tokens and training the model to predict the masked tokens. In next sentence prediction, two sentences are fed to the model. The model predicts whether these sentences are consecutive. The pre-training objective function is the sum of the log-likelihood of the predicted masked tokens and the log- likelihood of the binary variable indicating whether two sentences are consecutive. # 2.5 Fine-tuning ClinicalBERT After pre-training, ClinicalBERT is fine-tuned on a clinical task: readmission prediction. Let readmit be a binary indicator of readmis- sion of a patient in the next 30 days. Given clinical notes as input, the output of ClinicalBERT is used to predict the probability of read- mission: (2) 𝑃 (readmit = 1|ℎ [CLS] ) = 𝜎 (𝑊 ℎ [CLS] ) where 𝜎 is the sigmoid function, ℎ [CLS] is the output of the model corresponding to the classification token, and W is a parameter matrix. The model parameters are fine-tuned to maximize the log-likelihood of this binary classifier. # 3 Empirical Study 3.1 Data We use the Medical Information Mart for Intensive Care III (mimic- iii) dataset [15]. mimic-iii consists of the electronic health records of 58,976 unique hospital admissions from 38,597 patients in the intensive care unit of the Beth Israel Deaconess Medical Center between 2001 and 2012. There are 2,083,180 de-identified notes associated with the admissions. Preprocessing of the clinical notes is described in S2. If text that exists in the test set of the fine-tuning task is used for pre-training, then training and test metrics will not be independent. To avoid this, admissions are split into five folds for independent runs, with four folds for pre-training (and training during fine-tuning) and the fifth for testing during fine-tuning. # 3.2 Empirical Study I: Language Modeling and Clinical Word Similarity We developed ClinicalBERT, a model of clinical notes whose repre- sentations can be used for clinical tasks. Before evaluating its per- formance as a model of readmission, we study its performance in two experiments. First, we find that ClinicalBERT outperforms bert CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON Table 1: ClinicalBERT improves over bert on clinical language modeling. We report the five-fold average accuracy of masked language modeling (predicting held-out tokens) and next sen- tence prediction (a binary prediction of whether two sentences are consecutive), on the mimic-iii corpus of clinical notes. Model ClinicalBERT bert Language modeling Next sentence prediction 0.857 ± 0.002 0.495 ± 0.007 0.994 ± 0.003 0.539 ± 0.006 in clinical language modeling. Then we compare ClinicalBERT to popular word embedding models using a clinical word similarity task. The relationships between medical concepts learned by Clini- calBERT correlate with human evaluations of similarity. 3.2.1 Clinical Language Modeling. 
We report the five-fold average accuracy of the masked language modeling and next sentence predic- tion tasks on the mimic-iii data in Table 1. bert underperforms, as it was not trained on clinical text, highlighting the need for building models tailored to clinical data such as ClinicalBERT. 3.2.2 Qualitative Analysis. We test ClinicalBERT on data collected to assess medical term similarity [22]. The data is 30 pairs of medical terms whose similarity is rated by physicians. To compute an embed- ding for a medical term, ClinicalBERT is fed a sequence of tokens cor- responding to the term. Following [11], the sum of the last four hid- den states of ClinicalBERT encoders is used to represent each medi- cal term. Medical terms vary in length, so the average is computed over the hidden states of subword units. This results in a fixed 768- dimensional vector for each medical term. We visualize the similarity of medical terms using dimensionality reduction [19], and display a cluster heart-related concepts in Figure 4. Heart-related concepts such as myocardial infarction, atrial fibrillation, and myocardium are close together; renal failure and kidney failure are also close. This demon- strates that ClinicalBERT captures some clinical semantics. 3.2.3 Quantitative Analysis. We benchmark embedding models us- ing the clinical concept dataset in [22]. The data consists of concept pairs, and the similarity of a pair is rated by physicians, with a score ranging from 1.0 to 4.0 (least similar to most similar). To evaluate representations of clinical text, we calculate the similarity between two concepts’ embeddings a and b using cosine similarity, Similarity(𝑎, 𝑏) = 𝑎 · 𝑏 ∥𝑎∥ ∥𝑏 ∥ (3) We calculate the Pearson correlation between physician ratings of medical concept similarity and the cosine similarity between model embeddings. Models with high correlation capture human-rated simi- larity between clinical terms. Wang et al. [34] conducts a similar eval- uation on this data using Word2Vec word embeddings [20] trained on clinical notes, biomedical literature, and Google News. However, this work relies on a private clinical note dataset from The Mayo Clinic to train the Word2Vec model. For a fair comparison with ClinicalBERT, we retrain the Word2Vec model using clinical notes from mimic- iii. The Word2Vec model is trained on 2.8B words from mimic-iii with the same hyperparameters as [34]. Word2Vec cannot handle out-of-vocabulary words; we ignore the three medical pairs in the Kexin Huang, Jaan Altosaar, and Rajesh Ranganath Table 2: ClinicalBERT captures physician-assessed relation- ships between clinical terms. The Pearson correlation is com- puted between the cosine similarity of embeddings learned by models of clinical text and physician ratings of the similarity of medical concepts in the dataset of [22]. These numbers are com- parable to the best result, 0.632, from [34]. Model ClinicalBERT Word2Vec FastText Pearson correlation 0.670 0.553 0.487 clinical concepts dataset that do not have embeddings (correlation is computed using the remaining 27 medical pairs). Because of this shortcoming, we also train a FastText model [6] on mimic-iii, which models out-of-vocabulary words using subword units. FastText and Word2Vec are trained on the full mimic-iii data, so we also pre-train ClinicalBERT on the full data for comparison. Table 2 shows how these models correlate with physician, with ClinicalBERT more ac- curately correlating with physician judgment. 
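A minimal sketch of this evaluation protocol, pairing the cosine similarity of Equation (3) with a Pearson correlation against physician ratings; the 768-dimensional embeddings and the ratings below are random placeholders standing in for the 30 physician-scored concept pairs of [22].

```python
# Sketch of the Section 3.2.3 evaluation: cosine similarity between concept
# embeddings (Eq. 3), correlated with physician similarity ratings via Pearson's r.
# Embeddings and ratings are random placeholders, not the real benchmark data.
import numpy as np
from scipy.stats import pearsonr

def cosine_similarity(a, b):
    """Eq. (3): cosine similarity between two concept embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
n_pairs = 30                                               # physician-rated concept pairs
pairs = [(rng.normal(size=768), rng.normal(size=768)) for _ in range(n_pairs)]
physician_ratings = rng.uniform(1.0, 4.0, size=n_pairs)    # 1 = least, 4 = most similar

model_similarities = [cosine_similarity(a, b) for a, b in pairs]
r, _ = pearsonr(model_similarities, physician_ratings)
print(f"Pearson correlation with physician ratings: {r:.3f}")
```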
# 3.3 Empirical Study II: 30-Day Hospital Readmission Prediction

The representations learned by ClinicalBERT can help address problems in the clinic. We build a model to predict hospital readmission from clinical notes. Compared to benchmark language models, ClinicalBERT accurately predicts readmission. Further, ClinicalBERT predictions can be interrogated by visualizing attention weights to reveal interpretable patterns in medical data.

3.3.1 Cohort. We select a patient cohort from mimic-iii using patient covariates. The binary readmit label associated with each patient admission is computed as follows. Admissions where a patient is readmitted within 30 days are labeled readmit = 1. All other patient admissions are labeled zero, including patients with appointments within 30 days (to model unexpected readmission). In-hospital death precludes readmission, so admissions with deaths are removed. Newborn patients account for 7,863 admissions. Newborns are in the neonatal intensive care unit, where most undergo testing and are sent back for routine care. This leads to a different distribution of clinical notes and readmission labels; we filter out newborns and focus on non-newborn readmissions. The final cohort contains 34,560 patients with 2,963 positive readmission labels and 42,358 negative labels.

3.3.2 Scalable Readmission Prediction. Patients are often associated with many notes. ClinicalBERT has a fixed length of input sequence, so notes are concatenated and split to this maximum length. Predictions for patients with many notes are computed by binning the predictions on each subsequence. The probability of readmission for a patient is computed as follows. For a patient whose notes are split into n subsequences, ClinicalBERT outputs a probability for each subsequence. The probability of readmission is computed using the predictions for each subsequence:

P(readmit = 1 | h_patient) = (P_max^n + P_mean^n · n/c) / (1 + n/c),   (4)

Figure 4: ClinicalBERT reveals interpretable patterns in medical concepts. The model is trained on clinical notes from mimic-iii, and the embeddings of clinical terms from the dataset in [22] are plotted using the t-distributed stochastic neighbor embedding algorithm for dimensionality reduction [19]. We highlight a subset of the plot centered on a cluster of terms relating to heart conditions such as myocardial infarction, heart failure, and kidney failure.

The scaling factor c controls the influence of the number of subsequences n, and h_patient is the implicit ClinicalBERT representation of all of a patient's notes. The maximum and mean probabilities of readmission over the n subsequences are P_max^n and P_mean^n. Computing readmission probability using Equation (4) outperforms predictions using the mean for each subsequence by 3–8%. This formula is motivated by observations: some subsequences do not contain information about readmission (such as tokens corresponding to progress reports), whereas others do. The risk of readmission should be computed using subsequences that correlate with readmission, and the effect of unimportant subsequences should be minimized. This is accomplished by using the maximum probability over subsequences. Second, noise in subsequences decreases performance.
For example, consider the case where one noisy subsequence has a pre- diction of 0.8, but all other subsequences have predictions close to zero. Using only the maximum would lead to a false prediction if the maximum is due to noise, so we include the average probability of readmission across subsequences. This leads to a trade-off between the mean and maximum probabilities of readmission in Equation (4). Finally, if there are a large number of subsequences (for a patient with many clinical notes), there is a higher probability of a noisy maximum probability of readmission. This means longer sequences may need a larger weight on the mean prediction. We include this weight as an 𝑛/𝑐 scaling factor, with c accounting for patients with many notes. The denominator results from normalizing the risk score to the unit interval. The parameter c is selected using the validation set; 𝑐 = 2 was selected. 3.3.3 Evaluation. For validation and testing, the cohort is split into five folds. In each fold 20% is used for validation (10%) and test (10%) sets, with the rest for training. Each model is evaluated using three metrics: 1. Area under the receiver operating characteristic curve (AUROC): the area under the true positive rate versus the false positive rate. 2. Area under the precision-recall curve (AUPRC): the area under the plot of precision versus recall. 3. Recall at precision of 80% (RP80): for readmission prediction, false positives are important. To minimize the number of false CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON Table 3: ClinicalBERT accurately predicts 30-day readmission using discharge summaries. The mean and standard deviation of 5-fold cross validation is reported. ClinicalBERT outper- forms the bag-of-words model, the bi-lstm, and bert deep lan- guage models. Model ClinicalBERT 0.714 ± 0.018 0.684 ± 0.025 Bag-of-words bi-lstm 0.694 ± 0.025 bert 0.692 ± 0.019 AUROC AUPRC 0.701 ± 0.021 0.674 ± 0.027 0.686 ± 0.029 0.678 ± 0.016 RP80 0.242 ± 0.111 0.217 ± 0.119 0.223 ± 0.103 0.172 ± 0.101 positives and hence minimize the risk of alarm fatigue, we fix precision to 80% (or, 20% false positives in the positive class predictions). This threshold is used to calculate recall. This leads to a clinically-relevant metric that enables building models that minimize the false positive rate. 3.3.4 Models. We compare ClinicalBERT to three competitive mod- els. Boag et al. [5] conclude that a bag-of-words model and a long short-term memory (lstm) model with Word2Vec embeddings work well for predictive tasks on mimic-iii clinical notes. We also com- pare to bert with trainable weights. Training details are in Appen- dix A. 1. ClinicalBERT: the model parameters include the weights of the encoder network and the learned classifier weights. 2. Bag-of-words: this method uses word counts to represent a note. The 5,000 most frequent words are used as features. Logistic regression with L2 regularization is used to predict readmission. 3. Bidirectional long short-term memory (bi-lstm) and Word2Vec [27, 14]: a bi-lstm is used to model words in a sequence. The final hidden layer is used to predict readmission. 4. bert: this is what ClinicalBERT is based on, but bert is pre- trained not on clinical notes but standard language corpora. We also compared to ELMo [24], where a standard 1,024-dimensional embedding for each text subsequence is computed and a neural net- work classifier is used to fit the training readmission labels. The per- formance was much worse, and we omit these results. 
This may be because the weights in ELMo are not learned, and the fixed-length embedding may not be able to store the information needed for a clas- sifier to detect signal from long and complex clinical text. 3.3.5 Readmission Prediction with Discharge Summaries. Discharge summaries contain essential information of patient admissions since they are used by the post-hospital care team and by doctors in fu- ture visits [32]. The summary may contain information like a pa- tient’s discharge condition, procedures, treatments, and significant findings [16]. This means discharge summaries should have predic- tive value for hospital readmission. Table 3 shows that ClinicalBERT outperforms competitors in terms of precision and recall on a task of readmission prediction using patient discharge summaries. CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON Kexin Huang, Jaan Altosaar, and Rajesh Ranganath Table 4: ClinicalBERT outperforms competitive baselines on readmission prediction using clinical notes from early on within patient admissions. In mimic-iii data, admission and discharge times are available, but clinical notes do not have timestamps. The cutoff time indicates the range of admission durations that are fed to the model from early in a patient’s admission. For example, in the 24–48h column, the model may only take as input a patient’s notes up to 36h because of that patient’s specific admission time. Metrics are reported as the mean and standard deviation of 5 independent runs. Model ClinicalBERT Bag-of-words bi-lstm bert Cutoff time 24–48h 48–72h 24–48h 48–72h 24–48h 48–72h 24–48h 48–72h AUROC 0.674 ± 0.038 0.672 ± 0.039 0.648 ± 0.029 0.654 ± 0.035 0.649 ± 0.044 0.656 ± 0.035 0.659 ± 0.034 0.661 ± 0.028 AUPRC 0.674 ± 0.039 0.677 ± 0.036 0.650 ± 0.027 0.657 ± 0.026 0.660 ± 0.036 0.668 ± 0.028 0.656 ± 0.021 0.668 ± 0.021 RP80 0.154 ± 0.099 0.170 ± 0.114 0.144 ± 0.094 0.122 ± 0.106 0.143 ± 0.080 0.150 ± 0.081 0.141 ± 0.080 0.167 ± 0.088 0.656 +0.021 0.141 + 0.080 0.668 + 0.021 0.167 + 0.088 volume T D 1 fl i fl 0.05 0.10 0.15 0.20 0.25 0.30 Token Position Attention Weight 3.3.6 Readmission Prediction with Early Clinical Notes. Discharge summaries can be used to predict readmission, but may be written af- ter a patient has left the hospital. Therefore, discharge summaries are not useful for intervention—doctors cannot intervene when a patient has left the hospital. Models that dynamically predict readmission in the early stages of a patient’s admission are relevant to clinicians. For the second set of readmission prediction experiments, a maxi- mum of the first 48 or 72 hours of a patient’s notes are concatenated. These concatenated notes are used to predict readmission. Since we separate notes into subsequences of the same length, the training set consists of all subsequences up to a cutoff time. The model is tested given notes up to 24–48h or 48–72h of a patient’s admission. We do not consider 0-24h cutoff time because there may be too few notes for good predictions. Note that readmission predictions from a model are not actionable if a patient has been discharged. For evaluation, patients that are discharged within the cutoff time are filtered out. Models of readmission prediction are evaluated using the metrics. Table 4 shows that ClinicalBERT outperforms competitors in both experiments. The AUROC and AUPRC results show that Clinical- BERT has more confidence and higher accuracy. 
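Two computations behind these results, the patient-level score of Equation (4) (with the validation-selected c = 2) and the recall-at-80%-precision metric of Section 3.3.3, can be sketched as follows; the probabilities and labels are synthetic placeholders, and scikit-learn's precision_recall_curve is used for the precision-recall computation.

```python
# Sketch of two quantities used above: the patient-level readmission score of
# Eq. (4), with the validation-selected c = 2, and the RP80 metric of Section
# 3.3.3 (recall at 80% precision). All probabilities and labels are synthetic.
import numpy as np
from sklearn.metrics import precision_recall_curve

def patient_readmission_probability(subseq_probs, c=2.0):
    """Eq. (4): (P_max^n + P_mean^n * n/c) / (1 + n/c) over n subsequences."""
    p = np.asarray(subseq_probs, dtype=float)
    n = len(p)
    return (p.max() + p.mean() * n / c) / (1.0 + n / c)

def recall_at_precision(y_true, y_score, target_precision=0.80):
    """Largest recall achievable while precision stays at or above the target."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    feasible = recall[precision >= target_precision]
    return float(feasible.max()) if feasible.size else 0.0

print(patient_readmission_probability([0.12, 0.05, 0.81, 0.20]))  # one patient's score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                       # toy labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.3, 0.9])     # toy risk scores
print(recall_at_precision(y_true, y_score))
```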
At a fixed rate of false alarms, ClinicalBERT recalls more patients that have been readmitted, and its performance increases as the length of admissions increases and the model has access to more clinical notes.

3.3.7 Interpretability. Clinician mistrust of data-driven methods is sensible: predictions from a neural network are difficult to understand for humans, and it is not clear why a model makes a certain prediction or what parts of the data are most informative. ClinicalBERT uses several attention mechanisms which can be used to inspect predictions by visualizing terms correlated with hospital readmission. For a clinical note fed to ClinicalBERT, attention mechanisms compute a distribution over every term in a sentence, given a query term. For a given query vector q computed from an input token, the attention weight distribution is defined as

AttentionWeight(q, K) = softmax(qK^T / √d).   (5)

The attention weights are used to compute the weighted sum of values. A high attention weight between a query and key token means the interaction between these tokens is predictive of readmission.

Figure 5: ClinicalBERT provides interpretable predictions, by revealing which terms in clinical notes are predictive of patient readmission. The self-attention mechanisms in ClinicalBERT can be used to interpret model predictions on clinical notes. The input sentence “he has experienced acute chronic diastolic heart failure in the setting of volume overload due to his sepsis.” is fed to the model (this sentence is representative of a clinical note found in mimic-iii). Equation (5) is used to compute a distribution over tokens in this sentence, where every query token is itself a token in the same input sentence. In the panel, we show one of the self-attention mechanisms in ClinicalBERT, and only label terms that have high attention weight. The x-axis labels are query tokens and the y-axis labels are key tokens.

In the ClinicalBERT encoder, there are 144 self-attention mechanisms (or, 12 multi-head attention mechanisms for each of the 12 transformer encoders). After training, each mechanism specializes to different patterns in clinical notes that are indicative of readmission. To illustrate, a sentence representative of a mimic-iii note is fed to ClinicalBERT. Both the queries and keys are the tokens in the sentence. Attention weight distributions for every query are computed using Equation (5) and visualized in Figure 5. The panel shows an attention mechanism that is activated for the words ‘chronic’ and ‘acute’ given any query term. This means some attention heads focus on specific predictive terms, a similar computation to a bag-of-words model. Intuitively, the word ‘chronic’ is a predictor of readmission.

# 4 Guidelines on using ClinicalBERT in Practice

ClinicalBERT is pre-trained on mimic-iii, which consists of patients from ICUs in one Boston hospital. As notes vary by institution and clinical setting (e.g. ICU vs outpatient), to use ClinicalBERT in practice we recommend training ClinicalBERT using the private ehr dataset available at the practitioner's institution. After fitting the model, ClinicalBERT can be used for downstream clinical tasks (e.g. mortality prediction or length-of-stay prediction). We include a tutorial for adapting ClinicalBERT for such downstream classification tasks in the repository.

# 5 Discussion

We developed ClinicalBERT, a model for learning deep representations of clinical text.
Empirically, ClinicalBERT is an accurate lan- guage model and captures physician-assessed semantic relationships in clinical text. In a 30-day hospital readmission prediction task, Clin- icalBERT outperforms a deep language model and yields a large rel- ative increase on recall at a fixed rate of false alarms. Future work includes engineering to scale ClinicalBERT to capture dependencies in long clinical notes; the max and sum operations in Equation (4) may not capture correlations within long notes. Finally, note that the mimic-iii dataset we use is small compared to the large volume of clinical notes available internally at hospitals. Rather than using pre-trained mimic-iii ClinicalBERT embeddings, this suggests that the use of ClinicalBERT in hospitals should entail re-training the model on this larger collection of notes for better performance. The publicly-available ClinicalBERT model parameters can be used to evaluate performance on clinically-relevant prediction tasks based on clinical notes. # 6 Acknowledgements We thank Noémie Elhadad for helpful discussion. Grass icon by Milinda Courey from the Noun Project. # References [1] E. Alsentzer, J. R. Murphy, W. Boag, W.-H. Weng, D. Jin, T. Naumann, and M. B. A. McDermott. “Publicly Available Clinical BERT Embeddings”. In: arXiv:1904.03323 (2019). [2] G. F. Anderson and E. P. Steinberg. “Hospital readmissions in the Medicare population”. In: New England Journal of Medicine 21 (1984). CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON [3] D. Banerjee, C. Thompson, C. Kell, R. Shetty, Y. Vetteth, H. Grossman, A. DiBiase, and M. Fowler. “An informatics-based approach to reducing heart failure all-cause readmissions: the Stanford heart failure dashboard”. In: Journal of the American Medical Informatics Association 3 (2016). [4] S. Basu Roy, A. Teredesai, K. Zolfaghar, R. Liu, D. Hazel, S. Newman, and A. Marinez. “Dynamic Hierarchical Clas- sification for Patient Risk-of-Readmission”. In: Knowledge Discovery and Data Mining (2015). [5] W. Boag, D. Doss, T. Naumann, and P. Szolovits. “What’s in a Note? Unpacking Predictive Value in Clinical Note Repre- sentations”. In: AMIA Joint Summits on Translational Science (2018). P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov. “Enrich- ing word vectors with subword information”. In: Transactions of the Association for Computational Linguistics (2017). [7] X. Cai, O. Perez-Concha, E. Coiera, F. Martin-Sanchez, R. Day, D. Roffe, and B. Gallego. “Real-time prediction of mor- tality, readmission, and length of stay using electronic health record data”. In: Journal of the American Medical Informat- ics Association 3 (2015). [6] [8] R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. El- hadad. “Intelligible models for healthcare: Predicting pneu- monia risk and hospital 30-day readmission”. In: Knowledge Discovery and Data Mining. 2015. [9] W. W. Chapman, P. M. Nadkarni, L. Hirschman, L. W. D’Avolio, G. K. Savova, and O. Uzuner. “Overcoming barriers to NLP for clinical text: the role of shared tasks and the need for addi- tional creative solutions”. In: Journal of the American Medi- cal Informatics Association 5 (2011). [10] B. Chiu, G. Crichton, A. Korhonen, and S. Pyysalo. “How to Train good Word Embeddings for Biomedical NLP”. In: Proceedings of the 15th Workshop on Biomedical Natural Language Processing, ACL 2016 (). J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding”. In: arXiv:1810.04805 (2018). J. Futoma, J. 
Morris, and J. Lucas. “A comparison of models for predicting early hospital readmissions”. In: Journal of Biomedical Informatics (2015). [11] [13] B. A. Goldstein, A. M. Navar, M. J. Pencina, and J. P. A. Ioan- nidis. “Opportunities and challenges in developing risk pre- diction models with electronic health records data: a system- atic review”. In: Journal of the American Medical Informat- ics Association (2017). [14] S. Hochreiter and J. Schmidhuber. “Long Short-Term Mem- ory”. In: Neural Computation 8 (1997). [15] A. E. W. Johnson, T. J. Pollard, L. Shen, L.-w. H. Lehman, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. Anthony Celi, and R. G. Mark. “MIMIC-III, a freely accessible critical care database”. In: Scientific Data (2016). [16] A. J. Kind and M. A. Smith. “Documentation of mandated discharge summary components in transitions from acute to subacute care”. In: Agency for Healthcare Research and Quality (2008). CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON [17] J. Lee, W. Yoon, S. Kim, D. Kim, S. Kim, C. H. So, and J. Kang. “BioBERT: a pre-trained biomedical language repre- sentation model for biomedical text mining”. In: arXiv:1901.08746 (2019). J. Liu, Z. Zhang, and N. Razavian. “Deep EHR: Chronic Disease Prediction Using Medical Notes”. In: Proceedings of the 3rd Machine Learning for Healthcare Conference. 2018. [19] L. van der Maaten and G. Hinton. “Visualizing data using t- [18] SNE”. In: Journal of Machine Learning Research (2008). [20] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. “Distributed representations of words and phrases and their compositionality”. In: Advances in Neural Information Pro- cessing Systems. 2013. [21] C. A. Pedersen, P. J. Schneider, and D. J. Scheckelhoff. “ASHP national survey of pharmacy practice in hospital settings: Pre- scribing and transcribing—2016”. In: American Journal of Health-System Pharmacy 17 (2017). [22] T. Pedersen, S. V. Pakhomov, S. Patwardhan, and C. G. Chute. “Measures of semantic similarity and relatedness in the biomed- ical domain”. In: Journal of Biomedical Informatics 3 (2007). J. Pennington, R. Socher, and C. Manning. “Glove: Global Vectors for Word Representation”. In: EMNLP (2014). [24] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. “Deep contextualized word rep- resentations”. In: arXiv:1802.05365 (2018). [25] A. Radford. “Improving Language Understanding by Gen- erative Pre-Training”. https : / / s3 - us - west - 2 . amazonaws . com/openai-assets/research-covers/language-unsupervised/ language_understanding_paper.pdf. 2018. [26] A. Rajkomar, E. Oren, K. Chen, A. M. Dai, N. Hajaj, M. Hardt, P. J. Liu, X. Liu, J. Marcus, M. Sun, P. Sundberg, H. Yee, K. Zhang, Y. Zhang, G. Flores, G. E. Duggan, J. Irvine, Q. Le, K. Litsch, A. Mossin, J. Tansuwan, D. Wang, J. Wexler, J. Wilson, D. Ludwig, S. L. Volchenboum, K. Chou, M. Pearson, S. Madabushi, N. H. Shah, A. J. Butte, M. D. Howell, C. Cui, G. S. Corrado, and J. Dean. “Scalable and accurate deep learning with electronic health records”. In: NPJ Digital Medicine 1 (2018). [27] M. Schuster and K. K. Paliwal. “Bidirectional recurrent neural networks”. In: IEEE Trans. Signal Processing (1997). S. Sendelbach and M. Funk. “Alarm fatigue: a patient safety concern”. In: AACN Advanced Critical Care 4 (2013). [29] R. Sennrich, B. Haddow, and A. Birch. “Neural Machine Translation of Rare Words with Subword Units”. In: Proceed- ings of the 54th Annual Meeting of the Association for Com- putational Linguistics. 2016. [28] [30] B. 
Shickel, P. J. Tighe, A. Bihorac, and P. Rashidi. “Deep EHR: A survey of recent advances in deep learning techniques for electronic health record (EHR) analysis”. In: IEEE Journal of Biomedical and Health Informatics 5 (2018). [31] Y. Si, J. Wang, H. Xu, and K. Roberts. “Enhancing clinical concept extraction with contextual embeddings”. In: Journal of the American Medical Informatics Association 11 (2019). [32] C. Van Walraven, R. Seth, P. C. Austin, and A. Laupacis. “Ef- fect of discharge summary availability during post-discharge visits on hospital readmission”. In: Journal of General Inter- nal Medicine 3 (2002). Kexin Huang, Jaan Altosaar, and Rajesh Ranganath [33] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. “Attention is all you need”. In: Advances in Neural Information Processing Systems. 2017. [34] Y. Wang, S. Liu, N. Afzal, M. Rastegar-Mojarad, L. Wang, F. Shen, P. Kingsbury, and H. Liu. “A comparison of word embeddings for the biomedical natural language processing”. In: Journal of Biomedical Informatics (2018). [35] W.-H. Weng, K. B. Wagholikar, A. T. McCray, P. Szolovits, and H. C. Chueh. “Medical Subdomain Classification of Clin- ical Notes Using a Machine Learning-Based Natural Lan- guage Processing Approach”. In: BMC Medical Informatics and Decision Making 1 (2017). [36] C. Xiao, E. Choi, and J. Sun. “Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review”. In: Journal of the Ameri- can Medical Informatics Association 10 (2018). [37] K.-H. Yu, A. L. Beam, and I. S. Kohane. “Artificial intelli- gence in healthcare”. In: Nature Biomedical Engineering 10 (2018). [38] Y. Zhang, R. Jin, and Z.-H. Zhou. “Understanding bag-of- words model: a statistical framework”. In: International Jour- nal of Machine Learning and Cybernetics 1 (2010). [39] Y. Zhang, R. Henao, Z. Gan, Y. Li, and L. Carin. “Multi-Label Learning from Medical Plain Text with Convolutional Resid- ual Models”. In: Proceedings of the 3rd Machine Learning for Healthcare Conference. 2018. [40] R. B. Zuckerman, S. H. Sheingold, E. J. Orav, J. Ruhter, and A. M. Epstein. “Readmissions, observation, and the hospital readmissions reduction program”. In: New England Journal of Medicine 16 (2016). # A Hyperparameters and training details The parameters are initialized to the bert Base parameters released by [11]; we follow their recommended hyper-parameter settings. The model dimensionality is 768. We use the Adam optimizer with a learning rate of 2𝑥10−5. The maximum sequence length supported by the model is set to 512, and the model is first trained using shorter sequences. The details of constructing a sequence are in [11]. For efficient mini-batching that avoids padding mini-batch elements of variable lengths with too many zeros, a corpus is split into multi- ple sequences of equal lengths. Many sentences are packed into a sequence until the maximum length is reached; a sequence may be composed of many sentences. The next sentence prediction task de- fined in [11] might more accurately be termed a next sequence pre- diction task. Our ClinicalBERT model is first trained using a maxi- mum sequence length of 128 for 100,000 iterations on the masked language modeling and next sentence prediction tasks, with a batch size 64. Next, the model is trained on longer sequences of maximum length 512 for an additional 100,000 steps with a batch size of 8. 
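A minimal PyTorch sketch of the fine-tuning head for readmission prediction, matching the three-layer classifier of shape 768 x 2048, 2048 x 768, and 768 x 1 described below and the sigmoid output of Equation (2); the ReLU nonlinearities, the placeholder [CLS] vectors, and the single optimization step are assumptions made for illustration.

```python
# Minimal PyTorch sketch of the readmission head: three linear layers of shape
# 768x2048, 2048x768, and 768x1 on top of the [CLS] representation, with a
# sigmoid output as in Eq. (2). The ReLU activations and the placeholder
# [CLS] vectors are assumptions for illustration only.
import torch
import torch.nn as nn

class ReadmissionHead(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, 2048), nn.ReLU(),
            nn.Linear(2048, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, h_cls):                          # h_cls: (batch, 768)
        return torch.sigmoid(self.net(h_cls)).squeeze(-1)

head = ReadmissionHead()
optimizer = torch.optim.Adam(head.parameters(), lr=2e-5)   # learning rate as in this appendix
criterion = nn.BCELoss()

h_cls = torch.randn(56, 768)                           # placeholder [CLS] vectors, batch 56
labels = torch.randint(0, 2, (56,)).float()            # placeholder readmit labels
optimizer.zero_grad()
loss = criterion(head(h_cls), labels)
loss.backward()
optimizer.step()
```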
When using text that exists in the test set of the fine-tuning task for pre-training, the training and test set during fine-tuning will not be independent. To avoid this, admissions are split into five folds for independent runs, with four folds for pre-training and training dur- ing fine-tuning and the fifth for testing during fine-tuning. Hence, for each independent run, during pre-training, we use all the discharge ClinicalBERT summaries associated with admissions in the four folds. During fine- tuning for readmission task, ClinicalBERT is trained for three epochs with batch size 56 and learning rate 2𝑥10−5. The binary classifier is a three layers neural network of shape 768 x 2048, 2048 x 768, and 768 x 1. We fine-tune ClinicalBERT with three epochs and early stopped on validation loss as the criteria. For Bi-lstm, for the input word embedding, the Word2Vec model is used. The Bi-lstm has 200 output units, with a dropout rate of 0.1. The hidden state is fed into a global max pooling layer and a fully- connected layer with a dimensionality of 50, followed by a rectifier activation function. The rectifier is followed by a fully-connected layer with a single output unit with sigmoid activation function. The binary classification objective function is optimized using the Adam adaptive learning rate (40). The Bi-lstm is trained for three epochs with a batch size of 64 with early stopping based on the validation loss. For the empirical study, we use a server with 2 Intel Xeon E5- 2670v2 2.5GHZ CPUs, 128GB RAM and 2 NVIDIA Tesla P40 GPUs. # B Preprocessing Notes for Pretraining ClinicalBERT ClinicalBERT requires minimal preprocessing. First, words are con- verted to lowercase and line breaks and carriage returns are removed. Then de-identified brackets and remove special characters like ==, – are removed. The next sentence prediction pretraining task described in Section 5 requires two sentences at every iteration. The SpaCy sentence segmentation package is used to segment each note. Since clinical notes don’t follow rigid standard language grammar, we find rule-based segmentation has better results than dependency parsing- based segmentation. Various segmentation signs that misguide rule- based segmentators are removed (such as 1.2.) or replaced (M.D., dr. with MD, Dr). Clinical notes can include various lab results and medications that also contain numerous rule-based separators, such as 20mg, p.o., q.d.. To address this, segmentations that have less than 20 words are fused into the previous segmentation so that they are not singled out as different sentences. CHIL ’20 Workshop, April 02–04, 2020, Toronto, ON
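A minimal sketch of the preprocessing just described; the regular expressions and the use of spaCy's rule-based sentencizer are illustrative stand-ins for the exact rules in the released code.

```python
# Sketch of the preprocessing described in this appendix: lowercase, remove line
# breaks, strip de-identification brackets and separator characters, segment with
# spaCy's rule-based sentencizer, and fuse segments shorter than 20 words into
# the previous one. The regular expressions are illustrative, not the exact rules.
import re
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")                            # rule-based segmentation

def preprocess_note(text, min_words=20):
    text = text.lower().replace("\n", " ").replace("\r", " ")
    text = re.sub(r"\[\*\*[^\]]*\*\*\]", " ", text)    # MIMIC-style de-id brackets
    text = re.sub(r"==+|--+|–", " ", text)             # separator characters
    segments = [s.text.strip() for s in nlp(text).sents]
    fused = []
    for seg in segments:
        if fused and len(seg.split()) < min_words:
            fused[-1] = fused[-1] + " " + seg          # fuse short segments
        else:
            fused.append(seg)
    return fused

print(preprocess_note("Pt admitted with CHF. BP 120/80. Plan: diuresis and follow up."))
```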
{ "id": "1802.05365" }
1904.02920
Branched Multi-Task Networks: Deciding What Layers To Share
In the context of multi-task learning, neural networks with branched architectures have often been employed to jointly tackle the tasks at hand. Such ramified networks typically start with a number of shared layers, after which different tasks branch out into their own sequence of layers. Understandably, as the number of possible network configurations is combinatorially large, deciding what layers to share and where to branch out becomes cumbersome. Prior works have either relied on ad hoc methods to determine the level of layer sharing, which is suboptimal, or utilized neural architecture search techniques to establish the network design, which is considerably expensive. In this paper, we go beyond these limitations and propose an approach to automatically construct branched multi-task networks, by leveraging the employed tasks' affinities. Given a specific budget, i.e. number of learnable parameters, the proposed approach generates architectures, in which shallow layers are task-agnostic, whereas deeper ones gradually grow more task-specific. Extensive experimental analysis across numerous, diverse multi-tasking datasets shows that, for a given budget, our method consistently yields networks with the highest performance, while for a certain performance threshold it requires the least amount of learnable parameters.
http://arxiv.org/pdf/1904.02920
Simon Vandenhende, Stamatios Georgoulis, Bert De Brabandere, Luc Van Gool
cs.CV
Accepted at BMVC 2020
null
cs.CV
20190405
20200813
VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS 0 2 0 2 # Branched Multi-Task Networks: Deciding What Layers To Share g u A 3 1 Simon Vandenhende1 [email protected] Stamatios Georgoulis2 [email protected] Bert De Brabandere1 [email protected] Luc Van Gool12 [email protected] 1 PSI-ESAT KU Leuven Leuven, Belgium 2 CVL/TRACE ETH Zurich Zurich, Switzerland ] # V C . s c [ | 5 v 0 2 9 2 0 . 4 0 9 1 : v i X r a # Abstract In the context of multi-task learning, neural networks with branched architectures have often been employed to jointly tackle the tasks at hand. Such ramified networks typically start with a number of shared layers, after which different tasks branch out into their own sequence of layers. Understandably, as the number of possible network config- urations is combinatorially large, deciding what layers to share and where to branch out becomes cumbersome. Prior works have either relied on ad hoc methods to determine the level of layer sharing, which is suboptimal, or utilized neural architecture search tech- niques to establish the network design, which is considerably expensive. In this paper, we go beyond these limitations and propose an approach to automatically construct branched multi-task networks, by leveraging the employed tasks’ affinities. Given a specific bud- get, i.e. number of learnable parameters, the proposed approach generates architectures, in which shallow layers are task-agnostic, whereas deeper ones gradually grow more task-specific. Extensive experimental analysis across numerous, diverse multi-tasking datasets shows that, for a given budget, our method consistently yields networks with the highest performance, while for a certain performance threshold it requires the least amount of learnable parameters. 1 # Introduction Deep neural networks are usually trained to tackle different tasks in isolation. Humans, in contrast, are remarkably good at solving a multitude of tasks concurrently. Biological data processing appears to follow a multi-tasking strategy too; instead of separating tasks and solving them in isolation, different processes seem to share the same early processing layers in the brain – see e.g. V1 in macaques [15]. Drawing inspiration from such observations, deep learning researchers began to develop multi-task networks with branched architectures. As a whole, multi-task networks [6] seek to improve generalization and processing ef- ficiency through the joint learning of related tasks. Compared to the typical learning of separate deep neural networks for each of the individual tasks, multi-task networks come © 2020. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. 1 2 VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS with several advantages. First, due to their inherent layer sharing [14, 20, 21, 26, 31], the resulting memory footprint is typically substantially lower. Second, as features in the shared layers do not need to be calculated repeatedly for the different tasks, the overall inference speed is often higher [31, 34]. Finally, multi-task networks may outperform their single-task counterparts [20, 32, 43, 44]. Evidently, there is merit in utilizing multi-task networks. When it comes to designing them, however, a significant challenge is to decide on the layers that need to be shared among tasks. Assuming a hard parameter sharing setting1, the number of possible network configurations grows quickly with the number of tasks. 
As a result, a trial-and-error procedure to define the optimal architecture becomes unwieldy. Re- sorting to neural architecture search [11] techniques is not a viable option too, as in this case, the layer sharing has to be jointly optimized with the layers types, their connectivity, etc., rendering the problem considerably expensive. Instead, researchers have recently ex- plored more viable alternatives, like routing [39], stochastic filter grouping [5], and feature partitioning [35], which are, however, closer to the soft parameter sharing setting. Previous works on hard parameter sharing opted for the simple strategy of sharing the initial layers in the network, after which all tasks branch out simultaneously. The point at which the branch- ing occurs is usually determined ad hoc [14, 20, 43]. This situation hurts performance, as a suboptimal grouping of tasks can lead to the sharing of information between unrelated tasks, known as negative transfer [47]. In this paper, we go beyond the aforementioned limitations and propose a novel approach to decide on the degree of layer sharing between multiple visual recognition tasks in order to eliminate the need for manual exploration. To this end, we base the layer sharing on measur- able levels of task affinity or task relatedness: two tasks are strongly related, if their single task models rely on a similar set of features. [46] quantified this property by measuring the performance when solving a task using a variable sets of layers from a model pretrained on a different task. However, their approach is considerably expensive, as it scales quadrati- cally with the number of tasks. Recently, [10] proposed a more efficient alternative that uses representation similarity analysis (RSA) to obtain a measure of task affinity, by computing correlations between models pretrained on different tasks. Given a dataset and a number of tasks, our approach uses RSA to assess the task affinity at arbitrary locations in a neural network. The task affinity scores are then used to construct a branched multi-task network in a fully automated manner. In particular, our task clustering algorithm groups similar tasks together in common branches, and separates dissimilar tasks by assigning them to different branches, thereby reducing the negative transfer between tasks. Additionally, our method allows to trade network complexity against task similarity. We provide extensive empiri- cal evaluation of our method, showing its superiority in terms of multi-task performance vs computational resources. # 2 Related work Multi-task learning. Multi-task learning (MTL) [6, 41] is associated with the concept of jointly learning multiple tasks under a single model. This comes with several advantages, as described above. Early work on MTL often relied on sparsity constraints [4, 19, 27, 30, 45] 1In this setting, the input is first encoded through a stack of shared layers, after which tasks branch out into their own sequence of task-specific layers [14, 20, 21, 31, 43]. Alternatively, a set of task-specific networks can be used in conjunction with a feature sharing mechanism [26, 33, 42]. The latter approach is termed soft parameter sharing in the literature. VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS to select a small subset of features that could be shared among all tasks. However, this can lead to negative transfer when not all tasks are related to each other. 
A general solution to this problem is to cluster tasks based on prior knowledge about their similarity or relatedness [1, 3, 12, 22, 48]. In the deep learning era, MTL models can typically be classified as utilizing soft or hard parameter sharing. In soft parameter sharing, each task is assigned its own set of parameters and a feature sharing mechanism handles the cross-task talk. Cross-stitch networks [33] softly share their features among tasks, by using a linear combination of the activations found in multiple single task networks. Sluice networks [42] extend cross-stitch networks and allow to learn the selective sharing of layers, subspaces and skip connections. In a different vein, multi-task attention networks [26] use an attention mechanism to share a general feature pool amongst task-specific networks. In general, MTL networks using soft parameter sharing are limited in terms of scalability, as the size of the network tends to grow linearly with the number of tasks. In hard parameter sharing, the parameter set is divided into shared and task-specific pa- rameters. MTL models using hard parameter sharing are often based on a generic framework with a shared off-the-shelf encoder, followed by task-specific decoder networks [7, 20, 34, 43]. Multilinear relationship networks [29] extend this framework by placing tensor normal priors on the parameter set of the fully connected layers. [14] proposed the construction of a hierarchical network, which predicts increasingly difficult tasks at deeper layers. A limi- tation of the aforementioned approaches is that the branching points are determined ad hoc, which can easily lead to negative transfer if the predefined task groupings are suboptimal. In contrast, in our branched multi-task networks, the degree of layer sharing is automatically determined in a principled way, based on task affinities. Our work bears some similarity to fully-adaptive feature sharing [31] (FAFS), which starts from a thin network where tasks initially share all layers, but the final one, and dy- namically grows the model in a greedy layer-by-layer fashion. Task groupings, in this case, are decided on the probability of concurrently simple or difficult examples across tasks. Dif- ferently, (1) our method clusters tasks based on feature affinity scores, rather than example difficulty, which is shown to achieve better results for a variety of datasets; (2) the tree struc- ture is determined offline using the precalculated affinities for the whole network, and not online in a greedy layer-by-layer fashion, which promotes task groupings that are optimal in a global, rather than local, sense. Neural architecture search. Neural architecture search (NAS) [11] aims to automate the construction of the network architecture. Different algorithms can be characterized based on their search space, search strategy or performance estimation strategy. Most existing works on NAS, however, are limited to task-specific models [24, 25, 37, 38, 49]. This is to be expected as when using NAS for MTL, layer sharing has to be jointly optimized with the layers types, their connectivity, etc., rendering the problem considerably expensive. To alleviate the heavy computation burden, a recent work [23] implemented an evolutionary architecture search for multi-task networks, while other researchers explored more viable alternatives, like routing [39], stochastic filter grouping [5], and feature partitioning [35]. 
In contrast to traditional NAS, the proposed methods do not build the architecture from scratch, but rather start from a predefined backbone network for which a layer sharing scheme is automatically determined.

Transfer learning. Transfer learning [36] uses the knowledge obtained when solving one task, and applies it to a different but related task. Our work is loosely related to transfer learning, as we use it to measure levels of task affinity. [46] provided a taxonomy for task transfer learning to quantify such relationships. However, their approach scales unfavorably w.r.t. the number of tasks, and we opted for a more efficient alternative proposed by [10]. The latter uses RSA to obtain a measure of task affinity, by computing correlations between models pretrained on different tasks. In our method, we use the performance metric from their work to compare the usefulness of different feature sets for solving a particular task.

Loss weighting. One of the known challenges of jointly learning multiple tasks is properly weighting the loss functions associated with the individual tasks. Early work [20] used the homoscedastic uncertainty of each task to weigh the losses. Gradient normalization [7] balances the learning of tasks by dynamically adapting the gradient magnitudes in the network. Liu et al. [26] weigh the losses to match the pace at which different tasks are learned. Dynamic task prioritization [14] prioritizes the learning of difficult tasks. [43] cast MTL as a multi-objective optimization problem, with the overall objective of finding a Pareto optimal solution. Note that, addressing the loss weighting issue in MTL is out of the scope of this work. In fact, all our experiments are based on a simple uniform loss weighing scheme.

Algorithm 1 Branched Multi-Task Networks - Task clustering
1: Input: Tasks T, K images I, a sharable encoder E with D locations where we can branch, a set of task specific decoders Dt and a computational budget C.
2: for t in T do
3:     Train the encoder E and task-specific decoder Dt for task t.
4:     RDMt ← RDM(E, D, I)    > RDM for task t
5: end for
6: Ad,i,j ← rs(triu(RDMi,d), triu(RDMj,d)) for ti, tj in T and d in locations    > Task affinity
7: D = 1 − A    > Task dissimilarity
8: Return: Task-grouping with minimal task dissimilarity that fits within C

# 3 Method

In this paper, we aim to jointly solve N different visual recognition tasks T = {t1, . . . , tN} given a computational budget C, i.e. number of parameters or FLOPS. Consider a backbone architecture: an encoder, consisting of a sequence of shared layers or blocks fl, followed by a decoder with a few task-specific layers. We assume an appropriate structure for layer sharing to take the shape of a tree. In particular, the first layers are shared by all tasks, while later layers gradually split off as they show more task-specific behavior. The proposed method aims to find an effective task grouping for the sharable layers fl of the encoder, i.e. grouping related tasks together in the same branches of the tree. When two tasks are strongly related, we expect their single-task models to rely on a similar feature set [46]. Based on this viewpoint, the proposed method derives a task affinity score at various locations in the sharable encoder. The number of locations D can be freely determined as the number of candidate branching locations.
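For concreteness, the following minimal sketch mirrors lines 4-7 of Algorithm 1: it builds a representation dissimilarity matrix per task and per location, correlates the RDM upper triangles with Spearman's rs to obtain the affinity tensor A, and sets the dissimilarity to 1 - A. It is an illustrative, hypothetical example (random arrays stand in for features extracted from trained single-task networks), not the authors' released code.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(features):
    """Representation dissimilarity matrix: 1 - Pearson correlation between the
    (flattened) feature vectors of every pair of the K held-out images."""
    flat = features.reshape(features.shape[0], -1)     # (K, feature_dim)
    return 1.0 - np.corrcoef(flat)                     # (K, K), zeros on the diagonal

def affinity_tensor(feats):
    """feats[t][d]: features of task t at branching location d, shape (K, ...).
    Returns the task affinity tensor A of shape (D, N, N), cf. line 6 of Algorithm 1."""
    tasks = sorted(feats)
    N, D = len(tasks), len(feats[tasks[0]])
    K = feats[tasks[0]][0].shape[0]
    iu = np.triu_indices(K, k=1)                       # upper triangle, diagonal excluded
    A = np.ones((D, N, N))
    for d in range(D):
        triangles = [rdm(feats[t][d])[iu] for t in tasks]
        for i in range(N):
            for j in range(i + 1, N):
                rho = spearmanr(triangles[i], triangles[j]).correlation
                A[d, i, j] = A[d, j, i] = rho
    return A

# Toy run: N=3 tasks, D=4 locations, K=20 images, random arrays in place of real features.
rng = np.random.default_rng(0)
feats = {t: [rng.normal(size=(20, 64)) for _ in range(4)] for t in ("seg", "depth", "inst")}
A = affinity_tensor(feats)
dissimilarity = 1.0 - A                                # line 7 of Algorithm 1
print(A.shape, dissimilarity[0].round(2))
```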
As such, the resulting task affinity scores are used for the automated construction of a branched multi-task network that fits the computational budget C. Fig. 3 illustrates our pipeline, while Algorithm 1 summarizes the whole procedure.

[Figure 1 diagram: (left) train single-task networks; (middle) representation similarity analysis at D locations; (right) correlation of the RDMs into the affinity tensor A]

Figure 1: Pipeline overview. (left) We train a single-task model for every task t ∈ T. (middle) We use RSA to measure the task affinity at D predefined locations in the sharable encoder. In particular, we calculate the representation dissimilarity matrices (RDM) for the features at D locations using K images, which gives a D × K × K tensor per task. (right) The affinity tensor A is found by calculating the correlation between the RDM matrices, which results in a three-dimensional tensor of size D × N × N, with N the number of tasks.

Figure 2: Our pipeline's output is a branched multi-task network, similar to how NAS techniques output sample architectures. An example branched multi-task network is visualized here.

Figure 3: The proposed method: (a) calculate task affinities at various locations in the sharable encoder; (b) build a branched multi-task network based on the computed affinities.

# 3.1 Calculate task affinity scores

As mentioned, we rely on RSA to measure task affinity scores. This technique has been widely adopted in the field of neuroscience to draw comparisons between behavioral models and brain activity. Inspired by how [10] applied RSA to select tasks for transfer learning, we use the technique to assess the task affinity at predefined locations in the sharable encoder. Consequently, using the measured levels of task affinity, tasks are assigned in the same or different branches of a branched multi-task network, subject to the computational budget C.

The procedure to calculate the task affinity scores is the following. As a first step, we train a single-task model for each task ti ∈ T. The single-task models use an identical encoder E - made of all sharable layers fl - followed by a task-specific decoder Dti. The decoder contains only task-specific operations and is assumed to be significantly smaller in size compared to the encoder. As an example, consider jointly solving a classification and a dense prediction task. Some fully connected layers followed by a softmax operation are typically needed for the classification task, while an additional decoding step with some upscaling operations is required for the dense prediction task. Of course, the appropriate loss functions are applied in each case. Such operations are part of the task-specific decoder Dti. The different single-task networks are trained under the same conditions.

At the second step, we choose D locations in the sharable encoder E where we calculate a two-dimensional task affinity matrix of size N × N. When concatenated, this results in a three-dimensional tensor A of size D × N × N that holds the task affinities at the selected locations. To calculate these task affinities, we have to compare the representation dissimilarity matrices (RDM) of the single-task networks - trained in the previous step - at the specified D locations.
To do this, a held-out subset of K images is required. The latter images serve to compare the dissimilarity of their feature representations in the single-task networks for every pair of images. Specifically, for every task ti, we characterize these learned feature representations at the selected locations by filling a tensor of size D × K × K. This tensor contains the dissimilarity scores 1 − ρ between feature representations, with ρ the Pearson correlation coefficient. Specifically, RDMd,i, j is found by calculating the dissimilarity score between the features at location d for image i and j. The 3-D tensors are linearized to 1-D tensors to calculate the pearson correlation coefficient. For a specific location d in the network, the computed RDMs are symmetrical, with a diagonal of zeros. For every such location, we measure the similarity between the upper or lower triangular part of the RDMs belonging to the different single-task networks. We use the Spearman’s correlation coefficient rs to measure similarity. When repeated for every pair of tasks, at a specific location d, the result is a symmetrical matrix of size N × N, with a diagonal of ones. Concatenating over the D locations in the sharable encoder, we end up with the desired task affinity tensor of size D × N × N. Note that, in contrast to prior work [31], the described method focuses on the features used to solve the single tasks, rather than the examples and how easy or hard they are across tasks, which is shown to result in better task groupings in Section 4. Furthermore, the computational overhead to determine the task affinity scores based on feature correlations is negligible. We conclude that the computational cost of the method boils down to pre-training N single task networks. A detailed computational cost analysis can be found in the supplementary materials. Other measures of task similarity [2, 9] probed the features from a network pre-trained on ImageNet. This avoids the need to pre-train a set of single-task networks first. However, in this case, the task dictionaries only consisted of various, related classification problems. Differently, we consider more diverse, and loosely related task (see Section 4). In our case, it is arguably more important to learn the task-specific information needed to solve a task. This motivates the use of pre-trained single-task networks. # 3.2 Construct a branched multi-task network Given a computational budget C, we need to derive how the layers (or blocks) fl in the sharable encoder E should be shared among the tasks in T . Each layer fl ∈ E is represented as a node in the tree, i.e. the root node contains the first layer f0, and nodes at depth l contain layer(s) fl. The granularity of the layers fl corresponds to the intervals at which we measure the task affinity in the sharable encoder, i.e. the D locations. When the encoder is split into bl branches at depth l, this is equivalent to a node at depth l having bl children. The leaves of the tree contain the task-specific decoders Dt . Fig. 2 shows an example of such a tree using the aforementioned notation. Each node is responsible for solving a unique subset of tasks. The branched multi-task network is built with the intention to separate dissimilar tasks by assigning them to separate branches. To this end, we define the dissimilarity score be- tween two tasks ti and t j at location d as 1 − Ad,i, j, with A the task affinity tensor2. 
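Putting the pieces together, a candidate grouping can then be scored directly from the dissimilarity tensor 1 - A. The sketch below is illustrative only (it is not the authors' implementation) and uses the clustering cost spelled out in the next paragraph: per location, the average over clusters of the maximum pairwise dissimilarity inside each cluster, summed over the branching locations; the grouping with the lowest total cost that still fits the budget C is kept.

```python
import numpy as np
from itertools import combinations

def cluster_cost(clusters, dissimilarity_d):
    """Cost of one grouping at one location: average over clusters of the maximum
    pairwise dissimilarity between the tasks placed in the same cluster."""
    costs = []
    for cluster in clusters:
        if len(cluster) == 1:
            costs.append(0.0)
        else:
            costs.append(max(dissimilarity_d[i, j] for i, j in combinations(cluster, 2)))
    return float(np.mean(costs))

def tree_cost(groupings, dissimilarity):
    """groupings[d] is the task grouping used at branching location d (a list of clusters);
    the total cost sums the per-location cluster costs over all locations."""
    return sum(cluster_cost(g, dissimilarity[d]) for d, g in enumerate(groupings))

# Toy example: 3 tasks, 2 locations, a hand-made dissimilarity tensor of shape (D, N, N).
dissimilarity = np.array([[[0.0, 0.2, 0.6], [0.2, 0.0, 0.7], [0.6, 0.7, 0.0]],
                          [[0.0, 0.4, 0.9], [0.4, 0.0, 0.8], [0.9, 0.8, 0.0]]])
candidate = [[[0, 1, 2]],            # all tasks still shared at the first location
             [[0, 1], [2]]]          # task 2 branches off at the second location
print(tree_cost(candidate, dissimilarity))
# In the full method, every tree that fits the parameter budget C is enumerated
# and the one with the smallest cost is kept.
```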
The branched multi-task network is found by minimizing the sum of the task dissimilarity scores at every location in the sharable encoder. In contrast to prior work [31], the task affinity (and 2This is not to be confused with the dissimilarity score used to calculate the RDM elements RDMd,i, j. VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS dissimilarity) scores are calculated a priori. This allows us to determine the task clustering offline. Since the number of tasks is finite, we can enumerate all possible trees that fall within the given computational budget C. Finally, we select the tree that minimizes the task dissimilarity score. The task dissimilarity score of a tree is defined as Ccluster = ∑l Cl cluster, where Cl cluster is found by averaging the maximum distance between the dissimilarity scores of the elements in every cluster. The use of the maximum distance encourages the separation of dissimilar tasks. By taking into account the clustering cost at all depths, the procedure can find a task grouping that is considered optimal in a global sense. This is in contrast to the greedy approach in [31], which only minimizes the task dissimilarity locally, i.e. at isolated locations in the network. # 4 Experiments In this section, we evaluate the proposed method on a number of diverse multi-tasking datasets, that range from real to semi-real data, from few to many tasks, from dense predic- tion to classification tasks, and so on. For every experiment, we describe the most important elements of the setup. We report the number of parameters (#P) for every model to facilitate a fair comparison. Additional implementation details can be found in the supplementary materials. # 4.1 Cityscapes Dataset. The Cityscapes dataset [8] considers the scenario of urban scene understanding. The train, validation and test set contain respectively 2975, 500 and 1525 real images, taken by driving a car in Central European cities. It considers a few dense prediction tasks: se- mantic segmentation (S), instance segmentation (I) and monocular depth estimation (D). As in prior works [20, 43], we use a ResNet-50 encoder with dilated convolutions, followed by a Pyramid Spatial Pooling (PSP) [17] decoder. Every input image is rescaled to 512 x 256 pixels. We reuse the approach from [20] for the instance segmentation task, i.e. we consider the proxy task of regressing each pixel to the center of the instance it belongs to. Results. We measure the task affinity after every block (1 to 4) in the ResNet-50 model (see Fig. 4). The task affinity decreases in the deeper layers, due to the features becoming more task-specific. We compare the performance of the task groupings generated by our method with those by other approaches. As in [32], the performance of a multi-task model m is defined as the average per-task performance drop/increase w.r.t. a single-task baseline b. We trained all possible task groupings that can be derived from branching the model in the last three ResNet blocks. Fig. 5 visualizes performance vs number of parameters for the trained architectures. Depending on the available computational budget C, our method generates a specific task grouping. We visualize these generated groupings as a blue path in Fig. 5, when gradually increasing the computational budget C. Similarly, we consider the task groupings when branching the model based on the task affinity measure proposed by FAFS [31] (green path). 
We find that, in comparison, the task groupings devised by our method achieve higher performance within a given computational budget C. Furthermore, in the majority of cases, for a fixed budget C the proposed method is capable of selecting the best performing task grouping w.r.t. performance vs parameters metric (blue vs other).

[Figure: task affinity curves for each task pair (Seg+Dep, Seg+Inst, Inst+Dep) over the measurement location after each ResNet block, and multi-task performance versus number of parameters (M) for Single-Task, Ours, FAFS and Other Groupings]

Figure 4: Task affinity scores measured after each ResNet-50 block on Cityscapes.

Figure 5: Number of parameters versus multi-task performance on Cityscapes for different task groupings. The 'Other Groupings' contain any remaining tree structures that can be found by randomly branching the model in the last three ResNet blocks.

We also compare our branched multi-task networks with cross-stitch networks [33], NDDR-CNNs [13] and MTAN [26] in Table 2. While cross-stitch nets and NDDR-CNNs give higher multi-task performance, attributed to their computationally expensive soft parameter sharing setting, our branched networks can strike a better trade-off between the performance and number of parameters. In particular, we can effectively sample architectures which lie between the extremes of a baseline multi-task model and a cross-stitch or NDDR-CNN architecture. Finally, our models provide a more computationally efficient alternative to the MTAN model, which reports similar performance while using more parameters.

# 4.2 Taskonomy

Dataset. The Taskonomy dataset [46] contains semi-real images of indoor scenes, annotated for 26 (dense prediction, classification, etc.) tasks. Out of the available tasks, we select scene categorization (C), semantic segmentation (S), edge detection (E), monocular depth estimation (D) and keypoint detection (K). The task dictionary was selected to be as diverse as possible, while still keeping the total number of tasks reasonable for all computations. We use the tiny split of the dataset, containing 275k train, 52k validation and 54k test images. We reuse the architecture and training setup from [46]: the encoder is based on ResNet-50; a 15-layer fully-convolutional decoder is used for the pixel-to-pixel prediction tasks.

Results. The task affinity is again measured after every ResNet block. Since the number of tasks increased to five, it is very expensive to train all task groupings exhaustively, as done above. Instead, we limit ourselves to three architectures that are generated when gradually increasing the parameter budget. As before, we compare our task groupings against the method from [31]. The numerical results can be found in Table 1. The task groupings themselves are shown in the supplementary materials. The effect of the employed task grouping technique can be seen from comparing the performance of our models against the corresponding FAFS models, generated by [31]. The latter are consistently outperformed by our models.
Compared to the results on Cityscapes VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS Method Single task MTL baseline MTAN Cross-stitch NDDR-CNN FAFS - 1 FAFS - 2 FAFS - 3 Ours - 1 Ours - 2 Ours - 3 D (L1)↓ 0.60 0.75 0.71 0.61 0.66 0.74 0.80 0.74 0.76 0.74 0.74 S (IoU)↑ 43.5 47.8 43.8 44.0 45.9 46.1 39.9 46.1 47.6 48.0 47.9 C (Top-5)↑ 66.0 56.0 59.6 58.2 64.5 62.7 62.4 64.9 63.3 63.6 64.5 E (L1)↓ 0.99 1.37 1.86 1.35 1.05 1.30 1.68 1.05 1.12 0.96 0.94 K (L1)↓ 0.23 0.34 0.40 0.50 0.45 0.39 0.52 0.27 0.29 0.35 0.26 #P (M) 224 130 158 224 258 174 188 196 174 188 196 ∆MT L (%)↑ + 0.00 - 22.50 -37.36 - 32.29 - 21.02 - 24.5 - 48.32 - 8.48 - 11.88 - 12.66 - 4.93 Table 1: Results on the tiny Taskonomy test set. The results for edge (E) and keypoints (K) detection were multiplied by a factor of 100 for better readability. The FAFS models refer to generating the task groupings with the task affinity technique proposed by [31]. (Fig. 5), we find that the multi-task performance is much more susceptible to the employed task groupings, possibly due to negative transfer. Furthermore, we observe that none of the soft parmeter sharing models can handle the larger, more diverse task dictionary: the perfor- mance decreases when using these models, while the number of parameters increases. This is in contrast to our branched multi-task networks, which seem to handle the diverse set of tasks rather positively. As opposed to [46], but in accordance with [32], we show that it is possible to solve many heterogeneous tasks simultaneously when the negative transfer is limited, by separating dissimilar tasks from each other in our case. In fact, our approach is the first to show such consistent performance across different multi-tasking scenarios and datasets. Existing approaches seem to be tailored for particular cases, e.g. few/correlated tasks, synthetic-like data, binary classification only tasks, etc., whereas we show stable per- formance across the board of different experimental setups. # 4.3 CelebA Dataset. The CelebA dataset [28] contains over 200k images of celebrities, labeled with 40 facial attribute categories. The training, validation and test set contain 160k, 20k and 20k images respectively. We treat the prediction of each facial attribute as a single binary classification task, as in [18, 31, 43]. To ensure a fair comparison: we reuse the thin-ω model from [31] in our experiments; the parameter budget C is set for the model to have the same amount of parameters as prior work. Results. Table 3 shows the results on the CelebA test set. Our branched multi-task net- works outperform earlier works [18, 31] when using a similar amount of parameters. Since the Ours-32 model (i.e. ω is 32) only differs from the FAFS model on the employed task grouping technique, we can conclude that the proposed method devises more effective task groupings for the attribute classification tasks on CelebA. Furthermore, the Ours-32 model performs on par with the VGG-16 model, while using 64 times less parameters. We also compare our results with the ResNet-18 model from [43]. The Ours-64 model performs 1.35% better compared to the ResNet-18 model when trained with a uniform loss weigh- ing scheme. More noticeably, the Ours-64 model performs on par with the state-of-the-art ResNet-18 model that was trained with the MGDA loss weighing scheme from [43], while at the same time using 31% less parameters (11.2 vs 7.7 M). 
9 10 VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS Method Single task MTL baseline MTAN Cross-stitch NDDR-CNN Ours - 1 Ours - 2 Ours - 3 S (IoU)↑ 65.2 61.5 62.8 65.1 65.6 62.1 62.7 64.1 I (px)↓ 11.7 11.8 11.8 11.6 11.6 11.7 11.7 11.6 D (px)↓ 2.57 2.66 2.66 2.55 2.54 2.66 2.62 2.62 #P (M) 138 92 113 140 190 107 114 116 ∆MT L (%)↑ +0.00 -3.33 -2.53 +0.42 +0.89 -2.68 -1.84 -0.96 Method MOON [40] Independent Group [16] MCNN [16] MCNN-AUX [16] VGG-16 [31] FAFS [31] GNAS [16] Res-18 (Uniform) [43] Res-18 (MGDA) [43] Ours-32 Ours-64 Acc. (%) 90.94 91.06 91.26 91.29 91.44 90.79 91.63 90.38 91.75 91.46 91.73 #P (M) 119.73 - - - 134.41 2.09 7.73 11.2 11.2 2.20 7.73 Table 2: Results on the Cityscapes validation set. Table 3: Results on the CelebA test set. The Ours-32, Ours-64 architectures are found by optimizing the task clustering for the pa- rameter budget that is used in the FAFS, GNAS model respectively. # 5 Conclusion In this paper, we introduced a principled approach to automatically construct branched multi- task networks for a given computational budget. To this end, we leverage the employed tasks’ affinities as a quantifiable measure for layer sharing. The proposed approach can be seen as an abstraction of NAS for MTL, where only layer sharing is optimized, without having to jointly optimize the layers types, their connectivity, etc., as done in traditional NAS, which would render the problem considerably expensive. Extensive experimental analysis shows that our method outperforms existing ones w.r.t. the important metric of multi-tasking per- formance vs number of parameters, while at the same time showing consistent results across a diverse set of multi-tasking scenarios and datasets. Acknowledgment This work is sponsored by the Flemish Government under the Artificiele Intelligentie (AI) Vlaanderen programme. The authors also acknowledge support by Toyota via the TRACE project and MACCHINA (KU Leuven, C14/18/065). VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS # A Supplementary Materials # A.1 Cityscapes The encoder is a ResNet-50 model with dilated convolutions [? ], pre-trained on ImageNet. We use a PSP module [17] for the task-specific decoders. Every input image is rescaled to 512 x 256 pixels. We upsample the output of the PSP decoders back to the input resolu- tion during training. The outputs are upsampled to 2048 x 1024 pixels during testing. The semantic segmentation task is learned with a weighted pixel-wise cross-entropy loss. We reuse the approach from [20] for the instance segmentation task, i.e. we consider the proxy task of regressing each pixel to the center of the instance it belongs to. The depth estimation task is learned using an L1 loss. The losses are normalized to avoid having the loss of one task overwhelm the others during training. The hyperparameters were optimized with a grid search procedure to ensure a fair comparison across all compared approaches. Single-task models We tested batches of size 4, 6 and 8, poly learning rate decay vs step learning rate decay with decay factor 10 and step size 30 epochs, and Adam (initial learning rates 2e-4, 1e-4, 5e-5, 1e-5) vs stochastic gradient descent with momentum 0.9 (initial learn- ing rates 5e-2, 1e-2, 5e-3, 1e-3). This accounts for 48 hyperparameter settings in total. We repeated this procedure for every single task (semantic segmentation, instance segmentation and monocular depth estimation). Baseline multi-task network We train with the same set of hyperparameters as before, i.e. 48 settings in total. 
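The grid of 48 settings referred to above can be enumerated explicitly; the snippet below is a hypothetical sketch of that enumeration, not the authors' training script.

```python
from itertools import product

batch_sizes = [4, 6, 8]
schedulers  = ["poly", "step(factor=10, every 30 epochs)"]
optimizers  = ([("adam", lr) for lr in (2e-4, 1e-4, 5e-5, 1e-5)] +
               [("sgd_momentum_0.9", lr) for lr in (5e-2, 1e-2, 5e-3, 1e-3)])

grid = [{"batch_size": b, "scheduler": s, "optimizer": opt, "lr": lr}
        for b, s, (opt, lr) in product(batch_sizes, schedulers, optimizers)]

print(len(grid))   # 48 settings, matching the text
print(grid[0])
```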
We calculate the multi-task performance in accordance with [32]. In particular, the multi-task performance of a model m is measured as the average per-task performance increase/drop w.r.t. the single task models b:

\Delta_{MTL} = \frac{1}{T} \sum_{i=1}^{T} (-1)^{l_i} \, (M_{m,i} - M_{b,i}) / M_{b,i}, \quad (1)

where li = 1 if a lower value means better for measure Mi of task i, and 0 otherwise.

Branched multi-task network We reuse the hyperparameter setting with the best result for the baseline multi-task network. The branched multi-task architectures that were used for the quantitative evaluation on Cityscapes are shown in Fig. S4.

[Figures: three branched architectures built from ResNet-50 blocks 1-4 followed by task-specific decoders for semantic segmentation, instance segmentation and depth]

Figure S1: Ours - 1. Figure S2: Ours - 2. Figure S3: Ours - 3.

Figure S4: Branched multi-task networks on Cityscapes that were generated by our method.

Cross-stitch networks / NDDR-CNN We insert a cross-stitch/NDDR unit after every ResNet block. We also tried to leave out the cross-stitch/NDDR unit after the final ResNet block, but this decreased performance. We tested two different initialization schemes for the weights in the cross-stitch/NDDR units, i.e. α = 0.8, β = 0.1 and α = 0.9, β = 0.05. The model weights were initialized from the set of the best single-task models above. We found that the Adam optimizer broke the initialization and refrained from using it. The best results were obtained with stochastic gradient descent with initial learning rate 1e-3 and momentum 0.9. As also done in [13, 33], we set the weights of these units to have a learning rate that is 100 times higher than the base learning rate.

MTAN We re-implemented the MTAN model [26] using a ResNet-50 backbone based on the code that was made publicly available by the authors. We obtained our best results when using an Adam optimizer. Other hyperparameters were set in accordance with our other experiments.

# A.2 Taskonomy

We reuse the setup from [46]. All input images were rescaled to 256 x 256 pixels. We use a ResNet-50 encoder and replace the last stride 2 convolution by a stride 1 convolution. A 15-layer fully-convolutional decoder is used for the pixel-to-pixel prediction tasks. The decoder is composed of five convolutional layers followed by alternating convolutional and transposed convolutional layers. We use ReLU as non-linearity. Batch normalization is included in every layer except for the output layer. We use Kaiming He's initialization for both encoder and decoder. We use an L1 loss for the depth (D), edge detection (E) and keypoint detection (K) tasks. The scene categorization task is learned with a KL-divergence loss. We report performance on the scene categorization task by measuring the overlap in top-5 classes between the predictions and ground truth. The multi-task models were optimized with task weights ws = 1, wd = 1, wk = 10, we = 10 and wc = 1. Notice that the heatmaps were linearly rescaled to lie between 0 and 1. During training we normalize the depth map by the standard deviation.
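Referring back to Eq. (1) above, the multi-task performance metric used throughout the experiments can be computed as in the short sketch below; it is illustrative only, and the example numbers are invented.

```python
def delta_mtl(metrics_multi, metrics_single, lower_is_better):
    """Average relative per-task gain of a multi-task model m over single-task baselines b,
    with the sign flipped for metrics where lower is better (Eq. 1), reported in percent."""
    T = len(metrics_multi)
    total = 0.0
    for M_m, M_b, lower in zip(metrics_multi, metrics_single, lower_is_better):
        sign = -1.0 if lower else 1.0
        total += sign * (M_m - M_b) / M_b
    return 100.0 * total / T

# Example with invented numbers: an IoU (higher is better) and an L1 error (lower is better).
print(delta_mtl(metrics_multi=[62.1, 2.66], metrics_single=[65.2, 2.57],
                lower_is_better=[False, True]))
```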
[Figures: task groupings generated by our method on Taskonomy, drawn as trees of ResNet-50 blocks 1-4 followed by task-specific decoders]

Figure S5: Ours - 1. Figure S6: Ours - 2. Figure S7: Ours - 3.

Figure S8: Task groupings generated by our method on the Taskonomy dataset.

Single-task models We use an Adam optimizer with initial learning rate 1e-4. The learning rate is decayed by a factor of 10 after 80000 iterations. We train the model for 120000 iterations. The batch size is set to 32. No additional data augmentation is applied. The weight decay term is set to 1e-4.

Baseline multi-task model We use the same optimization procedure as for the single-task models. The multi-task performance is calculated using Eq. 1.

Branched multi-task models We use the same optimization procedure as for the single-task models. The architectures that were generated by our method are shown in Fig. S8. Fig. S12 shows the architectures that are found when using the task grouping method from [31]. We show some of the predictions made by our third branched multi-task network in Fig. S16 for the purpose of qualitative evaluation.

[Figures: task groupings obtained with the FAFS criterion, drawn as trees of ResNet-50 blocks 1-4 followed by task-specific decoders]

Figure S9: FAFS - 1. Figure S10: FAFS - 2. Figure S11: FAFS - 3.

Figure S12: Task groupings generated on the Taskonomy dataset using the FAFS method from [31].

Cross-stitch networks / NDDR-CNN We reuse the hyperparameter settings that were found optimal on Cityscapes. Note that, these are in agreement with what the authors reported in their original papers. The weights of the cross-stitch/NDDR units were initialized with α = 0.8 and β = 0.05.

MTAN Similar to the other models, we reused the hyperparameter settings that were found optimal on Cityscapes.

[Figure: qualitative prediction examples, with panels labeled S, E and K]

Figure S13: Example - 1. Figure S14: Example - 2. Figure S15: Example - 3.

Figure S16: Predictions made by our branched multi-task network on images from the Taskonomy test set.

# A.3 CelebA

We reuse the thin-ω model from [31]. The CNN architecture is based on the VGG-16 model [?]. The number of convolutional features is set to the minimum between ω and the width of the corresponding layer in the VGG-16 model. The fully connected layers contain 2 · ω features. We train the branched multi-task network using stochastic gradient descent with momentum 0.9 and initial learning rate 0.05. We use batches of size 32 and weight decay 0.0001. The model is trained for 120000 iterations and the learning rate is divided by 10 every 40000 iterations. The loss function is a sigmoid cross-entropy loss with a uniform weighing scheme.

Figure S17: Grouping of 40 person attribute classification tasks on CelebA in a thin VGG-16 architecture.

# A.4 Computational Analysis

We provide an analysis to identify the computational costs related to the different steps when calculating the task affinity scores. We adopt the notation from the main paper.
The following three steps can be identified: • Train N single task networks. It is possible to use a subset of the available training data to reduce the training time. We verified that using a random subset of 500 train images on Cityscapes resulted in the same task groupings. • Compute the RDM matrix for all N networks at D pre-determined layers. This re- quires to compute the features for a set of K images at the D pre-determined layers in all N networks. The K images are usually taken as held-out images from the train set. We used K = 500 in our experiments. In practice this means that computing the image features comes down to evaluating every model on K images. The computed features are stored on disk afterwards. The RDM matrices are calculated from the stored features. This requires to calculate N × D × K × K correlations between two feature vectors (can be performed in parallel). We conclude that the computation time is negligible in comparison to training the single task networks. • Compute the RSA matrix at D locations for all N tasks. This requires to calculate D × N × N correlations between the lower triangle part of the K × K RDM matrices. The computation time is negligible in comparison to training the single task networks. We conclude that the computational cost of our method boils down to training N sin- gle task networks plus some overhead. Notice that cross-stitch networks [33] and NDDR- CNNs [13] also pre-train a set of single-task networks first, before combining them together using a soft parameter sharing mechanism. We conclude that our method only suffers from minor computational overhead compared to these methods. 15 16 VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS # References [1] Jacob Abernethy, Francis Bach, Theodoros Evgeniou, and Jean-Philippe Vert. A new approach to collaborative filtering: Operator estimation with spectral regularization. JMLR, 10(Mar):803–826, 2009. [2] Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task em- bedding for meta-learning. In Proceedings of the IEEE International Conference on Computer Vision, pages 6430–6439, 2019. [3] Arvind Agarwal, Samuel Gerber, and Hal Daume. Learning multiple tasks using man- ifold regularization. In NIPS, 2010. [4] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Multi-task feature learning. In NIPS, 2007. [5] Felix JS Bragman, Ryutaro Tanno, Sebastien Ourselin, Daniel C Alexander, and M Jorge Cardoso. Stochastic filter groups for multi-task cnns: Learning specialist and generalist convolution kernels. In ICCV, 2019. [6] Rich Caruana. Multitask learning. Machine learning, 28(1):41–75, 1997. [7] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: In Gradient normalization for adaptive loss balancing in deep multitask networks. ICML, 2018. [8] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In CVPR, 2016. [9] Yin Cui, Yang Song, Chen Sun, Andrew Howard, and Serge Belongie. Large scale In Proceedings of fine-grained categorization and domain-specific transfer learning. the IEEE conference on computer vision and pattern recognition, pages 4109–4118, 2018. [10] Kshitij Dwivedi and Gemma Roig. Representation similarity analysis for efficient task taxonomy & transfer learning. In CVPR, pages 12387–12396, 2019. 
[11] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. JMLR, 20(55):1–21, 2019. [12] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi–task learning. KDD, 2004. In [13] Yuan Gao, Jiayi Ma, Mingbo Zhao, Wei Liu, and Alan L Yuille. Nddr-cnn: Layerwise feature fusing in multi-task cnns by neural discriminative dimensionality reduction. In CVPR, pages 3205–3214, 2019. [14] Michelle Guo, Albert Haque, De-An Huang, Serena Yeung, and Li Fei-Fei. Dynamic task prioritization for multitask learning. In ECCV, 2018. VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS [15] Moshe Gur and D Max Snodderly. Direction selectivity in v1 of alert monkeys: evi- dence for parallel pathways for motion processing. The Journal of physiology, 585(2): 383–400, 2007. [16] Emily M Hand and Rama Chellappa. Attributes for improved attributes: A multi-task network utilizing implicit and explicit relationships for facial attribute classification. In AAAI, 2017. [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 37(9):1904–1916, 2015. [18] Siyu Huang, Xi Li, Zhiqi Cheng, Alexander Hauptmann, et al. Gnas: A greedy neural architecture search method for multi-attribute learning. 2018. [19] Ali Jalali, Sujay Sanghavi, Chao Ruan, and Pradeep K Ravikumar. A dirty model for multi-task learning. In NIPS, 2010. [20] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, 2018. [21] Iasonas Kokkinos. Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In CVPR, 2017. [22] Abhishek Kumar and Hal Daume III. Learning task grouping and overlap in multi-task learning. 2012. [23] Jason Liang, Elliot Meyerson, and Risto Miikkulainen. Evolutionary architecture search for deep multitask networks. In GECCO, 2018. [24] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural ar- chitecture search. In ECCV, 2018. [25] Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. In ICLR, 2018. [26] Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. In CVPR, 2019. [27] Sulin Liu, Sinno Jialin Pan, and Qirong Ho. Distributed multi-task relationship learn- ing. In KDD, 2017. [28] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015. [29] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and S Yu Philip. Learning multiple tasks with multilinear relationship networks. In NIPS, pages 1594–1603, 2017. [30] Karim Lounici, Massimiliano Pontil, Alexandre B Tsybakov, and Sara Van De Geer. Taking advantage of sparsity in multi-task learning. In COLT, 2009. 17 18 VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS [31] Yongxi Lu, Abhishek Kumar, Shuangfei Zhai, Yu Cheng, Tara Javidi, and Rogerio Feris. Fully-adaptive feature sharing in multi-task networks with applications in person attribute classification. In CVPR, 2017. [32] Kevis-Kokitsi Maninis, Ilija Radosavovic, and Iasonas Kokkinos. Attentive single- tasking of multiple tasks. In CVPR, pages 1851–1860, 2019. 
[33] Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In CVPR, 2016. [34] Davy Neven, Bert De Brabandere, Stamatios Georgoulis, Marc Proesmans, and Luc Van Gool. Fast scene understanding for autonomous driving. In IV Workshops, 2017. [35] Alejandro Newell, Lu Jiang, Chong Wang, Li-Jia Li, and Jia Deng. Feature partitioning for efficient multi-task architectures. arXiv preprint arXiv:1908.04339, 2019. [36] Sinno Jialin Pan, Qiang Yang, et al. A survey on transfer learning. TKDE, 22(10): 1345–1359, 2010. [37] Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In ICML, 2018. [38] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In AAAI, 2019. [39] Clemens Rosenbaum, Tim Klinger, and Matthew Riemer. Routing networks: Adaptive selection of non-linear functions for multi-task learning. In ICLR, 2018. [40] Ethan M Rudd, Manuel Günther, and Terrance E Boult. Moon: A mixed objective optimization network for the recognition of facial attributes. In ECCV, 2016. [41] Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017. [42] Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders Søgaard. s. AAAI, 2019. In [43] Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. In NIPS, 2018. [44] Dan Xu, Wanli Ouyang, Xiaogang Wang, and Nicu Sebe. Pad-net: Multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene pars- ing. In CVPR, pages 675–684, 2018. [45] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006. [46] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling task transfer learning. In CVPR, 2018. [47] Xiangyun Zhao, Haoxiang Li, Xiaohui Shen, Xiaodan Liang, and Ying Wu. A mod- ulation module for multi-task learning with applications in image retrieval. In ECCV, 2018. VANDENHENDE ET AL.: BRANCHED MULTI-TASK NETWORKS [48] Jiayu Zhou, Jianhui Chen, and Jieping Ye. Malsar: Multi-task learning via structural regularization. Arizona State University, 21, 2011. [49] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In ICLR, 2017. 19
{ "id": "1706.05098" }
1904.03310
Gender Bias in Contextualized Word Embeddings
In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo's contextualized word vectors. First, we conduct several intrinsic analyses and find that (1) training data for ELMo contains significantly more male than female entities, (2) the trained ELMo embeddings systematically encode gender information and (3) ELMo unequally encodes gender information about male and female entities. Then, we show that a state-of-the-art coreference system that depends on ELMo inherits its bias and demonstrates significant bias on the WinoBias probing corpus. Finally, we explore two methods to mitigate such gender bias and show that the bias demonstrated on WinoBias can be eliminated.
http://arxiv.org/pdf/1904.03310
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang
cs.CL
null
null
cs.CL
20190405
20190405
9 1 0 2 r p A 5 ] L C . s c [ 1 v 0 1 3 3 0 . 4 0 9 1 : v i X r a # Gender Bias in Contextualized Word Embeddings # Jieyu Zhao§ Ryan Cotterellℵ Tianlu Wang† Vicente Ordonez† Mark Yatskar‡ Kai-Wei Chang§ {jyzhao, kwchang}@cs.ucla.edu # §University of California, Los Angeles †University of Virginia University of California, Los Angeles {jyzhao, kwchang} @cs.ucla.edu {tw8bc, vicente}@virginia.edu ‡Allen Institute for Artificial Intelligence ℵUniversity of Cambridge [email protected] [email protected] # Abstract In this paper, we quantify, analyze and miti- gate gender bias exhibited in ELMo’s contex- tualized word vectors. First, we conduct sev- eral intrinsic analyses and find that (1) train- ing data for ELMo contains significantly more male than female entities, (2) the trained ELMo embeddings systematically encode gender in- formation and (3) ELMo unequally encodes gender information about male and female en- tities. Then, we show that a state-of-the-art coreference system that depends on ELMo in- herits its bias and demonstrates significant bias on the WinoBias probing corpus. Finally, we explore two methods to mitigate such gender bias and show that the bias demonstrated on WinoBias can be eliminated. # 1 Introduction Distributed representations of words in the form of word embeddings (Mikolov et al., 2013; Pen- nington et al., 2014) and contextualized word em- beddings (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2018; McCann et al., 2017; Radford et al., 2019) have led to huge performance improve- ment on many NLP tasks. However, several re- cent studies show that training word embeddings in large corpora could lead to encoding societal biases present in these human-produced data (Bolukbasi et al., 2016; Caliskan et al., 2017). In this work, we extend these analyses to the ELMo contextualized word embeddings. Our work provides a new intrinsic analysis of how ELMo represents gender in biased ways. First, the corpus used for training ELMo has a significant gender skew: male entities are nearly three times more common than female entities, which leads to gender bias in the downloadable pre-trained con- textualized embeddings. Then, we apply princi- pal component analysis (PCA) to show that after training on such biased corpora, there exists a low- dimensional subspace that captures much of the gender information in the contextualized embed- dings. Finally, we evaluate how faithfully ELMo preserves gender information in sentences by mea- suring how predictable gender is from ELMo repre- sentations of occupation words that co-occur with gender revealing pronouns. Our results show that ELMo embeddings perform unequally on male and female pronouns: male entities can be predicted from occupation words 14% more accurately than female entities. In addition, we examine how gender bias in ELMo propagates to the downstream applications. Specifically, we evaluate a state-of-the-art coref- erence resolution system (Lee et al., 2018) that makes use of ELMo’s contextual embeddings on WinoBias (Zhao et al., 2018a), a coreference di- agnostic dataset that evaluates whether systems behave differently on decisions involving male and female entities of stereotyped or anti-stereotyped occupations. We find that in the most challenging setting, the ELMo-based system has a disparity in accuracy between pro- and anti-stereotypical pre- dictions, which is nearly 30% higher than a similar system based on GloVe (Lee et al., 2017). 
Finally, we investigate approaches for mitigating the bias which propagates from the contextualized word embeddings to a coreference resolution sys- tem. We explore two different strategies: (1) a training-time data augmentation technique (Zhao et al., 2018a), where we augment the corpus for training the coreference system with its gender- swapped variant (female entities are swapped to male entities and vice versa) and, afterwards, re- train the coreference system; and (2) a test-time embedding neutralization technique, where input contextualized word representations are averaged with word representations of a sentence with enti- ties of the opposite gender. Results show that test- time embedding neutralization is only partially ef- fective, while data augmentation largely mitigates bias demonstrated on WinoBias by the coreference system. # 2 Related Work Gender bias has been shown to affect several real- world applications relying on automatic language analysis, including online news (Ross and Carter, 2011), advertisements (Sweeney, 2013), abusive language detection (Park et al., 2018), machine translation (Font and Costa-juss`a, 2019; Vanmassen- hove et al., 2018), and web search (Kay et al., 2015). In many cases, a model not only replicates bias in the training data but also amplifies it (Zhao et al., 2017). For word representations, Bolukbasi et al. (2016) and Caliskan et al. (2017) show that word embed- dings encode societal biases about gender roles and occupations, e.g. engineers are stereotypically men, and nurses are stereotypically women. As a con- sequence, downstream applications that use these pretrained word embeddings also reflect this bias. For example, Zhao et al. (2018a) and Rudinger et al. (2018) show that coreference resolution systems relying on word embeddings encode such occupa- tional stereotypes. In concurrent work, May et al. (2019) measure gender bias in sentence embed- dings, but their evaluation is on the aggregation of word representations. In contrast, we analyze bias in contextualized word representations and its effect on a downstream task. To mitigate bias from word embeddings, Boluk- basi et al. (2016) propose a post-processing method to project out the bias subspace from the pre-trained embeddings. Their method is shown to reduce the gender information from the embeddings of gender-neutral words, and, remarkably, maintains the same level of performance on different down- stream NLP tasks. Zhao et al. (2018b) further pro- pose a training mechanism to separate gender in- formation from other factors. However, Gonen and Goldberg (2019) argue that entirely removing bias is difficult, if not impossible, and the gender bias information can be often recovered. This paper investigates a natural follow-up question: What are effective bias mitigation techniques for contextual- ized embeddings? # 3 Gender Bias in ELMo In this section we describe three intrinsic analyses highlighting gender bias in trained ELMo contex- tual word embeddings (Peters et al., 2018). We show that (1) training data for ELMo contains sig- M F #occurrence 5,300,000 1,600,000 #M-biased occs. 170,000 33,000 #F-biased occs. 81,000 36,000 Table 1: Training corpus for ELMo. We show to- tal counts for male (M) and female (F) pronouns in the corpus, and counts corresponding to their co- occurrence with occupation words where the occupa- tions are stereotypically male (M-biased) or female (F- biased). 
nificantly more male entities compared to female entities leading to gender bias in the pre-trained contextual word embeddings (2) the geometry of trained ELMo embeddings systematically encodes gender information and (3) ELMo propagates gen- der information about male and female entities un- equally. # 3.1 Training Data Bias Table 1 lists the data analysis on the One Billion Word Benchmark (Chelba et al., 2013) corpus, the training corpus for ELMo. We show counts for the number of occurrences of male pronouns (he, his and him) and female pronouns (she and her) in the corpus as well as the co-occurrence of occu- pation words with those pronouns. We use the set of occupation words defined in the WinoBias cor- pus and their assignments as prototypically male or female (Zhao et al., 2018a). The analysis shows that the Billion Word corpus contains a significant skew with respect to gender: (1) male pronouns occur three times more than female pronouns and (2) male pronouns co-occur more frequently with occupation words, irrespective of whether they are prototypically male or female. # 3.2 Geometry of Gender Next, we analyze the gender subspace in ELMo. We first sample 400 sentences with at least one gen- dered word (e.g., he or she from the OntoNotes 5.0 dataset (Weischedel et al., 2012) and generate the corresponding gender-swapped variants (changing he to she and vice-versa). We then calculate the dif- ference of ELMo embeddings between occupation words in corresponding sentences and conduct prin- cipal component analysis for all pairs of sentences. Figure 1 shows there are two principal components for gender in ELMo, in contrast to GloVe which only has one (Bolukbasi et al., 2016). The two principal components in ELMo seem to represent the gender from the contextual information (Con- textual Gender) as well as the gender embedded in the word itself (Occupational Gender). 0.30 0.25 0.20 0.15 0.10 0.05 0.00 + “1 Seveloper 2 developer iler ecretary 5 cchigbywer eran Z -3 @river 5 etavbysician & _,| esi at er erince 3 actor 5 3 Princadbrarian co 5-6 dibrarian atte 8 *Rurse 6-7 eashier “a cashier waitress actress “4-30-2018 1 2 3 Contextual Gender Figure 1: Left: Percentage of explained variance in PCA in the embedding differences. Right: Selected words projecting to the first two principle components where the blue dots are the sentences with male context and the orange dots are from the sentences with female context. To visualize the gender subspace, we pick a few sentence pairs from WinoBias (Zhao et al., 2018a). Each sentence in the corpus contains one gendered pronoun and two occupation words, such as “The developer corrected the secretary because she made a mistake” and also the same sentence with the op- posite pronoun (he). In Figure 1 on the right, we project the ELMo embeddings of occupation words that are co-referent with the pronoun (e.g. secre- tary in the above example) for when the pronoun is male (blue dots) and female (orange dots) on the two principal components from the PCA analy- sis. Qualitatively, we can see the first component separates male and female contexts while the sec- ond component groups male related words such as lawyer and developer and female related words such as cashier and nurse. and female entities are balanced. We first test if ELMo embedding vectors carry gender information. We train an SVM classifier with an RBF kernel2 to predict the gender of a men- tion (i.e., an occupation word) based on its ELMo embedding. 
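A sketch of such a probing classifier is given below. It is illustrative rather than the authors' code: it assumes the ELMo vectors of the occupation mentions have already been extracted, uses the ν-SVC formulation with an RBF kernel described in footnote 2, and tunes ν on a [0.1, 0.9] grid (footnote 2 reports tuning over [0.1, 1] with step 0.1).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import NuSVC

def train_gender_probe(mention_vectors, genders):
    """Fit a nu-SVC with an RBF kernel to predict the gender of the entity that a mention
    (an occupation word) refers to, from the mention's contextualized embedding."""
    # Very large nu values can be infeasible for nu-SVC, so this sketch stops the grid at 0.9.
    param_grid = {"nu": np.round(np.arange(0.1, 1.0, 0.1), 1)}
    probe = GridSearchCV(NuSVC(kernel="rbf", gamma="scale"), param_grid, cv=5)
    probe.fit(mention_vectors, genders)
    return probe

# Toy call: random vectors stand in for 1024-dimensional ELMo mention embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))
y = rng.integers(0, 2, size=200)          # placeholder labels: 0 = female, 1 = male
probe = train_gender_probe(X, y)
print(probe.best_params_, probe.best_score_)
```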
On development data, this classifier achieves 95.1% and 80.6% accuracy on sentences where the true gender was male and female respec- tively. For both male and female contexts, the accu- racy is much larger than 50%, demonstrating that ELMo does propagate gender information to other words. However, male information is more than 14% more accurately represented in ELMo than fe- male information, showing that ELMo propagates the information unequally for male and female en- tities. # 4 Bias in Coreference Resolution # 3.3 Unequal Treatment of Gender To test how ELMo embeds gender information in contextualized word embeddings, we train a clas- sifier to predict the gender of entities from occu- pation words in the same sentence. We collect sentences containing gendered words (e.g., he-she, father-mother) and occupation words (e.g., doc- tor)1 from the OntoNotes 5.0 corpus (Weischedel et al., 2012), where we treat occupation words as a mention to an entity, and the gender of that entity is taken to the gender of a co-referring gendered word, if one exists. For example, in the sentence “the engineer went back to her home,” we take engi- neer to be a female mention. Then we split all such instances into training and test, with 539 and 62 in- stances, respectively and augment these sentences by swapping all the gendered words with words of the opposite gender such that the numbers of male In this section, we establish that coreference sys- tems that depend on ELMo embeddings exhibit significant gender bias. Then we evaluate two sim- ple methods for removing the bias from the systems and show that the bias can largely be reduced. # 4.1 Setup We evaluate bias with respect to the WinoBias dataset (Zhao et al., 2018a), a benchmark of paired male and female coreference resolution examples following the Winograd format (Hirst, 1981; Rah- man and Ng, 2012; Peng et al., 2015). It contains two different subsets, pro-stereotype, where pro- nouns are associated with occupations predomi- nately associated with the gender of the pronoun, or anti-stereotype, when the opposite relation is true. 1We use the list collected in (Zhao et al., 2018a) 2We use the ν-SVC formulation and tune the hyper- parameter ν (Chang and Lin, 2011) in the range of [0.1, 1] with a step 0.1. Embeddings GloVe GloVe GloVe+ELMo GloVe+ELMo GloVe+ELMo GloVe+ELMo Data Augmentation Neutralization ELMo GloVe OntoNotes 67.7 65.8 72.7 71.0 71.0 71.1 Semantics Only Pro. Anti. Avg. 62.7 49.4 76.0 63.4 62.8 63.9 64.3 49.5 79.1 65.4 64.9 65.9 64.9 57.8 72.6 66.2 60.6 71.7 | Diff | 26.6* 1.1 29.6* 1.0 14.3* 11.1* w/ Syntactic Cues Pro. Anti. Avg. 82.0 75.2 88.7 82.4 83.4 81.3 89.5 85.9 93.0 88.4 88.9 87.8 89.4 88.6 90.2 89.8 89.2 90.3 | Diff | 13.5* 2.1 7.1* 1.2 1.6 1.1 Table 2: F1 on OntoNotes and WinoBias development sets. WinoBias dataset is split Semantics Only and w/ Syntactic Cues subsets. ELMo improves the performance on the OntoNotes dataset by 5% but shows stronger bias on the WinoBias dataset. Avg. stands for averaged F1 score on the pro- and anti-stereotype subsets while “Diff.” is the absolute difference between these two subsets. * indicates the difference between pro/anti stereotypical conditions is significant (p < .05) under an approximate randomized test (Graham et al., 2014). Mitigating bias by data augmentation reduces all the bias from the coreference model to a neglect level. However, the neutralizing ELMo approach only mitigates bias when there are other strong learning signals for the task. 
Each subset consists of two types of sentences: one that requires semantic understanding of the sentence to resolve the coreference (Semantics Only) and another that relies on syntactic cues (w/ Syntactic Cues). Gender bias is measured by taking the difference of the performance on the pro- and anti-stereotypical subsets. Previous work (Zhao et al., 2018a) evaluated systems based on GloVe embeddings, but here we evaluate a state-of-the-art system trained on the OntoNotes corpus with ELMo embeddings (Lee et al., 2018).

# 4.2 Bias Mitigation Methods

Next, we describe two methods for mitigating bias in ELMo for the purpose of coreference resolution: (1) a train-time data augmentation approach and (2) a test-time neutralization approach.

Data Augmentation Zhao et al. (2018a) propose a method to reduce gender bias in coreference resolution by augmenting the training corpus for this task. Data augmentation is performed by replacing gender-revealing entities in the OntoNotes dataset with words indicating the opposite gender and then training on the union of the original data and this swapped data. In addition, they find it useful to also mitigate bias in supporting resources and therefore replace standard GloVe embeddings with bias-mitigated word embeddings from Bolukbasi et al. (2016). We evaluate the performance of both aspects of this approach.

Neutralization We also investigate an approach to mitigate bias induced by ELMo embeddings without retraining the coreference model. Instead of augmenting the training corpus by swapping gender words, we generate a gender-swapped version of the test instances. We then apply ELMo to obtain contextualized word representations of the original and the gender-swapped sentences and use their average as the final representations.

# 4.3 Results

Table 2 summarizes our results on WinoBias.

ELMo Bias Transfers to Coreference Row 3 in Table 2 summarizes the performance of the ELMo-based coreference system on WinoBias. While ELMo helps to boost the coreference resolution F1 score (OntoNotes), it also propagates bias to the task. It exhibits large differences between the pro- and anti-stereotyped sets (|Diff|) on both the semantic and syntactic examples in WinoBias.

Bias Mitigation Rows 4-6 in Table 2 summarize the effectiveness of the two bias mitigation approaches we consider. Data augmentation is largely effective at mitigating bias in the coreference resolution system with ELMo (reducing |Diff| to insignificant levels) but requires retraining the system. Neutralization is less effective than augmentation and cannot fully remove gender bias on the Semantics Only portion of WinoBias, indicating it is effective only for simpler cases. This observation is consistent with Gonen and Goldberg (2019), who show that entirely removing bias from an embedding is difficult and depends on the manner by which one measures the bias.

# 5 Conclusion and Future Work

Like word embedding models, contextualized word embeddings inherit implicit gender bias. We analyzed gender bias in ELMo, showing that the corpus it is trained on has significant gender skew and that ELMo is sensitive to gender, but unequally so for male and female entities. We also showed that this bias transfers to downstream tasks, such as coreference resolution, and explored two bias mitigation strategies: 1) data augmentation and 2) neutralizing embeddings, effectively eliminating the bias from ELMo in a state-of-the-art system.
With the increasing adoption of contextualized embeddings to get better results on core NLP tasks, e.g. BERT (Devlin et al., 2018), we must be careful about how such unsupervised methods perpetuate bias to downstream applications; our work forms a basis for evaluating and mitigating such bias.

# Acknowledgement

This work was supported in part by National Science Foundation Grant IIS-1760523. RC was supported by a Facebook Fellowship. We also acknowledge partial support from the Institute of the Humanities and Global Cultures at the University of Virginia. We thank all reviewers for their comments.

# References

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In NeurIPS.

Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186.

Chih-Chung Chang and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27.

Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Joel Escudé Font and Marta R. Costa-jussà. 2019. Equalizing gender biases in neural machine translation with word embeddings techniques. arXiv preprint arXiv:1901.03116.

Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. CoRR, abs/1903.03862.

Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2014. Randomized significance tests in machine translation. In WMT@ACL.

Graeme Hirst. 1981. Anaphora in Natural Language Understanding. Springer Verlag, Berlin.

Matthew Kay, Cynthia Matuszek, and Sean A. Munson. 2015. Unequal representation and gender stereotypes in image search results for occupations. In Human Factors in Computing Systems. ACM.

Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In EMNLP.

Kenton Lee, Luheng He, and Luke S. Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In NAACL.

Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561.

Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In NeurIPS.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NeurIPS.

Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Reducing gender bias in abusive language detection. In EMNLP.

Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. Solving hard coreference problems. In NAACL.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd schema challenge. In EMNLP.

Karen Ross and Cynthia Carter. 2011. Women and news: A long and winding road. Media, Culture & Society, 33(8).

Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In NAACL.

Latanya Sweeney. 2013. Discrimination in online ad delivery. Queue, 11(3):10.

Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In EMNLP.

Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Nianwen Xue, Martha Palmer, Jena D. Hwang, Claire Bonial, et al. 2012. OntoNotes release 5.0.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In EMNLP.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In NAACL.

Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018b. Learning gender-neutral word embeddings. In EMNLP.
{ "id": "1903.10561" }
1904.03035
Identifying and Reducing Gender Bias in Word-Level Language Models
Many text corpora exhibit socially problematic biases, which can be propagated or amplified in the models trained on such data. For example, doctor cooccurs more frequently with male pronouns than female pronouns. In this study we (i) propose a metric to measure gender bias; (ii) measure bias in a text corpus and the text generated from a recurrent neural network language model trained on the text corpus; (iii) propose a regularization loss term for the language model that minimizes the projection of encoder-trained embeddings onto an embedding subspace that encodes gender; (iv) finally, evaluate efficacy of our proposed method on reducing gender bias. We find this regularization method to be effective in reducing gender bias up to an optimal weight assigned to the loss term, beyond which the model becomes unstable as the perplexity increases. We replicate this study on three training corpora---Penn Treebank, WikiText-2, and CNN/Daily Mail---resulting in similar conclusions.
http://arxiv.org/pdf/1904.03035
Shikha Bordia, Samuel R. Bowman
cs.CL
12 pages with 8 tables and 1 figure; Published at NAACL SRW 2019
null
cs.CL
20190405
20190405
9 1 0 2 r p A 5 ] L C . s c [ 1 v 5 3 0 3 0 . 4 0 9 1 : v i X r a # Identifying and Reducing Gender Bias in Word-Level Language Models # Shikha Bordia1 [email protected] # Samuel R. Bowman1,2,3 [email protected] 1Dept. of Computer Science New York University 251 Mercer St New York, NY 10012 2Center for Data Science New York University 60 Fifth Avenue New York, NY 10011 3Dept. of Linguistics New York University 10 Washington Place New York, NY 10003 # Abstract Many text corpora exhibit socially problematic biases, which can be propagated or amplified in the models trained on such data. For ex- ample, doctor cooccurs more frequently with male pronouns than female pronouns. In this study we (i) propose a metric to measure gen- der bias; (ii) measure bias in a text corpus and the text generated from a recurrent neural net- work language model trained on the text cor- pus; (iii) propose a regularization loss term for the language model that minimizes the pro- jection of encoder-trained embeddings onto an embedding subspace that encodes gender; (iv) finally, evaluate efficacy of our proposed method on reducing gender bias. We find this regularization method to be effective in re- ducing gender bias up to an optimal weight assigned to the loss term, beyond which the model becomes unstable as the perplexity in- creases. We replicate this study on three train- ing corpora—Penn Treebank, WikiText-2, and CNN/Daily Mail—resulting in similar conclu- sions. # Introduction Models automating resume screening have also proved to have a heavy gender bias favoring male candidates (Lambrecht and Tucker, 2018). Such data and algorithmic biases have become a grow- ing concern. Evaluation and mitigation of biases in data and models that use the data has been a growing field of research in recent years. One natural language understanding task vul- nerable to gender bias is language modeling. The task of language modeling has a number of prac- tical applications, such as word prediction used in If possible, we would like onscreen keyboards. to identify the bias in the data used to train these models and reduce its effect on model behavior. Towards this pursuit, we aim to evaluate the ef- fect of gender bias on word-level language models that are trained on a text corpus. Our contributions in this work include: (i) an analysis of the gen- der bias exhibited by publicly available datasets used in building state-of-the-art language models; (ii) an analysis of the effect of this bias on re- current neural networks (RNNs) based word-level language models; (iii) a method for reducing bias learned in these models; and (iv) an analysis of the results of our method. Dealing with discriminatory bias in training data is a major issue concerning the mainstream im- plementation of machine learning. Existing bi- ases in data can be amplified by models and the resulting output consumed by the public can in- fluence them, encourage and reinforce harmful stereotypes, or distort the truth. Automated sys- tems that depend on these models can take prob- lematic actions based on biased profiling of indi- viduals. The National Institute for Standards and Technology (NIST) evaluated several facial recog- nition algorithms and found that they are systemat- ically biased based on gender (Ngan and Grother, 2015). Algorithms performed worse on faces la- beled as female than those labeled as male. # 2 Related Work A number of methods have been proposed for in evaluating and addressing biases that exist datasets and the models that use them. 
[Figure 1 schematic not reproduced: a training corpus feeds a word-level language model trained with a cross-entropy loss plus a λ-weighted bias term; bias is then measured in the training corpus, in the generated text, and in the text generated with regularization.] Figure 1: Word level language model is a three layer LSTM model. λ controls the importance of minimizing bias in the embedding matrix.

Recasens et al. (2013) study the neutral point of view (NPOV) edit tags in Wikipedia edit histories to understand the linguistic realization of bias. According to their study, bias can be broadly categorized into two classes: framing and epistemological. While framing bias is more explicit, epistemological bias is implicit and subtle. Framing bias occurs when subjective or one-sided words are used. For example, in the sentence—"Usually, smaller cottage-style houses have been demolished to make way for these McMansions.", the word McMansions has a negative connotation towards large and pretentious houses. Epistemological biases are entailed, asserted or hedged in the text. For example, in the sentence—"Kuypers claimed that the mainstream press in America tends to favor liberal viewpoints," the word claimed has a doubtful effect on Kuypers' statement as opposed to stated in the sentence—"Kuypers stated that the mainstream press in America tends to favor liberal viewpoints." It may be possible to capture both of these kinds of biases through the distributions of co-occurrences. In this paper, we deal with identifying and reducing gender bias based on words co-occurring in a context window.

Bolukbasi et al. (2016) propose an approach to investigate gender bias present in popular word embeddings, such as word2vec (Mikolov et al., 2013). They construct a gender subspace using a set of binary gender pairs. For words that are not explicitly gendered, the component of the word embeddings that projects onto this subspace can be removed to debias the embeddings in the gender direction. They also propose a softer variation that balances reconstruction of the original embeddings while minimizing the part of the embeddings that projects onto the gender subspace. We use the softer variation to debias the embeddings while training our language model.

Zhao et al. (2017) look at gender bias in the context of using structured prediction for visual object classification and semantic role labeling. They observe gender bias in the training examples and that their model amplifies the bias in its predictions. They impose constraints on the optimization to reduce bias amplification while incurring minimal degradation in their model's performance.

Word embeddings can capture the stereotypical bias in human-generated text, leading to biases in NLP applications. Caliskan et al. (2017) conduct the Word Embedding Association Test (WEAT). It is based on the hypothesis that word embeddings closer together in high-dimensional space are semantically closer. They find strong evidence of social biases in pretrained word embeddings.

Rudinger et al. (2018) introduce Winogender schemas1 and evaluate three coreference resolution systems—rule-based, statistical and neural systems. They find that these systems' predictions strongly prefer one gender over the other for occupations.

1It is a Winograd Schema-style coreference dataset consisting of pairs of sentences that differ only by a gendered pronoun.

Font and Costa-Jussà (2019) study the impact of the gender debiasing techniques of Bolukbasi et al. (2016) and Zhao et al. (2018) in machine translation. They find these methods to be effective, and even note a BLEU score improvement for the debiased model. Our work is closely related, but while they use debiased pretrained embeddings, we train the word embeddings from scratch and debias them while the language model is trained.

May et al. (2019) extend WEAT to state-of-the-art sentence encoders: the Sentence Encoder Association Test (SEAT). They show that these tests can provide evidence for the presence of bias. However, the cosine similarity between sentences can be an inadequate measure of text similarity. In this paper, we attempt to minimize the cosine similarity between word embeddings and the gender direction.

Gonen and Goldberg (2019) conduct experiments using the debiasing techniques proposed by Bolukbasi et al. (2016) and Zhao et al. (2018). They show that bias removal techniques based on gender direction are inefficient in removing all aspects of bias. In a high-dimensional space, the spatial distribution of the gender-neutral word embeddings remains almost the same after debiasing. This enables a gender-neutral classifier to still pick up the cues that encode other semantic aspects of bias. We use the softer variation of the debiasing method proposed by Bolukbasi et al. (2016) and attempt to measure the debiasing effect from the minimal changes in the embedding space.

# 3 Methods

We first examine the bias existing in the datasets through qualitative and quantitative analysis of trained embeddings and cooccurrence patterns. We then train an LSTM word-level language model on a dataset and measure the bias of the generated outputs. As shown in Figure 1, we then apply a regularization procedure that encourages the embeddings learned by the model to depend minimally on gender. We debias the input and the output embeddings individually as well as simultaneously. Finally, we assess the efficacy of the proposed method in reducing bias.

We observe that when both input and output embeddings are debiased together, the perplexity of the model shoots up by a much larger amount than when the input or the output embeddings are debiased individually. We report our results when only input embeddings are debiased. This method, however, does not prevent the model from capturing other forms of bias in other model parameters or in the output embeddings. The code implementing our methods can be found in our GitHub repository.2

2https://github.com/BordiaS/language-model-bias

# 3.1 Datasets and Text Preprocessing

We compare the model on three datasets: Penn Treebank (PTB), WikiText-2, and CNN/Daily Mail. The first two have been used in language modeling for a long time. We include the CNN/Daily Mail dataset in our experiments as it contains a more diverse range of topics.

PTB Penn Treebank comprises articles such as scientific abstracts, computer manuals, etc. In our experiments, we observe that PTB has a higher count of male words than female words. Following prior language modeling work, we use the Penn Treebank dataset (PTB; Marcus et al., 1993) preprocessed by Mikolov et al. (2010).

WikiText-2 WikiText-2 is twice the size of the PTB and is sourced from curated Wikipedia articles. It is more diverse and therefore has a more balanced ratio of female to male gender words than PTB. We use preprocessed WikiText-2 (Wikitext-2; Merity et al., 2016).

CNN/Daily Mail This dataset is curated from a diverse range of news articles on topics like sports, health, business, lifestyle, travel, etc.
This dataset has an even more balanced ratio of female to male gender words and thus, relatively less biased than the above two. However, this does not mean that the use of pronouns is not biased. This dataset was released as part of a summarization dataset by Hermann et al. (2015), and contains 219,506 arti- cles from the newspaper the Daily Mail. We sub- sample the sentences by a factor of 100 in order to make the dataset more manageable for experi- ments. # 3.2 Word-Level Language Model We use a three-layer LSTM word-level language model (AWD-LSTM; Merity et al., 2018) with 1150 hidden units implemented in PyTorch.3 These models have an embedding size of 400 and a learning rate of 30. We use a batch size of 80 for Wikitext-2 and 40 for PTB. Both are trained for 750 epochs. The PTB baseline model achieves a perplexity of 62.56. For WikiText-2, the baseline model achieves a perplexity of 67.67. For CNN/Daily Mail, we use a batch size of 80 and train it for 500 epochs. We do early stopping for this model. The hyperparameters are chosen through a systematic trial and error approach. The baseline model achieves a perplexity of 118.01. All three baseline models achieve reasonable perplexities indicating them to be good proxies for standard language models. 3https://github.com/salesforce/awd-lstm-lm λ Fixed Context µ σ β Infinite Context σ µ β P pl. train 0.0 0.001 0.01 0.1 0.5 0.8 1.0 0.83 0.74 0.69 0.63 0.64 0.70 0.76 0.84 1.00 0.91 0.88 0.81 0.82 0.91 0.96 0.94 0.40 0.34 0.31 0.33 0.39 0.45 0.38 3.81 2.23 2.43 2.56 2.30 2.91 3.43 2.42 4.65 2.90 2.98 3.40 3.09 3.76 4.06 3.02 0.38 0.35 0.36 0.24 0.38 0.26 -0.30 62.56 62.69 62.83 62.48 62.5 63.36 62.63 Table 1: Experimental results for Penn Treebank and generated text for different λ values λ Fixed Context µ σ β Infinite Context σ µ β P pl. train 0.0 0.001 0.01 0.1 0.5 0.8 1.0 0.80 0.70 0.69 0.61 0.65 0.70 0.65 0.74 1.00 0.84 0.84 0.79 0.82 0.88 0.84 0.92 0.29 0.27 0.20 0.24 0.31 0.28 0.27 3.70 3.48 2.32 1.88 2.26 2.25 2.07 2.32 4.60 4.29 3.12 2.69 3.11 3.17 2.98 3.21 0.15 0.16 0.14 0.06 0.20 0.18 -0.08 67.67 67.84 67.78 67.89 69.07 69.36 69.56 Table 2: Experimental results for WikiText-2 and generated text for different λ values λ Fixed Context µ σ β Infinite Context µ σ β P pl. train 0.0 0.1 0.5 0.8 1.0 0.72 0.51 0.38 0.34 0.40 0.62 0.94 0.68 0.52 0.48 0.56 0.83 0.22 0.19 0.14 0.19 0.21 0.77 0.43 0.85 0.79 0.96 1.71 1.05 0.59 1.38 1.31 1.57 2.65 0.29 0.22 0.20 0.23 0.31 118.01 116.49 116.19 121.00 120.55 Table 3: Experimental results for CNN/Daily Mail and generated text for different λ values # 3.3 Quantifying Biases For numeric data, bias can be caused simply by class imbalance, which is relatively easy to quan- tify and fix. For text and image data, the com- plexity in the nature of the data increases and it becomes difficult to quantify. Nonetheless, defin- ing relevant metrics is crucial in assessing the bias exhibited in a dataset or in a model’s behavior. # 3.3.1 Bias Score Definition In a text corpus, we can express the probability of a word occurring in context with gendered words as follows: P (w|g) = c(w, g)/Σic(wi, g) c(g)/Σic(wi) etc. w is any word in the corpus, excluding stop words and gendered words. The bias score of a specific word w is then defined as: Peal) biastrain(w) = log Gao This bias score is measured for each word in the text sampled from the training corpus and the text corpus generated by the language model. A posi- tive bias score implies that a word cooccurs more often with female words than male words. 
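A minimal sketch of the fixed-window bias score is given below (not the authors' code), reading the definitions above as P(w|g) = (c(w, g) / Σi c(wi, g)) / (c(g) / Σi c(wi)) and biastrain(w) = log(P(w|f) / P(w|m)). The gendered word sets are stand-ins, add-one smoothing keeps the logarithm defined, and stop-word filtering is omitted.

```python
# Sketch of the fixed-window bias score, reading the definitions above as
#   P(w|g)        = (c(w, g) / sum_i c(w_i, g)) / (c(g) / sum_i c(w_i))
#   bias_train(w) = log(P(w|f) / P(w|m))
# FEMALE/MALE are stand-in gendered word sets; stop-word filtering and the
# paper's exact vocabulary handling are omitted.
import math
from collections import Counter

FEMALE = {"she", "her", "woman", "women", "mother", "daughter"}
MALE = {"he", "him", "his", "man", "men", "father", "son"}

def cooccurrence_counts(sentences, gendered, k=10):
    """c(w, g): counts of words within k tokens of any word in `gendered`."""
    counts = Counter()
    for toks in sentences:
        toks = [t.lower() for t in toks]
        for i, tok in enumerate(toks):
            if tok in gendered:
                window = toks[max(0, i - k):i] + toks[i + 1:i + k + 1]
                counts.update(w for w in window if w not in FEMALE | MALE)
    return counts

def bias_scores(sentences, k=10):
    cf = cooccurrence_counts(sentences, FEMALE, k)
    cm = cooccurrence_counts(sentences, MALE, k)
    toks = [t.lower() for s in sentences for t in s]
    c_f = sum(t in FEMALE for t in toks)                 # c(f)
    c_m = sum(t in MALE for t in toks)                   # c(m)
    total = sum(t not in FEMALE | MALE for t in toks)    # sum_i c(w_i)
    nf, nm = sum(cf.values()), sum(cm.values())          # sum_i c(w_i, g)
    scores = {}
    for w in set(cf) | set(cm):
        p_wf = ((cf[w] + 1) / (nf + 1)) / (c_f / total)  # add-one smoothing
        p_wm = ((cm[w] + 1) / (nm + 1)) / (c_m / total)
        scores[w] = math.log(p_wf / p_wm)                # > 0: female-skewed
    return scores
```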
For an infinite context, the words doctor and nurse would cooccur as many times with a female gender as with male gender words and the bias scores for these words will be equal to zero. where c(w, g) is a context window and g is a set of gendered words that belongs to either of the two categories: male or female. For example, when g = f , such words would include she, her, woman We conduct two sets of experiments where we define context window c(w, g) as follows: Fixed Context In this scenario, we take a fixed context window size and measure the bias scores. We generated bias scores for several context win- dow sizes in the range (5, 15). For a context size k, there are k words before and k words after the tar- get word w for which the bias score is being mea- sured. Qualitatively, a smaller context window size has more focused information about the tar- get word. On the other hand, a larger window size captures topicality (Levy and Goldberg, 2014). By choosing an optimal window of k = 10, we give equal weight of 5% to the ten words before and the ten words after the target word. Infinite Context In this scenario, we take an in- finite window of context with weights diminish- ing exponentially based on the distance between the target word w and the gendered word g. This method emphasizes on the fact that the nearest word has more information about the target word. The farther the context gets away from a word, the less information it has about the word. We give 5% weight to the words adjacent to the target word as in Fixed Context but reduce the weights of the words following by 5% and 95% to the rest; this applied recursively gives a base of 0.95. This method of exponential weighting instead of equal weighting adds to the stability of the measure. # 3.3.2 Bias Reduction Measures To evaluate debiasing of each model, we measure the bias for the generated corpus. biasλ(w) = log( P (w|f ) P (w|m) ) To estimate the amplification or reduction of the bias, we fit a univariate linear regression model over bias scores of context words w as follows: biasλ(w) = β ∗ biastrain(w) + c where β is the scaled amplification measure rela- tive to the training data. Reducing β implies debi- asing the model. We also look at the distribution of the bias by evaluating mean absolute bias and deviation in bias scores for each context word in each of the generated corpora. µλ = mean(abs(biasλ)); σλ = stdev(biasλ) We take the mean of absolute bias score as the word can be biased in either of the two directions. # 3.4 Model Debiasing Machine learning techniques that capture patterns in data to make coherent predictions can uninten- tionally capture or even amplify the bias in data (Zhao et al., 2017). We consider a gender sub- space present in the learned embedding matrix in our model as introduced in the Bolukbasi et al. (2016) paper. We train these embeddings on the word level language model instead of using the debiased pretrained embeddings (Font and Costa- Juss`a, 2019). We conduct experiments for the three cases where we debias—input embeddings, output embeddings, and both the embeddings si- multaneously. Let w ∈ SW be a word embedding correspond- ing to a word in the word embedding matrix W . Let Di, . . . , Dn ⊂ SW be the defining sets4 that contain gender-opposing words, e.g. man and woman. The defining sets are designed separately for each corpus since cer- tain words may not appear in another corpus. We consider it a defining set if both gender-opposing words occur in the training corpus. 
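A minimal sketch of the evaluation measures defined in Section 3.3.2 above (µλ, σλ, and the regression slope β), assuming bias scores computed as in the earlier sketch; the paper's outlier removal before regression is omitted here.

```python
# Sketch of the corpus-level (mu, sigma) and word-level (beta) debiasing
# measures from Section 3.3.2 (not the authors' code). `bias_train` and
# `bias_gen` map words to bias scores in the training corpus and in text
# generated at a given lambda.
import numpy as np

def debias_measures(bias_train, bias_gen):
    shared = sorted(set(bias_train) & set(bias_gen))
    x = np.array([bias_train[w] for w in shared])
    y = np.array([bias_gen[w] for w in shared])
    mu = float(np.mean(np.abs(y)))        # mean absolute bias
    sigma = float(np.std(y))              # spread of bias scores
    beta, _c = np.polyfit(x, y, 1)        # bias_gen ~ beta * bias_train + c
    return mu, sigma, float(beta)
```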
If ui, vi are the embeddings corresponding to the words man and woman, then {ui, vi} = Di. We consider the matrix C which is defined as a stack of difference vectors between the pairs in the defining sets. We have: C = ( u1−v1 2 ... ( un−vn 2 ) = U ΣV ) The difference between the pairs encodes the gender information corresponding to the gender pair. We then perform singular value decomposi- tion on C, obtaining U ΣV . The gender subspace B is then defined as the first k columns (where k is chosen to capture 50% of the variation) of the right singular matrix V : B = V1:k Let N be the matrix consisting of the embed- dings for which we would like the corresponding words to exhibit unbiased behavior. If we want the embeddings in N to have minimal bias, then its projection onto the gender subspace B should be small in terms its the squared Frobenius norm. 4See the supplement for corpus-wise defining sets Target Word λ Sample From Generated Text crying 0.0 0.5 1.0 “she was put on her own machine to raise money for her own wedding <unk> route which saw her crying and <unk> down a programme today . effects began by bottom of her marrow the <unk>” “he <unk> in the americas with the <unk> which can spread a <unk> circumcision ceremony made last month . as he <unk> his mother s <unk> crying to those that” “he discovered peaceful facebook remains when he was caught crying officers but was arrested after they found the crash hire a man <unk> brown shocked his brother <unk> over” fragile 0.0 0.5 1.0 “camilla said she talked to anyone and had previously left her love of two young children . it all comes with her family in conviction of her son s death . it s been fragile . the <unk> and retail boy that was rik s same maker identified genuinely <unk> attacked all” “his children at nearby children s hospital in <unk> and went <unk> years after he was arrested on <unk> bail . she spent relaxed weeks in prison after being sharply in fragile <unk> while she was jailed and strangled when she was born in <unk> virginia” “could they possibly have a big barrier to jeff <unk> and <unk> my son all intelligence period that will contain the east country s world from all in the world the truth is when we moved clear before the split twenty days earlier that day . none of the distributed packs on the website can never <unk> re able to <unk> it the second time so that fitting fragile <unk> are and less the country is <unk> . it came as it was once <unk> million lead jobs mail yorkshire . adoption of these first product is ohio but it is currently almost impossible for the moon to address and fully offshore hotly ” leadership 0.0 0.5 1.0 “mr <unk> worked traditions at the squadron base in <unk> rbs to marry the us government .he referring to the mainland them in february <unk> he kept communist leadership from undergoing” “obama s first wife janet had a chance to run the opposition for a superbowl event for charity the majority of the south african people s travel stage <unk> leadership while it was married off christ- mas” “the woman s lungs and drinking the ryder of his daughters s leadership morris said businesses . however being of his mouth around wiltshire and burn talks from the hickey s <unk> employees” prisoner 0.0 0.5 1.0 “his legs and allegedly killed himself by suspicious points . 
in the latest case after an online page he left prisoner in his home in <unk> near <unk> manhattan on saturday when he was struck in his car operating in <unk> bay smoking <unk> and <unk> <unk> when he had” “it is something that the medicines can target prisoner and destroy <unk> firms in the uk but i hope that there are something into the on top getting older people who have more branded them as poor .” “the ankle follows a worker <unk> her <unk> prisoner she died this year before now an profile which clear her eye borrowed for her organ own role . it was a huge accident after the drugs she had” Table 4: Generated text comparison for CNN/Daily Mail for different λ values Therefore, to reduce the bias learned by the em- bedding layer in the model, we can add the follow- ing bias regularization term to the training loss: # 4 Experiments # 4.1 Model Le = X\|NB\|z After achieving the baseline results, we run exper- iments to tune λ as hyperparameter. We report an in-depth analysis of bias measure on the models with debiased input embeddings. # 4.2 Results and Text Examples where λ controls the importance of minimizing bias in the embedding matrix W (from which N and B are derived) relative to the other compo- nents of the model loss. The matrices N and C are updated each iteration during the model train- ing. We input 2000 random seeds in the language model as starting points to start word generation. We use the previous words as an input to the lan- guage model and perform multinomial selection to generate up the next word. We repeat this up to 500 times. In total, we generate 106 tokens for all three datasets for each λ and measure the bias. We calculate the measures stated in Section 3.3 for the three datasets and the generated corpora using the corresponding RNN models. The results are shown in Tables 1, 2 and 3. We see that the µ con- sistently decline as we increase λ until a point, be- yond which the model becomes unstable. So there is a scope of optimizing the λ values. The detailed analysis is presented in Section 4.3 Table 4 shows excerpts around selected target words from the generated corpora to demonstrate the effect of debiasing for different values of λ. We highlight the words crying and fragile that are typically associated with feminine qualities, along with the words leadership and prisoners that are stereotyped with male identity. These biases are reflected in the generated text for λ = 0. We no- tice increased mention of the less probable gen- der in the subsequent generated text with debias- ing (λ = 0.5, 1.0). For fragile, the generated text at λ = 1.0 has reduced the mention of stereotyped female words but had no mentions of male words; resulting in a large chunk of neutral text. Simi- larly, in prisoners, the generated text for λ = 0.5 has no gender words. However, these are small snippets and the bias scores presented in the supplementary table quan- tifies the distribution of gender words around the target word in the entire corpus. These target words are chosen as they are commonly perceived gender biases and in our study, they show promi- nent debiasing effect.5 # 4.3 Analysis and Discussion We consider a text corpus to be biased when it has a skewed distribution of words cooccuring with one gender vs another. Any dataset that has such demographic bias can lead to (potentially unin- tended) social exclusion (Hovy, 2015). PTB and WikiText-2 consist of news articles related to busi- ness, science, politics, and sports. These are all male dominated fields. 
However, CNN/Daily Mail consists of articles across diverse set of categories like entertainment, health, travel etc. Among the three corpora, Penn Treebank has more frequent mentions of male words with respect to female words and CNN/Daily Mail has the least. As defined, bias score of zero implies perfectly neutral word, any value higher/lower implies fe- male/male bias. Therefore, the absolute value of bias score signifies presence of bias. Overall bias in a dataset can be estimated as the average of ab- solute bias score (µ). The aggregated absolute bias scores µ of the three datasets—Penn Treebank, WikiText-2, and CNN/Daily Mail—are 0.83, 0.80, and 0.72 respectively. Higher µ value in this mea- sure means on-an-average the words in the entire corpus are more gender biased. As per the Tables 1, 2, and 3, we see that the µ consistently decline as we increase λ until a point, beyond which the model becomes unstable. So there is a scope of optimizing the λ values. The second measure we evaluated is the stan- dard deviation (σ) of the bias score distribution. 5For more examples, refer to the supplement Less biased dataset should have the bias score con- centrating closer to zero and hence lower σ value. We consistently see that, with the initial increase of λ, there is a decrease in σ of the bias score dis- tribution. The final measure to evaluate debiasing is com- parison of bias scores at individual word level. We regress the bias scores of the words in generated text against their bias scores in the training corpus after removing the outliers. The slope of regres- sion β signifies the amplification or dampening ef- fect of the model relative to the training corpus. Unlike the previous measures, this measure gives clarity at word level bias changes. A drop in β sig- nifies reduction in bias and vice versa. A negative β signifies inversion in bias assuming there are no other effects of the loss term. In our experiments, we observe β to increase with higher values of λ possibly due to instability in model and none of those values go beyond 1. We observe that corpus level bias scores like µ, σ are less effective measures to study efficacy of debiasing techniques because they fail to track the improvements at word level. Instead, we recom- mend a word level score comparison like β to eval- uate robustness of debiasing at corpus level. To choose the context window in a more ro- bust manner, we take exponential weightings to the cooccurrences. The results for aggregated av- erage of absolute bias and standard deviation show the same pattern as in fixed context window. As shown in the results above, we see that the standard deviation (σ), absolute mean (µ) and slope of regression (β) reduce for smaller λ rel- ative to those in training data and then increase with λ to match the variance in the original cor- pus. This holds for the experiments conducted with fixed context window as well as with expo- nential weightings. # 5 Conclusion In this paper, we quantify and reduce gender bias in word level language models by defining a gen- der subspace and penalizing the projection of the word embeddings onto that gender subspace. We device a metric to measure gender bias in the train- ing and the generated corpus. In this study, we quantify corpus level bias in two different metrics—absolute mean (µ) and standard deviation (σ). However, for evaluating debiasing effects, we propose a relative metric (β) to study the change in bias scores at word level in training corpus. To calculate generated text vs. 
β, we conduct an in-depth regression analysis of the word level bias measures in the generated text corpus over the same for the training corpus. Although we found mixed results on amplifica- tion of bias as stated by Zhao et al. (2017), the de- biasing method shown by Bolukbasi et al. (2016) was validated with the use of novel and robust bias measure designed in this paper. Our proposed methodology can deal with distribution of words in a vocabulary in word level language model and it targets one way to measure bias, but it’s highly likely that there is significant bias in the debiased models and data, just not bias that we can detect on this measure. It can be concluded different bias metrics show different kinds of bias (Gonen and Goldberg, 2019). We additionally observe a perplexity bias trade- off as a result of the additional bias regularization term. In order to reduce bias, there is a compro- mise on perplexity. Intuitively, as we reduce bias the perplexity is bound to increase due to the fact that, in an unbiased model, male and female words will be predicted with an equal probability. # 6 Acknowledgements We are grateful to Yu Wang and Jason Cramer for helping to initiate this project, to Nishant Subra- mani for helpful discussion, and to our reviewers for their thoughtful feedback. Bowman acknowl- edges support from Samsung Research. # References Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4349–4357. Curran Associates, Inc. and Arvind Joanna J. Bryson, Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186. Joel Escud´e Font and Marta R. Costa-Juss`a. 2019. Equalizing gender biases in neural machine trans- lation with word embeddings techniques. CoRR, abs/1901.03116. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Dirk Hovy. 2015. Demographic factors improve clas- sification performance. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), volume 1, pages 752–762. Anja Lambrecht and Catherine E Tucker. 2018. Al- gorithmic bias? an empirical study into appar- ent gender-based discrimination in the display of stem career ads. Social Science Research Network (SSRN). Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), vol- ume 2, pages 302–308. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. 
Building a large annotated corpus of english: The penn treebank. Computa- tional Linguistics, 19(2):313–330. Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur- In Proceed- ing social bias in sentence encoders. ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies. Associa- tion for Computational Linguistics. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM In International Conference on language models. Learning Representations. Stephen Merity, Caiming Xiong, James Bradbury, and Pointer sentinel mixture Richard Socher. 2016. models. arXiv preprint arXiv:1609.07843. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recur- rent neural network based language model. In IN- TERSPEECH, pages 1045–1048. ISCA. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- In Advances in neural information processing ity. systems, pages 3111–3119. Mei Ngan and Patrick Grother. 2015. Face recogni- tion vendor test (FRVT) performance of automated gender classification algorithms. US Department of Commerce, National Institute of Standards and Technology. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic models for an- alyzing and detecting biased language. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 1650–1659. Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, Louisiana. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using In EMNLP, pages 2979– corpus-level constraints. 2989. Association for Computational Linguistics. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning gender-neutral word In Proceedings of the 2018 Confer- embeddings. ence on Empirical Methods in Natural Language Processing, pages 4847–4853. Association for Com- putational Linguistics. # A Defining sets The gender pair list for each corpus is designed separately. We consider only those gender pairs that occur in the training corpus. 
Below are the gender lists corresponding to each corpus: # A.1 Penn Treebank Male Words: “actor” “boy” “father” “he” “him ” “his” “male” “man” “men” “son” “sons” “spokesman” “wife” “king” “brother” Female Words: “actress” “girl ” “mother” “she” “her ” “her” “female” “woman” “women” “daughter” “daughters” “spokeswoman” “hus- band” “queen” “sister” # A.2 WikiText-2 Male Words: “actor” “Actor” “boy” “Boy” “boyfriend” “Boys” “boys” “father” “Father” “Fa- thers” “fathers” “Gentleman” “gentleman” “gen- tlemen” “Gentlemen” “grandson” “he” “He” “hero” “him” “Him” “his” “His” “Husband” “husbands” “King” “kings” “Kings” “male” “Male” “males” “Males” “man” “Man” “men” “Men” “Mr.” “Prince” “prince” “son” “sons” “spokesman” “stepfather” “uncle” “wife” “king” Female Words: “actress” “Actress” “girl” “Girl” “girlfriend” “Girls” “girls” “mother” “Mother” “Mothers” “mothers” “Lady” “lady” “ladies” “Ladies” “granddaughter” “she” “She” “heroine” “her” “Her” “her” “Her” “Wife” “wives” “Queen” “queens” “Queens” “female” “Female” “fe- males” “Females” “woman” “Woman” “women” “Women” “Mrs.” “Princess” “princess” “daugh- ter” “daughters” “spokeswoman” “stepmother” “aunt” “husband” “queen” # A.3 CNN/Daily Mail Male Words: “actor” “boy” “boyfriend” “boys” “father” “gentlemen” “grandson” “he” “him” “his” “husbands” “kings” “male” “males” “man” “men” “prince” “son” “sons” “spokesman” “stepfather” “uncle” “wife” “king” “brother” “brothers” Female Words: “actress” “girl” “girlfriend” “ladies” “mothers” “mother” “girls” “wives” “granddaughter” “woman” “queens” “women” “daughters” “spokeswoman” “stepmother” “aunt” “husband” “queen” “sister” “sisters” # B Word Level Bias Examples Tables 5 and 6 show the bias scores at individual word level for selected words for Wikitext-2. The tables show how the scores vary for the training text and the generated text for different values of λ Tables 7 and 8 show the bias scores at individ- ual word level for selected words for CNN/Daily Mail. 
The tables show how the scores vary for the training text and the generated text for different values of λ Target Words training λ=0.0 λ=0.01 λ=0.1 λ=0.5 λ=0.8 λ=1.0 Arts Boston Edward George Henry Peter Royal Sir Stephen Taylor ambassador failed focused idea manager students university wife work youth -0.76 -0.95 -0.68 -0.52 -0.59 -0.69 -0.01 -0.01 -0.35 -0.84 -0.76 -0.46 -0.22 -0.20 -1.58 -0.60 -0.12 -0.92 -0.24 -0.39 -1.20 -1.06 -1.06 -0.91 -1.06 -2.06 -1.89 -1.76 -1.20 -0.91 -1.20 -2.06 -0.91 -1.06 -1.60 -0.79 -1.06 -1.29 -0.88 -1.20 -0.87 -0.23 0.09 -0.26 0.11 -0.09 -0.39 -0.99 -0.18 0.57 -0.23 0.03 -0.12 -0.36 -0.04 -0.31 0.17 -0.81 -0.48 0.54 -0.32 -1.06 -0.56 -0.22 -0.34 -0.14 -0.61 -0.86 -1.01 0.00 -0.63 -0.36 -0.41 -0.16 -0.30 -0.29 -1.01 -1.02 -0.23 -0.16 -0.17 -0.13 -0.14 -0.48 -0.84 -0.32 -0.64 -0.64 -0.84 -0.01 -0.74 -1.00 -0.40 -0.27 -1.08 -0.32 -0.79 -0.57 -0.49 -0.68 0.13 -0.37 -0.44 -0.26 -0.92 0.08 -1.14 -0.16 -0.11 -0.39 0.43 0.17 -0.57 -0.06 -0.30 -0.51 -0.95 -0.67 -0.52 0.58 -1.48 -0.94 -0.23 0.01 -0.61 0.53 -0.56 0.07 0.36 -0.83 -0.81 -0.53 -0.42 -1.06 -0.50 -0.70 -1.03 -0.13 Table 5: WikiText-2 bias scores for the words biased towards male gender for different λ values Target Words training λ=0.0 λ=0.01 λ=0.1 λ=0.5 λ=0.8 λ=1.0 Katherine Zenobia childhood cousin humor invitation parents partners performances producers readers stars talent wore 1.78 0.05 0.48 0.13 0.34 0.19 0.51 0.85 0.79 1.04 0.22 0.85 0.02 0.09 2.27 0.88 1.80 0.88 1.29 1.80 0.76 2.27 1.02 1.58 0.88 1.58 0.88 0.88 1.38 1.84 0.12 0.67 -0.87 0.77 -0.28 -0.20 0.33 0.28 0.16 -0.75 0.48 0.69 0.47 1.10 0.13 0.69 0.69 0.08 0.98 0.16 0.78 0.29 0.90 0.10 0.29 0.95 0.65 0.37 0.09 0.61 0.57 0.45 0.87 0.03 1.35 0.36 0.46 0.31 0.65 0.75 1.24 0.38 0.67 0.34 -0.44 0.57 -0.17 0.10 -1.45 -0.32 -0.28 -0.86 0.16 0.70 0.34 0.71 -0.25 1.11 3.22 -1.80 0.18 -1.29 -0.08 -0.69 Table 6: WikiText-2 bias scores for the words biased towards female gender for different λ values -1.93 0.60 -0.45 -0.69 0.21 0.69 -1.93 -0.27 0.04 -0.32 -0.43 -0.43 -0.52 0.26 -0.63 -0.36 -0.11 -0.63 -0.21 0.49 -0.50 -0.32 -0.99 -1.49 -0.11 -0.19 0.04 -1.30 0.40 -1.59 -1.68 -0.97 -0.83 -0.58 -1.12 abusers acting actions barrister battle beneficiary bills businessman cars citizen cocaine conspiracy controversial cooking cop drug executive fighter fraud friendly heroin journalists lawyer lead leadership notorious offensive officer outstanding parole pensioners prisoners religion reporters representatives research resignation sacrifice supervisor violent 0.66 -0.23 -0.27 -1.35 -0.27 -1.64 -0.32 -0.19 -0.43 -0.03 -0.59 -0.57 -0.21 -0.48 -1.30 -0.76 -0.04 -0.59 -0.17 -0.48 -0.57 -0.25 -0.39 -0.47 -0.25 -0.18 -0.17 -0.25 -0.25 -0.54 -0.48 -0.52 -0.41 -0.60 -0.07 -0.34 -0.95 -0.03 -0.66 -0.17 1.17 -0.81 -0.51 -2.00 -0.53 -1.87 -0.53 -1.81 -0.55 -0.30 -1.00 -0.73 -0.39 -0.53 -1.42 -0.82 -0.34 -0.90 -0.30 -0.53 -0.67 -1.08 -0.47 -0.50 -0.74 -0.64 -0.39 -0.29 -1.55 -0.86 -0.86 -0.99 -0.97 -0.93 -0.48 -0.46 -1.67 -1.08 -0.92 -0.54 0.56 -0.59 -0.06 -0.64 -0.10 -1.06 -0.18 -0.71 -0.32 -0.03 -0.84 -0.66 -0.39 -0.24 -0.77 -0.53 -0.22 -0.48 -0.16 -0.30 -0.28 -0.55 -0.14 -0.40 -0.28 -0.36 -0.28 -0.21 -0.98 0.00 -0.77 -0.18 -0.15 -0.26 -0.40 -0.05 -0.61 -0.38 -0.44 -0.07 0.77 -0.35 -0.07 -0.76 -0.32 -0.22 -0.50 -0.45 -0.11 -0.22 -0.44 -0.39 -0.02 -0.22 -0.72 -0.42 -0.04 -0.36 -0.19 -0.23 -0.26 -0.76 -0.10 -0.09 -0.68 -0.22 -0.17 -0.13 -0.50 -0.08 -0.07 -0.29 -0.48 -0.05 -0.18 -0.33 -0.58 -0.17 -0.25 -0.22 0.16 -0.54 -0.53 -0.08 -0.16 0.63 0.23 -0.53 -0.24 -0.01 -0.42 
-0.83 -0.17 0.07 0.00 -0.54 -0.48 -0.89 0.10 0.36 -0.66 -0.44 -0.20 -0.07 -0.57 -0.12 -0.52 -0.17 0.03 0.07 0.64 -0.17 0.18 -0.52 -0.46 0.03 -0.40 -1.29 -0.17 -0.19 | | | | | 0.48 -0.19 Table 7: CNN/Daily Mail bias scores for the words biased towards male gender for different λ values Target Words training λ=0.0 λ=0.1 λ=0.5 λ=0.8 λ=1.0 -0.65 1.16 0.64 0.36 0.48 -0.25 0.27 -0.14 0.87 -1.53 1.11 1.36 0.88 0.26 -0.21 -0.94 0.55 0.06 0.29 0.26 0.25 0.25 -0.09 0.26 -0.26 -0.22 -0.14 -0.34 -0.83 0.42 0.08 0.59 0.96 0.43 0.45 0.17 0.58 -0.53 0.35 abusive appealing bags beloved carol chatted children comments crying designer designers distressed divorced dollar donated donating embracing encouragement endure expecting feeling festive fragile happy healthy hooked hurting indian kissed kissing loving luxurious makeup mannequin married models pictures pray relationship scholarship sharing sleeping stealing tears thanksgiving waist 0.00 0.44 0.34 0.17 0.76 0.03 0.29 0.17 0.28 0.73 0.44 0.15 0.68 0.44 0.52 1.29 1.13 0.85 0.85 1.01 0.21 0.15 0.44 0.32 0.52 0.78 0.75 0.18 0.31 0.26 0.41 0.59 1.60 0.95 0.29 0.35 0.08 0.62 0.53 0.80 0.58 0.18 0.10 0.50 0.85 1.33 0.40 1.22 1.42 0.35 1.41 1.83 0.46 0.46 0.70 0.80 2.14 0.53 0.70 1.63 0.57 1.38 1.78 0.94 0.94 1.07 0.84 0.53 0.94 0.66 0.64 1.38 1.13 0.28 1.03 1.14 0.73 0.82 1.63 1.92 0.37 1.22 0.50 1.58 0.62 1.16 0.73 0.71 0.48 0.58 2.14 1.45 0.06 0.23 0.48 0.27 0.20 0.20 0.36 0.04 0.19 0.57 1.29 0.23 0.18 0.65 0.06 0.27 0.74 0.22 0.26 0.26 0.16 0.52 0.20 0.10 0.26 0.12 0.33 0.15 0.17 0.54 0.43 0.17 0.07 0.70 0.34 0.28 0.10 0.25 0.39 0.80 0.33 0.27 0.32 0.44 1.14 0.68 0.39 0.30 0.05 0.15 0.39 0.19 0.26 0.02 0.57 0.69 0.76 0.26 0.10 0.59 0.15 0.80 0.55 0.50 0.29 0.12 0.25 0.14 0.45 0.11 0.45 0.12 0.34 0.02 0.19 0.61 0.18 0.44 0.22 0.04 0.09 0.38 0.04 0.35 0.32 0.70 0.67 0.35 0.18 0.12 1.08 0.02 0.48 -0.68 0.16 0.52 0.27 -0.14 0.41 -0.35 0.17 0.53 -0.11 -0.56 0.31 -0.24 0.68 -0.03 1.48 0.37 1.02 0.53 0.16 0.21 -0.20 0.11 0.24 -0.11 0.44 -0.02 0.28 0.44 0.15 -0.03 1.09 1.42 0.30 0.90 -0.06 -0.25 0.58 0.53 0.42 0.56 0.06 0.45 0.90 0.31 0.96 Table 8: CNN/Daily Mail bias scores for the words biased towards female gender for different λ values
{ "id": "1609.07843" }
1904.02342
Text Generation from Knowledge Graphs with Graph Transformers
Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce. In this work, we address the problem of generating coherent multi-sentence texts from the output of an information extraction system, and in particular a knowledge graph. Graphical knowledge representations are ubiquitous in computing, but pose a significant challenge for text generation techniques due to their non-hierarchical nature, collapsing of long-distance dependencies, and structural variety. We introduce a novel graph transforming encoder which can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints. Incorporated into an encoder-decoder setup, we provide an end-to-end trainable system for graph-to-text generation that we apply to the domain of scientific text. Automatic and human evaluations show that our technique produces more informative texts which exhibit better document structure than competitive encoder-decoder methods.
http://arxiv.org/pdf/1904.02342
Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, Hannaneh Hajishirzi
cs.CL
Accepted as a long paper in NAACL 2019
null
cs.CL
20190404
20220324
# Text Generation from Knowledge Graphs with Graph Transformers

Rik Koncel-Kedziorski1, Dhanush Bekal1, Yi Luan1, Mirella Lapata2, and Hannaneh Hajishirzi1,3 1University of Washington {kedzior,dhanush,luanyi,hannaneh}@uw.edu 2University of Edinburgh [email protected] 3Allen Institute for Artificial Intelligence

# Abstract

Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce. In this work, we address the problem of generating coherent multi-sentence texts from the output of an information extraction system, and in particular a knowledge graph. Graphical knowledge representations are ubiquitous in computing, but pose a significant challenge for text generation techniques due to their non-hierarchical nature, collapsing of long-distance dependencies, and structural variety. We introduce a novel graph transforming encoder which can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints. Incorporated into an encoder-decoder setup, we provide an end-to-end trainable system for graph-to-text generation that we apply to the domain of scientific text. Automatic and human evaluations show that our technique produces more informative texts which exhibit better document structure than competitive encoder-decoder methods.1

1Data and code available at https://github.com/rikdz/GraphWriter

[Figure 1 graphic not reproduced: an example title ("Event Detection with Conditional Random Fields") and abstract with entity, coreference and relation annotations, and the knowledge graph built from them, with nodes such as CRF Model, Event Detection, SemEval Task 11 and HMM Models linked by used-for, evaluate-for and comparison edges.] Figure 1: A scientific text showing the annotations of an information extraction system and the corresponding graphical representation. Coreference annotations shown in color. Our model learns to generate texts from automatically extracted knowledge using a graph encoder decoder setup.

# Introduction

Increases in computing power and model capacity have made it possible to generate mostly-grammatical sentence-length strings of natural language text. However, generating several sentences related to a topic and which display overall coherence and discourse-relatedness is an open challenge. The difficulties are compounded in domains of interest such as scientific writing. Here the variety of possible topics is great (e.g. topics as diverse as driving, writing poetry, and picking stocks are all referenced in one subfield of one scientific discipline). Additionally, there are strong constraints on document structure, as scientific communication requires carefully ordered explanations of processes and phenomena.

Many researchers have sought to address these issues by working with structured inputs. Data-to-text generation models (Konstas and Lapata, 2013; Lebret et al., 2016; Wiseman et al., 2017; Puduppully et al., 2019) condition text generation on table-structured inputs. Tabular input representations provide more guidance for producing longer texts, but are only available for limited domains as they are assembled at great expense by manual annotation processes.
The current work explores the possibility of us- ing information extraction (IE) systems to auto- matically provide context for generating longer texts (Figure 1). Robust IE systems are avail- able and have support over a large variety of tex- tual domains, and often provide rich annotations of relationships that extend beyond the scope of a single sentence. But due to their automatic na- ture, they also introduce challenges for generation such as erroneous annotations, structural variety, and significant abstraction of surface textual fea- tures (such as grammatical relations or predicate- argument structure). To effect our study, we use a collection of ab- stracts from a corpus of scientific articles (Ammar et al., 2018). We extract entity, coreference, and relation annotations for each abstract with a state- of-the-art information extraction system (Luan et al., 2018), and represent the annotations as a knowledge graph which collapses co-referential entities. An example of a text and graph are shown in Figure 1. We use these graph/text pairs to train a novel attention-based encoder-decoder model for knowledge-graph-to-text generation. Our model, GraphWriter, extends the successful Transformer for text encoding (Vaswani et al., 2017) to graph- structured inputs, building on the recent Graph Attention Network architecture (Veliˇckovi´c et al., 2018). The result is a powerful, general model for graph encoding which can incorporate global structural information when contextualizing ver- tices in their local neighborhoods. The main contributions of this work include: 1. We propose a new graph transformer encoder that applies the successful sequence trans- former to graph structured inputs. 2. We show how IE output can be formed as a connected unlabeled graph for use in attention-based encoders. 3. We provide a large dataset of knowledge- graphs paired with scientific texts for further study. Through detailed automatic and human evalua- tions, we demonstrate that automatically extracted knowledge can be used for multi-sentence text generation. We further show that structuring and encoding this knowledge as a graph leads to im- proved generation performance compared to other encoder-decoder setups. Finally, we show that GraphWriter’s transformer-style encoder is more effective than Graph Attention Networks on the knowledge-graph-to-text task. # 2 Related Work Our work falls under the larger scope of concept- to-text generation. Barzilay and Lapata (2005) in- troduced a collective content selection model for generating summaries of football games from ta- bles of game statistics. Liang et al. (2009) jointly learn to segment and align text with records, re- ducing the supervision needed for learning. Kim and Mooney (2010) improve this technique by learning a semantic parse to logical forms. Kon- stas and Lapata (2013) focus on the generation objective, jointly learning planning and generat- ing using a rhetorical (RST) grammar induction approach. These earlier works often focused on smaller record generation datasets such as WeatherGov and RoboCup, but recently Mei et al. (2016) showed how neural models can achieve strong re- sults on these standards, prompting researchers to investigate more challenging domains such as ours. Lebret et al. (2016) tackles the task of generat- ing the first sentence of a Wikipedia entry from the associated infobox. They provide a large dataset of such entries and a language model conditioned on tables. 
Our work focuses on a multi-sentence task where relations can extend beyond sentence boundaries. Wiseman et al. (2017) study the difficulty of ap- plying neural models to the data-to-text task. They introduce a large dataset where a text summary of a basketball game is paired with two tables of rel- evant statistics and show that neural models strug- gle to compete with template based methods over this data. We propose generating from graphs rather than tables, and show that graphs can be ef- fectively encoded to capture both local and global structure in the input. We show that modeling knowledge as a graph improves generation results, connecting our work to other graph-to-text tasks such as generating from Abstract Meaning Representation (AMR) graphs. Konstas et al. (2017) provide the first neu- ral model for this task, and show that pretrain- ing on a large dataset of noisy automatic parses can improve results. However, they do not di- rectly model the graph structure, relying on lin- earization and sequence encoding instead. Cur- rent works improve this through more sophisti- cated graph encoding techniques. Marcheggiani and Perez-Beltrachini (2018) encode input graphs directly using a graph convolution encoder (Kipf and Welling, 2017). Our model extends the graph attention networks of Veliˇckovi´c et al. (2018), a direct descendant of the convolutional approach which offers more modeling power and has been Vocab Tokens Entities Avg Length Avg #Vertices Avg #Edges Title Abstract 29K 413K - 9.9 - - KG 54K 77K 5.8M 1.2M 518K - 12.42 4.43 - 141.2 - - Table 1: Data statistics of our AGENDA dataset. Aver- ages are computed per instance. shown to improve performance. Song et al. (2018) uses a graph LSTM model to effect information propagation. At each timestep, a vertex is rep- resented by a gated combination of the vertices to which it is connected and the labeled edges connecting them. Beck et al. (2018) use a sim- ilar gated graph neural network. Both of these gated models make heavy use of label information, which is much sparser in our knowledge graphs than in AMR. Generally, AMR graphs are denser, rooted, and connected, whereas the knowledge our model works with lacks these characteristics. For this reason, we focus on attention-based models such as Veliˇckovi´c et al. (2018), which impose fewer constraints on their input. Finally, our work is related to Wang et al. (2018) who offer a method for generating sci- entific abstracts from titles. Their model uses a gated rewriter network to write and revise sev- eral draft outputs in several sequence-to-sequence steps. While we operate in the same general do- main as this work, our task setup is ultimately dif- ferent due to the use of extracted information as in- put. We argue that our setup improves the task de- fined in Wang et al. (2018), and our more general model can be applied across tasks and domains. # 3 The AGENDA Dataset We consider the problem of generating a text from automatically extracted information (knowledge). IE systems can produce high quality knowledge for a variety of domains, synthesizing information from across sentence and even document bound- aries. Generating coherent text from knowledge requires a model which considers global charac- teristics of the knowledge as well as local charac- teristics of each entity. 
This feature of the task mo- tivates our use of graphs for representing knowl- edge, where neighborhoods localize important in- formation and paths through the graph build con- nections between distant nodes through interme- diate ones. An example knowledge graph can be seen in Figure 1. We formulate our problem as follows: given the title of a scientific article and a knowledge graph constructed by an automatic information extrac- tion system, the goal is to generate an abstract that a) is appropriate for the given title and b) expresses the content of the knowledge graph in natural lan- guage text. To evaluate how well a model accom- plishes this goal, we introduce the Abstract GEN- eration DAtaset (AGENDA), a dataset of knowl- edge graphs paired with scientific abstracts. Our dataset consists of 40k paper titles and abstracts from the Semantic Scholar Corpus taken from the proceedings of 12 top AI conferences (Ammar et al., 2018). For each abstract, we create a knowledge graph in two steps. First, we apply the SciIE system of Luan et al. (2018), a state-of-the-art science- domain information extraction system. This sys- tem provides named entity recognition for scien- tific terms, with entity types Task, Method, Metric, Material, or Other Scientific Term. The model also produces co-reference annotations as well as seven relations that can obtain between different enti- ties (Compare, Used-for, Feature-of, Hyponym- of, Evaluate-for, and Conjunction). For exam- ple, in Figure 1, the node labeled “SemEval 2011 Task 11” is of type ‘Task’, “HMM Models” is of type ‘Model’, and there is a ‘Evaluate-For’ rela- tion showing that the models are evaluated on the task. We form these annotations into knowledge graphs. We collapse co-referential entities into a single node associated with the longest mention (on the assumption that these will be the most in- formative). We then connect nodes to one another using the relation annotations, treating these as la- beled edges in the graph. The result is a possibly unconnected graph representation of the SciIE an- notations for a given abstract. Statistics of the AGENDA dataset are available in Table 1. We split the AGENDA dataset into 38,720 training, 1000 validation, and 1000 test datapoints. We offer standardized data splits to fa- cilitate comparison. # 4 Model Following most work on neural generation we adopt an encoder-decoder architecture, shown in Figure 2: Converting disconnected labeled graph to connected unlabeled graph for use in attention-based encoder. vi refer to vertices, Rij to relations, and G is a global context node. Figure 3, which we call GraphWriter. The input to GraphWriter is a title and a knowledge graph which are encoded respectively with a bidirec- tional recurrent neural network and a novel Graph Transformer architecture (to be discussed in Sec- tion 4.1). At each decoder time step, we attend on encodings of the knowledge graph and document title using the decoder hidden state ht ∈ Rd. The resulting vectors are used to select output wt ei- ther from the decoder’s vocabulary or by copying an entity from the knowledge graph. Details of our decoding process are described in Section 4.2. The model is trained end-to-end to minimize the neg- ative log likelihood of the mixed copy and vocab- ulary probability distribution and the human au- thored text. # 4.1 Encoder The AGENDA dataset contains a knowledge graph for each datapoint, but our model requires unlabeled, connected graphs as input. 
To encode knowledge graphs with this model, we restructure each graph as an unlabeled connected graph, preserving label information by the method described below and sketched in Figure 2.

Graph Preparation We convert each graph to an unlabeled connected bipartite graph following a similar procedure to Beck et al. (2018). In this process, each labeled edge is replaced with two vertices: one representing the forward direction of the relation and one representing the reverse. These new vertices are then connected to the entity vertices so that the directionality of the former edge is maintained. This restructures the original knowledge graph as an unlabeled directed graph where all vertices correspond to entities and relations in the SciIE annotations without loss of information. To promote information flow between disconnected parts of the graph, we add a global vertex which connects all entity vertices. This global vertex will be used to initialize the decoder, analogously to the final encoder hidden state in a traditional sequence to sequence model. The final result of these restructuring operations is a connected, unlabeled graph G = (V, E), where V is a list of entities, relations, and a global node and E is an adjacency matrix describing the directed edges.

Figure 3: GraphWriter Model Overview

Graph Transformer Our model is most similar to the Graph Attention Network (GAT) of Veličković et al. (2018), which computes the hidden representations of each node in a graph by attending over its neighbors following a self-attention strategy. The use of self-attention in GAT addresses the shortcomings of prior methods based on graph convolutions (Defferrard et al., 2016; Kipf and Welling, 2017), but limits vertex updates to information from adjacent nodes. Our model allows for a more global contextualization of each vertex through the use of a transformer-style architecture. The recently proposed Transformer (Vaswani et al., 2017) addresses the inherent sequential computation shortcoming of recurrent neural networks, enabling efficient and parallel computation by invoking a self-attention mechanism for global context modeling. These models have shown promising results in a variety of text processing tasks (Radford et al., 2018).

Figure 4: Graph Transformer

Our Graph Transformer encoder starts with self-attention of local neighborhoods of vertices; the key difference with GAT is that our model includes additional mechanisms for capturing global context. This additional modeling power allows the Graph Transformer to better articulate how a vertex should be updated given the content of its neighbors, as well as to learn global patterns of graph structure relevant to the model's objective.

Specifically, V is embedded in a dense continuous space by the embedding process described at the end of this section, resulting in matrix V_0 = [v_i], v_i ∈ R^d, which will serve as input to the graph transformer model shown in Figure 4. Each vertex representation v_i is contextualized by attending over the other vertices to which v_i is connected in G.
We use an N-headed self-attention setup, where N independent attentions are calculated and concatenated before a residual connection is applied:

v̂_i = v_i + ‖_{n=1}^{N} Σ_{j ∈ N_i} α^n_{ij} W^n_V v_j    (1)

α^n_{ij} = a^n(v_i, v_j)    (2)

Here, ‖ denotes the concatenation of the N attention heads, N_i denotes the neighborhood of v_i in G, W^n_V ∈ R^{(d/N)×d}, and a^n are attention mechanisms parameterized per head. In this work, we use attention functions of the following form:

a(q_i, k_j) = exp((W_K k_j)^⊤ W_Q q_i) / Σ_{z ∈ N_i} exp((W_K k_z)^⊤ W_Q q_i)    (3)

Each a^n learns independent transformations W_Q, W_K ∈ R^{(d/N)×d} of q and k respectively, and the resulting product is normalized across all connected edges. To reduce the tendency of these dot products to impede gradient flow, we scale them by 1/√d.

The Graph Transformer then augments these multi-headed attention layers with block networks. Each block applies the following transformations:

ṽ_i = LayerNorm(v_i + LayerNorm(v̂_i))    (4)

v_i = FFN(LayerNorm(ṽ_i))    (5)

where FFN(x) is a two layer feedforward network with a non-linear transformation f between layers, i.e. f(xW_1 + b_1)W_2 + b_2. Stacking multiple blocks allows information to propagate through the graph. Blocks are stacked L times, with the output of layer l − 1 taken as the input to layer l, so that v^l_i denotes the representation of vertex i after layer l. The resulting vertex encodings V^L = [v^L_i] represent entities, relations, and the global node contextualized by their relationships in the graph structure. We refer to the resulting encodings as graph contextualized vertex encodings.

Embedding Vertices, Encoding Title As stated above, the vertices of our graph correspond to entities and relations from the SciIE annotations. Because each relation is represented as both a forward- and backward-looking vertex, we learn two embeddings per relation as well as an initial embedding for the global node. Entities correspond to scientific terms which are often multi-word expressions. To produce a single d-dimensional embedding per phrase, we use the last hidden state of a bidirectional RNN run over embeddings of each word in the entity phrase, i.e. BiRNN(x_1 . . . x_m) for dense embeddings x and phrase length m. The output of our embedding step is a collection V_0 of d-dimensional vectors representing each vertex in V. The title input is also a short string, and so we encode it with another BiRNN to produce T = BiRNN(x′_1 . . . x′_{m′}) for title word embeddings x′.
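To make the encoder concrete, the following is a minimal PyTorch sketch of one Graph Transformer layer following Equations (1)-(5), using the dimensions reported later in the implementation details (hidden size 500, 4 heads, feed-forward size 2000, PReLU). It is an illustrative re-implementation under those assumptions, not the released GraphWriter code; names such as `GraphTransformerLayer` and `adj` are ours.

```python
import math
import torch
import torch.nn as nn


class GraphTransformerLayer(nn.Module):
    """One Graph Transformer block (Eqs. 1-5): multi-head attention over
    graph neighborhoods followed by the feed-forward block network."""

    def __init__(self, d_model=500, n_heads=4, d_ff=2000):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_head = n_heads, d_model // n_heads
        # per-head projections W_Q, W_K, W_V packed into single matrices
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)
        self.norm_attn = nn.LayerNorm(d_model)
        self.norm_res = nn.LayerNorm(d_model)
        self.norm_ffn = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.PReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, v, adj):
        # v:   (n_vertices, d_model) current vertex representations
        # adj: (n_vertices, n_vertices) bool adjacency of G; the global node
        #      keeps the graph connected, so every row has a True entry
        n, d = v.shape
        q = self.w_q(v).view(n, self.h, self.d_head)
        k = self.w_k(v).view(n, self.h, self.d_head)
        val = self.w_v(v).view(n, self.h, self.d_head)
        # per-head scaled dot-product scores, shape (heads, n, n)
        scores = torch.einsum('ihd,jhd->hij', q, k) / math.sqrt(d)
        # Eq. 3: normalize only over the neighborhood N_i of each vertex
        scores = scores.masked_fill(~adj.unsqueeze(0), float('-inf'))
        alpha = torch.softmax(scores, dim=-1)
        # Eq. 1: aggregate neighbor values, concatenate heads, add residual
        attended = torch.einsum('hij,jhd->ihd', alpha, val).reshape(n, d)
        v_hat = v + attended
        # Eqs. 4-5: layer norms and the two-layer feed-forward network
        v_tilde = self.norm_res(v + self.norm_attn(v_hat))
        return self.ffn(self.norm_ffn(v_tilde))
```

Stacking L = 6 such layers over the prepared vertex matrix and its boolean adjacency yields the graph contextualized vertex encodings V^L used below.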
# 4.2 Decoder

We decode with an attention-based decoder with a copy mechanism for copying input from the knowledge graph and title. At each decoding timestep t we use decoder hidden state h_t to compute context vectors c_g and c_s for the graph and title sequence respectively. c_g is computed using multi-headed attention contextualized by h_t:

c_g = h_t + ‖_{n=1}^{N} Σ_{j ∈ V} α^n_j W^n_V v^L_j    (6)

α^n_j = a^n(h_t, v^L_j)    (7)

for a as described in Equation (1), by attending over the graph contextualized encodings V^L. c_s is computed similarly, attending over the title encoding T. We then construct the final context vector by concatenation, c_t = [c_g ‖ c_s]. We use an input-feeding decoder (Luong et al., 2015) where both h_t and c_t are passed as input to the next RNN timestep. We compute a probability p of copying from the input using h_t and c_t in a fashion similar to See et al. (2017), that is:

p = σ(W_copy [h_t ‖ c_t] + b_copy)    (8)

The final next-token probability distribution is:

p · α^copy + (1 − p) · α^vocab,    (9)

where the probability distribution α^copy over entities and input tokens is computed as α^copy_j = a([h_t ‖ c_t], x_j) for x_j ∈ V ‖ T. The remaining 1 − p probability is given to α^vocab, which is calculated by scaling [h_t ‖ c_t] to the vocabulary size and taking a softmax.
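The copy switch of Equations (8)-(9) reduces to a few lines. In the sketch below a single learned projection stands in for the attention function a, and the class name, argument names, and default vocabulary size are illustrative rather than the exact GraphWriter decoder.

```python
import torch
import torch.nn as nn


class CopySwitch(nn.Module):
    """Mix a copy distribution over graph/title inputs with a vocabulary
    distribution, following Eqs. 8-9."""

    def __init__(self, d_model=500, vocab_size=30000):
        super().__init__()
        self.p_copy = nn.Linear(2 * d_model, 1)        # W_copy, b_copy (Eq. 8)
        self.to_vocab = nn.Linear(2 * d_model, vocab_size)
        self.attn = nn.Linear(2 * d_model, d_model, bias=False)

    def forward(self, h_t, c_t, x_enc):
        # h_t: (d,) decoder state; c_t: (d,) context [c_g || c_s]
        # x_enc: (n_copy, d) encodings of copyable vertices and title tokens
        hc = torch.cat([h_t, c_t], dim=-1)             # [h_t || c_t]
        p = torch.sigmoid(self.p_copy(hc))             # Eq. 8
        # copy scores a([h_t || c_t], x_j) over all copy candidates
        alpha_copy = torch.softmax(x_enc @ self.attn(hc), dim=-1)
        alpha_vocab = torch.softmax(self.to_vocab(hc), dim=-1)
        # Eq. 9: scale the two distributions by p and 1 - p; the first block
        # indexes copy candidates, the second the output vocabulary
        return torch.cat([p * alpha_copy, (1 - p) * alpha_vocab], dim=-1)
```

At decoding time, taking the arg max over the concatenated distribution either copies an entity or title token or emits a vocabulary word.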
# 5 Experiments

Evaluation Metrics We evaluate using a combination of human and automatic evaluations. For human evaluation, participants were asked to compare abstracts generated by various models and those written by the authors of the scientific articles. We used Best-Worst Scaling (BWS; Louviere and Woodworth, 1991; Louviere et al., 2015), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2016). Participants were presented with two or three abstracts and asked to decide which one was better and which one was worse in order of grammar and fluency (is the abstract written in well-formed English?), coherence (does the abstract have an introduction, state the problem or task, describe a solution, and discuss evaluations or results?), and informativeness (does the abstract relate to the provided title and make use of appropriate scientific terms?). We provided examples of good and bad abstracts and explained how they succeed or fail to meet the defined criteria. Because our dataset is scientific in nature, evaluations must be done by experts and we can only collect a limited number of these high quality datapoints.2 The study was conducted by 15 experts (i.e. computer science students) who were familiar with the abstract writing task and the content of the abstracts they judged. To supplement this, we also provide automatic metrics. We use BLEU (Papineni et al., 2002), an n-gram overlap measure popular in text generation tasks, and METEOR (Denkowski and Lavie, 2014), a machine translation metric with paraphrase and language-specific considerations.

2 Attempts to crowdsource this evaluation failed.

Comparisons We compare our GraphWriter against several strong baselines. In GAT, we replace our Graph Transformer encoder with the Graph Attention Network of Veličković et al. (2018). This encoder consists of PReLU activations stacked between 6 self-attention layers. To determine the usefulness of including graph relations, we compare to a model which uses only entities and title (EntityWriter). Finally, we compare with the gated rewriter model of Wang et al. (2018) (Rewriter). This model uses only the document title to iteratively rewrite drafts of its output.3

3 Due to the larger size and greater variety of our dataset and accompanying vocabularies compared to theirs, we were unable to train this model with the reported batch size of 240. We use batch size 24 instead, which is partially responsible for the lower performance.

Implementation Details Our models are trained end-to-end to minimize the negative joint log likelihood of the target text vocabulary and the copied entity indices. We use SGD optimization with momentum (Qian, 1999) and "warm restarts", a cyclical regimen that reduces the learning rate from 0.25 to 0.05 over the course of 5 epochs, then resets for the following epoch. Models are trained for 15 epochs with early stopping (Prechelt, 1998) based on the validation loss, with most models stopping between 8 and 13 epochs. We use single-layer LSTMs (Hochreiter and Schmidhuber, 1997) as recurrent networks. We use dropout (Srivastava et al., 2014) in self attention layers set to 0.3. Hidden states and embedding dimensions are fixed at 500 and attentions learn 500-dimensional projections. In Block layers, the feedforward network has an intermediate size of 2000, and we use a PReLU activation function (He et al., 2015). GraphWriter and GAT use L = 6 layers. The number of attention heads is set to 4. In all models, for both inputs and output, we replace words occurring fewer than 5 times with <unk> tokens. In each abstract, we replace all mentions in a coreference chain in the abstract with the canonical mention used in the graph. We decode with beam search (Graves, 2012; Sutskever et al., 2014) with a beam size of 4. A post-processing step deletes repeated sentences and repeated coordinated clauses.

Model          BLEU          METEOR
GraphWriter    14.3 ± 1.01   18.8 ± 0.28
GAT            12.2 ± 0.44   17.2 ± 0.63
EntityWriter   10.38         16.53
Rewriter       1.05          8.38

Table 2: Automatic Evaluations of Generation Systems.

# 5.1 Results

A comparison of all systems in terms of automatic metrics is shown in Table 2. Our GraphWriter model outperforms other methods. We see that models which leverage title, entities, and relations (GraphWriter and GAT) outperform models which use less information (EntityWriter and Rewriter). We see that GraphWriter outperforms GAT across metrics, indicating that the global contextualization provided by GraphWriter improves generation. To verify the performance gap between GraphWriter and GAT, we report the average test metrics for 4 training runs of each model along with their variances. We see that the variance of the different models is non-overlapping, and in fact all training runs of GraphWriter outperformed all runs of GAT on these metrics.

Does Knowledge Help? To evaluate the value of knowledge in the generation task we compare our GraphWriter model to a model which does not generate from knowledge. We provide expert annotators with 50 randomly-selected paper titles from the test set and ask them for a single judgment according to the criteria described in Section 5. We pair each paper title with the generated abstracts produced by GraphWriter (a knowledge-informed model), Rewriter (a knowledge-agnostic model), and the gold abstract (with canonicalized coreferential mentions).

Model                      Best   Worst
Rewriter (No knowledge)    12%    64%
GraphWriter (Knowledge)    24%    36%
Human Authored             64%    0%

Table 3: Does knowledge improve generation? Human evaluations of best and worst abstract.

                   Win    Tie    Lose
Structure          63%    17%    20%
Informativeness    43%    23%    33%
Grammar            63%    23%    13%
Overall            63%    17%    20%

Table 4: Human Judgments of GraphWriter and EntityWriter models.

Results of this comparison can be seen in Table 3. We see that GraphWriter is selected as "Best" more often than Rewriter, and is less often selected as "Worst", attesting to the value of including knowledge in the text generation process. We see that sometimes generated texts are preferred to human authored text, which is due in part to the disfluencies introduced by canonicalization of entity mentions.

To further understand the advantages of using knowledge graphs, we provide a more detailed comparison of the GraphWriter and EntityWriter models. We select 30 additional test datapoints and ask experts to provide per-criterion judgments of the outputs of the two systems.
Since both mod- els make use of extracted entities, we show this list along with the title for each datapoint, and mod- ify the description of Informativeness to include “making use of the provided entities”. Results of this evaluation are shown in Table 4. Here we see that including structured knowledge in the form of a graph improves abstract generation compared to generating from an unstructured collection of en- tities. The largest gains are made in terms of doc- ument structure and grammar, indicating that the structure of the input knowledge is being trans- lated into the surface form. Generating from Title The Rewriter model (Wang et al., 2018) considers the task of gener- ating an abstract with only the paper’s title as in- put. We compare against this model because it is among the first end-to-end systems to attempt to write scientific abstracts. However, the task setup used in Wang et al. (2018) differs significantly In order from the task introduced in this work. Title Knowledge Block and Group Regularized Sparse Modeling for Dictionary Learning (dictionary learning, CONJUNCTION, sparse coding) ; (optimization problems, USED-FOR, dictionary learning) ; (optimization problems, USED-FOR, sparse coding). . . GraphWriter GAT Sparse representations have recently been shown to be effective in many optimization problems. However, existing dictionary learning methods are limited in the number of dictionary blocks, which can be expensive to obtain. In this paper, we propose a novel approach to dictionary learning based on sparse coding . . . In this paper, we consider the problem of dictionary learning in well-known datasets. In particular, we consider the problem of dictionary learning, where the goal is to find a set of dictionary blocks that maximize the likelihood of a given set of dictionary blocks . . . EntityWriter We propose a novel dictionary learning framework for reconstructed block/group sparse coding schemes. The dictionary learning framework is based on the descent, which is a block structure of the group structure . . . This paper presents a new approach to the k-means of the algorithm. The proposed approach is based on the basis of the stationarity algorithm. The algorithm is based on the fact that the number of bits is a constant of the base of the base of the input . . . This paper proposes a dictionary learning framework that combines the proposed block/group (BGSC) or reconstructed block/group (R-BGSC) sparse coding schemes with the novel Intra-block Coherence Suppres- sion Dictionary Learning algorithm. An important and distinguishing feature of the proposed framework is that all dictionary blocks are trained simultaneously . . . Rewriter Gold Title Knowledge Image Interpolation with Directionlets (directionally adaptive image interpolation USED-FOR edge information) ; (numeric and visual quality, HYPONYM-OF, directionally adaptive image interpolation) ; (directionlets, EVALUATE-FOR, multiple- direction wavelet transform) . . . GraphWriter GAT In this paper, we propose a novel directionally adaptive image interpolation based on the multiple-direction wavelet transform, called directionlets, which can be used as a directionlets to improve the numeric and visual quality of the directionally adaptive image interpolation . . . In this paper, we propose a novel directionally adaptive image interpolation, called directionally adaptive image interpolation, for directionally adaptive image interpolation , which is based on the multiple-direction wavelet transform . . . 
EntityWriter We present a novel directionally adaptive image interpolation for numeric and visual quality. The wavelet transform is based on the wavelet transform between the low-resolution image and the interpolated image. The high-resolution image is represented by a wavelet transform . . . We present a new method for finding topic-specific data sets. The key technical contributions of our ap- proach is to be a function of the terrestrial distributed memory. The key idea is to be a function of the page that seeks to be ranked the buckets of the data. The basic idea is a new tool for the embedded space . . . We present a novel directionally adaptive image interpolation based on a multiple-direction wavelet trans- form, called directionlets. The directionally adaptive image interpolation uses directionlets to efficiently capture directional features and to extract edge information along different directions from the low- resolution image . . . Rewriter Gold Table 5: Example outputs of various systems versus Gold. to make a fair comparison, we construct a variant of our model which is only provided with a title as input. We develop a model that predicts entities from the title, and then uses our knowledge-aware model to generate the abstract. For this compari- son we use the EntityWriter model with a collec- tion of entities inferred from the title alone (Infer- EntityWriter). To infer relevant entities, we learn to embed ti- tles and entities extracted from the corresponding abstract in a shared dense vector space by min- imizing their cosine distance. We use negative sampling to provide definition to this vector space. At test time, we use the title embedding to infer the K = 12 closest entities to feed into the InferEn- tityWriter model. Results are shown in Table 6, which shows that InferEntityWriter achieves bet- Rewriter InferEntityWriter BLEU METEOR 1.05 3.60 8.38 12.2 Table 6: Comparison of generation without knowledge and with Inferred Knowledge (InferEntityWriter) ter results than Rewriter, indicating that the inter- mediate entity prediction step is helpful in abstract generation. # 5.2 Analysis Table 5 shows examples of various system outputs for a particular test instance.We see that Graph- Writer makes use of more entities from the input, arranged with more articulated textual context. It demonstrates less repetition than GAT. Both GraphWriter and GAT show much better coher- ence than EntityWriter, which copies entities from the input into unreasonable contexts. Rewriter, while fluent and grammatical, jumps from topic to topic, failing to relate as strongly to the input as the knowledge-aware models. To determine the shortcomings of our model, we calculate rough error statistics over the out- puts of the GraphWriter on the test set. We no- tice that 40% of entities in the knowledge graphs do not appear in the generated text. Future work should address this coverage problem, perhaps through modifications to the inference procedure or a coverage loss (Tu et al., 2016) modified to the specifics of this task. We find that 18% of all sentences generated by our model repeat sentences or clauses and are subjected to the post-processing pruning mentioned in Section 5. While this step is a simple solution to improve generated outputs, a more advanced solution is required. 
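Looking back at the title-only setting of Section 5.1, the entity inference step used by InferEntityWriter (titles and extracted entities embedded in a shared space trained with a cosine objective and negative sampling, with the K = 12 closest entities retrieved at test time) is essentially a nearest-neighbor lookup. A minimal sketch, assuming the embeddings come from that jointly trained space; the function name is illustrative.

```python
import torch
import torch.nn.functional as F


def infer_entities(title_vec, entity_vecs, k=12):
    """Return indices of the k entities closest (by cosine similarity) to
    the embedded title, to be fed to the EntityWriter-style generator."""
    title = F.normalize(title_vec, dim=-1)      # (d,)
    ents = F.normalize(entity_vecs, dim=-1)     # (n_entities, d)
    return (ents @ title).topk(k).indices
```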
# 6 Conclusion We have studied the problem of generating multi- sentence text from the output of automatic infor- mation extraction systems, and have shown that incorporating knowledge as graphs improves per- formance. We introduced GraphWriter, featuring a new attention model for graph encoding, and demonstrated its utility through human and au- tomatic evaluation compared to strong baselines. Lastly, we provide a new resource for the genera- tion community, the AGENDA dataset of abstracts and knowledge. Future work could address the problem of repetition and entity coverage in the generated texts. # Acknowledgments This research was supported by the Office of Naval Research under the MURI grant N00014-18-1- 2670, NSF (IIS 1616112, III 1703166), Allen Dis- tinguished Investigator Award, Samsung GRO and gifts from Allen Institute for AI, Google, Amazon, and Bloomberg. We gratefully acknowledge the support of the European Research Council (Lap- ata; award number 681760). We also thank the anonymous reviewers and the UW-NLP group for their helpful comments. # References Waleed Ammar, Dirk Groeneveld, Chandra Bhagavat- ula, Iz Beltagy, Miles Crawford, Doug Downey, Ja- son Dunkelberger, Ahmed Elgohary, Sergey Feld- man, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Pe- ters, Joanna Power, Sam Skjonsberg, Lucy Lu Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the Litera- ture Graph in Semantic Scholar. In NAACL. Regina Barzilay and Mirella Lapata. 2005. Collective Content Selection for Concept-to-Text Generation. In EMNLP, pages 331–338. Association for Com- putational Linguistics. Daniel Edward Robert Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-Sequence Learning using Gated Graph Neural Networks. In ACL. Micha¨el Defferrard, Xavier Bresson, and Pierre Van- dergheynst. 2016. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In NeurIPS. Michael J. Denkowski and Alon Lavie. 2014. Meteor Universal: Language Specific Translation Evalua- tion for Any Target Language. In Workshop on Sta- tistical Machine Translation. Alex Graves. 2012. Sequence Transduction with arXiv preprint Recurrent Neural Networks. arXiv:1211.3711. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving Deep into Rectifiers: Surpass- ing Human-Level Performance on ImageNet Classi- fication. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long Short-Term Memory. Neural Comput., 9(8):1735– 1780. Joohyun Kim and Raymond J Mooney. 2010. Gen- erative Alignment and Semantic Parsing for Learn- In Proceedings ing from Ambiguous Supervision. of the 23rd International Conference on Computa- tional Linguistics: Posters, pages 543–551. Semi- Supervised Classification with Graph Convolutional Networks. In ICLR. Svetlana Kiritchenko and Saif Mohammad. 2016. Cap- turing Reliable Fine-Grained Sentiment Associa- tions by Crowdsourcing and Best-Worst Scaling. In NAACL-HLT. Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke S. Zettlemoyer. 2017. Neural AMR: Sequence-to-Sequence Models for Parsing and Gen- eration. In ACL. Inducing Document Plans for Concept-to-Text Generation. In EMNLP, pages 1503–1514. R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. In EMNLP. Percy Liang, Michael I. Jordan, and Dan Klein. 
2009. Learning Semantic Correspondences with Less Su- pervision. In ACL/AFNLP, pages 91–99. Jordan J Louviere, Terry N Flynn, and Anthony Al- fred John Marley. 2015. Best-Worst Scaling: The- ory, Methods and Applications. Cambridge Univer- sity Press. Jordan J Louviere and George G Woodworth. 1991. Best-Worst Scaling: A Model for the Largest Dif- ference Judgments. University of Alberta: Working Paper. Yi Luan, Luheng He, Mari Ostendorf, and Han- naneh Hajishirzi. 2018. Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction. In EMNLP, pages 3219–3232. Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective Approaches to Attention- based Neural Machine Translation. In EMNLP. Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep Graph Convolutional Encoders for Structured Data to Text Generation. INLG. Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. What to talk about and how? Selective Gener- ation using LSTMs with Coarse-to-Fine Alignment. In NAACL-HLT, pages 720–730. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In ACL. Lutz Prechelt. 1998. Early Stopping - but when? In Neural Networks: Tricks of the Trade, pages 55–69, London, UK, UK. Springer-Verlag. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-Text Generation with Content Selection and Planning. In AAAI. Ning Qian. 1999. On the momentum term in gradient descent learning algorithms. Neural networks : the official journal of the International Neural Network Society, 12 1:145–151. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Un- derstanding by Generative Pre-Training. Accessed https://s3-us-west-2.amazonaws. at com/openai-assets/research-covers/ language-unsupervised/language_ understanding_paper.pdf. Abigail See, Peter J Liu, and Christopher D Man- ning. 2017. Get to the Point: Summarization with Pointer-Generator Networks. arXiv preprint arXiv:1704.04368. Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A Graph-to-Sequence Model for AMR-to-Text Generation. In ACL. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Net- J. Mach. Learn. Res., works from Overfitting. 15(1):1929–1958. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Net- works. In NeurIPS, pages 3104–3112. Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling Coverage arXiv preprint for Neural Machine Translation. arXiv:1601.04811. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In NeurIPS, pages 5998–6008. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph Attention Networks. In ICLR. Qingyun Wang, Zhihao Zhou, Lifu Huang, Spencer Whitehead, Boliang Zhang, Heng Ji, and Kevin Knight. 2018. Paper Abstract Writing through Edit- In Proceedings of the 56th An- ing Mechanism. nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 260– 265, Melbourne, Australia. Association for Compu- tational Linguistics. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in Data-to-document Gen- eration. In EMNLP.
1904.02232
BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis
Question-answering plays an important role in e-commerce as it allows potential customers to actively seek crucial information about products or services to help their purchase decision making. Inspired by the recent success of machine reading comprehension (MRC) on formal documents, this paper explores the potential of turning customer reviews into a large source of knowledge that can be exploited to answer user questions.~We call this problem Review Reading Comprehension (RRC). To the best of our knowledge, no existing work has been done on RRC. In this work, we first build an RRC dataset called ReviewRC based on a popular benchmark for aspect-based sentiment analysis. Since ReviewRC has limited training examples for RRC (and also for aspect-based sentiment analysis), we then explore a novel post-training approach on the popular language model BERT to enhance the performance of fine-tuning of BERT for RRC. To show the generality of the approach, the proposed post-training is also applied to some other review-based tasks such as aspect extraction and aspect sentiment classification in aspect-based sentiment analysis. Experimental results demonstrate that the proposed post-training is highly effective. The datasets and code are available at https://www.cs.uic.edu/~hxu/.
http://arxiv.org/pdf/1904.02232
Hu Xu, Bing Liu, Lei Shu, Philip S. Yu
cs.CL
accepted by NAACL 2019
null
cs.CL
20190403
20190504
9 1 0 2 y a M 4 ] L C . s c [ 2 v 2 3 2 2 0 . 4 0 9 1 : v i X r a # BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis Hu Xu1, Bing Liu1, Lei Shu1 and Philip S. Yu1,2 1Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA 2Institute for Data Science, Tsinghua University, Beijing, China {hxu48, liub, lshu3, psyu}@uic.edu # Abstract Question-answering plays an important role in e-commerce as it allows potential customers to actively seek crucial information about prod- ucts or services to help their purchase decision making. Inspired by the recent success of ma- chine reading comprehension (MRC) on for- mal documents, this paper explores the poten- tial of turning customer reviews into a large source of knowledge that can be exploited to answer user questions. We call this problem Review Reading Comprehension (RRC). To the best of our knowledge, no existing work has been done on RRC. In this work, we first build an RRC dataset called ReviewRC based on a popular benchmark for aspect- based sentiment analysis. Since ReviewRC has limited training examples for RRC (and also for aspect-based sentiment analysis), we then explore a novel post-training approach on the popular language model BERT to enhance the performance of fine-tuning of BERT for RRC. To show the generality of the approach, the proposed post-training is also applied to some other review-based tasks such as aspect ex- traction and aspect sentiment classification in aspect-based sentiment analysis. Experimen- tal results demonstrate that the proposed post- training is highly effective1. services), which are vital for customer decision making. As such, an intelligent agent that can automatically answer customers’ questions is very important for the success of online businesses. Given the ever-changing environment of prod- ucts and services, it is very hard, if not impossible, to pre-compile an up-to-date and reliable knowl- edge base to cover a wide assortment of ques- tions that customers may ask, such as in factoid- based KB-QA (Xu et al., 2016; Fader et al., 2014; Kwok et al., 2001; Yin et al., 2015). As a compro- mise, many online businesses leverage community question-answering (CQA) (McAuley and Yang, 2016) to crowdsource answers from existing cus- tomers. However, the problem with this approach is that many questions are not answered, and if they are answered, the answers are delayed, which is not suitable for interactive QA. In this paper, we explore the potential of using product reviews as a large source of user experiences that can be ex- ploited to obtain answers to user questions. Al- though there are existing studies that have used in- formation retrieval (IR) techniques (McAuley and Yang, 2016; Yu and Lam, 2018) to find a whole review as the response to a user question, giving the whole review to the user is undesirable as it is quite time-consuming for the user to read it. # Introduction For online commerce, question-answering (QA) serves either as a standalone application of cus- tomer service or as a crucial component of a dia- logue system that answers user questions. Many intelligent personal assistants (such as Amazon Alexa and Google Assistant) support online shop- ping by allowing the user to speak directly to the assistants. 
One major hindrance for this mode of shopping is that such systems have limited capa- bility to answer user questions about products (or Inspired by the success of Machine Reading Comphrenesions (MRC) (Rajpurkar et al., 2016, 2018), we propose a novel task called Review Reading Comprehension (RRC) as following. Problem Definition: Given a question q = (q1, . . . , qm) from a customer (or user) about a product and a review d = (d1, . . . , dn) for that product containing the information to answer q, find a sequence of tokens (a text span) a = (ds, . . . , de) in d that answers q correctly, where 1 ≤ s ≤ n, 1 ≤ e ≤ n, and s ≤ e. 1The datasets and code are available at https://www. cs.uic.edu/˜hxu/. A sample laptop review is shown in Table 1. We can see that customers may not only ask factoid Questions QI: Does it have an internal hard drive ? Q2: How large is the internal hard drive ? Q3: is the capacity of the internal hard drive OK ? Questions Q1: Does it have an internal hard drive ? Q2: How large is the internal hard drive ? Q3: is the capacity of the internal hard drive OK ? Review Excellent value and a must buy for someone looking for a Macbook . You ca n’t get any better than this price and it come withA1 an internal disk drive . All the newer MacBooks do not . Plus you get 500GBA2 which is also a greatA3 feature . Also , the resale value on this will keep . I highly recommend you get one before they are gone . Table 1: An example of review reading comprehension: we show 3 questions and their corresponding answer spans from a review. questions such as the specs about some aspects of the laptop as in the first and second questions but also subjective or opinion questions about some aspects (capacity of the hard drive), as in the third question. RRC poses some domain challenges compared to the traditional MRC on Wikipedia, such as the need for rich product knowledge, in- formal text, and fine-grained opinions (there is al- most no subjective content in Wikipedia articles). Research also shows that yes/no questions are very frequent for products with complicated specifica- tions (McAuley and Yang, 2016; Xu et al., 2018b). To the best of our knowledge, no existing work has been done in RRC. This work first builds an RRC dataset called ReviewRC, using reviews from SemEval 2016 Task 52, which is a pop- ular dataset for aspect-based sentiment analysis (ABSA) (Hu and Liu, 2004) in the domains of lap- top and restaurant. We detail ReviewRC in Sec. 5. Given the wide spectrum of domains (types of products or services) in online businesses and the prohibitive cost of annotation, ReviewRC can only be considered to have a limited number of anno- tated examples for supervised training, which still leaves the domain challenges partially unresolved. This work adopts BERT (Devlin et al., 2018) as the base model as it achieves the state-of- the-art performance on MRC (Rajpurkar et al., 2016, 2018). Although BERT aims to learn con- textualized representations across a wide range of NLP tasks (to be task-agnostic), leveraging BERT alone still leaves the domain challenges un- # 2http://alt.qcri.org/semeval2016/ task5/. We choose these review datasets to align RRC with existing research on sentiment analysis. resolved (as BERT is trained on Wikipedia ar- ticles and has almost no understanding of opin- ion text), and it also introduces another challenge of task-awareness (the RRC task), called the task challenge. 
This challenge arises when the task- agnostic BERT meets the limited number of fine- tuning examples in ReviewRC (see Sec. 5) for RRC, which is insufficient to fine-tune BERT to ensure full task-awareness of the system3. To ad- dress all the above challenges, we propose a novel joint post-training technique that takes BERT’s pre-trained weights as the initialization4 for ba- sic language understanding and adapt BERT with both domain knowledge and task (MRC) knowl- edge before fine-tuning using the domain end task annotated data for the domain RRC. This tech- nique leverages knowledge from two sources: un- supervised domain reviews and supervised (yet out-of-domain) MRC data 5, where the former en- hances domain-awareness and the latter strength- ens MRC task-awareness. As a general-purpose approach, we show that the proposed method can also benefit ABSA tasks such as aspect extraction (AE) and aspect sentiment classification (ASC). The main contributions of this paper are as fol- lows. (1) It proposes the new problem of review reading comprehension (RRC). (2) To solve this new problem, an annotated dataset for RRC is created. (3) It proposes a general-purpose post- training approach to improve RRC, AE, and ASC. Experimental results demonstrate that the pro- posed approach is effective. # 2 Related Works Many datasets have been created for MRC from formally written and objective texts, e.g., Wikipedia (WikiReading (Hewlett et al., 2016), SQuAD (Rajpurkar et al., 2016, 2018), Wiki- Hop (Welbl et al., 2018), DRCD (Shao et al., 2018), QuAC (Choi et al., 2018), HotpotQA (Yang et al., 2018)) news and other articles (CNN/Daily Mail (Hermann et al., 2015), NewsQA (Trischler et al., 2016), RACE (Lai et al., 2017)), fic- tional stories (MCTest (Richardson et al., 2013), 3The end tasks from the original BERT paper typically use tens of thousands of examples to ensure that the system is task-aware. 4Due to limited computation resources, it is impractical for us to pre-train BERT directly on reviews from scratch (Devlin et al., 2018). 5To simplify the writing, we refer MRC as a general- purpose RC task on formal text (non-review) and RRC as an end-task specifically focused on reviews. CBT (Hill et al., 2015), NarrativeQA (Koˇcisk`y et al., 2018)), and general Web documents (MS MARCO (Nguyen et al., 2016), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017) ). Also, CoQA (Reddy et al., 2018) is built from mul- tiple sources, such as Wikipedia, Reddit, News, Mid/High School Exams, Literature, etc. To the best of our knowledge, MRC has not been used on reviews, which are primarily subjective. As such, we created a review-based MRC dataset called Re- viewRC. Answers from ReviewRC are extractive (similar to SQuAD (Rajpurkar et al., 2016, 2018)) rather than abstractive (or generative) (such as in MS MARCO (Nguyen et al., 2016) and CoQA (Reddy et al., 2018)). This is crucial because on- line businesses are typically cost-sensitive and ex- tractive answers written by humans can avoid gen- erating incorrect answers beyond the contents in reviews by an AI agent. Community QA (CQA) is widely adopted by online businesses (McAuley and Yang, 2016) to help users. However, since it solely relies on hu- mans to give answers, it often takes a long time to get a question answered or even not answered at all as we discussed in the introduction. 
Although there exist researches that align reviews to ques- tions as an information retrieval task (McAuley and Yang, 2016; Yu and Lam, 2018), giving a whole review to the user to read is time-consuming and not suitable for customer service settings that require interactive responses. Knowledge bases (KBs) (such as Freebase (Dong et al., 2015; Xu et al., 2016; Yao and Van Durme, 2014) or DBpedia (Lopez et al., 2010; Unger et al., 2012)) have been used for question answering (Yu and Lam, 2018). How- ever, the ever-changing nature of online busi- nesses, where new products and services appear constantly, makes it prohibitive to build a high- quality KB to cover all new products and services. Reviews also serve as a rich resource for sen- timent analysis (Pang et al., 2002; Hu and Liu, 2004; Liu, 2012, 2015). Although document- level (review) sentiment classification may be con- sidered as a solved problem (given ratings are largely available), aspect-based sentiment analysis (ABSA) is still an open challenge, where alleviat- ing the cost of human annotation is also a major issue. ABSA aims to turn unstructured reviews into structured fine-grained aspects (such as the “battery” of a laptop) and their associated opinions (e.g., “good battery” is positive about the aspect battery). Two important tasks in ABSA are aspect extraction (AE) and aspect sentiment classification (ASC) (Hu and Liu, 2004), where the former aims to extract aspects (e.g., “battery”) and the latter targets to identify the polarity for a given aspect (e.g., positive for battery). Recently, supervised deep learning models dominate both tasks (Wang et al., 2016, 2017; Xu et al., 2018a; Tang et al., 2016; He et al., 2018) and many of these mod- els use handcrafted features, lexicons, and compli- cated neural network architectures to remedy the insufficient training examples from both tasks. Al- though these approaches may achieve better per- formances by manually injecting human knowl- edge into the model, human baby-sat models may not be intelligent enough6 and automated repre- sentation learning from review corpora is always preferred (Xu et al., 2018a; He et al., 2018). We push forward this trend with the recent advance in pre-trained language models from deep learn- ing (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2018; Radford et al., 2018a,b). Al- though it is practical to train domain word embed- dings from scratch on large-scale review corpora (Xu et al., 2018a), it is impractical to train lan- guage models from scratch with limited computa- tional resources. As such, we show that it is practi- cal to adapt language models pre-trained from for- mal texts to domain reviews. # 3 BERT and Review-based Tasks In this section, we briefly review BERT and derive its fine-tuning formulation on three (3) review- based end tasks. # 3.1 BERT BERT is one of the key innovations in the recent progress of contextualized representation learning (Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018a; Devlin et al., 2018). The idea behind the progress is that even though the word embedding (Mikolov et al., 2013; Penning- ton et al., 2014) layer (in a typical neural network for NLP) is trained from large-scale corpora, train- ing a wide variety of neural architectures that en- code contextual representations only from the lim- ited supervised data on end tasks is insufficient. 
Unlike ELMo (Peters et al., 2018) and ULMFiT (Howard and Ruder, 2018) that are intended to provide additional features for a particular architecture that bears human's understanding of the end task, BERT adopts a fine-tuning approach that requires almost no specific architecture for each end task. This is desired as an intelligent agent should minimize the use of prior human knowledge in the model design. Instead, it should learn such knowledge from data. BERT has two parameter intensive settings:

BERTBASE: 12 layers, 768 hidden dimensions and 12 attention heads (in transformer) with the total number of parameters, 110M;

BERTLARGE: 24 layers, 1024 hidden dimensions and 16 attention heads (in transformer) with the total number of parameters, 340M.

Figure 1: Overview of BERT settings for review reading comprehension (RRC), aspect extraction (AE) and aspect sentiment classification (ASC).

6 http://www.incompleteideas.net/IncIdeas/BitterLesson.html

We only extend BERT with one extra task-specific layer and fine-tune BERT on each end task. We focus on three (3) review-based tasks: review reading comprehension (RRC), aspect extraction (AE) and aspect sentiment classification (ASC). The inputs/outputs settings are depicted in Figure 1 and detailed in the following subsections.

# 3.2 Review Reading Comprehension (RRC)

Following the success of SQuAD (Rajpurkar et al., 2016) and BERT's SQuAD implementation, we design review reading comprehension as follows. Given a question q = (q1, . . . , qm) asking for an answer from a review d = (d1, . . . , dn), we formulate the input as a sequence x = ([CLS], q1, . . . , qm, [SEP], d1, . . . , dn, [SEP]), where [CLS] is a dummy token not used for RRC and [SEP] is intended to separate q and d.

Let BERT(·) be the pre-trained (or post-trained as in the next section) BERT model. We first obtain the hidden representation as h = BERT(x) ∈ R^{r_h×|x|}, where |x| is the length of the input sequence and r_h is the size of the hidden dimension. Then the hidden representation is passed to two separate dense layers followed by softmax functions: l_1 = softmax(W_1 · h + b_1) and l_2 = softmax(W_2 · h + b_2), where W_1, W_2 ∈ R^{r_h} and b_1, b_2 ∈ R. The softmax is applied along the dimension of the sequence. The output is a span across the positions in d (after the [SEP] token of the input), indicated by two pointers (indexes) s and e computed from l_1 and l_2: s = arg max_{Idx_[SEP] < s < |x|}(l_1) and e = arg max_{s ≤ e < |x|}(l_2), where Idx_[SEP] is the position of token [SEP] (so the pointers will never point to tokens from the question). As such, the final answer will always be a valid text span from the review, a = (d_s, . . . , d_e).

Training the RRC model involves minimizing the loss that is designed as the averaged cross entropy on the two pointers:

L_RRC = −(Σ log l_1 · I(s) + Σ log l_2 · I(e)) / 2,

where I(s) and I(e) are one-hot vectors representing the ground truths of the pointers.

RRC may suffer from the prohibitive cost of annotating large-scale training data covering a wide range of domains. And BERT severely lacks two kinds of prior knowledge: (1) large-scale domain knowledge (e.g., about a specific product category), and (2) task-awareness knowledge (MRC/RRC in this case).
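As a minimal illustration of the RRC head just described, the sketch below applies the two pointer layers to pre-computed BERT hidden states and enforces the span constraints; the class name, tensor shapes, and loss helper are assumptions for the example, not the authors' released code.

```python
import torch
import torch.nn as nn


class RRCSpanHead(nn.Module):
    """Start/end pointer layers over BERT hidden states (Sec. 3.2)."""

    def __init__(self, r_h=768):
        super().__init__()
        self.w1 = nn.Linear(r_h, 1)   # start pointer scores
        self.w2 = nn.Linear(r_h, 1)   # end pointer scores

    def forward(self, h, sep_idx):
        # h: (|x|, r_h) hidden states of [CLS] q [SEP] d [SEP]
        # sep_idx: position of the first [SEP]; the span must come after it
        l1 = torch.log_softmax(self.w1(h).squeeze(-1), dim=-1)
        l2 = torch.log_softmax(self.w2(h).squeeze(-1), dim=-1)
        s = sep_idx + 1 + int(l1[sep_idx + 1:].argmax())   # s > Idx_[SEP]
        e = s + int(l2[s:].argmax())                       # e >= s
        return l1, l2, s, e


def rrc_loss(l1, l2, s_gold, e_gold):
    # averaged cross entropy on the two pointers (the L_RRC above), using
    # the log-probabilities at the gold start/end positions
    return -(l1[s_gold] + l2[e_gold]) / 2
```

The constrained arg max keeps s after the first [SEP] and e ≥ s, so the prediction is always a text span of the review.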
We detail the technique of jointly incorporating these two types of knowl- edge in Sec. 4. # 3.3 Aspect Extraction As a core task in ABSA, aspect extraction (AE) aims to find aspects that reviewers have expressed opinions on (Hu and Liu, 2004). In supervised set- tings, it is typically modeled as a sequence label- ing task, where each token from a sentence is la- beled as one of {Begin, Inside, Outside}. A con- tinuous chunk of tokens that are labeled as one B and followed by zero or more Is forms an as- pect. The input sentence with m words is con- structed as x = ([CLS], x1, . . . , xm, [SEP]). After h = BERT(x), we apply a dense layer and a softmax for each position of the sequence: l3 = softmax(W3 ·h+b3), where W3 ∈ R3∗rh and b3 ∈ R3 (3 is the total number of labels (BIO)). Softmax is applied along the dimension of labels for each position and l3 ∈ [0, 1]3∗|x|. The labels are predicted as taking argmax function at each position of l3 and the loss function is the averaged cross entropy across all positions of a sequence. AE is a task that requires intensive domain knowledge (e.g., knowing that “screen” is a part of a laptop). Previous study (Xu et al., 2018a) has shown that incorporating domain word embed- dings greatly improve the performance. Adapting BERT’s general language models to domain re- views is crucial for AE, as shown in Sec. 5. # 3.4 Aspect Sentiment Classification As a subsequent task of AE, aspect sentiment clas- sification (ASC) aims to classify the sentiment po- larity (positive, negative, or neutral) expressed on an aspect extracted from a review sentence. There are two inputs to ASC: an aspect and a review sen- tence mentioning that aspect. Consequently, ASC is close to RRC as the question is just about an as- pect and the review is just a review sentence but ASC only needs to output a class of polarity in- stead of a textual span. Let x = ([CLS], q1, . . . , qm, [SEP], d1, . . . , dn, [SEP]), where q1, . . . , qm now is an aspect (with m tokens) and d1, . . . , dn is a review sen- tence containing that aspect. After h = BERT(x), we leverage the representations of [CLS] h[CLS], which is the aspect-aware representation of the whole input. The distribution of polarity is pre- dicted as l4 = softmax(W4 · h[CLS] + b4), where W4 ∈ R3∗rh and b4 ∈ R3 (3 is the number of po- larities). Softmax is applied along the dimension of labels on [CLS]: l4 ∈ [0, 1]3. Training loss is the cross entropy on the polarities. As a summary of these tasks, insufficient super- vised training data significantly limits the perfor- mance gain across these 3 review-based tasks. Al- though BERT’s pre-trained weights strongly boost the performance of many other NLP tasks on for- mal texts, we observe in Sec. 5 that BERT’s weights only result in limited gain or worse per- formance compared with existing baselines. In the next section, we introduce the post-training step to boost the performance of all these 3 tasks. # 4 Post-training As discussed in the introduction, fine-tuning BERT directly on the end task that has limited tun- ing data faces both domain challenges and task- awareness challenge. To enhance the performance of RRC (and also AE and ASC), we may need to reduce the bias introduced by non-review knowl- edge (e.g., from Wikipedia corpora) and fuse do- main knowledge (DK) (from unsupervised domain data) and task knowledge (from supervised MRC task but out-of-domain data). 
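The AE and ASC heads described above are equally thin layers on top of BERT. A small sketch under the same assumptions (pre-computed hidden states, illustrative class names):

```python
import torch
import torch.nn as nn


class AETaggingHead(nn.Module):
    """Per-token {B, I, O} classifier over BERT hidden states (Sec. 3.3)."""

    def __init__(self, r_h=768, n_labels=3):
        super().__init__()
        self.w3 = nn.Linear(r_h, n_labels)

    def forward(self, h):                    # h: (|x|, r_h)
        return torch.log_softmax(self.w3(h), dim=-1)       # l3: (|x|, 3)


class ASCHead(nn.Module):
    """Polarity classifier on the aspect-aware [CLS] state (Sec. 3.4)."""

    def __init__(self, r_h=768, n_polarities=3):
        super().__init__()
        self.w4 = nn.Linear(r_h, n_polarities)

    def forward(self, h_cls):                # h_cls: (r_h,)
        return torch.log_softmax(self.w4(h_cls), dim=-1)   # l4: (3,)
```

Training minimizes cross entropy against the BIO tags (averaged over positions) and against the polarity label, respectively.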
Given MRC is a general task with answers of questions covering almost all document contents, a large-scale MRC supervised corpus may also benefit AE and ASC. Eventually, we aim to have a general-purpose post-training strategy that can exploit the above two kinds of knowledge for end tasks. To post-train on domain knowledge, we lever- age the two novel pre-training objectives from BERT: masked language model (MLM) and next sentence7 prediction (NSP). The former predicts randomly masked words and the latter detects whether two sides of the input are from the same document or not. A training example is formulated as ([CLS], x1:j, [SEP], xj+1:n, [SEP]), where x1:n is a document (with randomly masked words) split into two sides x1:j and xj+1:n and [SEP] separates those two. MLM is crucial for injecting review domain knowledge and for alleviating the bias of the knowledge from Wikipedia. For example, in the Wikipedia domain, BERT may learn to guess the [MASK] in “The [MASK] is bright” as “sun”. But in a laptop domain, it could be “screen”. Fur- ther, if the [MASK]ed word is an opinion word in “The touch screen is [MASK]”, this objective challenges BERT to learn the representations for fine-grained opinion words like “great” or “terri- ble” for [MASK]. The objective of NSP further encourages BERT to learn contextual representa- tion beyond word-level. In the context of reviews, 7The BERT paper refers a sentence as a piece of text with one to many natural language sentences. NSP formulates a task of “artificial review predic- tion”, where a negative example is an original re- view but a positive example is a synthesized fake review by combining two different reviews. This task exploits the rich relationships between two sides in the input, such as whether two sides of texts have the same rating or not (when two re- views with different ratings are combined as a pos- itive example), or whether two sides are targeting the same product or not (when two reviews from different products are merged as a positive exam- ple). In summary, these two objectives encourage to learn a myriad of fine-grained features for po- tential end tasks. We let the loss function of MLM be LMLM and the loss function of next text piece prediction be LNSP, the total loss of the domain knowledge post- training is LDK = LMLM + LNSP. To post-train BERT on task-aware knowledge, we use SQuAD (1.1), which is a popular large- scale MRC dataset. Although BERT gains great success on SQuAD, this success is based on the huge amount of training examples of SQuAD (100,000+). This amount is large enough to ame- liorate the flaws of BERT that has almost no ques- tions on the left side and no textual span predic- tions based on both the question and the document on the right side. However, a small amount of fine- tuning examples is not sufficient to turn BERT to be more task-aware, as shown in Sec. 5. We let the loss on SQuAD be LMRC, which is in a sim- ilar setting as the loss LRRC for RRC. As a re- sult, the joint loss of post-training is defined as L = LDK + LMRC. One major issue of post-training on such a loss is the prohibitive cost of GPU memory usage. In- stead of updating parameters over a batch, we di- vide a batch into multiple sub-batches and accu- mulate gradients on those sub-batches before pa- rameter updates. This allows for a smaller sub- batch to be consumed in each iteration. 
Algorithm 1 describes one training step and takes one batch of data on domain knowledge (DK) DDK and one batch of MRC training data DMRC to update the parameters Θ of BERT. In line 1, it first initializes the gradients ∇Θ of all param- eters as 0 to prepare gradient computation. Then in lines 2 and 3, each batch of training data is split into u sub-batches. Lines 4-7 spread the calcu- lation of gradients to u iterations, where the data from each iteration of sub-batches are supposed Algorithm 1: Post-training Algorithm Input: DDK: one batch of DK data; DMRC one batch of MRC data; u: number of sub-batches. 1 ∇ΘL ← 0 2 {DDK,1, . . . , DDK,u} ← Split(DDK, u) 3 {DMRC,1, . . . , DMRC,u} ← Split(DMRC, u) 4 for i ∈ {1, . . . , u} do Lpartial ← LDK(DDK,i)+LMRC(DMRC,i) ∇ΘL ← ∇ΘL + BackProp(Lpartial) 5 u 6 7 end 8 Θ ← ParameterUpdates(∇ΘL) to be able to fit into GPU memory. In line 5, it computes the partial joint loss Lpartial of two sub- batches DDK,i and DMRC,i from the i-th iteration through forward pass. Note that the summation of two sub-batches’ losses is divided by u, which compensate the scale change introduced by gradi- ent accumulation in line 6. Line 6 accumulates the gradients produced by backpropagation from the partial joint loss. To this end, accumulating the gradients u times is equivalent to computing the gradients on the whole batch once. But the sub- batches and their intermediate hidden representa- tions during the i-th forward pass can be discarded to save memory space. Only the gradients ∇Θ are kept throughout all iterations and used to update parameters (based on the chosen optimizer) in line 8. We detail the hyper-parameter settings of this algorithm in Sec. 5.3. # 5 Experiments We aim to answer the following research questions (RQs) in the experiment: RQ1: what is the performance gain of post- training for each review-based task, with respect to the state-of-the-art performance? RQ2: what is the performance of BERT’s pre- trained weights on three review-based tasks with- out any domain and task adaptation? RQ3: upon ablation studies of separate domain knowledge post-training and task-awareness post- training, what is their respective contribution to the whole post-training performance gain? # 5.1 End Task Datasets As there are no existing datasets for RRC and to be consistent with existing research on sentiment analysis, we adopt the laptop and restaurant re- views of SemEval 2016 Task 5 as the source to cre- ate datasets for RRC. We do not use SemEval 2014 Task 4 or SemEval 2015 Task 12 because these datasets do not come with the review(document)- level XML tags to recover whole reviews from re- view sentences. We keep the split of training and testing of the SemEval 2016 Task 5 datasets and annotate multiple QAs for each review following the way of constructing QAs for the SQuAD 1.1 datasets (Rajpurkar et al., 2016). To make sure our questions are close to real- world questions, 2 annotators are first exposed to 400 QAs from CQA (under the laptop category in Amazon.com or popular restaurants in Yelp.com) to get familiar with real questions. Then they are asked to read reviews and independently label tex- tual spans and ask corresponding questions when they feel the textual spans contain valuable infor- mation that customers may care about. The tex- tual spans are labeled to be as concise as possi- ble but still human-readable. Note that the annota- tions for sentiment analysis tasks are not exposed to annotators to avoid biased annotation on RRC. 
Since it is unlikely that the two annotators can la- bel the same QAs (the same questions with the same answer spans), they further mutually check each other’s annotations and disagreements are discussed until agreements are reached. Annota- tors are encouraged to label as many questions as possible from testing reviews to get more test ex- amples. A training review is encouraged to have 2 questions (training examples) on average to have good coverage of reviews. The annotated data is in the format of SQuAD 1.1 (Rajpurkar et al., 2016) to ensure compatibil- ity with existing implementations of MRC models. The statistics of the RRC dataset (ReviewRC) are shown in Table 2. Since SemEval datasets do not come with a validation set, we further split 20% of reviews from the training set for validation. Statistics of datasets for AE and ASC are given in Table 3. For AE, we choose SemEval 2014 Task 4 for laptop and SemEval-2016 Task 5 for restau- rant to be consistent with (Xu et al., 2018a) and other previous works. For ASC, we use SemEval 2014 Task 4 for both laptop and restaurant as ex- isting research frequently uses this version. We use 150 examples from the training set of all these datasets for validation. # 5.2 Post-training datasets For domain knowledge post-training, we use Amazon laptop reviews (He and McAuley, 2016) and Yelp Dataset Challenge reviews8. For laptop, we filtered out reviewed products that have ap- peared in the validation/test reviews to avoid train- ing bias for test data (Yelp reviews do not have this issue as the source reviews of SemEval are not from Yelp). Since the number of reviews is small, we choose a duplicate factor of 5 (each review generates about 5 training examples) during BERT data pre-processing. This gives us 1,151,863 post- training examples for laptop domain knowledge. For the restaurant domain, we use Yelp reviews from restaurant categories that the SemEval re- views also belong to (Xu et al., 2018a). We choose 700K reviews to ensure it is large enough to gen- erate training examples (with a duplicate factor of 1) to cover all post-training steps that we can afford (discussed in Section 5.3)9. This gives us 2,677,025 post-training examples for restaurant domain knowledge learning. For MRC task-awareness post-training, we leverage SQuAD 1.1 (Rajpurkar et al., 2016) that come with 87,599 training examples from 442 Wikipedia articles. # 5.3 Hyper-parameters We adopt BERTBASE (uncased) as the basis for all experiments10. Since post-training may take a large footprint on GPU memory (as BERT pre- training), we leverage FP16 computation11 to re- duce the size of both the model and hidden repre- sentations of data. We set a static loss scale of 2 in FP16, which can avoid any over/under-flow of floating point computation. The maximum length of post-training is set to 320 with a batch size of 16 for each type of knowledge. The number of sub- batch u is set to 2, which is good enough to store each sub-batch iteration into a GPU memory of 11G. We use Adam optimizer and set the learn- ing rate to be 3e-5. We train 70,000 steps for the laptop domain and 140,000 steps for the restaurant 8https://www.yelp.com/dataset/ challenge 9We expect that using more reviews can have even bet- ter results but we limit the amount of reviews based on our computational power. 10We expect BERTLARGE to have better performance but leave that to future work due to limited computational power. 
domain, which roughly corresponds to one pass over the pre-processed data in the respective domain.

11https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html

Table 2: Statistics of the ReviewRC dataset. Reviews with no questions are ignored.

Dataset              Num. of Questions   Num. of Reviews
Laptop Training      1015                443
Laptop Testing       351                 79
Restaurant Training  799                 347
Restaurant Testing   431                 90

Table 3: Summary of datasets on aspect extraction and aspect sentiment classification. S: number of sentences; A: number of aspects; P., N., and Ne.: number of positive, negative and neutral polarities.

                       AE                           ASC
Laptop                 SemEval14 Task4              SemEval14 Task4
  Training             3045 S. / 2358 A.            987 P. / 866 N. / 460 Ne.
  Testing              800 S. / 654 A.              341 P. / 128 N. / 169 Ne.
Restaurant             SemEval16 Task5              SemEval14 Task4
  Training             2000 S. / 1743 A.            2164 P. / 805 N. / 633 Ne.
  Testing              676 S. / 622 A.              728 P. / 196 N. / 196 Ne.

# 5.4 Compared Methods

As BERT outperforms existing open-source MRC baselines by a large margin, we do not intend to exhaust existing implementations but focus on the variants of BERT introduced in this paper.

DrQA is a baseline from the document reader12 of DrQA (Chen et al., 2017). We adopt this baseline because of its simple implementation, for reproducibility. We run the document reader with random initialization and train it directly on ReviewRC. We use all default hyper-parameter settings for this baseline except the number of epochs, which is set to 60 for better convergence.

DrQA+MRC is derived from the above baseline with official pre-trained weights on SQuAD. We fine-tune the document reader on ReviewRC. We expand the vocabulary of the embedding layer of the pre-trained model with ReviewRC, since reviews may have words that are rare in Wikipedia, and keep the other hyper-parameters at their defaults.

12https://github.com/facebookresearch/DrQA

For AE and ASC, we summarize the scores of the state-of-the-art systems on SemEval (to the best of our knowledge) for brevity. DE-CNN (Xu et al., 2018a) reaches the state of the art for AE by leveraging domain embeddings. MGAN (Li et al., 2018) reaches the state of the art for ASC on SemEval 2014 Task 4.

Lastly, to answer RQ1, RQ2, and RQ3, we have the following BERT variants.

BERT leverages the vanilla BERT pre-trained weights and fine-tunes on all 3 end tasks. We use this baseline to answer RQ2 and show that BERT's pre-trained weights alone yield limited performance gains on review-based tasks.

BERT-DK post-trains BERT's weights only on domain knowledge (reviews) and fine-tunes on the 3 end tasks. We use BERT-DK and the following BERT-MRC to answer RQ3.

BERT-MRC post-trains BERT's weights on SQuAD 1.1 and then fine-tunes on the 3 end tasks.

BERT-PT (the proposed method) post-trains BERT's weights using the joint post-training algorithm in Section 4 and then fine-tunes on the 3 end tasks.

# 5.5 Evaluation Metrics and Model Selection

To be consistent with existing research on MRC, we use the same evaluation script as SQuAD 1.1 (Rajpurkar et al., 2016) for RRC, which reports Exact Match (EM) and F1 scores. EM requires the answers to have an exact string match with the human-annotated answer spans. The F1 score is the average of the F1 scores of individual answers, which is typically higher than EM and is the major metric. Each individual F1 score is the harmonic mean of precision and recall, computed from the number of overlapping words between the predicted answer and the human-annotated answers. For AE, we use the standard evaluation scripts that come with the SemEval datasets and report the F1 score.
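For reference, the span-level scoring used for RRC can be sketched as follows. This is a simplified illustration: the official SQuAD 1.1 script additionally normalizes answers (lowercasing, stripping punctuation and articles) and takes the maximum score over multiple annotated answers.

```python
from collections import Counter

def exact_match(prediction, gold):
    # EM: the predicted span must match an annotated span exactly.
    return float(prediction.strip() == gold.strip())

def span_f1(prediction, gold):
    # F1: harmonic mean of precision and recall over overlapping words.
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```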
For ASC, we compute both accuracy and Macro-F1 over 3 classes of polarities, where Macro-F1 is the major metric as the imbalanced classes introduce biases on accuracy. To be con- sistent with existing research (Tang et al., 2016), examples belonging to the conflict polarity are dropped due to a very small number of examples. We set the maximum number of epochs to 4 for BERT variants, though most runs converge just within 2 epochs. Results are reported as averages of 9 runs (9 different random seeds for random batch generation).13 # 5.6 Result Analysis The results of RRC, AE and ASC are shown in Tables 4, 5 and 6, respectively. To answer RQ1, we observed that the proposed joint post-training (BERT-PT) has the best performance over all tasks in all domains, which show the benefits of having two types of knowledge. 13We notice that adopting 5 runs used by existing re- searches still has a high variance for a fair comparison. Domain Methods DrQA(Chen et al., 2017) DrQA+MRC(Chen et al., 2017) BERT BERT-DK BERT-MRC BERT-PT Laptop EM 38.26 40.43 39.54 42.67 47.01 48.05 F1 50.99 58.16 54.72 57.56 63.87 64.51 Rest. EM 49.52 52.39 44.39 48.93 54.78 59.22 F1 63.73 67.77 58.76 62.81 68.84 73.08 Table 4: RRC in EM (Exact Match) and F1. Domain Methods DE-CNN(Xu et al., 2018a) BERT BERT-DK BERT-MRC BERT-PT Laptop Rest. F1 81.59 79.28 83.55 81.06 84.26 F1 74.37 74.1 77.02 74.21 77.97 Table 5: AE in F1. to our surprise we found that the vanilla pre-trained weights of BERT do not work well for review-based tasks, although it achieves state-of-the-art results on many other NLP tasks (Devlin et al., 2018). This justifies the need to adapt BERT to review-based tasks. To answer RQ3, we noticed that the roles of do- main knowledge and task knowledge vary for dif- ferent tasks and domains. For RRC, we found that the performance gain of BERT-PT mostly comes from task-awareness (MRC) post-training (as in- dicated by BERT-MRC). The domain knowledge helps more for restaurant than for laptop. We suspect the reason is that certain types of knowl- edge (such as specifications) of laptop are already present in Wikipedia, whereas Wikipedia has lit- tle knowledge about restaurant. We further in- vestigated the examples improved by BERT-MRC and found that the boundaries of spans (especially short spans) were greatly improved. For AE, we found that great performance boost comes mostly from domain knowledge post- training, which indicates that contextualized rep- resentations of domain knowledge are very impor- tant for AE. BERT-MRC has almost no improve- ment on restaurant, which indicates Wikipedia may have no knowledge about aspects of restau- rant. We suspect that the improvements on lap- top come from the fact that many answer spans in SQuAD are noun terms, which bear a closer rela- tionship with laptop aspects. For ASC, we observed that large-scale anno- tated MRC data is very useful. We suspect the reason is that ASC can be interpreted as a special MRC problem, where all questions are about the Domain Methods MGAN (Li et al., 2018) BERT BERT-DK BERT-MRC BERT-PT Laptop Rest. Acc. MF1 Acc. MF1 71.48 76.21 71.94 75.29 75.45 77.01 74.97 77.19 76.96 78.07 71.42 71.91 73.72 74.1 75.08 81.49 81.54 83.96 83.17 84.95 Table 6: ASC in Accuracy and Macro-F1(MF1). polarity of a given aspect. MRC training data may help BERT to understand the input format of ASC given their closer input formulation. Again, do- main knowledge post-training also helps ASC. We further investigated the errors from BERT- PT over the 3 tasks. 
The errors on RRC mainly come from boundaries of spans that are not con- cise enough and incorrect location of spans that may have certain nearby words related to the ques- tion. We believe precisely understanding user’s experience is challenging from only domain post- training given limited help from the RRC data and no help from the Wikipedia data. For AE, errors mostly come from annotation inconsistency and boundaries of aspects (e.g., apple OS is predicted as OS). Restaurant suffers from rare aspects like the names of dishes. ASC tends to have more er- rors as the decision boundary between the negative and neutral examples is unclear (e.g., even annota- tors may not sure whether the reviewer shows no opinion or slight negative opinion when mention- ing an aspect). Also, BERT-PT has the problem of dealing with one sentence with two opposite opin- ions (“The screen is good but not for windows.”). We believe that such training examples are rare. # 6 Conclusions We proposed a new task called review reading comprehension (RRC) and investigated the possi- bility of turning reviews as a valuable resource for answering user questions. We adopted BERT as our base model and proposed a joint post-training approach to enhancing both the domain and task knowledge. We further explored the use of this ap- proach in two other review-based tasks: aspect ex- traction and aspect sentiment classification. Ex- perimental results show that the post-training ap- proach before fine-tuning is effective. # Acknowledgments Bing Liu’s work was partially supported by the National Science Foundation (NSF IIS 1838770) and by a research gift from Huawei. # References Danqi Chen, Adam Fisch, Jason Weston, and An- Reading wikipedia to an- arXiv preprint toine Bordes. 2017. swer open-domain questions. arXiv:1704.00051. Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over freebase with multi- column convolutional neural networks. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), volume 1, pages 260–269. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and In Proceedings of the extracted knowledge bases. 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1156– 1165. ACM. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Exploiting document knowl- edge for aspect-level sentiment classification. arXiv preprint arXiv:1806.04346. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In World Wide Web. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. 
In Advances in Neu- ral Information Processing Systems, pages 1693– 1701. Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. arXiv preprint arXiv:1608.03542. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representa- tions. arXiv preprint arXiv:1511.02301. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 328–339. Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168–177. ACM. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. arXiv preprint arXiv:1705.03551. Tom´aˇs Koˇcisk`y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´aabor Melis, and Edward Grefenstette. 2018. The narrativeqa Transactions reading comprehension challenge. of the Association of Computational Linguistics, 6:317–328. Cody Kwok, Oren Etzioni, and Daniel S Weld. 2001. Scaling question answering to the web. ACM Trans- actions on Information Systems (TOIS), 19(3):242– 262. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. Zheng Li, Ying Wei, Yu Zhang, Xiang Zhang, Xin Li, and Qiang Yang. 2018. Exploiting coarse-to- fine task transfer for aspect-level sentiment classi- fication. arXiv preprint arXiv:1811.10999. Bing Liu. 2012. Sentiment analysis and opinion min- ing. Synthesis lectures on human language tech- nologies, 5(1):1–167. Bing Liu. 2015. Sentiment analysis: Mining opinions, sentiments, and emotions. Cambridge University Press. Vanessa Lopez, Andriy Nikolov, Marta Sabou, Victo- ria Uren, Enrico Motta, and Mathieu dAquin. 2010. Scaling up question-answering to linked data. In International Conference on Knowledge Engineer- ing and Knowledge Management, pages 193–210. Springer. Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with In Proceedings of the 25th In- customer reviews. ternational Conference on World Wide Web, pages 625–635. International World Wide Web Confer- ences Steering Committee. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- In Advances in neural information processing ity. systems, pages 3111–3119. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine arXiv preprint reading comprehension dataset. arXiv:1611.09268. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 79–86. As- sociation for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. 
In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018a. Improving language under- standing by generative pre-training. URL https://s3- us-west-2.amazonaws.com/openai-assets/research- under- covers/languageunsupervised/language standing paper.pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018b. Lan- guage models are unsupervised multitask learners. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable ques- tions for squad. arXiv preprint arXiv:1806.03822. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Siva Reddy, Danqi Chen, and Christopher D Manning. 2018. Coqa: A conversational question answering challenge. arXiv preprint arXiv:1808.07042. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 193–203. Chih Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai. 2018. Drcd: a chinese machine arXiv preprint reading comprehension dataset. arXiv:1806.00920. Duyu Tang, Bing Qin, and Ting Liu. 2016. Aspect level sentiment classification with deep memory net- work. arXiv preprint arXiv:1605.08900. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2016. Newsqa: A machine compre- hension dataset. arXiv preprint arXiv:1611.09830. Christina Unger, Lorenz B¨uhmann, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, Daniel Gerber, and Philipp Cimiano. 2012. Template-based question answering over rdf data. In Proceedings of the 21st international conference on World Wide Web, pages 639–648. ACM. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. arXiv preprint arXiv:1603.06679. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions In for co-extraction of aspect and opinion terms. Thirty-First AAAI Conference on Artificial Intelli- gence. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transac- tions of the Association of Computational Linguis- tics, 6:287–302. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018a. Double embeddings and cnn-based sequence label- In Proceedings of the ing for aspect extraction. 56th Annual Meeting of the Association for Compu- tational Linguistics. Association for Computational Linguistics. Hu Xu, Sihong Xie, Lei Shu, and Philip S. Yu. 2018b. Dual attention network for product compatibility In Proceed- and function satisfiability analysis. ings of AAAI Conference on Artificial Intelligence (AAAI). Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, and Dongyan Zhao. 2016. Question answering on freebase via relation extraction and textual evidence. arXiv preprint arXiv:1603.00957. 
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.

Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with Freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 956–966.

Jun Yin, Xin Jiang, Zhengdong Lu, Lifeng Shang, Hang Li, and Xiaoming Li. 2015. Neural generative question answering. arXiv preprint arXiv:1512.01337.

Qian Yu and Wai Lam. 2018. Aware answer prediction for product-related questions incorporating aspects. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 691–699. ACM.
{ "id": "1511.02301" }
1904.02225
Revisiting Visual Grounding
We revisit a particular visual grounding method: the "Image Retrieval Using Scene Graphs" (IRSG) system of Johnson et al. (2015). Our experiments indicate that the system does not effectively use its learned object-relationship models. We also look closely at the IRSG dataset, as well as the widely used Visual Relationship Dataset (VRD) that is adapted from it. We find that these datasets exhibit biases that allow methods that ignore relationships to perform relatively well. We also describe several other problems with the IRSG dataset, and report on experiments using a subset of the dataset in which the biases and other problems are removed. Our studies contribute to a more general effort: that of better understanding what machine learning methods that combine language and vision actually learn and what popular datasets actually test.
http://arxiv.org/pdf/1904.02225
Erik Conser, Kennedy Hahn, Chandler M. Watson, Melanie Mitchell
cs.CV
To appear in Proceedings of the Workshop on Shortcomings in Vision and Language, NAACL-2019, ACL
null
cs.CV
20190403
20190403
9 1 0 2 r p A 3 ] V C . s c [ 1 v 5 2 2 2 0 . 4 0 9 1 : v i X r a # Revisiting Visual Grounding Erik Conser Computer Science Department Portland State University [email protected] # Kennedy Hahn Computer Science Department Portland State University [email protected] # Chandler M. Watson Computer Science Department Stanford University [email protected] Melanie Mitchell Computer Science Department Portland State University and Santa Fe Institute [email protected] # Abstract We revisit a particular visual grounding the “Image Retrieval Using Scene method: Graphs” (IRSG) system of Johnson et al. (2015). Our experiments indicate that the sys- tem does not effectively use its learned object- relationship models. We also look closely at the IRSG dataset, as well as the widely used Visual Relationship Dataset (VRD) that is adapted from it. We find that these datasets exhibit biases that allow methods that ignore relationships to perform relatively well. We also describe several other problems with the IRSG dataset, and report on experiments us- ing a subset of the dataset in which the biases and other problems are removed. Our stud- ies contribute to a more general effort: that of better understanding what machine learning methods that combine language and vision ac- tually learn and what popular datasets actually test. # Introduction and an input image, the grounding task is to cre- ate bounding boxes corresponding to the speci- fied objects, such that the located objects have the specified attributes and relationships (left). A final energy score reflects the quality of the match between the scene graph and the located boxes (lower is better), and can be used to rank images in a retrieval task. A second example of visual grounding, is the “Referring Relationships” (RR) task of Kr- ishna et al. (2018). Here, a sentence (e.g., “A horse following a person”) is represented as a subject-predicate-object triple (“horse”, “follow- ing”, “person”). Given a triple and an input im- age, the task is to create bounding boxes corre- sponding to the named subject and object, such that the located boxes fit the specified predicate. Visual grounding tasks—at the intersection of vi- sion and language—have become a popular area of research in machine learning, with the poten- tial of improving automated image editing, cap- tioning, retrieval, and question-answering, among other tasks. Visual grounding is the general task of locating the components of a structured description in an the image. structured description is often a natural-language phrase that has been parsed as a scene graph or as a subject-predicate-object triple. As one exam- ple of a visual-grounding challenge, Figure 1 illus- trates the “Image Retrieval using Scene Graphs” (IRSG) task (Johnson et al., 2015). Here the sentence “A standing woman wearing dark sun- glasses” is converted to a scene-graph representa- tion (right) with nodes corresponding to objects, attributes, and relationships. Given a scene graph While deep neural networks have produced impressive progress in object detection, visual- grounding tasks remain highly challenging. On the language side, accurately transforming a natu- ral language phrase to a structured description can be difficult. 
On the vision side, the challenge is to learn—in a way that can be generalized—visual features of objects and attributes as well as flexi- ble models of spatial and other relationships, and then to apply these models to figure out which of a given object class (e.g., woman) is the one referred to, sometimes locating small objects and recog- Scene Graph Grounding Energy Score: 0.05 ‘Scene Graph em objects mam attributes om relationships Figure 1: An example of the scene-graph-grounding task of Johnson et al. (2015). Right: A phrase represented as a scene graph. Left: A candidate grounding of the scene graph in a test image, here yielding a low energy score (lower is better). Referring Relationship Grounding oz yw ubject: horse redicate: follow Figure 2: An example of the referring-relationship-grounding task of Krishna et al. (2018). Right: A phrase broken into subject, predicate, and object categories. Left: a candidate grounding of subject and object in a test image. nizing hard-to-see attributes (e.g., dark vs. clear glasses). To date, the performance of machine learning systems on visual-grounding tasks with real-world datasets has been relatively low com- pared to human performance. In addition, some in the machine-vision com- munity have questioned the effectiveness of pop- ular datasets that have been developed to evaluate the performance of systems on visual grounding tasks like the ones illustrated in Figures 1 and 2. Recently Cirik et al. (2018b) showed that for the widely used dataset Google-Ref (Mao et al., 2016), the task of grounding referring expressions has exploitable biases: for example, a system that predicts only object categories—ignoring relation- ships and attributes—still performs well on this task. Jabri et al. (2016) report related biases in visual question-answering datasets. In this paper we re-examine the visual ground- ing approach of Johnson et al. (2015) to deter- mine how well this system is actually performing scene-graph grounding. In particular, we compare this system with a simple baseline method to test if the original system is using information from object relationships, as claimed by Johnson et al. (2015). In addition, we investigate possible biases and other problems with the dataset used by John- son et al. (2015), a version of which has also been used in many later studies. We briefly survey re- lated work in visual grounding, and discuss possi- ble future studies in this area. # Image Retrieval Using Scene Graphs # 2.1 Methods The “Image Retrieval Using Scene Graphs” (IRSG) method (Johnson et al., 2015) performs the task illustrated in Figure 1: given an input image and a scene graph, output a grounding of the scene graph in the image and an accompanying energy score. The grounding consists of a set of bounding boxes, each one corresponding to an object named in the scene graph, with the goal that the ground- ing gives the the best possible fit to the objects, attributes, and relationships specified in the scene graph. Note that the system described in (Johnson et al., 2015) does not perform any linguistic analy- sis; it assumes that a natural-language description has already been transformed into a scene graph. The IRSG system is trained on a set of human- annotated images in which bounding boxes are labeled with object categories and attributes, and pairs of bounding boxes are labeled with relation- ships. 
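For concreteness, the annotations and queries can be thought of as small graph structures; a toy sketch is shown below. The field names are ours and are not the data format used by Johnson et al. (2015).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SceneGraphQuery:
    """A toy representation of a (partial) scene-graph query."""
    objects: List[str]                                          # e.g. ["woman", "sunglasses"]
    attributes: List[Tuple[str, str]] = field(default_factory=list)
    # (object, attribute) pairs, e.g. ("woman", "standing"), ("sunglasses", "dark")
    relationships: List[Tuple[str, str, str]] = field(default_factory=list)
    # (subject, predicate, object) triples, e.g. ("woman", "wearing", "sunglasses")

query = SceneGraphQuery(
    objects=["woman", "sunglasses"],
    attributes=[("woman", "standing"), ("sunglasses", "dark")],
    relationships=[("woman", "wearing", "sunglasses")],
)
```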
The system learns appearance models for all object and attribute categories in the training set, and relationship models for all training-set re- lationships. The appearance model for object cate- gories is learned as a convolutional neural network (CNN), which inputs an bounding box from an im- age and outputs a probability distribution over all object categories. The appearance model for ob- ject attributes is also learned as a CNN; it inputs an image bounding box and outputs a probabil- ity distribution over all attribute categories. The pairwise spatial relationship models are learned as Gaussian mixture models (GMMs); each GMM inputs a pair of bounding boxes from an image and outputs a probability density reflecting how well the GMM judges the input boxes to fit the model’s corresponding spatial relationship (e.g., “woman wearing sunglasses”). Details of the training pro- cedures are given in (Johnson et al., 2015). After training is completed, the IRSG system can be run on test images. Given a test image and a scene graph, IRSG attempts to ground the scene graph in the image as follows. First the system cre- ates a set of candidate bounding boxes using the Geodesic Object Proposal method (Kr¨ahenb¨uhl and Koltun, 2014). The object and attribute CNNs are then used to assign probability distributions over all object and attribute categories to each can- didate bounding box. Next, for each relationship in the scene graph, the GMM corresponding to that relationship assigns a probability density to each pair of candidate bounding boxes. The probability density is calibrated by Platt scaling (Platt, 2000) to provide a value representing the probability that the given pair of boxes is in the specified relation- # ship. Finally, these object and relationship probabil- ities are used to configure a conditional random field, implemented as factor graph. The objects and attributes are unary factors in the factor graph, each with one value for each image bounding box. The relationships are binary factors, with one value for each pair of bounding boxes. This factor graph represents the probability distribution of groundings conditioned on the scene graph and bounding boxes. Belief propagation (Andres et al., 2012) is then run on the factor graph to deter- mine which candidate bounding boxes produce the lowest-energy grounding of the given scene graph. The output of the system is this grounding, along with its energy. The lower the energy, the better the predicted fit between the image and the scene graph. To use the IRSG system in image retrieval, with a query represented as a scene graph, the IRSG system applies the grounding procedure for the given scene graph to every image in the test set, and ranks the resulting images in order of increas- ing energy. The highest ranking (lowest energy) images can be returned as the results of the query. Johnson et al. (2015) trained and tested the IRSG method on an image dataset consisting of 5,000 images, split into 4,000 training images and 1,000 testing images. The objects, attributes, and relationships in each image were annotated the au- by Amazon Mechanical Turk workers; thors created scene graphs that captured the anno- tations. IRSG was tested on two types of scene- graph queries: full and partial. Each full scene- graph query was a highly detailed description of a single image in the test set—the average full scene graph consisted of 14 objects, 19 attributes, and 22 relationships. 
The partial scene graphs were generated by examination of subgraphs of the full scene graphs. Each combination of two objects, one relation, and one or two attributes was drawn from each full scene graph, and any partial scene graph that was found at least five times was added to the collection of partial queries. Johnson et al. randomly selected 119 partial queries to constitute the test set for partial queries. # 2.2 Original Results Johnson et al. (2015) used a “recall at k” metric to measure their their system’s image retrieval per- formance. In experiments on both full and partial scene-graph queries, the authors found that their In par- method outperformed several baselines. ticular, it outperformed—by a small degree—two “ablated” forms of their method: the first in which only object probabilities were used (attribute and relationship probabilities were ignored), and the second in which both object and attribute probabil- ities were used but relationship probabilities were ignored. # 3 Revisiting IRSG We obtained the IRSG code from the authors (Johnson et al., 2015), and attempted to replicate their reported results on the partial scene graphs. (Our study included only the partial scene graphs, which seemed to us to be a more realistic use case for image retrieval than the complex full graphs, each of which described only one image in the set.) We performed additional experiments in or- der to answer the following questions: (1) Does using relationship information in addition to ob- ject information actually help the system’s perfor- mance? (2) Does the dataset used in this study have exploitable biases, similar to the findings of Cirik et al. (2018b) on the Google-Ref dataset? Note that here we use the term “bias” to mean any aspect of the dataset that allows a learning algorithm to rely on shallow correlations, rather than actually solving the intended task. (3) If the dataset does contain biases, how would IRSG per- form on a dataset that did not contain such biases? # 3.1 Comparing IRSG with an Object-Only Baseline To investigate the first two questions, we created a baseline image-retrieval method that uses infor- mation only from object probabilities. Given a test image and a scene-graph query, we ran IRSG’s Geodesic Object Proposal method on the test im- age to obtain bounding boxes, and we ran IRSG’s trained CNN on each bounding box to obtain a probability for each object category. For each object category named in the query, our baseline method simply selects the bounding box with the highest probability for that query. No attribute or relationship information is used. We then use a recall at k (R@k) metric to compare the perfor- mance of our baseline method to that of the IRSG method. Our R@k metric was calculated as follows. For a given scene-graph query, let Sp be the set of pos- itive images in the test set, where a positive image is one whose ground-truth object, attribute, and re- lationship labels match the query. Let Sn be the set of negative images in the test set. For each scene-graph query, IRSG was run on both Sp and Sn, returning an energy score for each image with respect to the scene graph. For each image we also computed a second score: the geometric mean of the highest object-category probabilities, as de- scribed above. The latter score ignored attribute and relationship information. 
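A sketch of how the object-only baseline score for a single image can be computed is shown below. The per-box probability table is assumed to come from the trained object CNN; the names are illustrative and not taken from the IRSG code.

```python
import numpy as np

def geometric_mean_score(box_probs, query_objects):
    """Object-only baseline score for one image (higher is better).

    box_probs: {box_id: {object_category: probability}} from the object CNN.
    query_objects: the object categories named in the scene-graph query.
    """
    best = [max(p[cat] for p in box_probs.values()) for cat in query_objects]
    return float(np.exp(np.mean(np.log(best))))
```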
We then rank-order each image in the test set by its score: for the IRSG method, scores (energy values—lower is better) are ranked in ascending order; for the baseline method, scores (geometric mean values—higher is better) are ranked in descending order. Because the size of Sp is different for different queries, we consider each positive image Ip ∈ Sp separately. We put Ip alone in a pool with all the negative images, and ask if Ip is ranked in the top k. We define R@k as the fraction of images in Sp that are top-k in this sense. For example, R@1 = .2 would mean that 20% of the positive images are ranked above all of the negative images for this query; R@2 = .3 would mean that 30% of the positive images are ranked above all but at most one of the negative images, and so on. This metric is slightly different from—and, we believe, provides a more useful evaluation than—the recall at k metric used in (Johnson et al., 2015), which only counted the position of the top-ranked positive image for each query in calculating R@k.

We computed R@k in this way for each of the 150 partial scene graphs that were available in the test set provided by Johnson et al., and then averaged the 150 values at each k. The results are shown in Figure 3, for k = 1, ..., 1000. It can be seen that the two curves are nearly identical. Our result differs in a small degree from the results reported in (Johnson et al., 2015), in which IRSG performed slightly but noticeably better than an object-only version. The difference might be due to differences in the particular subset of scene-graph queries they used (they randomly selected 119, which were not listed in their paper), or to the slightly different R@k metrics.

Figure 3: Recall at k values for IRSG and the geometric-mean baseline on the partial query dataset from (Johnson et al., 2015). This figure shows the averaged R@k values for all partial scene-graph queries.

Our results imply that, contrary to expectations, IRSG performance does not benefit from the system's relationship models. (IRSG performance also does not seem to benefit from the system's attribute models, but here we focus on the role of relationships.) There are two possible reasons for this: (1) the object-relationship models (Gaussian mixture models) in IRSG are not capturing useful information; or (2) there are biases in the dataset that allow successful scene-graph grounding without any information from object relationships. Our studies show that both hypotheses are correct.

Figure 4 shows results that support the first hypothesis. If, for a given scene-graph query, we look at IRSG's lowest-energy configuration of bounding boxes for every image, and compare the full (object-attribute-relationship) factorization (product of probabilities) to the factorization without relationships, we can see that the amount of information provided by the relationships is quite small. For example, for the query "clear glasses on woman", Figure 4 is a scatter plot in which each point represents an image in the test set. The x-axis values give the products of IRSG-assigned probabilities for objects and attributes in the scene graph, and the y-axis values give the full product—that is, including the relationship probabilities. If the relationship probabilities added useful information, we would expect a non-linear relationship between the x- and y-axis values. However, the plot generally shows a simple linear relationship (linear regression goodness-of-fit r2 = 0.97), which indicates that the relationship distribution is not adding significant information to the final grounding energy. We found that over 90% of the queries exhibited very strong linear relationships (r2 ≥ 0.8) of this kind. This suggests that the relationship probabilities computed by the GMMs are not capturing useful information.

Figure 4: A scatterplot of the factorizations for a single query in the original dataset ("clear glasses on woman"), each point representing a single image. The x-axis value is the product of the object and attribute probability values from IRSG's lowest-energy grounding on this image. The y-axis value includes the product of the relationship probabilities. A strong relationship model would modify the object-attribute factorization and create a larger spread of values than what is evident in this figure. We found similar strongly linear relationships for over 90% of the queries in the test set.

We investigated the second hypothesis—that there are biases in the dataset that allow successful object grounding without relationship information—by a manual inspection of the 150 scene-graph queries and a sample of the 1,000 test images. We found two types of such biases. In the first type, a positive test image for a given query contains only one instance of each query object, which makes relationship information superfluous. For example, when given a query such as "standing man wearing shirt" there is no need to distinguish which is the particular "standing man" who is wearing a "shirt": there is only one of each. In the second type of bias, a positive image for a given query contains multiple instances of the query objects, but any of the instances would be a correct grounding for the query. For example, when given the query "black tire on road", even if there are many different tires in the image, all of them are black and all of them are on the road. Thus any black-tire grounding will be correct. Time constraints prevented us from making a precise count of instances of these biases for each query, but our sampling suggested that examples of such biases occur in the positive test images for at least half of the queries.

A closer look at the dataset and queries revealed several additional issues that make it difficult to evaluate the performance of a visual grounding system. While Johnson et al. (2015) reported averages over many partial scene-graph queries, these averages were biased by the fact that in several cases essentially the same query appeared more than once in the set, sometimes using synonymous terms (e.g., "bus on gray street" and "bus on gray road" are counted as separate queries, as are "man on bench" and "sitting man on bench"). Removing duplicates of this kind decreases the original set of 150 queries to 105 unique queries. Going further, we found that some queries included two instances of a single object class: for example, "standing man next to man".
We found that when given such queries, the IRSG system would typically create two bounding boxes around the same object in the image (e.g., the “standing man” and the other man would be grounded as the same person). Additionally, there are typically very few pos- itive images per query in the test set. The mean number of positive images per query is 6.5, and The dataset was annotated by Amazon Me- chanical Turk workers using an open annotation scheme, rather than directing the workers to se- lect from a specific set of classes, attributes, and relationships. Due to the open scheme, there are numerous errors that affect a system’s learning potential, including mislabeled objects and relationships, as well as typographical er- rors (refridgerator [sic]), synonyms (kid/child, man/guy/boy/person), and many prominent ob- jects left unlabeled. These errors can lead to false negatives during testing. # 3.2 Testing IRSG on “Clean” Queries and Data To assess the performance of IRSG without the complications of many of these data and query issues, we created seven queries—involving only objects and relationships, no attributes—that avoided many of the ambiguities described above. We made sure that there were at least 10 positive test-set examples for each query, and we fixed the labeling in the training and test data to make sure that all objects named in these queries were cor- rectly labeled. The queries (and number of posi- tive examples for each in the test set) are the fol- lowing: • Person Has Beard: 96 • Person Wearing Helmet: 81 • Person Wearing Sunglasses: 79 • Pillow On Couch: 38 • Person On Skateboard: 29 • Person On Bench: 18 • Person On Horse: 13 We call this set of queries, along with their training and test examples, the “clean dataset”. Using only these queries, we repeated the com- parison between IRSG and the geometric-mean baseline described above. The R@k results are shown in Figure 5. These results are very sim- ilar to those in Figure 3. This result indicates that, while the original dataset exhibits biases and other problems that make the original system hard to evaluate, it still seems that relationship proba- bilities do not provide strongly distinguishing in- formation to the other components of the IRSG method. The lack of strong relationship perfor- mance was also seen in (Quinn et al., 2018) where the IRSG and object-only baseline method showed almost identical R@k performance on a different, larger dataset. # 4 Revisiting “Referring Relationship” Grounding The IRSG task is closely related to the “Refer- ring Relationships” (RR) task, proposed by Kr- ishna et al. (2018) and illustrated in Figure 2. The method developed by Krishna et al. uses iterative attention to shift between image regions accord- ing to the given predicate, in order to locate sub- ject and object. The authors evaluated their model on several datasets, including the same images as were in the IRSG dataset (here called “VRD” or “visual relationship dataset”), but with 4710 referring-relationship queries (several per test im- age). The evaluation metric they reported was mean intersection over union (IOU) of the sub- ject and object detections with ground-truth boxes. This metric does not give information about the detection rate. To investigate whether biases ap- pear in this dataset and queries similar to the ones we described above, we again created a baseline method that used only object information. 
In par- ticular, we used the VRD training set to fine-tune a pre-trained version1 of the faster-RCNN object- detection method (Ren et al., 2015) on the object categories that appear in the VRD dataset. We then ran faster-RCNN on each test image, and for each query selected the highest-confidence bound- ing box for the subject and object categories. (If the query subject and object were the same cate- gory, we randomly assigned subject and object to the highest and second-highest confidence boxes.) Finally, for each query, we manually examined visualizations of the predicted subject and object boxes in each test image to determine whether the subject and object boxes fit the subject, object, and predicate of the query. We found that for 56% of the image/query pairs, faster-RCNN had iden- tified correct subject and object boxes. In short, our object-only baseline was able to correctly lo- cate the subject and object 56% of the time, using no relationship information. This indicates signif- icant biases in the dataset, which calls into ques- tion any published referring-relationship results on this dataset that does not compare with this base- line. In future work we plan to replicate the re- sults reported by Krishna et al. (2018) and to com- pare it with our object-only baseline. We hope to do the same for other published results on refer- ring relationships using the VRD dataset, among other datasets (Cirik et al., 2018a; Liu et al., 2019; Raboh et al., 2019). # 5 Related Work Other groups have explored grounding single ob- jects referred to by natural-language expressions (Hu et al., 2016; Nagaraja et al., 2016; Hu et al., 2017; Zhang et al., 2018) and grounding all nouns mentioned in a natural language phrase (Rohrbach et al., 2016; Plummer et al., 2017, 2018; Yeh et al., 2017). Visual grounding is different from, though re- lated to, tasks such as visual relationship detec- tion (Lu et al., 2016), in which the task is not to ground a particular phrase in an image, but to de- tect all known relationships. The VRD dataset we 1We used faster rcnn resnet101 coco from https://github.com/tensorflow/models/ blob/master/research/object_detection/ g3doc/detection_model_zoo.md. Recall at k — geometric mean — IRSG 0.0 (°) 200 400 600 800 1000 Figure 5: R@k values for the IRSG model and geometric mean model on the clean dataset. This figure shows, for each k, the averaged R@k values over the seven queries. described above is commonly used in visual re- lationship detection tasks, and to our knowledge there are no prior studies of bias and other prob- lems in this dataset. It should be noted that visual grounding also differs from automated caption generation (Xu et al., 2015) and automated scene graph genera- tion (Xu et al., 2017), which input an image and output a natural language phrase or a scene graph, respectively. The diversity of datasets used in these various studies as well as the known biases and other prob- lems in many widely used datasets makes it dif- ficult to determine the state of the art in visual grounding tasks as well as related tasks such as visual relationship detection. in Krishna et al. (2018). Our work can be seen as a contribution to the effort promoted by Cirik et al. 
(2018b): “to make meaningful progress on grounded language tasks, we need to pay careful attention to what and how our models are learning, and whether or datasets contain exploitable bias.” In future work, we plan to investigate other prominent algorithms and datasets for visual grounding, as well as to curate benchmarks without the biases and problems we described above. Some researchers have used syn- thetically generated data, such as the CLEVR set (Johnson et al., 2017); however to date the high performances of visual grounding systems on this dataset have not translated to high performance on real-world datasets (e.g., Krishna et al. (2018)). We also plan to explore alternative approaches to visual grounding tasks, such as the “active” ap- proach described by Quinn et al. (2018). # 6 Conclusions and Future Work We have closely investigated one highly cited ap- proach to visual grounding, the IRSG method of (Johnson et al., 2015). We demonstrated that this method does not perform better than a sim- ple object-only baseline, and does not seem to use information from relationships between ob- jects, contrary to the authors’ claims, at least on the original dataset of partial scene graphs as well as on our “clean” version. We have also identified exploitable biases and other problems associated with this dataset, as well as with the version used # Acknowledgments We are grateful to Justin Johnson of Stanford Uni- versity for sharing the source code for the IRSG project, and to NVIDIA corporation for donation of a GPU used for this work. We also thank the anonymous reviewers of this paper for several helpful suggestions for improvement. This ma- terial is based upon work supported by the Na- tional Science Foundation under Grant Number IIS-1423651. Any opinions, findings, and conclu- sions or recommendations expressed in this mate- rial are those of the authors and do not necessarily reflect the views of the National Science Founda- tion. # References Bjoern Andres, Thorsten Beier, and J¨org H. Kappes. 2012. OpenGM: A C++ library for discrete graphi- cal models. arXiv preprint arXiv:1206.0111. Volkan Cirik, Taylor Berg-Kirkpatrick, and Louis- Philippe Morency. 2018a. Using syntax to ground referring expressions in natural images. In Proceed- ings of the Thirty-Second Conference on Artificial Intelligence (AAAI), pages 6756–6764. AAAI. Volkan Cirik, Louis-Philippe Morency, and Taylor Berg-Kirkpatrick. 2018b. Visual referring expres- sion recognition: What do systems actually learn? In Proceedings of NAACL-HLT 2018, pages 781– 787. Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2017. Modeling relationships in referential expressions with compo- In Proceedings of the sitional modular networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1115–1124. Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2016. Nat- ural language object retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4555–4564. and Laurens Van Der Maaten. 2016. Revisiting visual question an- In European Conference on swering baselines. Computer Vision (ECCV), pages 727–739. Springer. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual rea- soning. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2901–2910. Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li Fei- Fei. 2015. Image retrieval using scene graphs. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 3668–3678. and Vladlen Koltun. 2014. In Proceedings of the Geodesic object proposals. European Conference on Computer Vision (ECCV), pages 725–739. Ranjay Krishna, Ines Chami, Michael Bernstein, and In Pro- Li Fei-Fei. 2018. Referring relationships. ceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition (CVPR), pages 6867– 6876. Xihui Liu, Wang Zihao, Jing Shao, Xiaogang Wang, and Hongsheng Li. 2019. Improving referring expression grounding with cross-modal attention- guided erasing. arXiv preprint arXiv:1903.00839. Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. 2016. Visual relationship detection with language priors. In European Conference on Com- puter Vision (ECCV), pages 852–869. Springer. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous ob- ject descriptions. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR), pages 11–20. Varun K. Nagaraja, Vlad I. Morariu, and Larry S. Davis. 2016. Modeling context between objects In Eu- for referring expression understanding. ropean Conference on Computer Vision (ECCV), pages 792–807. Springer. John C. Platt. 2000. Probabilistic outputs for sup- port vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers. MIT Press. Bryan A. Plummer, Arun Mallya, Christopher M. Cer- vantes, Julia Hockenmaier, and Svetlana Lazebnik. 2017. Phrase localization and visual relationship detection with comprehensive image-language cues. In Proceedings of the IEEE International Confer- ence on Computer Vision, pages 1928–1937. Bryan A. Plummer, Kevin J. Shih, Yichen Li, Ke Xu, Svetlana Lazebnik, Stan Sclaroff, and Kate Saenko. arXiv 2018. Open-vocabulary phrase detection. preprint arXiv:1811.07212. Max H. Quinn, Erik Conser, Jordan M. Witte, and Melanie Mitchell. 2018. Semantic image retrieval via active grounding of visual situations. In Interna- tional Conference on Semantic Computing (ICSC), pages 172–179. IEEE. Moshiko Raboh, Roei Herzig, Gal Chechik, Jonathan Berant, and Amir Globerson. 2019. Learning la- tent scene-graph representations for referring rela- tionships. arXiv preprint arXiv:1902.10200. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time ob- ject detection with region proposal networks. In Ad- vances in Neural Information Processing Systems, pages 91–99. Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. 2016. Grounding In of textual phrases in images by reconstruction. European Conference on Computer Vision (ECCV), pages 817–834. Springer. Danfei Xu, Yuke Zhu, Christopher B. Choy, and Li Fei- Fei. 2017. Scene graph generation by iterative mes- In Proceedings of the IEEE Confer- sage passing. ence on Computer Vision and Pattern Recognition, pages 5410–5419. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. 
In International conference on machine learn- ing, pages 2048–2057. Raymond Yeh, Jinjun Xiong, Wen-Mei Hwu, Minh Do, and Alexander Schwing. 2017. Interpretable and globally optimal prediction for textual grounding us- ing image concepts. In Advances in Neural Informa- tion Processing Systems, pages 1912–1922. Hanwang Zhang, Yulei Niu, and Shih-Fu Chang. 2018. Grounding referring expressions in images by varia- tional context. In Proceedings of the IEEE Confer- ence on Computer Vision and Pattern Recognition, pages 4158–4166.
{ "id": "1811.07212" }
1904.01766
VideoBERT: A Joint Model for Video and Language Representation Learning
Self-supervised learning has become increasingly important to leverage the abundance of unlabeled data available on platforms like YouTube. Whereas most existing approaches learn low-level representations, we propose a joint visual-linguistic model to learn high-level features without any explicit supervision. In particular, inspired by its recent success in language modeling, we build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. We use VideoBERT in numerous tasks, including action classification and video captioning. We show that it can be applied directly to open-vocabulary classification, and confirm that large amounts of training data and cross-modal information are critical to performance. Furthermore, we outperform the state-of-the-art on video captioning, and quantitative results verify that the model learns high-level semantic features.
http://arxiv.org/pdf/1904.01766
Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, Cordelia Schmid
cs.CV, cs.AI
ICCV 2019 camera ready
null
cs.CV
20190403
20190911
9 1 0 2 p e S 1 1 ] V C . s c [ 2 v 6 6 7 1 0 . 4 0 9 1 : v i X r a # VideoBERT: A Joint Model for Video and Language Representation Learning Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid # Google Research Season the steak with input text pues salt and pepper. to the pan. Carefully place the steak Now let it rest and enjoy the delicious steak. Flip the steak to the other side. VideoBERT, Figure 1: VideoBERT text-to-video generation and future forecasting. (Above) Given some recipe text divided into sentences, y = y1:T , we generate a sequence of video tokens x = x1:T by computing x∗ t = arg maxk p(xt = k|y) using VideoBERT. (Below) Given a video token, we show the top three future tokens forecasted by VideoBERT at different time scales. In this case, VideoBERT predicts that a bowl of flour and cocoa powder may be baked in an oven, and may become a brownie or cupcake. We visualize video tokens using the images from the training set closest to centroids in feature space. # Abstract # 1. Introduction Self-supervised learning has become increasingly impor- tant to leverage the abundance of unlabeled data avail- able on platforms like YouTube. Whereas most existing approaches learn low-level representations, we propose a joint visual-linguistic model to learn high-level features without any explicit supervision. In particular, inspired by its recent success in language modeling, we build upon the BERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens, derived from vector quantization of video data and off-the-shelf speech recognition outputs, respectively. We use VideoBERT in nu- merous tasks, including action classification and video cap- tioning. We show that it can be applied directly to open- vocabulary classification, and confirm that large amounts of training data and cross-modal information are critical to performance. Furthermore, we outperform the state-of-the- art on video captioning, and quantitative results verify that the model learns high-level semantic features. Deep learning can benefit a lot from labeled data [24], but this is hard to acquire at scale. Consequently there has been a lot of recent interest in “self supervised learning”, where we train a model on various “proxy tasks”, which we hope will result in the discovery of features or representa- tions that can be used in downstream tasks. A wide variety of such proxy tasks have been proposed in the image and video domains. However, most of these methods focus on low level features (e.g., textures) and short temporal scales (e.g., motion patterns that last a second or less). We are in- terested in discovering high-level semantic features which correspond to actions and events that unfold over longer time scales (e.g. minutes), since such representations would be useful for various video understanding tasks. In this paper, we exploit the key insight that human language has evolved words to describe high-level objects and events, and thus provides a natural source of “self” supervision. In particular, we present a simple way to model the relationship between the visual domain and the 1 Cut the cabbage into pieces. input text and stir fry. Put cabbage in the wok Add soy sauce and ... then keep stir frying. Put on a plate the dish is now ready to be served. Figure 2: Additional text-to-video generation and future forecasting examples from VideoBERT, see Figure 1 for details. 
linguistic domain by combining three off-the-shelf meth- ods: an automatic speech recognition (ASR) system to con- vert speech into text; vector quantization (VQ) applied to low-level spatio-temporal visual features derived from pre- trained video classfication models; and the recently pro- posed BERT model [6] for learning joint distributions over sequences of discrete tokens. More precisely, our approach is to apply BERT to learn a model of the form p(x, y), where x is a sequence of “visual words”, and y is a sequence of spoken words. Given such a joint model, we can easily tackle a variety of interesting tasks. For example, we can perform text-to-video predic- tion, which can be used to automatically illustrate a set of instructions (such as a recipe), as shown in the top examples of Figure 1 and 2. We can also perform the more traditional video-to-text task of dense video captioning [10] as shown in Figure 6. In Section 4.6, we show that our approach to video captioning significantly outperforms the previous state-of-the-art [39] on the YouCook II dataset [38]. Section 4 presents results on activity recognition and video captioning tasks; and Section 5 concludes. # 2. Related Work Supervised learning. Some of the most successful ap- proaches for video representation learning have leveraged large labeled datasets (e.g., [9, 19, 36, 7]) to train convolu- tional neural networks for video classification. However, it is very expensive to collect such labeled data, and the cor- responding label vocabularies are often small and not ca- pable of representing the nuances of many kinds of actions (e.g., “sipping” is slightly different than “drinking” which is slightly different than “gulping”). In addition, these ap- proaches are designed for representing short video clips, typically a few seconds long. The main difference to our work is that we focus on the long-term evolution of events in video, and we do not use manually provided labels. We can also use our model in a “unimodal” fashion. For example, the implied marginal distribution p(x) is a lan- guage model for visual words, which we can use for long- range forecasting. This is illustrated in the bottom examples of Figure 1 and 2. Of course, there is uncertainty about the future, but the model can generate plausible guesses at a much higher level of abstraction than other deep generative models for video, such as those based on VAEs or GANs (see e.g., [4, 5, 13, 27]), which tend to predict small changes to low level aspects of the scene, such as the location or pose of a small number of objects. In summary, our main contribution in this paper is a simple way to learn high level video representations that capture semantically meaningful and temporally long-range structure. The remainder of this paper describes this con- tribution in detail. In particular, Section 2 briefly reviews related work; Section 3 describes how we adapt the recent progress in natural language modeling to the video domain; Unsupervised learning. Recently, a variety of ap- proaches for learning density models from video have been proposed. Some use a single static stochastic variable, which is then “decoded” into a sequence using an RNN, either using a VAE-style loss [32, 35] or a GAN-style loss [31, 17]. More recent work uses temporal stochastic vari- ables, e.g., the SV2P model of [4] and the SVGLP model of [5]. There are also various GAN-based approaches, such as the SAVP approach of [13] and the MoCoGAN approach of [27]. 
We differ from this work in that we use the BERT model, without any explicit stochastic latent variables, applied to visual tokens derived from the video. Thus our model is not a generative model of pixels, but it is a generative model of features derived from pixels, which is an approach that has been used in other work (e.g., [30]).

Self-supervised learning. To avoid the difficulties of learning a joint model p(x1:T ), it has become popular to learn conditional models of the form p(xt+1:T |x1:t), where we partition the signal into two or more blocks, such as gray scale and color, or previous frame and next frame (e.g., [18]), and try to predict one from the other (see e.g., [23] for an overview). Our approach is similar, except we use quantized visual words instead of pixels. Furthermore, although we learn a set of conditional distributions, our model is a proper joint generative model, as explained in Section 3.

Cross-modal learning. The multi-modal nature of video has also been an extensive source of supervision for learning video representations, which our paper builds on. Since most videos contain synchronized audio and visual signals, the two modalities can supervise each other to learn strong self-supervised video representations [3, 20, 21]. In this work, we use speech (provided by ASR) rather than low-level sounds as a source of cross-modal supervision.

Natural language models. We build upon recent progress in the NLP community, where large-scale language models such as ELMO [22] and BERT [6] have shown state-of-the-art results for various NLP tasks, both at the word level (e.g., POS tagging) and sentence level (e.g., semantic classification). The BERT model is then extended to pre-train on multi-lingual data [12]. Our paper builds on the BERT model to capture structure in both the linguistic and visual domains.

Image and video captioning. There has been much recent work on image captioning (see e.g., [11, 8, 15]), which is a model of the form p(y|x), where y is the manually provided caption and x is the image. There has also been some work on video captioning, using either manually provided temporal segmentation or estimated segmentations (see e.g., [10, 39]). We use our joint p(x, y) model and apply it to video captioning, and achieve state-of-the-art results, as we discuss in Section 4.6.

Instructional videos. Various papers (e.g., [16, 2, 10, 38, 39]) have trained models to analyse instructional videos, such as cooking. We differ from this work in that we do not use any manual labeling, and we learn a large-scale generative model of both words and (discretized) visual signals.

# 3. Models

In this section, we briefly summarize the BERT model, and then describe how we extend it to jointly model video and language data.

# 3.1. The BERT model

BERT [6] proposes to learn language representations by using a "masked language model" training objective. In more detail, let x = {x_1, . . . , x_L} be a set of discrete tokens, x_l ∈ X. We can define a joint probability distribution over this set as follows:

p(x|\theta) = \frac{1}{Z(\theta)} \prod_{l=1}^{L} \phi_l(x|\theta) \propto \exp\Big( \sum_{l=1}^{L} \log \phi_l(x|\theta) \Big),

where \phi_l(x) is the l'th potential function, with parameters \theta, and Z is the partition function.

The above model is permutation invariant. In order to capture order information, we can "tag" each word with its position in the sentence. The BERT model learns an embedding for each of the word tokens, as well as for these tags, and then sums the embedding vectors to get a continuous representation for each token.
The log potential (energy) functions for each location are defined by

\log \phi_l(x|\theta) = x_l^{T} f_\theta(x_{\setminus l}),

where x_l is a one-hot vector for the l'th token (and its tag), and

x_{\setminus l} = (x_1, \ldots, x_{l-1}, \text{MASK}, x_{l+1}, \ldots, x_L).

The function f(x_{\setminus l}) is a multi-layer bidirectional transformer model [28] that takes an L × D1 tensor, containing the D1-dimensional embedding vectors corresponding to x_{\setminus l}, and returns an L × D2 tensor, where D2 is the size of the output of each transformer node. See [6] for details. The model is trained to approximately maximize the pseudo log-likelihood

\mathcal{L}(\theta) = \mathbb{E}_{x \sim D} \sum_{l=1}^{L} \log p(x_l \,|\, x_{\setminus l}; \theta).

In practice, we can stochastically optimize the logloss (computed from the softmax predicted by the f function) by sampling locations as well as training sentences.

BERT can be extended to model two sentences by concatenating them together. However, we are often not only interested in simply modeling the extended sequence, but rather relationships between the two sentences (e.g., is this a pair of consecutive or randomly selected sentences). BERT accomplishes this by prepending every sequence with a special classification token, [CLS], and by joining sentences with a special separator token, [SEP]. The final hidden state corresponding to the [CLS] token is used as the aggregate sequence representation from which we predict a label for classification tasks, or which may otherwise be ignored. In addition to differentiating sentences with the [SEP] token, BERT also optionally tags each token by the sentence it comes from. The corresponding joint model can be written as p(x, y, c), where x is the first sentence, y is the second, and c = {0, 1} is a label indicating whether the sentences were separate or consecutive in the source document.

For consistency with the original paper, we also add a [SEP] token to the end of the sequence, even though it is not strictly needed. So, a typical masked-out training sentence pair may look like this: [CLS] let's make a traditional [MASK] cuisine [SEP] orange chicken with [MASK] sauce [SEP]. The corresponding class label in this case would be c = 1, indicating that x and y are consecutive.

[Figure 3: Illustration of VideoBERT in the context of a video and text masked token prediction, or cloze, task. This task also allows for training with text-only and video-only data, and VideoBERT can furthermore be trained using a linguistic-visual alignment classification objective (not shown here, see text for details).]

# 3.2. The VideoBERT model

To extend BERT to video, in such a way that we may still leverage pretrained language models and scalable implementations for inference and learning, we decided to make minimal changes, and transform the raw visual data into a discrete sequence of tokens. To this end, we propose to generate a sequence of "visual words" by applying hierarchical vector quantization to features derived from the video using a pretrained model. See Section 4.2 for details. Besides its simplicity, this approach encourages the model to focus on high level semantics and longer-range temporal dynamics in the video. This is in contrast to most existing self-supervised approaches to video representation learning, which learn low-level properties such as local textures and motions, as discussed in Section 2.
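To make the input format concrete, the following minimal sketch builds a VideoBERT-style masked training example from an ASR sentence and its visual tokens. It is a simplification under stated assumptions: the 15% masking rate is borrowed from the original BERT recipe rather than specified here, masked tokens are always replaced by [MASK], and all function and variable names are illustrative rather than part of any released code.

```python
import random

CLS, SEP, BREAK, MASK = "[CLS]", "[SEP]", "[>]", "[MASK]"

def build_masked_pair(text_tokens, visual_tokens, aligned, mask_prob=0.15):
    """Assemble one (masked sequence, cloze targets, alignment label) example.

    text_tokens:   WordPiece tokens of an ASR sentence, e.g. ["orange", "chicken", ...]
    visual_tokens: cluster ids for the video segment, e.g. ["v01", "v08", "v72"]
    aligned:       True if the text and video cover the same time span (c = 1).
    """
    sequence = [CLS] + list(text_tokens) + [BREAK] + list(visual_tokens) + [SEP]
    targets = {}  # position -> original token, used by the masked-token loss
    for pos, tok in enumerate(sequence):
        if tok in (CLS, SEP, BREAK):
            continue  # never mask the special tokens
        if random.random() < mask_prob:
            targets[pos] = tok
            sequence[pos] = MASK
    label = 1 if aligned else 0  # linguistic-visual alignment target
    return sequence, targets, label

# Example usage, mirroring the cooking example in the text:
seq, targets, c = build_masked_pair(
    ["orange", "chicken", "with", "orange", "sauce"],
    ["v01", "v08", "v72"],
    aligned=True)
```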
We can combine the linguistic sentence (derived from the video using ASR) with the visual sentence to generate data such as this: [CLS] orange chicken with [MASK] sauce [>] v01 [MASK] v08 v72 [SEP], where v01 and v08 are visual tokens, and [>] is a special token we in- troduce to combine text and video sentences. See Figure 3 for an illustration. While this cloze task extends naturally to sequences of linguistic and visual tokens, applying a next sentence pre- diction task, as used by BERT, is less straightforward. We propose a linguistic-visual alignment task, where we use the final hidden state of the [CLS] token to predict whether the linguistic sentence is temporally aligned with the visual sen- tence. Note that this is a noisy indicator of semantic relat- edness, since even in instructional videos, the speaker may be referring to something that is not visually present. tween different videos, we randomly pick a subsampling rate of 1 to 5 steps for the video tokens. This not only helps the model be more robust to variations in video speeds, but also allows the model to capture temporal dynamics over greater time horizons and learn longer-term state transi- tions. We leave investigation into other ways of combining video and text to future work. Overall, we have three training regimes corresponding to the different input data modalities: text-only, video-only and video-text. For text-only and video-only, the standard mask-completion objectives are used for training the model. For text-video, we use the linguistic-visual alignment clas- sification objective described above. The overall training objective is a weighted sum of the individual objectives. The text objective forces VideoBERT to do well at language modeling; the video objective forces it to learn a “language model for video”, which can be used for learning dynam- ics and forecasting; and the text-video objective forces it to learn a correspondence between the two domains. Once we have trained the model, we can use it in a va- riety of downstream tasks, and in this work we quantita- tively evaluate two applications. In the first application, we treat it as a probabilistic model, and ask it to predict or im- pute the symbols that have been MASKed out. We illustrate this in Section 4.4, where we perform “zero-shot” classifi- cation. In the second application, we extract the predicted representation (derived from the internal activations of the model) for the [CLS] token, and use that dense vector as a representation of the entire input. This can be combined with other features derived from the input to be used in a downstream supervised learning task. We demonstrate this in Section 4.6, where we perform video captioning. To combat this, we first randomly concatenate neighbor- ing sentences into a single long sentence, to allow the model to learn semantic correspondence even if the two are not well aligned temporally. Second, since the pace of state transitions for even the same action can vary greatly be- # 4. Experiments and Analysis In this section we describe our experimental setup, and show quantitative and qualitative results. # 4.1. Dataset Deep learning models, in both language and vision do- mains, have consistently demonstrated dramatic gains in performance with increasingly large datasets. For example, the “large” BERT model (which we use) was pretrained on the concatenation of the BooksCorpus (800M words) and English Wikipedia (2,500M words). 
Therefore, we would like to train VideoBERT with a comparably large-scale video dataset. Since we are inter- ested in the connection between language and vision, we would like to find videos where the spoken words are more likely to refer to visual content. Intuitively, this is often the case for instructional videos, and we focus on cooking videos specifically, since it is a well studied domain with existing annotated datasets available for evaluation. Unfor- tunately, such datasets are relatively small, so we turn to YouTube to collect a large-scale video dataset for training. We extract a set of publicly available cooking videos from YouTube using the YouTube video annotation sys- tem to retrieve videos with topics related to “cooking” and “recipe”. We also filter videos by their duration, removing videos longer than 15 minutes, resulting in a set of 312K videos. The total duration of this dataset is 23,186 hours, or roughly 966 days. For reference, this is more than two or- ders of magnitude larger than the next largest cooking video dataset, YouCook II, which consists of 2K videos with a to- tal duration of 176 hours [38]. To obtain text from the videos, we utilize YouTube’s au- tomatic speech recognition (ASR) toolkit provided by the YouTube Data API [1] to retrieve timestamped speech in- formation. The API returns word sequences and the pre- dicted language type. Among the 312K videos, 180K have ASR that can be retrieved by the API, and 120K of these are predicted to be in English. In our experiments, while we use all videos for the video-only objective, we only use text from English ASR for VideoBERT’s text-only and video- text objectives. We evaluate VideoBERT on the YouCook II dataset [38], which contains 2000 YouTube videos averaging 5.26 min- utes in duration, for a total of 176 hours. The videos have manually annotated segmentation boundaries and captions. On average there are 7.7 segments per video, and 8.8 words per caption. We use the provided dataset split, with 1333 videos for training and 457 for validation. To avoid po- tential bias during pretraining, we also remove any videos which appear in YouCook II from our pretraining set. # 4.2. Video and Language Preprocessing For each input video, we sample frames at 20 fps, and create clips from 30-frame (1.5 seconds) non-overlapping windows over the video. For each 30-frame clip, we apply a pretrained video ConvNet to extract its features. In this work, we use the S3D [34] which adds separable temporal convolutions to an Inception network [25] backbone. We take the feature activations before the final linear classifier and apply 3D average pooling to obtain a 1024-dimension feature vector. We pretrain the S3D network on the Kinet- ics [9] dataset, which covers a wide spectrum of actions from YouTube videos, and serves as a generic representa- tion for each individual clip. We tokenize the visual features using hierarchical k- means. We adjust the number of hierarchy levels d and the number of clusters per level k by visually inspecting the co- herence and representativeness of the clusters. We set d = 4 and k = 12, which yields 124 = 20736 clusters in total. Figure 4 illustrates the result of this “vector quantization” process. For each ASR word sequence, we break the stream of words into sentences by adding punctuation using an off-the-shelf LSTM-based language model. For each sen- tence, we follow the standard text preprocessing steps from BERT [6] and tokenize the text into WordPieces [33]. 
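The clip-feature quantization described above (hierarchical k-means with d = 4 levels and k = 12 clusters per level, giving up to 12^4 = 20,736 visual tokens) can be sketched as follows. This is an illustrative scikit-learn implementation, not the authors' code; details such as how undersized clusters are handled, and all names, are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_hierarchical_kmeans(features, k=12, depth=4, seed=0):
    """Fit a depth-level k-means hierarchy over clip features.

    features: (N, D) array of clip features (e.g. 1024-d S3D vectors).
    Returns a tree of the form {"model": fitted KMeans, "children": {cluster_id: subtree}}.
    """
    node = {"model": KMeans(n_clusters=k, random_state=seed).fit(features),
            "children": {}}
    if depth > 1:
        labels = node["model"].labels_
        for c in range(k):
            subset = features[labels == c]
            if len(subset) >= k:  # only split clusters with enough points (an assumption)
                node["children"][c] = fit_hierarchical_kmeans(
                    subset, k=k, depth=depth - 1, seed=seed)
    return node

def tokenize(feature, tree, k=12):
    """Map one feature vector to an integer visual token (a base-k path through the tree)."""
    token, node = 0, tree
    while node is not None:
        c = int(node["model"].predict(feature[None, :])[0])
        token = token * k + c
        node = node["children"].get(c)  # paths may terminate early for small clusters
    return token
```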
We use the same vocabulary provided by the authors of BERT, which contains 30,000 tokens. Unlike language which can be naturally broken into sen- tences, it is unclear how to break videos into semantically coherent segments. We use a simple heuristic to address this problem: when an ASR sentence is available, it is as- sociated with starting and ending timestamps, and we treat video tokens that fall into that time period as a segment. When ASR is not available, we simply treat 16 tokens as a segment. # 4.3. Model Pre-training We initialize the BERT weights from a text pre-trained checkpoint. Specifically, we use the BERTLARGE model re- leased by the authors of [6], using the same backbone archi- tecture: it has 24 layers of Transformer blocks, where each block has 1024 hidden units and 16 self-attention heads. We add support for video tokens by appending 20,736 entries to the word embedding lookup table for each of our new “visual words”. We initialize these entries with the S3D features from their corresponding cluster centroids. The input embeddings are frozen during pretraining. Our model training process largely follows the setup of BERT: we use 4 Cloud TPUs in the Pod configuration with a total batch size of 128, and we train the model for 0.5 million iterations, or roughly 8 epochs. We use the Adam optimizer with an initial learning rate of 1e-5, and a linear decay learning rate schedule. The training process takes around 2 days. # 4.4. Zero-shot action classification Once pretrained, the VideoBERT model can be used for “zero-shot” classification on novel datasets, such as YouCook II (By “zero-shot” we mean the model is not “put in the meantime, you're just kind of moving around your cake board and you can keep reusing make sure you're working on a clean service so you can just get these all out of your way but it's just a really fun thing to do especially for a birthday party.” “apply a little bit of butter on one side and place a portion of the stuffing and spread evenly cover with another slice of the bread and apply some more butter on top since we're gonna grill the sandwiches.” Figure 4: Examples of video sentence pairs from the pretraining videos. We quantize each video segment into a token, and then represent it by the corresponding visual centroid. For each row, we show the original frames (left) and visual centroids (right). We can see that the tokenization process preserves semantic information rather than low-level visual appearance. trained on YouCook II data nor with the same label ontol- ogy used in YouCook II). More precisely, we want to com- pute p(y|x) where x is the sequence visual tokens, and y is a sequence of words. Since the model is trained to predict sentences, we define y to be the fixed sentence, “now let me show you how to [MASK] the [MASK],” and ex- tract the verb and noun labels from the tokens predicted in the first and second masked slots, respectively. See Figure 5 for some qualitative results. For quantitative evaluation, we use the YouCook II dataset. In [37], the authors collected ground truth bound- ing boxes for the 63 most common objects for the validation set of YouCook II. However, there are no ground truth la- bels for actions, and many other common objects are not labeled. So, we collect action and object labels, derived from the ground truth captions, to address this shortcoming. 
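The zero-shot readout described just above, which fills the two [MASK] slots of the fixed sentence "now let me show you how to [MASK] the [MASK]", can be sketched as below. The helper `predict_masked` stands in for a forward pass of a pretrained VideoBERT that returns a {token: probability} distribution per masked position; it is an assumption of this sketch, not part of any released API.

```python
TEMPLATE = ["now", "let", "me", "show", "you", "how", "to", "[MASK]", "the", "[MASK]"]

def zero_shot_verb_noun(visual_tokens, predict_masked, top_k=5):
    """Return top-k verb and noun candidates for one clip's visual tokens."""
    sequence = ["[CLS]"] + TEMPLATE + ["[>]"] + list(visual_tokens) + ["[SEP]"]
    masked_positions = [i for i, t in enumerate(sequence) if t == "[MASK]"]
    distributions = predict_masked(sequence, masked_positions)
    verb_slot, noun_slot = distributions[0], distributions[1]

    def top(dist):
        # Highest-probability tokens first; top-5 mitigates open-vocabulary mismatch.
        return sorted(dist, key=dist.get, reverse=True)[:top_k]

    return top(verb_slot), top(noun_slot)
```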
We run an off-the-shelf part-of-speech tagger on the ground truth captions to retrieve the 100 most common nouns and 45 most common verbs, and use these to derive ground truth labels. While VideoBERT's word piece vocabulary gives it the power to effectively perform open-vocabulary classification, it is thus more likely to make semantically correct predictions that do not exactly match the more limited ground truth. So, we report both top-1 and top-5 classification accuracy metrics, where the latter is intended to mitigate this issue, and we leave more sophisticated evaluation techniques for future work. Lastly, if there is more than one verb or noun associated with a video clip, we deem a prediction correct if it matches any of those. We report the performance on the validation set of YouCook II.

Table 1 shows the top-1 and top-5 accuracies of VideoBERT and its ablations. To verify that VideoBERT actually makes use of video inputs, we first remove the video inputs to VideoBERT, and use just the language model p(y) to perform prediction. We also use the language prior from the text-only BERT model, that was not fine-tuned on cooking videos. We can see that VideoBERT significantly outperforms both baselines. As expected, the language prior of VideoBERT is adapted to cooking sentences, and is better than the vanilla BERT model.

We then compare with a fully supervised classifier that was trained using the training split of YouCook II. We use the pre-computed S3D features (same as the inputs to VideoBERT), applying average pooling over time, followed by a linear classifier. Table 1 shows the results. As we can see, the supervised framework outperforms VideoBERT in top-1 verb accuracy, which is not surprising given that VideoBERT has an effectively open vocabulary. (See Figure 5 for an illustration of the ambiguity of the action labels.) However, the top-5 accuracy metric reveals that VideoBERT achieves comparable performance to the fully supervised S3D baseline, without using any supervision from YouCook II, indicating that the model is able to perform competitively in this "zero-shot" setting.

[Figure 5: Using VideoBERT to predict nouns and verbs given a video clip. See text for details. The video clip is first converted into video tokens (two are shown here for each example), and then visualized using their centroids. Example predictions: top verbs make/assemble/prepare with top nouns pizza/sauce/pasta; top verbs make/do/pour with top nouns cocktail/drink/glass; top verbs make/prepare/bake with top nouns cake/crust/dough.]

Table 1: Action classification performance on YouCook II dataset. See text for details.

Method | Supervision | verb top-1 (%) | verb top-5 (%) | object top-1 (%) | object top-5 (%)
S3D [34] | yes | 16.1 | 46.9 | 13.2 | 30.9
BERT (language prior) | no | 0.0 | 0.0 | 0.0 | 0.0
VideoBERT (language prior) | no | 0.4 | 6.9 | 7.7 | 15.3
VideoBERT (cross modal) | no | 3.2 | 43.3 | 13.1 | 33.7

Table 2: Action classification performance on YouCook II dataset as a function of pre-training data size.

Method | Data size | verb top-1 (%) | verb top-5 (%) | object top-1 (%) | object top-5 (%)
VideoBERT | 10K | 0.4 | 15.5 | 2.9 | 17.8
VideoBERT | 50K | 1.1 | 15.7 | 8.7 | 27.3
VideoBERT | 100K | 2.9 | 24.5 | 11.2 | 30.6
VideoBERT | 300K | 3.2 | 43.3 | 13.1 | 33.7

tures for the video tokens and the masked out text tokens, take their average and concatenate the two together, to be used by a supervised model in a downstream task.
We evaluate the extracted features on video captioning, following the setup from [39], where the ground truth video segmentations are used to train a supervised model map- ping video segments to captions. We use the same model that they do, namely a transformer encoder-decoder, but we replace the inputs to the encoder with the features derived from VideoBERT described above. We also concatenate the VideoBERT features with average-pooled S3D features; as a baseline, we also consider using just S3D features without VideoBERT. We set the number of Transformer block lay- ers to 2, the hidden unit size to 128, and Dropout probability to 0.4. We use a 5-fold cross validation on the training split to set the hyper-parameters, and report performance on the validation set. We train the model for 40K iterations with batch size of 128. We use the same Adam optimizer as in VideoBERT pre-training, and set the initial learning rate to 1e-3 with a linear decay schedule. # 4.5. Benefits of large training sets We also studied the impact of the size of the pretrain- ing dataset. For this experiment, we take random subsets of 10K, 50K and 100K videos from the pretraining set, and pretrain VideoBERT using the same setup as above, for the same number of epochs. Table 2 shows the perfor- mance. We can see that the accuracy grows monotonically as the amount of data increases, showing no signs of satura- tion. This indicates that VideoBERT may benefit from even larger pretraining datasets. # 4.6. Transfer learning for captioning We further demonstrate the effectiveness of VideoBERT when used as a feature extractor. To extract features given only video inputs, we again use a simple fill-in-the-blank task, by appending the video tokens to a template sentence “now let’s [MASK] the [MASK] to the [MASK], and then [MASK] the [MASK].” We extract Table 3 shows the results. We follow the standard prac- tice in machine translation and compute BLEU and ME- TEOR scores micro-averaged at corpus level, and also re- port ROUGE-L [14] and CIDEr [29] scores. For the base- line method [39], we recompute the metrics using the predictions provided by the authors. We can see that VideoBERT consistently outperforms the S3D baseline, es- pecially for CIDEr. We can also see that cross-modal pre- training outperforms the video-only version. Furthermore, by concatenating the features from VideoBERT and S3D, the model achieves the best performance across all metrics1. Figure 6 shows some qualitative results. We note that the predicted word sequence is rarely exactly equal to the ground truth, which explains why the metrics in Table 3 (which measure n-gram overlap) are all low in absolute value. However, semantically the results seem reasonable. 1The metrics used by [39] are macro-averaged at video level and may suffer from undesirable sparsity artifacts. Using their provided evaluation code, VideoBERT + S3D has B@4 of 1.79, and METEOR of 10.80. Method BLEU-3 BLEU-4 METEOR ROUGE-L CIDEr 7.53 6.12 6.33 6.80 7.59 3.84 3.24 3.81 4.04 4.33 11.55 9.52 10.81 11.01 11.94 27.44 26.09 27.14 27.50 28.80 0.38 0.31 0.47 0.49 0.55 Table 3: Video captioning performance on YouCook II. We follow the setup from [39] and report captioning performance on the validation set, given ground truth video segments. Higher numbers are better. 
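The feature-extraction step of Section 4.6 (append the video tokens to the fill-in-the-blank template, average the hidden states of the video tokens and the masked text tokens, and concatenate, optionally together with average-pooled S3D features) can be sketched as follows. This is a hedged illustration: the grouping of tokens into types and all names are our own, and the exact pooling details beyond what the text states are assumptions.

```python
import numpy as np

CAPTION_TEMPLATE = "now let's [MASK] the [MASK] to the [MASK], and then [MASK] the [MASK]".split()

def captioning_features(hidden_states, token_types, s3d_features=None):
    """Pool per-token VideoBERT outputs into one fixed-size captioning feature.

    hidden_states: (L, H) array of per-token outputs for the sequence
                   [CLS] + CAPTION_TEMPLATE + [>] + video tokens + [SEP].
    token_types:   length-L list with entries in {"special", "mask", "text", "video"}.
    s3d_features:  optional (T, 1024) array of clip features to average-pool in.
    """
    mask_idx = [i for i, t in enumerate(token_types) if t == "mask"]
    video_idx = [i for i, t in enumerate(token_types) if t == "video"]
    mask_mean = hidden_states[mask_idx].mean(axis=0)
    video_mean = hidden_states[video_idx].mean(axis=0)
    feature = np.concatenate([video_mean, mask_mean])
    if s3d_features is not None:
        feature = np.concatenate([feature, s3d_features.mean(axis=0)])
    return feature  # fed to the supervised transformer encoder-decoder captioner
```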
ee GT: add some chopped basil leaves into it VideoBERT: chop the basil and add to the bowl $3D: cut the tomatoes into thin slices GT: cut yu choy into diagonally medium pieces VideoBERT: chop the cabbage $3D: cut the roll into thin slices GT: cut the top off of a french loaf VideoBERT: cut the bread into thin slices $3D: place the bread on the pan a GT: remove the calamari and set it on paper towel VideoBERT: fry the squid in the pan $3D: add the noodles to the pot Figure 6: Examples of generated captions by VideoBERT and the S3D baseline. In the last example, VideoBERT fails to exploit the full temporal context, since it misses the paper towel frame. # 5. Discussion and conclusion This paper adapts the powerful BERT model to learn a joint visual-linguistic representation for video. Our exper- imental results demonstrate that we are able to learn high- level semantic representations, and we outperform the state- of-the-art for video captioning on the YouCook II dataset. We also show that this model can be used directly for open- vocabulary classification, and that its performance grows monotonically with the size of training set. This work is a first step in the direction of learning such joint representations. For many applications, includ- ing cooking, it is important to use spatially fine-grained vi- sual representations, instead of just working at the frame or clip level, so that we can distinguish individual objects and their attributes. We envision either using pretrained object detection and semantic segmentation models, or using unsu- pervised techniques for broader coverage. We also want to explicitly model visual patterns at multiple temporal scales, instead of our current approach, that skips frames but builds a single vocabulary. Beyond improving the model, we plan to assess our ap- proach on other video understanding tasks, and on other do- mains besides cooking. (For example, we may use the re- cently released COIN dataset of manually labeled instruc- tional videos [26].) We believe the future prospects for large scale representation learning from video and language look quite promising. Acknowledgements. We would like to thank Jack Hessel, Bo Pang, Radu Soricut, Baris Sumengen, Zhenhai Zhu, and the BERT team for sharing amazing tools that greatly fa- cilitated our experiments; Justin Gilmer, Abhishek Kumar, David Ross, and Rahul Sukthankar for helpful discussions. Chen would like to thank Y. M. for inspiration. # References [1] YouTube Data API. https://developers.google. com/youtube/v3/docs/captions. 5 [2] Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. Unsu- pervised learning from narrated instruction videos. In CVPR, 2016. 3 [3] Yusuf Aytar, Carl Vondrick, and Antonio Torralba. Sound- net: Learning sound representations from unlabeled video. In NeurIPS, 2016. 3 [4] Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H Campbell, and Sergey Levine. Stochastic variational video prediction. In ICLR, 2018. 2 [5] Emily Denton and Rob Fergus. Stochastic video generation with a learned prior. In ICML, 2018. 2 [6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina BERT: Pre-training of deep bidirectional arXiv preprint Toutanova. transformers for language understanding. arXiv:1810.04805, 2018. 2, 3, 5 [7] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Car- oline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. 
AVA: A video dataset of spatio-temporally localized atomic visual actions. In CVPR, 2018. 2 [8] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic align- ments for generating image descriptions. In CVPR, 2015. 3 [9] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics hu- man action video dataset. arXiv preprint arXiv:1705.06950, 2017. 2, 5 [10] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-Captioning events in videos. In ICCV, 2017. 2, 3 [11] Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C Berg, and Tamara L Berg. Baby talk: Understanding and generating image descriptions. In CVPR, 2011. 3 [12] Guillaume Lample and Alexis Conneau. Cross-lingual lan- guage model pretraining. arXiv preprint arXiv:1901.07291, 2019. 3 [13] Alex X Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, and Sergey Levine. Stochastic adversarial video prediction. arXiv:1804.01523, 2018. 2 [14] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out, 2004. 7 [15] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Neural baby talk. In CVPR, 2018. 3 [16] Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin Murphy. What’s cookin’? interpreting cooking videos using text, speech and vision. In NAACL, Mar. 2015. 3 [17] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016. 2 [18] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuf- fle and learn: unsupervised learning using temporal order verification. In ECCV, 2016. 3 [19] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ra- makrishnan, Sarah Adel Bargal, Yan Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, et al. Moments in time dataset: one million videos for event understanding. TPAMI, 2019. 2 [20] Andrew Owens, Phillip Isola, Josh McDermott, Antonio Tor- ralba, Edward H Adelson, and William T Freeman. Visually indicated sounds. In CVPR, 2016. 3 [21] Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. In ECCV, 2016. 3 [22] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gard- ner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In NAACL, 2018. 3 [23] Marc Aurelio Ranzato and Alex Graves. Deep unsupervised learning. NIPS Tutorial, 2018. 3 [24] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhi- nav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017. 1 [25] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. 5 [26] Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. COIN: A large-scale dataset for comprehensive instructional video analysis. In CVPR, 2019. 8 [27] Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. MoCoGAN: Decomposing motion and content for video generation. In CVPR, 2018. 2 [28] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszko- reit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. 
3 [29] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evalua- tion. In CVPR, 2015. 7 [30] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. An- In ticipating visual representations from unlabeled video. CVPR, 2016. 2 [31] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In NeurIPS, 2016. 2 [32] Jacob Walker, Carl Doersch, Abhinav Gupta, and Martial Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In ECCV, 2016. 2 [33] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap be- arXiv preprint tween human and machine translation. arXiv:1609.08144, 2016. 5 [34] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning for video understanding. In ECCV, 2018. 5, 7, 8 [35] Tianfan Xue, Jiajun Wu, Katherine Bouman, and Bill Free- man. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In NIPS, 2016. 2 [36] Hang Zhao, Zhicheng Yan, Heng Wang, Lorenzo Torresani, and Antonio Torralba. Slac: A sparsely labeled dataset arXiv preprint for action classification and localization. arXiv:1712.09374, 2017. 2 [37] Luowei Zhou, Nathan Louis, and Jason J Corso. Weakly- supervised video object grounding from text by loss weight- ing and object interaction. In BMVC, 2018. 6 [38] Luowei Zhou, Chenliang Xu, and Jason J Corso. Towards automatic learning of procedures from web instructional videos. In AAAI, 2018. 2, 3, 5 [39] Luowei Zhou, Yingbo Zhou, Jason J. Corso, Richard Socher, and Caiming Xiong. End-to-end dense video captioning with masked transformer. In CVPR, 2018. 2, 3, 7, 8 ASR: “This is what happens when you play with dough think of yourselves as a kitten who happens to look like Ed Sheeran.” Original Top verbs: make, shape, roll Centroids Top nouns: dough, filling, chicken ASR: “So it's really up to you cut it off get Original this and nice slices.” Top verbs: cut, prepare, make Centroids Top nouns: orange, lemon, tomato ASR: “The less your work is the better on with the flat layer get it nice level as much as you can and again with your frosting.” Original Top verbs: assemble, decorate, Centroids Top nouns: cake, crust, Vania frosting (as ASR: “| highly recommend that you whole sheet just because when make the smaller sushi rolls, fixings tend to fall out.” Original Top verbs: roll, make, cut Top nouns: fish, salmon, Centroids nfl ASR: “The less your work is the better then on with the flat layer get it nice and level as much as you can and then again with your frosting.” Original Top verbs: assemble, decorate, frost Centroids Top nouns: cake, crust, cream Vania frosting (as ASR: “| highly recommend that you use a whole sheet just because when you make the smaller sushi rolls, they the fixings tend to fall out.” Original Top verbs: roll, make, cut Top nouns: fish, salmon, dough Centroids nfl Figure A1: Visualizations for video to text prediction. For each example, we show the key frames from the original video (top left) and the associated ASR outputs (top right), we then show the centroid images of video tokens (bottom left) and the top predicted verbs and nouns by VideoBERT (bottom right). Note that the ASR outputs are not used to predict verbs and nouns. 
[Figure A2: Visualizations for video to video prediction. Given an input video token, we show the top 3 predicted video tokens 2 steps away in the future. We visualize each video token by the centroids.]

[Figure A3: Visualizations for text to video prediction. In particular, we make small changes to the input text, and compare how the generated video tokens vary. We show top 2 retrieved video tokens for each text query. Example queries include "Put the pizza into oven." / "Put the cookies into oven." / "Put the chicken into oven." / "Put the pizza on wooden peel." and "Cut the steak into pieces." / "Cut the carrots into pieces." / "Cut the lettuce into pieces." / "Cut the steak into thin slices."]
{ "id": "1810.04805" }
1904.01557
Analysing Mathematical Reasoning Abilities of Neural Models
Mathematical reasoning---a core ability within human intelligence---presents some unique challenges as a domain: we do not come to understand and solve mathematical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules. In this paper, we present a new challenge for the evaluation (and eventually the design) of neural architectures and similar system, developing a task suite of mathematics problems involving sequential questions and answers in a free-form textual input/output format. The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes. Having described the data generation process and its potential future expansions, we conduct a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and find notable differences in their ability to resolve mathematical problems and generalize their knowledge.
http://arxiv.org/pdf/1904.01557
David Saxton, Edward Grefenstette, Felix Hill, Pushmeet Kohli
cs.LG, stat.ML
null
null
cs.LG
20190402
20190402
9 1 0 2 r p A 2 ] G L . s c [ 1 v 7 5 5 1 0 . 4 0 9 1 : v i X r a Published as a conference paper at ICLR 2019 ANALYSING MATHEMATICAL REASONING ABILITIES OF NEURAL MODELS David Saxton DeepMind [email protected] Edward Grefenstette DeepMind [email protected] Felix Hill DeepMind [email protected] # Pushmeet Kohli DeepMind [email protected] # ABSTRACT Mathematical reasoning—a core ability within human intelligence—presents some unique challenges as a domain: we do not come to understand and solve mathemat- ical problems primarily on the back of experience and evidence, but on the basis of inferring, learning, and exploiting laws, axioms, and symbol manipulation rules. In this paper, we present a new challenge for the evaluation (and eventually the design) of neural architectures and similar system, developing a task suite of math- ematics problems involving sequential questions and answers in a free-form textual input/output format. The structured nature of the mathematics domain, covering arithmetic, algebra, probability and calculus, enables the construction of training and test splits designed to clearly illuminate the capabilities and failure-modes of different architectures, as well as evaluate their ability to compose and relate knowledge and learned processes. Having described the data generation process and its potential future expansions, we conduct a comprehensive analysis of models from two broad classes of the most powerful sequence-to-sequence architectures and find notable differences in their ability to resolve mathematical problems and generalize their knowledge. # INTRODUCTION Deep learning, powered by convolutional and recurrent networks, has had remarkable success in areas involving pattern matching (such as in images (Krizhevsky et al., 2012), machine translation (Bahdanau et al., 2014; Vaswani et al., 2017), and reinforcement learning (Mnih et al., 2015; Silver et al., 2016)). However, deep models are far from achieving the robustness and flexibility exhibited by humans. They are limited in their ability to generalize beyond the environments they have experienced and are extremely brittle in the presence of adversarially constructed inputs (Szegedy et al., 2013). One area where human intelligence still differs and excels compared to neural models is discrete compositional reasoning about objects and entities, that “algebraically generalize” (Marcus, 2003). Our ability to generalise within this domain is complex, multi-faceted, and patently different from the sorts of generalisations that permit us to, for example, translate new sentence of French into English. For example, consider the following question from mathematics, with answer "−70x − 165". What is g(h(f (x))), where f (x) = 2x + 3, g(x) = 7x − 4, and h(x) = −5x − 8? To solve this problem, humans use a variety of cognitive skills: • Parsing the characters into entities such as numbers, arithmetic operators, variables (which together form functions) and words (determining the question). Planning (for example, identifying the functions in the correct order to compose). • Using sub-algorithms for function composition (addition, multiplication). • Exploiting working memory to store intermediate values (such as the composition h(f (x))). 1 Published as a conference paper at ICLR 2019 • Generally applying acquired knowledge of rules, transformations, processes, and axioms. 
In this paper, we introduce a dataset consisting of many different types of mathematics problems, with the motivation that it should be harder for a model to do well across a range of problem types (including generalization, which we detail below) without possessing at least some part of these abilities that allow for algebraic generalization. This domain is an important one for the analysis of neural architectures in general. In addition to providing a wide range of questions, there are several other advantages: Mathematics offers a self-consistent universe; notation is the same across different problem types, which allows for an easily extendable dataset; and rules and methods learnt on one problem type often apply elsewhere. Addition of numbers (for example) obeys the same rules everywhere, and occurs as a “subroutine" in other problems (such as concretely in multiplication, and both concretely and more abstractly in addition of polynomials); models that possess the ability to transfer knowledge will do well on the dataset (and knowledge transfer may be a necessity for solving harder problems). Mathematics is also an interesting domain in its own right; although models solving the mostly school-level problems in this dataset would not themselves have applications, they may lead on to more powerful models that can solve interesting and substantial new mathematical problems. But more generally, it is no coincidence that experiments seeking to validate new architectures which aim capture algorithmic/systematic reasoning have often been drawn from this domain (Graves et al., 2016; Kaiser & Sutskever, 2015; Joulin & Mikolov, 2015), and thus in providing a large scale training and evaluation framework for such models, we hope to provide a solid foundation upon which to continue such research into machine reasoning beyond mathematics. Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r. Answer: 4 Question: Calculate -841880142.544 + 411127. Answer: -841469015.544 Question: 39. Let w(j) = q(x(j)). Answer: 54*a - 30 Question: Let e(l) = l - 6. Is 2 a factor of both e(9) and 2? Answer: False Question: Let u(n) = -n**3 - n**2. = -118*e(j) + 54*u(j). What is the derivative of l(a)? Answer: 546*a**2 - 108*a - 118 Question: Three letters picked without replacement from qqqkkklkqkkk. Give prob of sequence qql. Answer: 1/110 # Figure 1: Examples from the dataset. 1.1 OUR CONTRIBUTIONS Dataset and generalization tests We release1 a sequence-to-sequence dataset consisting of many different types of mathematics questions (see Figure 1) for measuring mathematical reasoning, with the provision of both generation code and pre-generated questions. The dataset comes with two sets of tests: interpolation tests, one for each type of question occurring in the training set; and extrapolation tests, that measure generalization along various axes of difficulty to beyond that seen during training. We include extrapolation tests as an additional measure of whether models are employing abilities that allow them to algebraically generalize. Experiments and model analysis We perform an experimental evaluation to investigate the alge- braic abilities of state-of-the-art neural architectures, and show that they do well on some types of questions, but certainly not all, and furthermore have only moderate amounts of generalization. We give some insights into how they learn to answer mathematics questions, and their failure modes. 
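The function-composition question used as the opening example (computing g(h(f(x))) for f(x) = 2x + 3, g(x) = 7x − 4, h(x) = −5x − 8, with the stated answer −70x − 165) can be checked mechanically. The short sympy snippet below is purely illustrative and is not part of the released generation code.

```python
import sympy as sp

x = sp.symbols("x")
f = 2 * x + 3
g = lambda e: 7 * e - 4
h = lambda e: -5 * e - 8

# Compose g(h(f(x))) and expand, reproducing the answer quoted in the text.
result = sp.expand(g(h(f)))
print(result)  # -> -70*x - 165
```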
1Dataset will be available at https://github.com/deepmind/mathematics_dataset 2 Published as a conference paper at ICLR 2019 1.2 RELATED WORK There are various papers with datasets with a discrete reasoning nature. Kaiser & Sutskever (2015) use an adapted convolutional architecture to solve addition and multiplication with good generalization; Allamanis et al. (2016) and Evans et al. (2018) use tree networks to predict polynomial or logical equivalence or logical entailment; Selsam et al. (2018) uses message passing networks with a bipartite graph structure to decide satisfiability in formulas in conjunctive normal form, and so on. The difference between those problems and the dataset in this paper is that the former all have a single well-defined input structure that can be easily mapped into narrow architectures suited to the problem structure, avoiding the need for general reasoning skills like parsing or generic working memory. Zaremba & Sutskever (2014) analyze the ability of LSTMs to map short Python programs (addition or for-loops) to their output. Some mathematics problems are of a similar imperative nature (e.g. arith- metic), but we also cover many other types of problems, so our dataset subsumes learning-to-execute. There are a few other synthetically generated datasets designed to assess reasoning of some form. The bAbI dataset of Weston et al. (2015) consists of textual questions, testing the ability to extract knowledge from a story-like sequence of questions. The CLEVR dataset of Johnson et al. (2017) consists of image-question pairs, where the image is of a set of objects, and the question asks for some property of the scene; this dataset is designed to assess visual analysis. Santoro et al. (2018b) use Raven’s progressive matrix puzzles to measure abstract reasoning of networks. There has also been a recent interest in solving algebraic word problems. These questions tend to be crowd sourced or obtained from exercise books, and existing datasets include Allen Institute for AI (2014); Kushman et al. (2014); Huang et al. (2016); Upadhyay & Chang (2016); Wang et al. (2017); Ling et al. (2017). These range in size from hundreds to up to one hundred thousand examples, with different variations and focuses; for example, containing supervised “answer rationale", or focusing on more narrow types of problems, or additionally containing geometry problems (although some of these are too small to train deep learning models without extensive prior mathematical knowledge). Our dataset differs from these in that our focus is mathematical reasoning rather than linguistic comprehension; we cover more areas of mathematics, but with less variation in problem specification, and we see mathematical reasoning as a partially orthogonal and complementary direction to linguistic understanding existing in these other datasets. 2 THE DATASET 2.1 DESIGN CHOICES Modular structure and procedural generation There are two choices for obtaining mathematical questions: either crowd-sourced, or synthetically generated. While crowd-sourcing has the advantage of introducing linguistic diversity, as well as a diversity of problem types, it is difficult to collect and validate such data at scale. 
In contrast, procedural generation is sufficient for our purposes in most respects: it (1) easily provides a larger number of training examples, with (2) precise controls over difficulty levels, permitting (3) analysis of performance by question type, and (4) better guarantees on question correctness, with (5) potential for more efficient model training by varying the time spent on each module, and (6) ease of testing generalization (since one can precisely vary different axes of difficulty in different question types).

Freeform question/answers Given that we synthetically generate the data, we could of course provide the questions as parsed into some structure appropriate for each question type (e.g. a tree or graph). However, we opt for freeform—as a sequence of characters—because (1) it is a powerful and flexible format, allowing us to express many question types (whereas trees or graphs are only appropriate for some problems), (2) the ability to properly semantically parse is a non-negligible part of cognition, and (3) sequences are much simpler objects than graphs and trees, which simplifies development of the dataset and models. Perhaps most importantly, using freeform inputs and outputs means that the input and output space for models evaluated on the benchmark tasks in this dataset is the same as required to address a variety of “real world” mathematics exam questions. While it is not plausible that models trained on our data would perform well on such actual tests, due to the restricted linguistic variation in how questions and answers are formulated, it is nonetheless a desirable feature of our data that future models which do attack real-world tests can be “unit tested” on our benchmarks during their development.

Compositionality The questions can be seen as mappings with input and output types. For example, function evaluation maps a function and an integer to another integer, function composition maps a pair of functions to a function, and so on. We use this to generate additional composed questions by chaining modules with matching types, where intermediate values from one sub-problem are used as inputs to the next sub-problem. For example, for a single intermediate value, this composition may be phrased as Let x = <description>. <question(x)>. See Figure 1 for examples. This makes the dataset more interesting and challenging in several ways. Many rules in mathematics appear when different concepts are composed. For example, when differentiation is composed with function composition, the chain rule appears; when addition is composed with factorization, distributivity can emerge; and so on. Composition also moves the questions away from pure perception, since intermediate results must be stored (working memory) and manipulated (reuse of sub-routines).

2.2 BRIEF OVERVIEW OF MODULES

What types of mathematics problems should be included in the dataset? The original content was based on a national school mathematics curriculum (up to age 16), restricted to textual questions (thus excluding geometry questions), which gave a comprehensive range of mathematics topics that worked together as part of a learning curriculum. We extended this with additional areas that offer good tests for algebraic reasoning. We cover the following areas (Appendix B contains the full list of modules).
(1) Algebra, such as solving linear systems in 1 and 2 variables, finding roots of polynomials (presented in simplified or unsimplified forms), and extending sequences and finding their general form. (2) Arithmetic, such as basic addition and subtraction, evaluating nested expressions, and simplifying expressions involving square roots. (3) Calculus and differentiating polynomials. (4) Comparisons, such as establishing which of two numbers is bigger, or sorting a list of numbers, or finding the closest number to a given one in a list. (5) Measurement, such as converting between different length scales, and calculating time intervals. (6) Numbers, such as finding divisors, rounding, place value, factorization, and primality. (7) Manipulating polynomials, such as simplification, expansion, evaluation, composition, and addition. (8) Probability, such as the probability of obtaining a given sequence when sampling without replacement.

Many modules participate in composition where possible. For example, one might have to compare two numbers (a composition module), one of which is the solution of a linear system, and the other is the evaluation of a function.

2.3 GENERATING DIVERSE QUESTIONS FOR TRAINING AND TESTING

Most questions involve evaluating one or more randomly generated mathematical objects (e.g. arithmetic expressions, linear systems, polynomials, compositions of these, etc). The biggest challenge in producing the dataset is generating diverse questions that are neither trivial nor impossibly hard. During testing we also want to generate questions that have not been seen in training. These requirements rule out naive unconditional sampling of such objects. For example, the product of a sequence of rationals will evaluate to zero if any of the rationals are zero; an arithmetic expression generated by randomly sampling a binary tree will often evaluate to zero or some large number; and a linear system in two variables will rarely have integer solutions. So instead, for most modules we employ a different approach: we first sample the answer, and then work backwards to generate the question (including if we are doing module composition). The details of how we do this are diverse and depend on the question type, and we refer the reader to the generation code for more detail.

Training and interpolation tests Per module, we generate 2 × 10^6 train questions, and 10^5 test (interpolation) questions. To ensure the train questions are diverse, and the test questions are distinct from the train questions, the generation code guarantees lower bounds on the probability of a given question appearing. (Post-generation hashing does not in general work, since the same question may occur with linguistic variation, although we use it in a few limited cases.) We generate test questions such that any particular question has a probability of at most 10^-8, thus guaranteeing that at most 10^-8 × 2 × 10^6 = 2% of the test questions will have already appeared in the training data. (To be more precise, each module generator accepts an input α, such that the output question has probability at most 10^-α; train questions are generated by sampling α uniformly from [3, 10] (typically), and test questions are generated by taking α = 8.) The various mechanisms by which we achieve these probabilistic guarantees are again diverse and question dependent, so again we refer the reader to the generation code.
But to give an example, many questions involve one or more integers (or rationals, i.e. quotients of two integers). If we need to generate n integers, then provided the i-th integer is sampled from a set of size at least a_i, the probability of any given sequence of integers is at most ∏_i 1/a_i. We then simply need to choose these sets of integers appropriately (e.g. a symmetric set about zero, or the first positive integers, or integers coprime to some other integer, etc).

Extrapolation tests Mathematical generalization exists along a variety of axes (e.g. length, number of symbols, depth of composition/recursion). We therefore include, in our extrapolation test sets, a range of modules that measure extrapolation along different axes, such as to problems involving larger numbers, more numbers, more compositions, and (for probability questions) larger samples. Full details are in Appendix B.

2.4 EVALUATION CRITERION

Given a model that maps an input question to an output answer, we score each question either 0 or 1 according to whether the answer matches the correct answer character-for-character. The performance on a given test module is the average of this score across all questions. Performance across the interpolation and extrapolation test sets is then the average across all modules inside the test set. This choice of criterion is appropriate given the restricted nature of the answers generated in our dataset (but see Section 5 for possible future extensions).

2.5 RELEASE

We will release 2 × 10^6 training examples and 10^4 pre-generated test examples per module upon publication of this paper. In the dataset, the questions and answers use a common alphabet of size 95 (upper and lower case characters, digits, and punctuation characters). The questions are capped to 160 characters in length and answers to 30, which is sufficient for a wide range of question types. Mathematical equations are formatted according to Python/SymPy (Meurer et al., 2017) conventions (for example, ** is used for power rather than ^); these rules are consistent for all modules.

# 3 MODELS EXAMINED

Due to the construction process underlying this dataset, there are a large number of existing models that could be adapted, purpose-built, or tailored to solve the sort of problems we present here, especially with the help of symbolic solvers or computer algebra systems. Setting aside the possible brittleness or limited scalability of traditional symbolic approaches as the complexity or linguistic diversity of questions and answers grows, we are interested here in evaluating general-purpose models, rather than ones with their mathematics knowledge already built in. What makes such models (which are invariably neural architectures) so ubiquitous, from translation to parsing to image captioning, is the lack of bias these function approximators present, due to having relatively little (or no) domain-specific knowledge encoded in their design. Although there are some neural network-driven approaches with direct access to mathematical operations (such as addition or multiplication (Ling et al., 2017), or more complex mathematical templates as in (Kushman et al., 2014)), which would undoubtedly perform competitively on the tasks we present in this paper, we limit ourselves to general sequence-processing architectures that are used in other non-mathematical tasks, so as to present the most general baselines possible for future comparison.
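All of the models described next consume the question and emit the answer as raw character sequences over the 95-character alphabet of Section 2.5. The sketch below shows one way the 1-hot character encoding they operate on might look; the choice of printable ASCII as the concrete alphabet and its ordering are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

# One plausible concrete choice of the 95-character alphabet (printable ASCII);
# the exact ordering used in the paper's models is an assumption.
ALPHABET = [chr(c) for c in range(32, 127)]
CHAR_TO_ID = {ch: i for i, ch in enumerate(ALPHABET)}
MAX_QUESTION_LEN, MAX_ANSWER_LEN = 160, 30

def one_hot(text, max_len):
    """Encode a question or answer as a (max_len, 95) array of 1-hot rows."""
    x = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for i, ch in enumerate(text[:max_len]):
        x[i, CHAR_TO_ID[ch]] = 1.0
    return x

question = one_hot("What is the tens digit of 3585792?", MAX_QUESTION_LEN)
answer = one_hot("9", MAX_ANSWER_LEN)
```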
We investigate two (broad classes of) models that have demonstrated themselves to be state-of-the-art on sequence-to-sequence problems: recurrent neural architectures, and the more recently introduced attentional/transformer (Vaswani et al., 2017) architecture. We also tried to use Differentiable Neural Computers (Graves et al., 2016), a recurrent model with an “external memory” (whose size is independent of the number of parameters in the network). In theory this could be well suited to solving mathematical questions, since it can store intermediate values for later usage. However, we were unable to get decent performance out of it: even with hyperparameter sweeps over the number and size of memory slots, etc., we were only able to reach 10% validation performance after a day of training, whereas most models obtain this in less than an hour.

Figure 2: The attentional LSTM and Transformer architectures both consist of an encoder, which parses the question, and a decoder, which maps the correct answer right-shifted by 1 to a distribution over the next character in the answer at every position (thus allowing auto-regressive prediction). (a) The Attentional LSTM encodes the question to a sequence of (key, value) pairs, which are then attended over by the decoder. (b) The Transformer has several stages of self- and input-attention; see (Vaswani et al., 2017) for details.

# 3.1 RECURRENT ARCHITECTURES

The LSTM (Hochreiter & Schmidhuber, 1997) is a powerful building block of sequence-to-sequence models that have achieved state-of-the-art results in many domains, and despite its simplicity, it continues to be a central building block for recurrent neural networks. We benchmark two standard recurrent architectures (described in more detail in Appendix A). The first and simplest model we analyze (referred to in the results below as “Simple LSTM”) simply feeds the question into the LSTM one character at a time (using a 1-hot encoding), before outputting the answer one character at a time (the output is a distribution over possible characters, and at every answer step the previous correct answer character is fed in). In the results below, we use a hidden size of 2048 (obtained via a hyperparameter sweep).

The second model we analyze (referred to as “Attentional LSTM”) is the encoder/decoder-with-attention architecture introduced in (Bahdanau et al., 2014), which has been prevalent in neural machine translation. It overcomes two problems with the simple LSTM model above that affect both language translation and mathematical question-answering: (1) information that is presented in the input may be out of order for the purpose of the calculations required for the output (for example, to calculate 8/(1 + 3), the expression 1 + 3 must be evaluated first); and (2) all information for the answer must be contained within the single vector of cell activations of the LSTM, which is a bottleneck.
The attentional LSTM architecture consists of a recurrent encoder that encodes the question to a sequence of keys and values (of the same length as the question), and a recurrent decoder that has as input the correct answer right-shifted by 1, and at every time step attends to the encoded question and outputs a distribution over the next character. We use an encoding LSTM with 512 hidden units and a decoding LSTM with 2048 hidden units. (These settings were obtained using a hyperparameter sweep.)

In both of these architectures, we also employ a simple change that improves performance. The models as described must output the answer straight after parsing the question. However, it may be necessary for the models to expend several computation steps integrating information from the question. To allow for this, we add additional steps (with zero input) before outputting the answer. We also experimented with Adaptive Computation Time as introduced in (Graves, 2016), although this yielded worse results than simply having a fixed number of “thinking” steps.

Recently a recurrent architecture known as the relational recurrent neural network (Santoro et al., 2018a), or relational memory core (RMC), has been developed as a replacement for the LSTM. This recurrent unit has multiple memory slots that interact via attention. This seems like a natural candidate for mathematical reasoning, for example if the model can learn to use the slots to store mathematical entities. However, a comprehensive hyperparameter sweep gave the best setting as 1 memory slot (i.e., without making full use of the RMC). We include these results below, also with 2048 total units, 16 attention heads, and 1 block.

Model                                  Parameters  Interpolation  Extrapolation
Simple LSTM                            18M         0.57           0.41
Simple RMC                             38M         0.53           0.38
Attentional LSTM, LSTM encoder         24M         0.57           0.38
Attentional LSTM, bidir LSTM encoder   26M         0.58           0.42
Attentional RMC, bidir LSTM encoder    39M         0.54           0.43
Transformer                            30M         0.76           0.50

Figure 3: Model accuracy (probability of correct answer) averaged across modules. RMC is the relational recurrent neural network model.

3.2 TRANSFORMER (ATTENTION IS ALL YOU NEED)

The Transformer model (Vaswani et al., 2017) is a sequence-to-sequence model achieving state-of-the-art results in machine translation. We briefly describe it here (see Figure 2b). The model consists of an encoder, which transforms the question (represented as a sequence of vectors) to another sequence of the same length, and a decoder, which transforms the encoded question, together with the answer autoregressively shifted right, into the answer prediction. Internally the input is transformed via attentional mechanisms (both self- and input-attention) and position-wise fully connected layers. We use an embedding size of dmodel = 512, with h = 8 attentional heads, and thus key and value sizes of dk = dv = dmodel/h = 64. Each layer has an intermediate representation with dimension dff = 2048. For translation tasks, the Transformer is typically applied to sequences of embedded words; here we instead treat the question and answer as sequences of characters, since we need to be able to embed arbitrary mathematical expressions.

# 4 ANALYSIS

4.1 TRAINING AND EVALUATION METHODS

As is common in sequence-to-sequence models, the models predict the answer autoregressively using a greedy decoder (outputting the most likely character at each step).
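As an illustration of the greedy autoregressive decoding just described, here is a minimal sketch; `next_char_probs` and the end-of-answer marker are hypothetical stand-ins for whichever trained model (LSTM or Transformer) scores the next character given the question and the answer prefix.

```python
MAX_ANSWER_LEN = 30  # answer length cap from Section 2.5

def greedy_decode(next_char_probs, question, eos="\n"):
    """Autoregressive greedy decoding: emit the most probable character at each
    step, feeding the growing answer prefix back into the model.
    `next_char_probs(question, answer_prefix)` is assumed to return a dict
    mapping each candidate character to its probability."""
    answer = ""
    for _ in range(MAX_ANSWER_LEN):
        probs = next_char_probs(question, answer)
        char = max(probs, key=probs.get)
        if char == eos:  # assumed end-of-answer marker
            break
        answer += char
    return answer
```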
We minimize the sum of negative log probabilities of the correct characters (a per-character cross-entropy loss) via the Adam optimizer (Kingma & Ba, 2014) with learning rate 6 × 10^-4, β1 = 0.9, β2 = 0.995, ε = 10^-9. We use a batch size of 1024 split across 8 NVIDIA P100 GPUs for 500k batches, with absolute gradient value clipping of 0.1.

4.2 RESULTS AND INSIGHTS

Figure 3 shows the average interpolation and extrapolation performance for the different architectures. Full per-module performance results are in Appendix C.

LSTMs vs RMCs Using an RMC with more than one memory slot did not help performance; perhaps it is hard for the RMC to learn to use slots for manipulating mathematical entities. For a given number of hidden units, RMCs were more data efficient but trained more slowly (since they had more parameters), and LSTMs had better asymptotic performance.

Simple vs attentional LSTM The attentional LSTM and the simple LSTM have similar performance. One might suspect that the attentional LSTM does nothing; however, this is not the case, since a simple LSTM model of the same size as the parsing LSTM obtains much worse performance. We speculate that the attentional model is not learning to algorithmically parse the question, and so the ability to change attention focus per step does not count for as much.

Number of thinking steps For the attentional LSTM model, we observed that increasing the number of “thinking” steps (as defined above) from 0 up to 16 increased performance.

Transformer vs best non-transformer model The Transformer performs the same as or significantly better than recurrent models across nearly all modules. Both architectures have a comparable number of parameters. One might a priori expect the LSTM to perform better, since its sequential architecture is perhaps more similar to the sequential reasoning steps that a human performs. However, evidence above and below suggests that neither of the networks is doing much “algorithmic reasoning”, and the Transformer has various advantages over LSTM architectures, such as (1) doing more calculations with the same number of parameters, (2) having a shallower architecture (with better gradient propagation), and (3) having an internal sequential “memory”, which is more predisposed to mathematical objects like sequences of digits.

Easiest maths for neural networks The easiest question types were finding the place value in a number, and rounding decimals and integers, on which all models got nearly perfect scores. Questions involving comparisons also tended to be quite easy, possibly because such tasks are quite perceptual (e.g. comparing lengths or individual digits). This success includes questions with module composition, for example Let k(c) = -611*c + 2188857. Is k(-103) != 2251790? (False), and mixtures of decimals and rationals, for example Sort -139/4, 40.8, -555, 607 in increasing order. Overall it seems that magnitude is easy for neural networks to learn.

Hardest maths for neural networks Perhaps not surprisingly, some of the hardest modules include more number-theoretic questions that are also hard for humans, such as detecting primality and factorization. The Transformer model still gives plausible-looking answers, such as factoring 235232673 as 3, 11, 13, 19, 23, 1487 (the correct answer is 3, 13, 19, 317453).
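Since the dataset already follows SymPy formatting conventions, answers of this kind are easy to check programmatically. The snippet below is a small illustration, not part of the dataset tooling, that verifies the factorization example above.

```python
import sympy

n = 235232673
print(sorted(sympy.factorint(n)))                  # prime factors of n; matches the quoted correct answer
print(sympy.prod([3, 13, 19, 317453]) == n)        # True
print(sympy.prod([3, 11, 13, 19, 23, 1487]) == n)  # False: the model's answer is not even consistent
```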
The Transformer model has a performance of 90% or more on the “add or subtract several numbers" module and the “multiply or divide several numbers" module (which is just addition and subtraction in log space). However on the mixed arithmetic module (mixing all four operations together with parentheses), the performance drops to around 50%. (Note the distribution of the value of the expression is the same for all these modules, so it is not the case that difficulty increases due to different answer magnitudes.) We speculate that the difference between these modules in that the former can be computed in a relatively linear/shallow/parallel manner (so that the solution method is relatively easier to discover via gradient descent), whereas there are no shortcuts to evaluating mixed arithmetic expressions with parentheses, where intermediate values need to be calculated. This is evidence that the models do not learn to do any algebraic/algorithmic manipulation of values, and are instead learning relatively shallow tricks to obtain good answers on many of the modules. The same holds true for other modules that require intermediate value calculation, such as evaluating polynomials, and general composition. Performance on polynomial manipulation One notable difference between the Transformer and the recurrent models was polynomial manipulation. The Transformer did significantly better on polynomial expansion, collecting terms, addition, composition, differentiation, and extracting named coefficients. Speculatively, the parallel sequential nature of the Transformer is better at manipulating polynomials where several coefficients must be kept in memory simultaneously where they can interact. Other insights Examining the performance on adding multiple integers, we tested the models on adding 1 + 1 + · · · + 1, where 1 occurs n times. Both the LSTM and Transformer models gave the correct answer for n ≤ 6, but the incorrect answer of 6 for n = 7 (seemingly missing one of the 1s), and other incorrect values for n > 7. (The models are trained on sequences of random integers up to length 10, and are capable of giving the correct answer on longer sequences of far bigger numbers, for example -34 + 53 + -936 + -297 + 162 + -242 + -128.) We do not have a good explanation for this behaviour; one hypothesis is that the models calculate subsums and then combine these, but rely on different input numbers to align the subsums, and fail when the input is “camouflaged” by consisting of the same number repeated multiple times. 8 Published as a conference paper at ICLR 2019 Robustness to question phrasing Although we do not train for linguistic variation and do not expect models to be robust to it, the failure modes are still interesting. For example, on one trained Transformer, the question “Calculate 17 * 4.” gave the correct answer 68, but the same question without the final full stop gave 69. Extrapolation performance Modules on which good extrapolation performance was obtained include rounding larger numbers than seen during training, comparing more numbers, and adding and subtracting larger numbers. However for example models completely failed to add together more numbers than seen during training, which agrees with the suspicion that models have learnt to add numbers in parallel rather than calculating subsums. 
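Probes like the repeated-ones test above are straightforward to script against any trained model. The sketch below is illustrative only; the exact surface phrasing of the probe questions used in the paper is an assumption.

```python
import random

def repeated_ones_probe(n):
    """Adding n copies of 1, e.g. 'Calculate 1 + 1 + 1.' for n = 3."""
    return "Calculate " + " + ".join(["1"] * n) + ".", str(n)

def random_sum_probe(n, rng=random.Random(0), lo=-1000, hi=1000):
    """Matched control with distinct-looking integers, closer to the training data."""
    terms = [rng.randint(lo, hi) for _ in range(n)]
    return "Calculate " + " + ".join(str(t) for t in terms) + ".", str(sum(terms))

# e.g. compare a trained model's accuracy on repeated_ones_probe(7)
# against random_sum_probe(7) to reproduce the failure described above.
```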
# 4.3 PERFORMANCE ON REAL MATHEMATICS QUESTIONS

To provide an external benchmark for the capability of neural network models trained on our dataset, we tested the trained Transformer model on a set of 40 questions selected from publicly-available maths exams for British 16-year-old schoolchildren2. These questions were gathered from four exam papers after excluding those involving graphs, tables or other figures; the full set is reproduced in the supplementary materials. On these exam questions, the Transformer model got 14/40 questions correct, which is (proportionally) equivalent to that of an E grade student3. The model showed some promise by correctly solving the simultaneous equations 5x + 2y = 11 and 4x − 3y = 18, and by identifying the correct next number in the sequence 3, 9, 15, 21, 27. The disappointing grade also assumes that no marks were awarded for plausible but incorrect attempts, such as the factorisation (y − 2)(y + 4) of the expression y² − 10y + 16. Overall, this analysis suggests that, with knowledge of the exam syllabus to inform the training data generation, and the ability to receive graphical inputs, it may be possible to encode the knowledge necessary to excel at unseen exams in an out-of-the-box neural network, although the pattern of errors and the ability to generalise would likely differ from typical school-age students.

# 5 CONCLUSIONS AND FUTURE WORK

We have created a dataset on which current state-of-the-art neural models obtain moderate performance. Some modules remain largely unsolved (for example those requiring several intermediate calculations, which a human would find easy), and extrapolation performance is low. We hope this dataset will become a robust, analyzable benchmark for developing models with more algebraic/symbolic reasoning abilities.

The dataset is easily extendable, since it is modular, with all modules using a common input/output format and the common language of mathematics. The main restriction is that the answers must be well-determined (i.e. unique), but this still allows for covering a lot of mathematics up to university level. At some point it becomes harder to cover more of mathematics (for example, proofs) while maintaining the sequence-to-sequence format, but hopefully by this point the dataset in its current format will have served its purpose in developing models that can reason mathematically. Alternatively, we could consider methods for assessing answers where there is not a single unique answer; for now the full scope of possibilities is too large to include in this paper, but a few possibilities include metrics such as BLEU (Papineni et al., 2002), extending the data generation process to provide several reference answers, or obtaining human paraphrases following the data augmentation process proposed by Wang et al. (2015).

We have not addressed linguistic variation or complexity in this dataset. Although to some extent linguistic complexity is orthogonal to the difficulty of the maths problems involved, the two cannot be entirely separated. The most obvious example of this for school-level mathematics is in algebraic word problems, where much of the difficulty lies in translating the description of the problem into an algebraic problem. Thus it would be useful to extend the dataset with “linguistic complexity”, where the same underlying mathematical problem is phrased in quite distinct, and not-at-first-obvious, translations.
One option may be to do joint training on this dataset, and that of (Ling et al., 2017); # 2Edexcel exam board Higher Tier, 2012-2013. 3https://qualifications.pearson.com/content/dam/pdf/Support/Grade-boundaries/GCSE/1211-GCSE-Unit- UMS-Boundaries(Science-Mathematics).pdf 9 Published as a conference paper at ICLR 2019 another would be to obtain more question templates via mechanical turking, as proposed by Wang et al. (2015). Finally one completely distinct direction the dataset could be extended is to include visual (e.g. geom- etry) problems as well. For humans, visual reasoning is an important part of mathematical reasoning, even concerning problems that are not specified in a visual format. Therefore we want to develop questions along these lines, including those that require “intermediate visual representations” (in a similar way to how the textual module composition requires intermediate digital representations) and visual working memory. Note that reasoning with intermediate visual representations or ideas is richer than simply analyzing a visual domain (such as is typical in visual question-answering datasets). # REFERENCES Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, and Charles Sutton. Learning continuous semantic representations of symbolic expressions. arXiv preprint arXiv:1611.01423, 2016. # Allen Institute for AI. Project Euclid. http://allenai.org/euclid/, 2014. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. Can neural networks understand logical entailment? arXiv preprint arXiv:1802.08535, 2018. Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska- Barwi´nska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471, 2016. Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735–1780, 1997. Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. How well do computers solve math word problems? large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 887–896, 2016. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 1988–1997. IEEE, 2017. Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in neural information processing systems, pp. 190–198, 2015. Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228, 2015. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolu- tional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 
Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 271–281, 2014. 10 Published as a conference paper at ICLR 2019 Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale genera- tion: Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146, 2017. Gary F Marcus. The algebraic mind: Integrating connectionism and cognitive science. MIT press, 2003. Aaron Meurer, Christopher P Smith, Mateusz Paprocki, Ondˇrej ˇCertík, Sergey B Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K Moore, Sartaj Singh, et al. Sympy: symbolic computing in python. PeerJ Computer Science, 3:e103, 2017. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Association for Computational Linguistics, 2002. Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. Relational recurrent neural networks. arXiv preprint arXiv:1806.01822, 2018a. Adam Santoro, Felix Hill, David Barrett, Ari Morcos, and Timothy Lillicrap. Measuring abstract reasoning in neural networks. In International Conference on Machine Learning, pp. 4477–4486, 2018b. Daniel Selsam, Matthew Lamm, Benedikt Bunz, Percy Liang, Leonardo de Moura, and David L Dill. Learning a sat solver from single-bit supervision. arXiv preprint arXiv:1802.03685, 2018. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484–489, 2016. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. Shyam Upadhyay and Ming-Wei Chang. Annotating derivations: A new evaluation strategy and dataset for algebra word problems. arXiv preprint arXiv:1609.07197, 2016. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz In Advances in Neural Information Kaiser, and Illia Polosukhin. Attention is all you need. Processing Systems, pp. 6000–6010, 2017. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 845–854, 2017. Yushi Wang, Jonathan Berant, and Percy Liang. Building a semantic parser overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pp. 1332–1342, 2015. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015. Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014. 
11 Published as a conference paper at ICLR 2019 # A RECURRENT ENCODER AND DECODER WITH ATTENTION This model consists of an encoder and a decoder (see Figure 2a). The encoder maps the question (as a sequence of characters represented as 1-hot vectors) to a sequence of pairs of keys and values, where each key is a vector of length k and each value is a vector of length v. We take k = v = 256. We experiment with two different encoder cores. (1) An LSTM with hidden size k + v. The hidden state is split to obtain the keys and values. (2) A bidirectional LSTM, i.e. two LSTMs both with hidden size k + v, one operating in reverse. The keys and values are generated by concatenating the hidden states and mapping through a linear transformation. The decoder LSTM has hidden size 2048. At each step, the output of the decoder is passed through a linear transformation to obtain (1) h query vectors each of length k, where h is the number of attention heads, and (2) a logits vector of length 96 (the number of possible answer characters, plus a special ignored character). The query vectors are dot-producted with the keys to obtain a softmax weighting over the encoded question values (the standard attention mechanism, as done by e.g. Vaswani et al. (2017)). At every time step, the input to the decoder LSTM is the result of this attention mechanism (the soft-weighted values), concatenated with the 1-hot embedding of the current answer character. (The answer is right-shifted by 1, so that the LSTM does not get to see the character it is attempting to predict.) In addition we have 15 initial steps where no answer character is fed in to allow the LSTM to integrate information from the question, and the output predictions are ignored. The model is trained using a cross-entropy loss on the output logits for predicting the correct answer. # B AREAS OF MATHEMATICS B.1 ALGEBRA Some of the algebra modules participate in module composition. • linear_1d Solve linear equations in one variable, e.g. solve 2(x − 10) + 3 = 17x + 10 for x. linear_2d Solve simultaneous linear equations in two variables. • polynomial_roots Find roots of polynomials or factorize them, e.g. factorize 2x2 + 5x + 3. • sequence_next_term Find continuations of a sequence given the first few terms. E.g. what comes next in the sequence 2, 6, 12, 20? • sequence_nth_term Find an expression for the nth term in a sequence, given the first few terms. For extrapolation tests, we include: • polynomial_roots_big Same as polynomial_roots, but with polynomials larger than those seen during training. B.2 ARITHMETIC Many of the arithmetic modules participate in module composition. add_or_sub Add or subtract a pair of integers or decimals. • add_or_sub_in_base Add or subtract a pair of integers given in a different base (between 2 and 16). add_sub_multiple Add and subtract multiple integers. • div Divide one integer by another, with the answer a simplified fraction. • mixed Arithmetic involving addition, subtraction, multiplication, division, and brackets. • mul Multiply pair of integers or decimals. • mul_div_multiple Find simplest fraction of expression involving integers, multiplication, division, and brackets. 12 Published as a conference paper at ICLR 2019 nearest_integer_root Calculate the nearest integer to an nth root of another integer. √ • simplify_surd Simplify an expression involving square-roots, e.g. simplify ( √ −9)/( 2 × 12) × −8. 
10 × For extrapolation tests, we include: add_or_sub_big Add or subtract a pair of integers bigger than seen during training. • add_sub_multiple Like add_sub_multiple but with more terms than seen during training. • div_big Divide one integer by another, with bigger integers than seen during training. • mixed_longer Like mixed but with more terms. • mul_big Multiply pair of integers bigger than seen during training. • mul_div_multiple_longer Like mul_div_multiple but with more terms. B.3 CALCULUS The differentiate module fully participates in module composition, accepting inputs from and passing outputs to other modules. • differentiate First and higher order derivatives of multivariate polynomials, either specified directly or as a result of module composition. E.g. let f (x) = 2∗x+3, let g(x) = x∗∗2−17; what is the derivative of f (g(x))? B.4 COMPARISON All comparison modules accept numbers from other modules as inputs. closest Finding the closest to a given number in a list. • kth_biggest Finding the kth biggest or smallest number in a list. • pair Pairwise comparison between pairs of numbers. E.g. which is bigger: 4/37 or 7/65? • sort Sorting lists of numbers into ascending or descending order. For extrapolation tests, we include: closest_more Like closest but with larger lists than seen during training. • kth_biggest_more Like kth_biggest but with larger list. • sort_more Sorting longer lists of numbers than seen during training. B.5 MEASUREMENT • conversion Conversion between different units of length, time, mass, and volume. E.g. how many millilitres are there in 13/8 of a litre? • time Working with clock times: time differences, and time before or after. E.g. how many minutes are there between 8:05 PM and 9:12 PM? For extrapolation tests, we include: • conversion With larger values than seen during training. B.6 NUMBERS All number modules accept numbers from other modules as inputs. base_conversion Conversion between bases (e.g. give 1011001 (base 2) in base 16). • div_remainder Calculate remainders under division. 13 Published as a conference paper at ICLR 2019 gcd Calculating greatest common divisors. • is_factor Recognizing factors, e.g. is 15 a factor of 60? • is_prime Testing for primality. • lcm Calculating least common multiples. • list_prime_factors Factoring numbers into primes. E.g. give the prime factors of 64372. place_value Give the place value of a number, e.g. what is the tens digit of 3585792? • round_number Rounding integers and decimals. E.g. give 432.1058 to three decimal places. For extrapolation tests, we include: • round_number_big Like round_number but with larger numbers than seen during training. • place_value_big Like place_value but with larger numbers than seen during training. B.7 POLYNOMIALS All function modules are fully compositional: they accept functions specified by other questions as inputs, and define functions for use in other modules. • add Adding functions. E.g. calculating 2f (x) + 17g(x) given f and g. collect Simplify polynomial expressions by collecting terms. • compose Calculating the composition of functions. coefficient_named E.g. rearrange (x + 1)(2x + 3) to ax2 + bx + c and give b. evaluate E.g. value of x2y2 + 2xy when x = 2, y = 3. • expand Expand and simplify polynomials, e.g. expand (x + 1)(2x + 3). simplify_power Simplify powers, testing rules of power indices. E.g. simplify x3/x2. 
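Because questions and answers follow Python/SymPy conventions, reference answers for several of the polynomial modules above can be produced or checked with a few lines of SymPy. This sketch is illustrative and is not the dataset's actual generation code.

```python
import sympy

x = sympy.symbols('x')
expr = (x + 1) * (2*x + 3)

print(sympy.expand(expr))                 # 2*x**2 + 5*x + 3  (the 'expand' module)
print(sympy.diff(sympy.expand(expr), x))  # 4*x + 3           (the 'differentiate' module)

# Extracting a named coefficient, as in the 'coefficient_named' module:
poly = sympy.Poly(sympy.expand(expr), x)
print(poly.coeff_monomial(x))             # 5, i.e. 'b' in a*x**2 + b*x + c
```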
B.8 PROBABILITY

There are two modules here, both based on sampling without replacement from a bag of repeated letters, specified using either: (1) counts (e.g. {a: 1, b: 7}), or (2) an unsorted list of letters that requires counting, e.g. ecggccdcdceeeeg.

• swr_p_level_set Calculating the probability of obtaining certain counts of different letters.
• swr_p_sequence Calculating the probability of obtaining a given sequence of letters.

For extrapolation tests, we include the same modules, but with more letters sampled from the bag than seen during training:

• swr_p_level_set_more_samples
• swr_p_sequence_more_samples

# C PER-MODULE PERFORMANCE

Interpolation test performance is shown in Figure 4 and extrapolation test performance is shown in Figure 5. Of the different encoders for the recurrent attention architecture, we show the per-module performance of the bidirectional LSTM encoder, which has the greatest performance.

Figure 4: Interpolation test performance on the different modules (per-module bar chart of P(correct), comparing the Transformer and the Simple LSTM).

Figure 5: Extrapolation test performance on the different modules (per-module bar chart of P(correct), comparing the Transformer, Simple LSTM, and Attentional LSTM).

# D HIGH-SCHOOL MATHEMATICS QUESTIONS

1. Factorise x² + 7x
2. Factorise y² − 10y + 16
3. Factorise 2t² + 5t + 2
4. Simplify ty + (+8)
5. Solve 2a? * 9x + 7
6. Solve % ate − 7 = 0
7. Expand 3(x ': 4) + 2(5a − 1)
8. Expand (2a + 1)(a − 4)
9. Factorise 6y² − 9ay
10. Solve 3p − 7 > 11
11. A = 4bc, A = 100, b = 2, calculate c
12. Make k the subject of m = ,/ (#4)
13. Expand (p + 9)(p − 4)
14. Solve (ow=8) = 4w + 2
15. Factorise x² − 49
16. Expand (a − 7)(x + 1)
17. Simplify J9aFy> assuming x is positive.
18. p = one a = 8.5, y = 4, find p
19. Make t the subject of 2(d − t) = 4t + 7
20. Solve 3a? − da − 2 = 0
21. Expand 3(2y − 5)
22. Factorise 82? + 4ry
23. Make h the subject of t = oh
24. Simplify (m7?)>
25. Factorise x² + 3x − 10
26. Solve 5x + 2y = 11 and 4x − 3y = 18 for x
27. Simplify (x² + 3x − 4)/(2x² − 5x + 3)
28. Simplify (x + 2)/3 + (x − 2)/4
29. Expand 4(3x + 5)
30. Expand 2(x − 4) + 3(x + 5)
31. Expand (x + 4)(x + 6)
32. Simplify m⁵/m³
33. Simplify (5x⁴y³)(x²y)
34. Solve 3x + 2y = 4 and 4x + 5y = 17 for x
35. Complete the sequence: 3, 9, 15, 21, 27
36. Simplify 5x + 4y + x − 7y
37. Complete the sequence: 3, 10, 17, 24
38. Simplify x¹⁰x³
39. Solve 7 * (x + 2) = 7
40. Factorise x² − 12x + 27
{ "id": "1609.07197" }
1904.01201
Habitat: A Platform for Embodied AI Research
We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast -- when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms -- defining tasks (e.g., navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or 'merely' impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works and find evidence for the opposite conclusion -- that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} x {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.
http://arxiv.org/pdf/1904.01201
Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, Dhruv Batra
cs.CV, cs.AI, cs.CL, cs.LG, cs.RO
ICCV 2019
null
cs.CV
20190402
20191125
# Habitat: A Platform for Embodied AI Research

Manolis Savva1,4*, Abhishek Kadian1*, Oleksandr Maksymets1*, Yili Zhao1, Erik Wijmans1,2,3, Bhavana Jain1, Julian Straub2, Jia Liu1, Vladlen Koltun5, Jitendra Malik1,6, Devi Parikh1,3, Dhruv Batra1,3
1Facebook AI Research, 2Facebook Reality Labs, 3Georgia Institute of Technology, 4Simon Fraser University, 5Intel Labs, 6UC Berkeley
https://aihabitat.org

# Abstract

We present Habitat, a platform for research in embodied artificial intelligence (AI). Habitat enables training embodied agents (virtual robots) in highly efficient photorealistic 3D simulation. Specifically, Habitat consists of: (i) Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, sensors, and generic 3D dataset handling. Habitat-Sim is fast – when rendering a scene from Matterport3D, it achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. (ii) Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms – defining tasks (e.g. navigation, instruction following, question answering), configuring, training, and benchmarking embodied agents. These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or ‘merely’ impractical. Specifically, in the context of point-goal navigation: (1) we revisit the comparison between learning and SLAM approaches from two recent works [20, 16] and find evidence for the opposite conclusion – that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations, and (2) we conduct the first cross-dataset generalization experiments {train, test} × {Matterport3D, Gibson} for multiple sensors {blind, RGB, RGBD, D} and find that only agents with depth (D) sensors generalize across datasets. We hope that our open-source platform and these findings will advance research in embodied AI.

# 1. Introduction

The embodiment hypothesis is the idea that intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity.
Smith and Gasser [26]

Imagine walking up to a home robot and asking ‘Hey – can you go check if my laptop is on my desk? And if so, bring it to me.’ In order to be successful, such a robot would need a range of skills – visual perception (to recognize scenes and objects), language understanding (to translate questions and instructions into actions), and navigation in complex environments (to move and find things in a changing environment).

While there has been significant progress in the vision and language communities thanks to recent advances in deep representations [14, 11], much of this progress has been on ‘internet AI’ rather than embodied AI. The focus of the former is pattern recognition in images, videos, and text on datasets typically curated from the internet [10, 18, 4]. The focus of the latter is to enable action by an embodied agent (e.g. a robot) in an environment. This brings to the fore active perception, long-term planning, learning from interaction, and holding a dialog grounded in an environment.

A straightforward proposal is to train agents directly in the physical world – exposing them to all its richness. This is valuable and will continue to play an important role in the development of AI.
However, we also recognize that training robots in the real world is slow (the real world runs no faster than real time and cannot be parallelized), dangerous (poorly-trained agents can unwittingly injure themselves, the environment, or others), resource intensive (the robot(s) and the environment(s) in which they execute demand resources and time), difficult to control (it is hard to test corner-case scenarios as these are, by definition, infrequent and challenging to recreate), and not easily reproducible (replicating conditions across experiments and institutions is difficult).

*Denotes equal contribution.

Figure 1: The ‘software stack’ for training embodied agents involves (1) datasets providing 3D assets with semantic annotations, (2) simulators that render these assets and within which an embodied agent may be simulated, and (3) tasks that define evaluatable problems that enable us to benchmark scientific progress. Prior work (highlighted in blue boxes) has contributed a variety of datasets, simulation software, and task definitions. We propose a unified embodied agent stack with the Habitat platform, including generic dataset support, a highly performant simulator (Habitat-Sim), and a flexible API (Habitat-API) allowing the definition and evaluation of a broad set of tasks.

We aim to support a complementary research program: training embodied agents (e.g. virtual robots) in rich realistic simulators and then transferring the learned skills to reality. Simulations have a long and rich history in science and engineering (from aerospace to zoology). In the context of embodied AI, simulators help overcome the aforementioned challenges – they can run orders of magnitude faster than real-time and can be parallelized over a cluster; training in simulation is safe, cheap, and enables fair comparison and benchmarking of progress in a concerted community-wide effort. Once a promising approach has been developed and tested in simulation, it can be transferred to physical platforms that operate in the real world [6, 15].

Datasets have been a key driver of progress in computer vision, NLP, and other areas of AI [10, 18, 4, 1]. As the community transitions to embodied AI, we believe that simulators will assume the role played previously by datasets. To support this transition, we aim to standardize the entire ‘software stack’ for training embodied agents (Figure 1): scanning the world and creating photorealistic 3D assets, developing the next generation of highly efficient and parallelizable simulators, specifying embodied AI tasks that enable us to benchmark scientific progress, and releasing modular high-level libraries for training and deploying embodied agents. Specifically, Habitat consists of the following:

1. Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling (with built-in support for Matterport3D, Gibson, and Replica datasets).

2. Habitat-API: a modular high-level library for end-to-end development of embodied AI algorithms – defining embodied AI tasks (e.g. navigation, instruction following, question answering), configuring and training embodied agents (via imitation or reinforcement learning, or via classic SLAM), and benchmarking using standard metrics [2].

The Habitat architecture and implementation combine modularity and high performance. When rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU, which is orders of magnitude faster than the closest simulator. Habitat-API allows us to train and benchmark embodied agents with different classes of methods and in different 3D scene datasets.

These large-scale engineering contributions enable us to answer scientific questions requiring experiments that were till now impracticable or ‘merely’ impractical. Specifically, in the context of point-goal navigation [2], we make two scientific contributions:

1. We revisit the comparison between learning and SLAM approaches from two recent works [20, 16] and find evidence for the opposite conclusion – that learning outperforms SLAM if scaled to an order of magnitude more experience than previous investigations.

2. We conduct the first cross-dataset generalization experiments {train, test} × {Matterport3D, Gibson} for multiple sensors {Blind1, RGB, RGBD, D} × {GPS+Compass} and find that only agents with depth (D) sensors generalize well across datasets.

We hope that our open-source platform and these findings will advance and guide future research in embodied AI.

# 1Blind refers to agents with no visual sensory inputs.

# 2. Related Work

Reality is something you rise above.
Liza Minnelli

The availability of large-scale 3D scene datasets [5, 27, 8] and community interest in active vision tasks led to a recent surge of work that resulted in the development of a variety of simulation platforms for indoor environments [17, 7, 13, 24, 29, 3, 30, 31, 23]. These platforms vary with respect to the 3D scene data they use, the embodied agent tasks they address, and the evaluation protocols they implement. This surge of activity is both thrilling and alarming. On the one hand, it is clearly a sign of the interest in embodied AI across diverse research communities (computer vision, natural language processing, robotics, machine learning). On the other hand, the existence of multiple differing simulation environments can cause fragmentation, replication of effort, and difficulty in reproduction and community-wide progress. Moreover, existing simulators exhibit several shortcomings:

– Tight coupling of task (e.g. navigation), simulation platform (e.g. GibsonEnv), and 3D dataset (e.g. Gibson). Experiments with multiple tasks or datasets are impractical.
– Hard-coded agent configuration (e.g. size, action-space). Ablations of agent parameters and sensor types are not supported, making results hard to compare.
– Suboptimal rendering and simulation performance. Most existing indoor simulators operate at relatively low frame rates (10-100 fps), becoming a bottleneck in training agents and making large-scale learning infeasible. Takeaway messages from such experiments become unreliable – has the learning converged enough to trust the comparisons?
– Limited control of environment state.
The structure of the 3D scene in terms of present objects cannot be programmatically modified (e.g. to test the robustness of agents).

Most critically, work built on top of any of the existing platforms is hard to reproduce independently from the platform, and thus hard to evaluate against work based on a different platform, even in cases where the target tasks and datasets are the same. This status quo is undesirable and motivates the Habitat effort. We aim to learn from the successes of previous frameworks and develop a unifying platform that combines their desirable characteristics while addressing their limitations. A common, unifying platform can significantly accelerate research by enabling code re-use and consistent experimental methodology. Moreover, a common platform enables us to easily carry out experiments testing agents based on different paradigms (learned vs. classical) and generalization of agents between datasets.

Figure 2: Example rendered sensor observations for three sensors (color camera, depth sensor, semantic instance mask) in two different environment datasets. A Matterport3D [8] environment is in the top row, and a Replica [28] environment in the bottom row.

The experiments we carry out contrasting learned and classical approaches to navigation are similar to the recent work of Mishkin et al. [20]. However, the performance of the Habitat stack relative to MINOS [24] used in [20] – thousands vs. one hundred frames per second – allows us to evaluate agents that have been trained with significantly larger amounts of experience (75 million steps vs. five million steps). The trends we observe demonstrate that learned agents can begin to match and outperform classical approaches when provided with large amounts of training experience. Other recent work by Kojima and Deng [16] has also compared hand-engineered navigation agents against learned agents, but their focus is on defining additional metrics to characterize the performance of agents and to establish measures of hardness for navigation episodes. To our knowledge, our experiments are the first to train navigation agents provided with multi-month experience in realistic indoor environments and contrast them against classical methods.

# 3. Habitat Platform

What I cannot create I do not understand.
– Richard Feynman

The development of Habitat is a long-term effort to enable the formation of a common task framework [12] for research into embodied agents, thereby supporting systematic research progress in this area.

Design requirements. The issues discussed in the previous section lead us to a set of requirements that we seek to fulfill.

– Highly performant rendering engine: resource-efficient rendering engine that can produce multiple channels of visual information (e.g. RGB, depth, semantic instance segmentation, surface normals, optical flow) for multiple concurrently operating agents.

– Scene dataset ingestion API: makes the platform agnostic to 3D scene datasets and allows users to use their own datasets.

– Agent API: allows users to specify parameterized embodied agents with well-defined geometry, physics, and actuation characteristics.

– Sensor suite API: allows specification of arbitrary numbers of parameterized sensors (e.g. RGB, depth, contact, GPS, compass sensors) attached to each agent.

– Scenario and task API: allows portable definition of tasks and their evaluation protocols.
– Implementation: C++ backend with Python API and interoperation with common learning frameworks, minimizes entry threshold.

– Containerization: enables distributed training in clusters and remote-server evaluation of user-provided code.

– Humans-as-agents: allows humans to function as agents in simulation in order to collect human behavior and investigate human-agent or human-human interactions.

– Environment state manipulation: programmatic control of the environment configuration in terms of the objects that are present and their relative layout.

Design overview. The above design requirements cut across several layers in the ‘software stack’ in Figure 1. A monolithic design is not suitable for addressing requirements at all levels. We, therefore, structure the Habitat platform to mirror this multi-layer abstraction.

At the lowest level is Habitat-Sim, a flexible, high-performance 3D simulator, responsible for loading 3D scenes into a standardized scene-graph representation, configuring agents with multiple sensors, simulating agent motion, and returning sensory data from an agent’s sensor suite. The sensor abstraction in Habitat allows additional sensors such as LIDAR and IMU to be easily implemented as plugins.

Generic scene graphs. Habitat-Sim employs a hierarchical scene graph to represent all supported 3D environment datasets, whether synthetic or based on real-world reconstructions. The use of a uniform scene graph representation allows us to abstract the details of specific datasets, and to treat them in a consistent fashion. Scene graphs allow us to compose 3D environments through procedural scene generation, editing, or programmatic manipulation.

Rendering engine. The Habitat-Sim backend module is implemented in C++ and leverages the Magnum graphics middleware library2 to support cross-platform deployment on a broad variety of hardware configurations. The simulator backend employs an efficient rendering pipeline that implements visual sensor frame rendering using a multi-attachment ‘uber-shader’ combining outputs for color camera sensors, depth sensors, and semantic mask sensors. By allowing all outputs to be produced in a single render pass, we avoid additional overhead when sensor parameters are shared and the same render pass can be used for all outputs. Figure 2 shows examples of visual sensors rendered in three different supported datasets. The same agent and sensor configuration was instantiated in a scene from each of the three datasets by simply specifying a different input scene.

2https://magnum.graphics/

Sensors / Resolution | 1 process: 128 | 256 | 512 | 5 processes: 128 | 256 | 512
RGB | 4,093 | 1,987 | 848 | 10,592 | 3,574 | 2,629
RGB + depth | 2,050 | 1,042 | 423 | 5,223 | 1,774 | 1,348

Table 1: Performance of Habitat-Sim in frames per second for an example Matterport3D scene (id 17DRP5sb8fy) on an Intel Xeon E5-2690 v4 CPU and Nvidia Titan Xp GPU, measured at different frame resolutions and with a varying number of concurrent simulator processes sharing the GPU. See the supplement for additional benchmarking results.

Performance. Habitat-Sim achieves thousands of frames per second per simulator thread and is orders of magnitude faster than previous simulators for realistic indoor environments (which typically operate at tens or hundreds of frames per second) – see Table 1 for a summary and the supplement for more details. By comparison, AI2-THOR [17] and CHALET [31] run at tens of fps, MINOS [24] and Gibson [30] run at about a hundred, and House3D [29] runs at about 300 fps.
Habitat-Sim is 2-3 orders of magnitude faster. By operating at 10,000 frames per second we shift the bottleneck from simulation to optimization for network training. Based on TensorFlow benchmarks, many popular network architectures run at frame rates that are 10-100x lower on a single GPU3. In practice, we have observed that it is often faster to generate images using Habitat-Sim than to load images from disk.

3https://www.tensorflow.org/guide/performance/benchmarks

Efficient GPU throughput. Currently, frames rendered by Habitat-Sim are exposed as Python tensors through shared memory. Future development will focus on even higher rendering efficiency by entirely avoiding GPU-to-CPU memory copy overhead through the use of CUDA-GL interoperation and direct sharing of render buffers and textures as tensors. Our preliminary internal testing suggests that this can lead to a speedup by a factor of 2.

Above the simulation backend, the Habitat-API layer is a modular high-level library for end-to-end development in embodied AI. Setting up an embodied task involves specifying observations that may be used by the agent(s), using environment information provided by the simulator, and connecting the information with a task-specific episode dataset.

– Task: extends the simulator’s Observations class and action space with task-specific ones. The criteria of episode termination and measures of success are provided by the Task. For example, in goal-driven navigation, the Task provides the goal and evaluation metric [2]. To support this kind of functionality, the Task has read-only access to the Simulator and Episode-Dataset.

– Episode: a class for episode specification that includes the initial position and orientation of an Agent, scene id, goal position, and optionally the shortest path to the goal. An episode is a description of an instance of the task.

– Environment: the fundamental environment concept for Habitat, abstracting all the information needed for working on embodied tasks with a simulator.

More details about the architecture of the Habitat platform, performance measurements, and examples of API use are provided in the supplement.
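To make the Task and Episode abstractions above concrete, here is a minimal Python sketch. The field names mirror the episode attributes listed above and the PointGoal success criterion used in Section 4, but they are illustrative only and not the exact Habitat-API class definitions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class NavigationEpisode:
    # Episode specification as described above: start pose, scene id, goal,
    # and optionally the ground-truth shortest path.
    episode_id: str
    scene_id: str
    start_position: List[float]   # [x, y, z] in meters
    start_rotation: List[float]   # unit quaternion [x, y, z, w]
    goal_position: List[float]
    shortest_path: Optional[List[List[float]]] = None

def pointgoal_success(agent_position, goal_position, geodesic_distance_fn,
                      stop_called: bool, threshold: float = 0.2) -> bool:
    # Success criterion used for PointGoal navigation in Section 4:
    # the agent must issue `stop` within 0.2 m (geodesic) of the goal.
    return stop_called and geodesic_distance_fn(agent_position, goal_position) < threshold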
The action space consists of four actions: turn_left, turn_right, move_forward, and stop. These actions are mapped to idealized actua- tions that result in 10 degree turns for the turning actions and linear displacement of 0.25m for the move_forward action. The stop action allows the agent to signal that it has reached the goal. Habitat supports noisy actuations but experiments in this paper are conducted in the noise-free setting as our analysis focuses on other factors. Collision dynamics. Some previous works [3] use a coarse irregular navigation graph where an agent effectively ‘tele- ports’ from one location to another (1-2m apart). Others [9] use a fine-grained regular grid (0.01m resolution) where the agent moves on unoccupied cells and there are no collisions or partial steps. In Habitat and our experiments, we use a more realistic collision model – the agent navigates in a continuous state space4 and motion can produce collisions resulting in partial (or no) progress along the direction in- tended – simply put, it is possible for the agent to ‘slide’ along a wall or obstacle. Crucially, the agent may choose move_forward (0.25m) and end up in a location that is not 0.25m forward of where it started; thus, odometry is not trivial even in the absence of actuation noise. Goal specification: static or dynamic? One conspicuous underspecification in the PointGoal task [2] is whether the goal coordinates are static (i.e. provided once at the start of the episode) or dynamic (i.e. provided at every time step). The former is more realistic – it is difficult to imagine a real task where an oracle would provide precise dynamic goal co- ordinates. However, in the absence of actuation noise and col- lisions, every step taken by the agent results in a known turn or translation, and this combined with the initial goal loca- tion is functionally equivalent to dynamic goal specification. We hypothesize that this is why recent works [16, 20, 13] used dynamic goal specification. We follow and prescribe the following conceptual delineation – as a task, we adopt static PointGoal navigation; as for the sensor suite, we equip our agents with an idealized GPS+Compass sensor. This ori- ents us towards a realistic task (static PointGoal navigation), disentangles simulator design (actuation noise, collision dy- namics) from the task definition, and allows us to compare techniques by sensors used (RGB, depth, GPS, compass, contact sensors). Sensory input. The agents are endowed with a single color vision sensor placed at a height of 1.5m from the center of the agent’s base and oriented to face ‘forward’. This sensor provides RGB frames at a resolution of 2562 pixels and with a field of view of 90 degrees. In addition, an idealized depth sensor is available, in the same position and orientation as the color vision sensor. The field of view and resolution of the depth sensor match those of the color vision sensor. We designate agents that make use of the color sensor by RGB, agents that make use of the depth sensor by Depth, and agents that make use of both by RGBD. Agents that use neither sensor are denoted as Blind. All agents are equipped with an idealized GPS and compass – i.e., they have access to their location coordinates, and implicitly their orientation relative to the goal position. Episode specification. We initialize the agent at a start- ing position and orientation that are sampled uniformly at random from all navigable positions on the floor of the envi- ronment. 
The goal position is chosen such that it lies on the same floor and there exists a navigable path from the agent’s starting position. During the episode, the agent is allowed to take up to 500 actions. This threshold significantly exceeds the number of steps an optimal agent requires to reach all goals (see the supplement). After each action, the agent 4Up to machine precision. receives a set of observations from the active sensors. Evaluation. A navigation episode is considered successful if and only if the agent issues a stop action within 0.2m of the target coordinates, as measured by a geodesic distance along the shortest path from the agent’s position to the goal position. If the agent takes 500 actions without the above condition being met the episode ends and is considered un- successful. Performance is measured using the ‘Success weighted by Path Length’ (SPL) metric [2]. For an episode where the geodesic distance of the shortest path is l and the agent traverses a distance p, SPL is defined as S · l/max(p,l), where S is a binary indicator of success. Episode dataset preparation. We create PointGoal naviga- tion episode-datasets for Matterport3D [8] and Gibson [30] scenes. For Matterport3D we followed the publicly available train/val/test splits. Note that as in recent works [9, 20, 16], there is no overlap between train, val, and test scenes. For Gibson scenes, we obtained textured 3D surface meshes from the Gibson authors [30], manually annotated each scene on its reconstruction quality (small/big holes, floating/irregular surfaces, poor textures), and curated a subset of 106 scenes (out of 572); see the supplement for details. An episode is de- fined by the unique id of the scene, the starting position and orientation of the agent, and the goal position. Additional metadata such as the geodesic distance along the shortest path (GDSP) from start position to goal position is also in- cluded. While generating episodes, we restrict the GDSP to be between 1m and 30m. An episode is trivial if there is an obstacle-free straight line between the start and goal positions. A good measure of the navigation complexity of an episode is the ratio of GDSP to Euclidean distance between start and goal positions (notice that GDSP can only be larger than or equal to the Euclidean distance). If the ratio is nearly 1, there are few obstacles and the episode is easy; if the ratio is much larger than 1, the episode is difficult because strategic navigation is required. To keep the navi- gation complexity of the precomputed episodes reasonably high, we perform rejection sampling for episodes with the above ratio falling in the range [1, 1.1]. Following this, there is a significant decrease in the number of near-straight-line episodes (episodes with a ratio in [1, 1.1]) – from 37% to 10% for the Gibson dataset generation. This step was not performed in any previous studies. We find that without this filtering, all metrics appear inflated. Gibson scenes have smaller physical dimensions compared to the Matterport3D scenes. This is reflected in the resulting PointGoal dataset – average GDSP of episodes in Gibson scenes is smaller than that of Matterport3D scenes. Baselines. We compare the following baselines: – Random chooses among turn_left, turn_right, and move_forward with uniform distribution. The agent calls the stop action when within 0.2m of the goal (computed using the difference of static goal and dynamic GPS coordinates). 
– Forward only always calls the move_forward action, and calls the stop action when within 0.2m of the goal.

– Goal follower moves towards the goal direction. If it is not facing the goal (more than 15 degrees off-axis), it performs turn_left or turn_right to align itself; otherwise, it calls move_forward. The agent calls the stop action when within 0.2m of the goal.

– RL (PPO) is an agent trained with reinforcement learning, specifically proximal policy optimization [25]. We experiment with RL agents equipped with different visual sensors: no visual input (Blind), RGB input, Depth input, and RGB with depth (RGBD). The model consists of a CNN that produces an embedding for visual input, which together with the relative goal vector is used by an actor (GRU) and a critic (linear layer). The CNN has the following architecture: {Conv 8×8, ReLU, Conv 4×4, ReLU, Conv 3×3, ReLU, Linear, ReLU} (see supplement for details). Let r_t denote the reward at timestep t, d_t be the geodesic distance to goal at timestep t, s a success reward and λ a time penalty (to encourage efficiency). All models were trained with the following reward function:

r_t = s + d_{t-1} - d_t + λ   if the goal is reached
r_t = d_{t-1} - d_t + λ       otherwise

In our experiments s is set to 10 and λ is set to −0.01. Note that rewards are only provided in training environments; the task is challenging as the agent must generalize to unseen test environments. (A short sketch of this reward computation is given below.)

– SLAM [20] is an agent implementing a classic robotics navigation pipeline (including components for localization, mapping, and planning), using RGB and depth sensors. We use the classic agent by Mishkin et al. [20] which leverages the ORB-SLAM2 [21] localization pipeline, with the same parameters as reported in the original work.

Training procedure. When training learning-based agents, we first divide the scenes in the training set equally among 8 (Gibson) or 6 (Matterport3D) concurrently running simulator worker threads. Each thread establishes blocks of 500 training episodes for each scene in its training set partition and shuffles the ordering of these blocks. Training continues through shuffled copies of this array. We do not hardcode the stop action to retain generality and allow for comparison with future work that does not assume GPS inputs. For the experiments reported here, we train until 75 million agent steps are accumulated across all worker threads. This is 15x larger than the experience used in previous investigations [20, 16]. Training agents to 75 million steps took (in sum over all three datasets): 320 GPU-hours for Blind, 566 GPU-hours for RGB, 475 GPU-hours for Depth, and 906 GPU-hours for RGBD (overall 2267 GPU-hours).

[Figure 3 consists of two plots, ‘Performance on Gibson validation split’ (top) and ‘Performance on Matterport3D validation split’ (bottom), each showing SPL against the number of training steps taken (experience) in millions for the RGB, Depth, RGBD, Blind, and SLAM agents.]

Figure 3: Average SPL of agents on the val set over the course of training. Previous work [20, 16] has analyzed performance at 5-10 million steps. Interesting trends emerge with more experience: i) Blind agents initially outperform RGB and RGBD but saturate quickly; ii) Learning-based Depth agents outperform classic SLAM. The shaded areas around curves show the standard error of SPL over five seeds.
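Before turning to the results, here is the promised sketch of the reward used by the RL (PPO) baseline above. It is illustrative pseudocode for the stated formula, assuming the caller supplies geodesic distances to the goal; it is not the actual training implementation.

# Reward from the text: r_t = s + d_{t-1} - d_t + λ if the goal is reached,
# and r_t = d_{t-1} - d_t + λ otherwise, with s = 10 and λ = -0.01.
SUCCESS_REWARD = 10.0     # s
TIME_PENALTY = -0.01      # λ, a per-step penalty to encourage efficiency

def pointgoal_reward(d_prev: float, d_curr: float, goal_reached: bool) -> float:
    # Progress along the geodesic distance to the goal, plus the time penalty.
    reward = (d_prev - d_curr) + TIME_PENALTY
    if goal_reached:
        reward += SUCCESS_REWARD
    return reward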
Sensors | Baseline | Gibson SPL | Gibson Succ | MP3D SPL | MP3D Succ
Blind | Random | 0.02 | 0.03 | 0.01 | 0.01
Blind | Forward only | 0.00 | 0.00 | 0.00 | 0.00
Blind | Goal follower | 0.23 | 0.23 | 0.12 | 0.12
Blind | RL (PPO) | 0.42 | 0.62 | 0.25 | 0.35
RGB | RL (PPO) | 0.46 | 0.64 | 0.30 | 0.42
Depth | RL (PPO) | 0.79 | 0.89 | 0.54 | 0.69
RGBD | RL (PPO) | 0.70 | 0.80 | 0.42 | 0.53
RGBD | SLAM [20] | 0.51 | 0.62 | 0.39 | 0.47

Table 2: Performance of baseline methods on the PointGoal task [2] tested on the Gibson [30] and MP3D [8] test sets under multiple sensor configurations. RL models have been trained for 75 million steps. We report average rate of episode success and SPL [2].

# 5. Results and Findings

We seek to answer two questions: i) how do learning-based agents compare to classic SLAM and hand-coded baselines as the amount of training experience increases and ii) how well do learned agents generalize across 3D datasets. It should be tacitly understood, but to be explicit – ‘learning’ and ‘SLAM’ are broad families of techniques (and not a single method), are not necessarily mutually exclusive, and are not ‘settled’ in their development. We compare representative instances of these families to gain insight into questions of scaling and generalization, and do not make any claims about intrinsic superiority of one or the other.

Learning vs SLAM. To answer the first question we plot agent performance (SPL) on validation (i.e. unseen) episodes over the course of training in Figure 3 (top: Gibson, bottom: Matterport3D). SLAM [20] does not require training and thus has a constant performance (0.59 on Gibson, 0.42 on Matterport3D). All RL (PPO) agents start out with far worse SPL, but RL (PPO) Depth, in particular, improves dramatically and matches the classic baseline at approximately 10M frames (Gibson) or 30M frames (Matterport3D) of experience, continuing to improve thereafter. Notice that if we terminated the experiment at 5M frames as in [20] we would also conclude that SLAM [20] dominates. Interestingly, RGB agents do not significantly outperform Blind agents; we hypothesize this is because both are equipped with GPS sensors. Indeed, qualitative results (Figure 4 and video in supplement) suggest that Blind agents ‘hug’ walls and implement ‘wall following’ heuristics. In contrast, RGB sensors provide a high-dimensional complex signal that may be prone to overfitting to train environments due to the variety across scenes (even within the same dataset). We also notice in Figure 3 that all methods perform better on Gibson than Matterport3D. This is consistent with our previous analysis that Gibson contains smaller scenes and shorter episodes.

Next, for each agent and dataset, we select the best-performing checkpoint on validation and report results on test in Table 2. We observe that uniformly across the datasets, RL (PPO) Depth performs best, outperforming RL (PPO) RGBD (by 0.09-0.16 SPL), SLAM (by 0.15-0.28 SPL), and RGB (by 0.13-0.33 SPL) in that order (see the supplement for additional experiments involving noisy depth). We believe Depth performs better than RGBD because i) the PointGoal navigation task requires reasoning only about free space and depth provides relevant information directly, and ii) RGB has significantly more entropy (different houses look very different), thus it is easier to overfit when using RGB. We ran our experiments with 5 random seeds per run to confirm that these differences are statistically significant. The differences are about an order of magnitude larger than the standard deviation of average SPL for all cases (e.g. on the Gibson dataset errors are, Depth: ±0.015, RGB: ±0.055, RGBD: ±0.028, Blind: ±0.005).
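As a reference for how the SPL numbers and the ±error values quoted above can be obtained, here is a hedged sketch. The per-episode SPL follows the definition given in Section 4; the aggregation across seeds (e.g. the use of the sample standard deviation) is an assumption, not necessarily the exact procedure used for the reported numbers.

import numpy as np

def spl(success: bool, shortest_path_length: float, agent_path_length: float) -> float:
    # SPL for one episode: S * l / max(p, l), with S a binary success indicator,
    # l the geodesic shortest-path length and p the distance actually traversed.
    s = 1.0 if success else 0.0
    return s * shortest_path_length / max(agent_path_length, shortest_path_length)

def mean_spl_with_stderr(per_seed_episode_spls):
    # per_seed_episode_spls: one list of episode SPLs per training seed.
    # Returns the mean over seeds of the average SPL, and its standard error.
    per_seed_means = np.array([np.mean(vals) for vals in per_seed_episode_spls])
    stderr = per_seed_means.std(ddof=1) / np.sqrt(len(per_seed_means))
    return per_seed_means.mean(), stderr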
Random and forward-only agents have very low performance, while the hand-coded goal follower and Blind baseline see modest performance. See the supplement for additional analysis of trained agent behavior.

In Figure 4 we plot example trajectories for the RL (PPO) agents, to qualitatively contrast their behavior in the same episode. Consistent with the aggregate statistics, we observe that Blind collides with obstacles and follows walls, while Depth is the most efficient. See the supplement and the video for more example trajectories.

[Figure 4 shows trajectory panels for the Blind, RGB, RGBD, and Depth agents on a Gibson and an MP3D episode, each annotated with its SPL (ranging from 0.28 for Blind to 0.98 for Depth).]

Figure 4: Navigation examples for different sensory configurations of the RL (PPO) agent, visualizing trials from the Gibson and MP3D val sets. A blue dot and red dot indicate the starting and goal positions, and the blue arrow indicates final agent position. The blue-green-red line is the agent’s trajectory. Color shifts from blue to red as the maximum number of agent steps is approached. See the supplemental materials for more example trajectories.

[Figure 5 shows, for each of the Blind, RGB, Depth, and RGBD agents, a train/test matrix over {Gibson, MP3D}; the individual cell values are not recoverable from the extracted text.]

Figure 5: Generalization of agents between datasets. We report average SPL for a model trained on the source dataset in each row, as evaluated on test episodes for the target dataset in each column.

Generalization across datasets. Our findings so far are that RL (PPO) agents significantly outperform SLAM [20]. This prompts our second question – are these findings dataset specific or do learned agents generalize across datasets? We report exhaustive comparisons in Figure 5 – specifically, average SPL for all combinations of {train, test} × {Matterport3D, Gibson} for all agents {Blind, RGB, RGBD, Depth}. Rows indicate (agent, train set) pair, columns indicate test set.

We find a number of interesting trends. First, nearly all agents suffer a drop in performance when trained on one dataset and tested on another, e.g. RGBD Gibson→Gibson 0.70 vs RGBD Gibson→Matterport3D 0.53 (drop of 0.17). RGB and RGBD agents suffer a significant performance degradation, while the Blind agent is least affected (as we would expect). Second, we find a potentially counter-intuitive trend – agents trained on Gibson consistently outperform their counterparts trained on Matterport3D, even when evaluated on Matterport3D. We believe the reason is the previously noted observation that Gibson scenes are smaller and episodes are shorter (lower GDSP) than Matterport3D. Gibson agents are trained on ‘easier’ episodes and encounter positive reward more easily during random exploration, thus bootstrapping learning. Consequently, for a fixed computation budget Gibson agents are stronger universally (not just on Gibson). This finding suggests that visual navigation agents could benefit from curriculum learning. These insights are enabled by the engineering of Habitat, which made these experiments as simple as a change in the evaluation dataset name.

# 6. Habitat Challenge

No battle plan ever survives contact with the enemy.
– Helmuth Karl Bernhard von Moltke

Challenges drive progress. The history of AI sub-fields indicates that the formulation of the right questions, the creation of the right datasets, and the coalescence of communities around the right challenges drives scientific progress.
Our goal is to support this process for embodied AI. Habitat Challenge is an autonomous navigation challenge that aims to benchmark and advance efforts in goal-directed visual navigation. One difficulty in creating a challenge around embodied AI tasks is the transition from static predictions (as in passive perception) to sequential decision making (as in sensori- motor control). In traditional ‘internet AI’ challenges (e.g. ImageNet [10], COCO [18], VQA [4]), it is possible to re- lease a static testing dataset and ask participants to simply upload their predictions on this set. In contrast, embodied AI tasks typically involve sequential decision making and agent- driven control, making it infeasible to pre-package a testing dataset. Essentially, embodied AI challenges require partici- pants to upload code not predictions. The uploaded agents can then be evaluated in novel (unseen) test environments. Challenge infrastructure. We leverage the frontend and challenge submission process of the EvalAI platform, and build backend infrastructure ourselves. Participants in Habi- tat Challenge are asked to upload Docker containers [19] with their agents via EvalAI. The submitted agents are then evaluated on a live AWS GPU-enabled instance. Specifically, contestants are free to train their agents however they wish (any language, any framework, any infrastructure). In or- der to evaluate these agents, participants are asked to derive from a base Habitat Docker container and implement a spe- cific interface to their model – agent’s action taken given an observation from the environment at each step. This docker- ized interface enables running the participant code on new environments. More details regarding the Habitat Challenge held at CVPR 2019 are available at the https://aihabitat. org/challenge/ website. In a future iteration of this challenge we will introduce three major differences designed to both reduce the gap between simulation and reality and to increase the difficulty of the task. – In the 2019 challenge, the relative coordinates specifying the goal were continuously updated during agent movement – essentially simulating an agent with perfect localization and heading estimation (e.g. an agent with an idealized GPS+Compass). However, high-precision localization in indoor environments can not be assumed in realistic settings – GPS has low precision indoors, (visual) odometry may be noisy, SLAM-based localization can fail, etc. Hence, we will investiage only providing to the agent a fixed relative coordinate for the goal position from the start location. – Likewise, the 2019 Habitat Challenge modeled agent actions (e.g. forward, turn 10◦ left,...) deter- ministically. However in real settings, agent intention (e.g. go forward 1m) and the result rarely match perfectly – actuation error, differing surface materials, and a myriad of other sources of error introduce significant drift over a long trajectory. To model this, we introduce a noise model acquired by benchmarking a real robotic platform [22]. Visual sensing is an excellent means of combating this “dead-reckoning” drift and this change allows participants to study methodologies that are robust to and can correct for this noise. – Finally, we will introduce realistic models of sensor noise for RGB and depth sensors – narrowing the gap between perceptual experiences agents would have in simulation and reality. 
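Returning to the submission interface described above, a participant's container essentially has to expose an agent that maps each observation to an action. The sketch below is hypothetical: the actual base class and method names required by the challenge evaluation harness are not specified in this paper, so Agent, reset, and act are placeholders for exposition only.

import random

class RandomChallengeAgent:
    # Placeholder for a submission-side agent; a real entry would wrap a trained policy.
    def __init__(self, action_space=("move_forward", "turn_left", "turn_right", "stop")):
        self.action_space = action_space

    def reset(self):
        # Called at the start of each evaluation episode (assumed hook).
        pass

    def act(self, observations: dict) -> str:
        # `observations` would contain entries such as RGB, depth, and the goal vector;
        # here we simply act at random.
        return random.choice(self.action_space)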
We look forward to supporting the community in estab- lishing a benchmark to evaluate the state-of-the-art in meth- ods for embodied navigation agents. # 7. Future Work We described the design and implementation of the Habi- tat platform. Our goal is to unify existing community efforts and to accelerate research into embodied AI. This is a long- term effort that will succeed only by full engagement of the broader research community. Experiments enabled by the generic dataset support and the high performance of the Habitat stack indicate that i) learning-based agents can match and exceed the perfor- mance of classic visual navigation methods when trained for long enough and ii) learned agents equipped with depth sensors generalize well between different 3D environment datasets in comparison to agents equipped with only RGB. Feature roadmap. Our near-term development roadmap will focus on incorporating physics simulation and enabling physics-based interaction between mobile agents and ob- jects in 3D environments. Habitat-Sim’s scene graph representation is well-suited for integration with physics en- gines, allowing us to directly control the state of individual objects and agents within a scene graph. Another planned avenue of future work involves procedural generation of 3D environments by leveraging a combination of 3D reconstruc- tion and virtual object datasets. By combining high-quality reconstructions of large indoor spaces with separately re- constructed or modelled objects, we can take full advantage of our hierarchical scene graph representation to introduce controlled variation in the simulated 3D environments. Lastly, we plan to focus on distributed simulation settings that involve large numbers of agents potentially interacting with one another in competitive or collaborative scenarios. Acknowledgments. We thank the reviewers for their help- ful suggestions. The Habitat project would not have been possible without the support and contributions of many in- dividuals. We are grateful to Mandeep Baines, Angel Xuan Chang, Alexander Clegg, Devendra Singh Chaplot, Xin- lei Chen, Wojciech Galuba, Georgia Gkioxari, Daniel Gor- don, Leonidas Guibas, Saurabh Gupta, Jerry (Zhi-Yang) He, Rishabh Jain, Or Litany, Joel Marcey, Dmytro Mishkin, Mar- cus Rohrbach, Amanpreet Singh, Yuandong Tian, Yuxin Wu, Fei Xia, Deshraj Yadav, Amir Zamir, and Jiazhi Zhang for their help. Licenses for referenced datasets. Gibson: com/gibson_material/Agreement%20GDS% 2006-04-18.pdf Matterport3D: matterport/MP_TOS.pdf. # References [1] Phil Ammirato, Patrick Poirson, Eunbyung Park, Jana Košecká, and Alexander C Berg. A dataset for developing and benchmarking active vision. In ICRA, 2017. [2] Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and Amir Roshan Zamir. On evaluation of embodied naviga- tion agents. arXiv:1807.06757, 2018. [3] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: In- terpreting visually-grounded navigation instructions in real environments. In CVPR, 2018. [4] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, 2015. [5] Iro Armeni, Ozan Sener, Amir R. Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese. 
3D semantic parsing of large-scale indoor spaces. In CVPR, 2016. [6] Alex Bewley, Jessica Rigley, Yuxuan Liu, Jeffrey Hawke, Richard Shen, Vinh-Dieu Lam, and Alex Kendall. Learning to drive from simulation without real world labels. In ICRA, 2019. [7] Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, and Aaron C. Courville. HoME: A household multimodal environment. arXiv:1711.11017, 2017. [8] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Hal- ber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB- D data in indoor environments. In International Conference on 3D Vision (3DV), 2017. [9] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied Question Answer- ing. In CVPR, 2018. [10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and ImageNet: A large-scale hierarchical image Fei-Fei Li. database. In CVPR, 2009. [11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional trans- arXiv:1810.04805, formers for language understanding. 2018. [12] David Donoho. 50 years of data science. In Tukey Centennial Workshop, 2015. [13] Saurabh Gupta, James Davidson, Sergey Levine, Rahul Suk- thankar, and Jitendra Malik. Cognitive mapping and planning for visual navigation. In CVPR, 2017. [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. [15] Jemin Hwangbo, Joonho Lee, Alexey Dosovitskiy, Dario Bellicoso, Vassilios Tsounis, Vladlen Koltun, and Marco Hutter. Learning agile and dynamic motor skills for legged robots. Science Robotics, 2019. [16] Noriyuki Kojima and Jia Deng. To learn or not to learn: Analyzing the role of learning for navigation in virtual envi- ronments. arXiv:1907.11770, 2019. [17] Eric Kolve, Roozbeh Mottaghi, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. AI2-THOR: An interactive 3D environment for visual AI. arXiv:1712.05474, 2017. [18] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. [19] Dirk Merkel. Docker: Lightweight Linux containers for con- sistent development and deployment. Linux Journal, 2014. [20] Dmytro Mishkin, Alexey Dosovitskiy, and Vladlen Koltun. Benchmarking classic and learned navigation in complex 3D environments. arXiv:1901.10915, 2019. [21] Raúl Mur-Artal and Juan D. Tardós. ORB-SLAM2: An open-source SLAM system for monocular, stereo and RGB-D cameras. IEEE Transactions on Robotics, 33(5), 2017. [22] Adithyavairavan Murali, Tao Chen, Kalyan Vasudev Alwala, Dhiraj Gandhi, Lerrel Pinto, Saurabh Gupta, and Abhinav Gupta. Pyrobot: An open-source robotics framework for re- search and benchmarking. arXiv preprint arXiv:1906.08236, 2019. [23] Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. VirtualHome: Sim- ulating household activities via programs. In CVPR, 2018. [24] Manolis Savva, Angel X. Chang, Alexey Dosovitskiy, Thomas Funkhouser, and Vladlen Koltun. MINOS: Mul- timodal indoor simulator for navigation in complex environ- ments. arXiv:1712.03931, 2017. [25] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Rad- ford, and Oleg Klimov. Proximal policy optimization algo- rithms. arXiv:1707.06347, 2017. [26] Linda Smith and Michael Gasser. 
The development of em- bodied cognition: Six lessons from babies. Artificial Life, 11(1-2), 2005. [27] Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Mano- lis Savva, and Thomas Funkhouser. Semantic scene comple- tion from a single depth image. In CVPR, 2017. [28] Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kim- berly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard Newcombe. The Replica dataset: A digital replica of indoor spaces. arXiv:1906.05797, 2019. [29] Yi Wu, Yuxin Wu, Georgia Gkioxari, and Yuandong Tian. Building generalizable agents with a realistic and rich 3D environment. arXiv:1801.02209, 2018. [30] Fei Xia, Amir R. Zamir, Zhiyang He, Alexander Sax, Jiten- dra Malik, and Silvio Savarese. Gibson env: Real-world perception for embodied agents. In CVPR, 2018. [31] Claudia Yan, Dipendra Misra, Andrew Bennnett, Aaron Wals- man, Yonatan Bisk, and Yoav Artzi. CHALET: Cornell house agent learning environment. arXiv:1801.07357, 2018. # A. Habitat Platform Details As described in the main paper, Habitat consists of the following components: • Habitat-Sim: a flexible, high-performance 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling (with built-in sup- port for Matterport3D [8], Gibson [30], and other datasets). Habitat-Sim is fast – when rendering a realistic scanned scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (fps) running single-threaded, and can reach over 10,000 fps multi-process on a single GPU. • Habitat-API: a modular high-level library for end- to-end development of embodied AI – defining embod- ied AI tasks (e.g. navigation [2], instruction follow- ing [3], question answering [9]), configuring embodied agents (physical form, sensors, capabilities), training these agents (via imitation or reinforcement learning, or via classic SLAM), and benchmarking their perfor- mance on the defined tasks using standard metrics [2]. Habitat-API currently uses Habitat-Sim as the core simulator, but is designed with a modular abstrac- tion for the simulator backend to maintain compatibility over multiple simulators. Key abstractions. The Habitat platform relies on a num- ber of key abstractions that model the domain of embodied agents and tasks that can be carried out in three-dimensional indoor environments. Here we provide a brief summary of key abstractions: Agent: a physically embodied agent with a suite of Sensors. Can observe the environment and is capable of taking actions that change agent or environment state. • Sensor: associated with a specific Agent, capable of returning observation data from the environment at a specified frequency. • SceneGraph: a hierarchical representation of a 3D environment that organizes the environment into re- gions and objects which can be programmatically ma- nipulated. • Simulator: an instance of a simulator backend. Given actions for a set of configured Agents and SceneGraphs, can update the state of the Agents and SceneGraphs, and provide observations for all active Sensors possessed by the Agents. These abstractions connect the different layers of the platform. They also enable generic and portable specification of embodied AI tasks. 
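As an illustration of how the Agent and Sensor abstractions above might be configured together, here is a hedged sketch of an agent specification. The dictionary layout and key names are assumptions for exposition, not the exact Habitat-Sim configuration schema; the numeric values follow the embodiment, sensors, and action space described in Section 4.

agent_config = {
    "height": 1.5,    # meters, cylindrical embodiment from Section 4
    "radius": 0.1,    # meters (0.2 m diameter)
    "sensors": [
        {"type": "rgb",   "resolution": (256, 256), "hfov": 90, "position": [0.0, 1.5, 0.0]},
        {"type": "depth", "resolution": (256, 256), "hfov": 90, "position": [0.0, 1.5, 0.0]},
    ],
    "action_space": {
        "move_forward": {"amount": 0.25},  # meters
        "turn_left":    {"amount": 10.0},  # degrees
        "turn_right":   {"amount": 10.0},
        "stop":         {},
    },
}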
Habitat-Sim. The architecture of the Habitat-Sim back- end module is illustrated in Figure 6. The design of this module ensures a few key properties: • Memory-efficient management of 3D environment re- sources (triangle mesh geometry, textures, shaders) en- suring shared resources are cached and reused. • Flexible, structured representation of 3D environments using SceneGraphs, allowing for programmatic ma- nipulation of object state, and combination of objects from different environments. High-efficiency rendering engine with multi-attachment render pass to reduce overhead for multiple sensors. • Arbitrary numbers of Agents and corresponding Sensors that can be linked to a 3D environment by attachment to a SceneGraph. The performance of the simulation backend surpasses that of prior work operating on realistic reconstruction datasets by a large margin. Table 3 reports performance statistics on a test scene from the Matterport3D dataset. Single-thread performance reaches several thousand frames per second (fps), while multi-process operation with several simulation backends can reach over 10,000 fps on a single GPU. In addition, by employing OpenGL-CUDA interoperation we enable direct sharing of rendered image frames with ML frameworks such as PyTorch without a measurable impact on performance as the image resolution is increased (see Figure 7). Habitat-API. The second layer of the Habitat platform (Habitat-API) focuses on creating a general and flex- ible API for defining embodied agents, tasks that they may carry out, and evaluation metrics for those tasks. When de- signing such an API, a key consideration is to allow for easy extensibility of the defined abstractions. This is particularly important since many of the parameters of embodied agent tasks, specific agent configurations, and 3D environment setups can be varied in interesting ways. Future research is likely to propose new tasks, new agent configurations, and new 3D environments. The API allows for alternative simulator backends to be used, beyond the Habitat-Sim module that we imple- mented. This modularity has the advantage of allowing incor- poration of existing simulator backends to aid in transitioning from experiments that previous work has performed using legacy frameworks. The architecture of Habitat-API is illustrated in Figure 8, indicating core API functionality and functionality implemented as extensions to the core. Above the API level, we define a concrete embodied task such as visual navigation. This involves defining a specific dataset configuration, specifying the structure of episodes (e.g. number of steps taken, termination conditions), training curriculum (progression of episodes, difficulty ramp), and evaluation procedure (e.g. test episode sets and task metrics). An example of loading a pre-configured task (PointNav) and stepping through the environment with a random agent is shown in the code below. 5Note: The semantic sensor in Matterport3D requires using additional 3D meshes with significantly more geometric complexity, leading to re- duced performance. We expect this to be addressed in future versions, leading to speeds comparable to RGB + depth. ResourceManager Texture Material Simulator SceneManager SceneGraph SceneNode Figure 6: Architecture of Habitat-Sim main classes. The Simulator delegates management of all resources related to 3D environments to a ResourceManager that is responsible for loading and caching 3D environment data from a variety of on-disk formats. 
These resources are used within SceneGraphs at the level of individual SceneNodes that represent distinct objects or regions in a particular Scene. Agents and their Sensors are instantiated by being attached to SceneNodes in a particular SceneGraph. GPU→CPU→GPU GPU→CPU GPU→GPU Sensors / number of processes 1 3 5 1 3 5 1 3 5 RGB RGB + depth RGB + depth + semantics5 2,346 1,260 378 6,049 3,025 463 7,784 3,730 470 3,919 1,777 396 8,810 4,307 465 11,598 5,522 466 4,538 2,151 464 8,573 3,557 455 7,279 3,486 453 Table 3: Performance of Habitat-Sim in frames per second for an example Matterport3D scene (id 17DRP5sb8fy) on a Xeon E5-2690 v4 CPU and Nvidia Titan Xp GPU, measured at a frame resolution of 128x128, under different frame memory transfer strategies and with a varying number of concurrent simulator processes sharing the GPU. ‘GPU-CPU-GPU’ indicates passing of rendered frames from OpenGL context to CPU host memory and back to GPU device memory for use in optimization, ‘GPU-CPU’ only reports copying from OpenGL context to CPU host memory, whereas ‘GPU-GPU’ indicates direct sharing through OpenGL-CUDA interoperation. 5000 4000 no} 8 S 3000 n ® a 2 2000 2 a 4000 . —— _GPU->GPU — GPU->CPU ——] —— GPU->CPU->GPU 0 128 256 512 1024 Resolution Figure 7: Performance of Habitat-Sim under different sensor frame memory transfer strategies for increasing image resolution. We see that ‘GPU->GPU’ is unaffected by image resolution while other strategies degrade rapidly. # B. Additional Dataset Statistics report the average geodesic distance along the shortest path (GDSP) between starting point and goal position. As noted in the main paper, Gibson episodes are significantly shorter than Matterport3D ones. Figure 9 visualizes the episode distributions over geodesic distance (GDSP), Euclidean dis- tance between start and goal position, and the ratio of the two (an approximate measure of complexity for the episode). We again note that Gibson episodes have more episodes with shorter distances, leading to the dataset being overall easier than the Matterport3D dataset. import habitat # Load embodied AI task (PointNav) # and a pre-specified virtual robot config = habitat.get_config(config_file= "pointnav.yaml") env = habitat.Env(config) observations = env.reset() # Step through environment with random actions while not env.episode_over: observations = \ env.step(env.action_space.sample()) In Table 5 we summarize the train, validation and test split sizes for all three datasets used in our experiments. We also # Gibson Sensor API Simulator AP! - Embodied QA . - Episodes Dataset =- =- -_-7 1 1 RL Environment RL baselines | ' 1 1 1 SLAM ' 1 Environment ' 1 Imitation | | learning ' 1 ' Baselines | Episode _ <Ir-- -_ ~ ~~. a PointNav eel PointNav flea’ PointNav neater EQA Replica +— use + -- inherit | core API LI extensions and implementations # EQA Figure 8: Architecture of Habitat-API. The core functionality defines fundamental building blocks such as the API for interacting with the simulator backend and receiving observations through Sensors. Concrete simulation backends, 3D datasets, and embodied agent baselines are implemented as extensions to the core API. 
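To contextualize throughput numbers like those reported in Table 3, the random-agent loop from the code example above can be wrapped in a simple wall-clock timer. This is only a rough sketch using the API calls shown in that example; it is not the benchmarking harness used for the reported measurements.

import time
import habitat

config = habitat.get_config(config_file="pointnav.yaml")
env = habitat.Env(config)

num_steps = 0
start = time.time()
for _ in range(10):  # a handful of episodes with random actions
    observations = env.reset()
    while not env.episode_over:
        observations = env.step(env.action_space.sample())
        num_steps += 1
elapsed = time.time() - start
print(f"{num_steps / elapsed:.1f} frames per second (single process)")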
Dataset scenes (#) episodes (#) average GDSP (m) Matterport3D Gibson 58 / 11 / 18 72 / 16 / 10 4.8M / 495 / 1008 4.9M / 1000 / 1000 11.5 / 11.1 / 13.2 6.9 / 6.5 / 7.0 Table 4: Statistics of the PointGoal navigation datasets that we precompute for the Matterport3D and Gibson datasets: total number of scenes, total number of episodes, and average geodesic distance between start and goal positions. Each cell reports train / val / test split statistics. # C.1. Analysis of Collisions To further characterize the behavior of learned agents during navigation we plot the average number of collisions in Figure 10. We see that Blind incurs a much larger number of collisions than other agents, providing evidence for ‘wall-following’ behavior. Depth-equipped agents have the lowest number of collisions, while RGB agents are in between. Dataset Min Median Mean Max Matterport3D Gibson 18 15 90.0 60.0 97.1 63.3 281 207 Table 5: Statistics of path length (in actions) for an oracle which greedily fits actions to follow the negative of geodesic distance gradient on the PointGoal navigation validation sets. This provides expected horizon lengths for a near-perfect agent and contextualizes the decision for a max-step limit of 500. # C. Additional Experimental Results # C.2. Noisy Depth To investigate the impact of noisy depth measurements on agent performance, we re-evaluated depth agents (without re-training) on noisy depth generated using a simple noise model: iid Gaussian noise (µ = 0, σ = 0.4) at each pixel in inverse depth (larger depth = more noise). We observe a drop of 0.13 and 0.02 SPL for depth-RL and SLAM on Gibson-val (depth-RL still outperforms SLAM). Note that SLAM from [20] utilizes ORB-SLAM2, which is quite robust to noise, while depth-RL was trained without noise. If we increase σ to 0.1, depth-RL gets 0.12 SPL whereas SLAM suffers catastrophic failures. In order to confirm that the trends we observe for the experimental results presented in the paper hold for much larger amounts of experience, we scaled our experiments to 800M steps. We found that (1) the ordering of the visual inputs stays Depth > RGBD > RGB > Blind; (2) RGB is consistently better than Blind (by 0.06/0.03 SPL on Gibson/Matterport3D), and (3) RGBD outperforms SLAM on Matterport3D (by 0.16 SPL). # D. Gibson Dataset Curation We manually curated the full dataset of Gibson 3D tex- tured meshes [30] to select meshes that do not exhibit signif- icant reconstruction artifacts such as holes or texture quality issues. A key issue that we tried to avoid is the presence of ese 8 E sees Figure 9: Statistics of PointGoal navigation episodes. From left: distribution over Euclidean distance between start and goal, distribution over geodesic distance along shortest path between start and goal, and distribution over the ratio of geodesic to Euclidean distance. Gibson Bind RGB Ei RGB as MP3D 15 20 25 Avg. Collisions 30 35 40 Figure 10: Average number of collisions during successful navi- gation episodes for the different sensory configurations of the RL (PPO) baseline agent on test set episodes for the Gibson and Matter- port3D datasets. The Blind agent experiences the highest number of collisions, while agents possessing depth sensors (Depth and RGBD) have the fewest collisions on average. habitat-api/habitat_baselines. Below is the shell script we used for our RL experiments: # Note: parameters in {} are experiment specific. 
# Note: use 8, 6 processes for Gibson, MP3D # python habitat_baselines/train_ppo.py \ --sensors {RGB_SENSOR,DEPTH_SENSOR} \ --blind {0,1} --use-gae --lr 2.5e-4 \ --clip-param 0.1 --use-linear-lr-decay \ --num-processes {8,6} --num-steps 128 \ --num-mini-batch 4 --num-updates 135000 \ --use-linear-clip-decay For running SLAM please refer to habitat- api/habitat_baselines/slambased. holes or cracks in floor surfaces. This is particularly problem- atic for navigation tasks as it divides seemingly connected navigable areas into non-traversable disconnected compo- nents. We manually annotated the scenes (using the 0 to 5 quality scale shown in Figure 11) and only use scenes with a rating of 4 or higher, i.e., no holes, good reconstruction, and negligible texture issues to generate the dataset episodes. # E. Reproducing Experimental Results results can be reproduced us- ing ec9557a) and Habitat-Sim (commit d383c20) repositories. The code for running experiments is present under the folder # F. Example Navigation Episodes Figure 12 visualizes additional example navigation episodes for the different sensory configurations of the RL (PPO) agents that we describe in the main paper. Blind agents have the lowest performance, colliding much more frequently with the environment and adopting a ‘wall hug- ging’ strategy for navigation. RGB agents are less prone to collisions but still struggle to navigate to the goal posi- tion successfully in some cases. In contrast, depth-equipped agents are much more efficient, exhibiting fewer collisions, and navigating to goals more successfully (as indicated by the overall higher SPL values). 0: critical reconstruction artifacts, holes, or texture issues 1: big holes or significant texture issues and reconstruction artifacts 2: big holes or significant texture issues, but good reconstruction 3: small holes, some texture issues, good reconstruction 4: no holes, some texture issues, good reconstruction 5: no holes, uniform textures, good reconstruction Figure 11: Rating scale used in curation of 3D textured mesh reconstructions from the Gibson dataset. We use only meshes with ratings of 4 or higher for the Habitat Challenge dataset. Gibson Blind SPL = 0.00 RGB SPL = 0.45 RGBD SPL = 0.82 Depth SPL = 0.88 Blind SPL = 0.00 RGB SPL = 0.29 RGBD SPL = 0.49 Depth SPL = 0.96 Figure 12: Additional navigation example episodes for the different sensory configurations of the RL (PPO) agent, visualizing trials from the Gibson and MP3D val sets. A blue dot and red dot indicate the starting and goal positions, and the blue arrow indicates final agent position. The blue-green-red line is the agent’s trajectory. Color shifts from blue to red as the maximum number of allowed agent steps is approached. MP3D Blind SPL = 0.00 RGB SPL = 0.40 RGBD SPL = 0.92 Depth SPL = 0.98 Figure 12: Additional navigation example episodes for the different sensory configurations of the RL (PPO) agent, visualizing trials from the Gibson and MP3D val sets. A blue dot and red dot indicate the starting and goal positions, and the blue arrow indicates final agent position. The blue-green-red line is the agent’s trajectory. Color shifts from blue to red as the maximum number of allowed agent steps is approached.
{ "id": "1711.11017" }
1904.01038
fairseq: A Fast, Extensible Toolkit for Sequence Modeling
fairseq is an open-source sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and inference on modern GPUs. A demo video can be found at https://www.youtube.com/watch?v=OtgDdWtHvto
http://arxiv.org/pdf/1904.01038
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli
cs.CL
NAACL 2019 Demo paper
null
cs.CL
20190401
20190401
9 1 0 2 r p A 1 ] L C . s c [ 1 v 8 3 0 1 0 . 4 0 9 1 : v i X r a # FAIRSEQ: A Fast, Extensible Toolkit for Sequence Modeling Myle Ott®* Sergey Edunov“* Nathan Ng“ Alexei Baevski* David Grangier’' Michael Auli Angela Fan® 4 Facebook AI Research V Google Brain # A Sam Gross # Abstract FAIRSEQ is an open-source sequence model- ing toolkit that allows researchers and devel- opers to train custom models for translation, summarization, language modeling, and other text generation tasks. The toolkit is based on PyTorch and supports distributed training across multiple GPUs and machines. We also support fast mixed-precision training and in- ference on modern GPUs. A demo video can be found here: https://www.youtube. com/watch?v=OtgDdWtHvto. with user-supplied plug-ins (§2); (ii) efficient dis- tributed and mixed precision training, enabling training over datasets with hundreds of millions of sentences on current hardware (§3); (iii) state- of-the-art implementations and pretrained models for machine translation, summarization, and lan- guage modeling (§4); and (iv) optimized inference with multiple supported search algorithms, includ- ing beam search, diverse beam search (Vijayaku- mar et al., 2016), and top-k sampling. FAIRSEQ is distributed with a BSD license and is avail- able on GitHub at https://github.com/ pytorch/fairseq. # Introduction Neural sequence-to-sequence models have been successful on a variety of text generation tasks, in- cluding machine translation, abstractive document summarization, and language modeling. Accord- ingly, both researchers and industry professionals can benefit from a fast and easily extensible se- quence modeling toolkit. # 2 Design Extensibility. FAIRSEQ can be extended through five types of user-supplied plug-ins, which enable experimenting with new ideas while reusing exist- ing components as much as possible. There are several toolkits with similar basic functionality, but they differ in focus area and in- tended audiences. For example, OpenNMT (Klein et al., 2017) is a community-built toolkit written in multiple languages with an emphasis on exten- sibility. MarianNMT (Junczys-Dowmunt et al., 2018) focuses on performance and the backend is written in C++ for fast automatic differentiation. OpenSeq2Seq (Kuchaiev et al., 2018) provides reference implementations for fast distributed and mixed precision training. Tensor2tensor (Vaswani et al., 2018) and Sockeye (Hieber et al., 2018) fo- cus on production-readiness. Models define the neural network architecture and encapsulate all learnable parameters. Models extend the BaseFairseqModel class, which in turn extends torch.nn.Module. Thus any FAIRSEQ model can be used as a stand-alone mod- ule in other PyTorch code. Models can addition- ally predefine named architectures with common network configurations (e.g., embedding dimen- sion, number of layers, etc.). We also abstracted the methods through which the model interacts with the generation algorithm, e.g., beam search, through step-wise prediction. This isolates model implementation from the generation algorithm. In this paper, we present FAIRSEQ, a sequence modeling toolkit written in PyTorch that is fast, extensible, and useful for both research and pro- duction. FAIRSEQ features: (i) a common inter- face across models and tasks that can be extended ∗equal contribution † Work done while at Facebook AI Research. Criterions compute the loss given the model loss = and a batch of data, roughly: criterion(model, batch). 
This formula- tion makes criterions very expressive, since they have complete access to the model. For exam- ple, a criterion may perform on-the-fly genera- tion to support sequence-level training (Edunov et al., 2018b) or online backtranslation (Edunov et al., 2018a; Lample et al., 2018). Alternatively, in a mixture-of-experts model, a criterion may implement EM-style training and backpropagate only through the expert that produces the lowest loss (Shen et al., 2019). Tasks store dictionaries, provide helpers for loading and batching data and define the training loop. They are intended to be immutable and pri- marily interface between the various components. We provide tasks for translation, language model- ing, and classification. Optimizers update the model parameters based on the gradients. We provide wrappers around most PyTorch optimizers and an implementation of Adafactor (Shazeer and Stern, 2018), which is a memory-efficient variant of Adam. Learning Rate Schedulers update the learn- ing rate over the course of training. We pro- vide several popular schedulers, e.g., the in- verse square-root scheduler from Vaswani et al. (2017) and cyclical schedulers based on warm restarts (Loshchilov and Hutter, 2016). Reproducibility and forward compatibility. FAIRSEQ includes features designed to improve re- producibility and forward compatibility. For ex- ample, checkpoints contain the full state of the model, optimizer and dataloader, so that results are reproducible if training is interrupted and re- sumed. FAIRSEQ also provides forward compat- ibility, i.e., models trained using old versions of the toolkit will continue to run on the latest ver- sion through automatic checkpoint upgrading. # Implementation FAIRSEQ is implemented in PyTorch and it pro- vides efficient batching, mixed precision training, multi-GPU as well as multi-machine training. Batching. There are multiple strategies to batch input and output sequence pairs (Morishita et al., 2017). FAIRSEQ minimizes padding within a mini- batch by grouping source and target sequences of similar length. The content of each mini-batch stays the same throughout training, however mini- batches themselves are shuffled randomly every epoch. When training on more than one GPU or machine, then the mini-batches for each worker Syne after backward gpul a) ; gpu4 Overlap syne with backward 1 Gradient sync. spu Forward b) Backward gpu4 Idle > + syne after 2 backwards gpul c) gpu4 time Figure 1: Illustration of (a) gradient synchronization and idle time during training, (b) overlapping back- propagation (backward) with gradient synchronization to improve training speed, (c) how accumulating gradi- ent updates can reduce variance in processing time and reduce communication time. are likely to differ in the average sentence length which results in more representative updates. Multi-GPU training. FAIRSEQ uses the NCCL2 library and torch.distributed for inter- GPU communication. Models are trained in a syn- chronous optimization setup where each GPU has a copy of the model to process a sub-batch of data after which gradients are synchronized be- tween GPUs; all sub-batches constitute a mini- batch. Even though sub-batches contain a simi- lar number of tokens, we still observe a high vari- ance in processing times. In multi-GPU or multi- machine setups, this results in idle time for most GPUs while slower workers are finishing their work (Figure 1 (a)). 
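The baseline pattern of Figure 1 (a), where every worker finishes its backward pass and only then synchronizes, can be written in a few lines of raw torch.distributed. This is an illustrative sketch rather than fairseq's trainer; process-group initialization is assumed to have happened elsewhere.

```python
import torch.distributed as dist


def naive_sync_step(model, optimizer, loss):
    """One update in the 'sync after backward' pattern of Figure 1 (a)."""
    optimizer.zero_grad()
    loss.backward()                        # complete the backward pass first ...
    world_size = dist.get_world_size()
    for p in model.parameters():           # ... then synchronize every gradient
        if p.grad is not None:
            dist.all_reduce(p.grad.data, op=dist.ReduceOp.SUM)
            p.grad.data.div_(world_size)   # average across workers
    optimizer.step()
```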
FAIRSEQ mitigates the ef- fect of stragglers by overlapping gradient synchro- nization between workers with the backward pass and by accumulating gradients over multiple mini- batches for each GPU (Ott et al., 2018b). Overlapping gradient synchronization starts to synchronize gradients of parts of the network when they are computed. In particular, when the gradient computation for a layer finishes, FAIRSEQ adds the result to a buffer. When the size of the buffer reaches a predefined threshold, the gra- dients are synchronized in a background thread while back-propagation continues as usual (Fig- ure 1 (b)). Next, we accumulate gradients for mul- tiple sub-batches on each GPU which reduces the variance in processing time between workers since there is no need to wait for stragglers after each sub-batch (Figure 1 (c)). This also increases the Sentences/sec FAIRSEQ FP32 FAIRSEQ FP16 88.1 136.0 Table 1: Translation speed measured on a V100 GPU on the test set of the standard WMT’14 English- German benchmark using a big Transformer model. effective batch size but we found that models can still be trained effectively (Ott et al., 2018b). Mixed precision. Recent GPUs enable efficient half precision floating point (FP16) computation. FAIRSEQ provides support for both full preci- sion (FP32) and FP16 at training and inference. We perform all forward-backward computations as well as the all-reduce for gradient synchroniza- tion between workers in FP16. However, the pa- rameter updates remain in FP32 to preserve ac- curacy. FAIRSEQ implements dynamic loss scal- ing (Micikevicius et al., 2018) in order to avoid underflows for activations and gradients because of the limited precision offered by FP16. This scales the loss right after the forward pass to fit into the FP16 range while the backward pass is left unchanged. After the FP16 gradients are synchro- nized between workers, we convert them to FP32, restore the original scale, and update the weights. Inference. FAIRSEQ provides fast inference for non-recurrent models (Gehring et al., 2017; Vaswani et al., 2017; Fan et al., 2018b; Wu et al., 2019) through incremental decoding, where the model states of previously generated tokens are cached in each active beam and re-used. This can speed up a na¨ıve implementation without caching by up to an order of magnitude, since only new states are computed for each token. For some models, this requires a component-specific caching implementation, e.g., multi-head attention in the Transformer architecture. During inference we build batches with a vari- able number of examples up to a user-specified number of tokens, similar to training. FAIRSEQ also supports inference in FP16 which increases decoding speed by 54% compared to FP32 with no loss in accuracy (Table 1). # 4 Applications FAIRSEQ has been used in many applications, such as machine translation (Gehring et al., 2017; Edunov et al., 2018b,a; Chen et al., 2018; Ott et al., 2018a; Song et al., 2018; Wu et al., 2019), lan- guage modeling (Dauphin et al., 2017; Baevski and Auli, 2019), abstractive document summariza- tion (Fan et al., 2018a; Liu et al., 2018; Narayan et al., 2018), story generation (Fan et al., 2018b, 2019), error correction (Chollampatt and Ng, 2018), multilingual sentence embeddings (Artetxe and Schwenk, 2018), and dialogue (Miller et al., 2017; Dinan et al., 2019). 
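Before turning to individual applications, the dynamic loss scaling used for FP16 training above can be made concrete with a short sketch. It is a simplified version of the scheme; the function and state names are illustrative and not fairseq's API, and fairseq additionally keeps FP32 master weights for the parameter update, which is omitted here.

```python
import torch


def backward_with_dynamic_loss_scale(loss, params, state):
    """One backward pass with dynamic loss scaling (simplified).
    `state` holds a single float under "scale"."""
    (loss * state["scale"]).backward()     # scale up so FP16 grads do not underflow

    grads = [p.grad for p in params if p.grad is not None]
    overflow = any(not torch.isfinite(g).all() for g in grads)

    if overflow:
        # An inf/nan gradient means the scale was too large: skip this update,
        # zero the gradients and halve the scale for the next batch.
        for g in grads:
            g.zero_()
        state["scale"] /= 2.0
        return False

    for g in grads:                        # restore the original gradient scale
        g.div_(state["scale"])
    # In practice the scale is only increased after a long run of stable steps;
    # growing it every step (as here) is a simplification.
    state["scale"] *= 2.0
    return True                            # caller may now run optimizer.step()
```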
# 4.1 Machine translation We provide reference implementations of sev- eral popular sequence-to-sequence models which can be used for machine translation, including LSTM (Luong et al., 2015), convolutional mod- els (Gehring et al., 2017; Wu et al., 2019) and Transformer (Vaswani et al., 2017). We evaluate a “big” Transformer encoder- decoder model on two language pairs, WMT En- glish to German (En–De) and WMT English to French (En–Fr). For En–De we replicate the setup of Vaswani et al. (2017) which relies on WMT’16 for training with 4.5M sentence pairs, we validate on newstest13 and test on newstest14. The 32K vocabulary is based on a joint source and target byte pair encoding (BPE; Sennrich et al. 2016). For En–Fr, we train on WMT’14 and borrow the setup of Gehring et al. (2017) with 36M training sentence pairs. We use newstest12+13 for valida- tion and newstest14 for test. The 40K vocabulary is based on a joint source and target BPE. We measure case-sensitive tokenized BLEU with multi-bleu (Hoang et al., 2006) and de- tokenized BLEU with SacreBLEU1 (Post, 2018). All results use beam search with a beam width of 4 and length penalty of 0.6, following Vaswani et al. 2017. FAIRSEQ results are summarized in Table 2. We reported improved BLEU scores over Vaswani et al. (2017) by training with a bigger batch size and an increased learning rate (Ott et al., 2018b). # 4.2 Language modeling FAIRSEQ supports language modeling with gated convolutional models (Dauphin et al., 2017) and Transformer models (Vaswani et al., 2017). Mod- els can be trained using a variety of input and out- put representations, such as standard token embed- dings, convolutional character embeddings (Kim 1SacreBLEU hash: BLEU+case.mixed+lang.en-{de,fr}+ numrefs.1+smooth.exp+test.wmt14/full+tok.13a+version.1.2.9 En–De En–Fr a. Gehring et al. (2017) b. Vaswani et al. (2017) c. Ahmed et al. (2017) d. Shaw et al. (2018) 25.2 28.4 28.9 29.2 40.5 41.0 41.4 41.5 FAIRSEQ Transformer base FAIRSEQ Transformer big detok. SacreBLEU 8 GPU training time 128 GPU training time 41.1 28.1 43.2 29.3 41.4 28.6 ∼12 h ∼73 h ∼1.3 h ∼7.2 h Table 2: BLEU on news2014 for WMT English- German (En–De) and English-French (En–Fr). All re- sults are based on WMT’14 training data, except for En–De (b), (c), (d) and our models which were trained on WMT’16. Train times based on V100 GPUs. Perplexity Grave et al. (2016) Dauphin et al. (2017) Merity et al. (2018) Rae et al. (2018) 40.8 37.2 33.0 29.2 FAIRSEQ Adaptive inputs 18.7 Table 3: Test perplexity on WikiText-103 (cf. Table 4). et al., 2016), adaptive softmax (Grave et al., 2017), and adaptive inputs (Baevski and Auli, 2019). We also provide tutorials and pre-trained models that replicate the results of Dauphin et al. (2017) and Baevski and Auli (2019) on WikiText-103 and the One Billion Word datasets. We evaluate two Transformer language models, which use only a decoder network and adaptive input embeddings, following Baevski and Auli (2019). The first model has 16 blocks, inner di- mension 4K and embedding dimension 1K; results on WikiText-103 are in Table 3. The second model has 24 blocks, inner dimension 8K and embedding dimension 1.5K; results on the One Billion Word benchmark are in Table 4. # 4.3 Abstractive document summarization Next, we experiment with abstractive document summarization where we use a base Transformer to encode the input document and then generate a summary with a decoder network. 
We use the CNN-Dailymail dataset (Hermann et al., 2015; Nallapati et al., 2016) of news articles paired with multi-sentence summaries. We evaluate on Perplexity Dauphin et al. (2017) J´ozefowicz et al. (2016) Shazeer et al. (2017) 31.9 30.0 28.0 FAIRSEQ Adaptive inputs 23.0 Table 4: Test perplexity on the One Billion Word benchmark. Adaptive inputs share parameters with an adaptive softmax. 1 ROUGE 2 L See et al. (2017) Gehrmann et al. (2018) 39.5 41.2 17.3 18.7 36.4 38.3 FAIRSEQ + pre-trained LM 40.1 41.6 17.6 18.9 36.8 38.5 Table 5: Abstractive summarization results on the full- text version of CNN-DailyMail dataset. the full-text version with no entity anonymization (See et al., 2017); we truncate articles to 400 to- kens (See et al., 2017). We use BPE with 30K operations to form our vocabulary following Fan et al. (2018a). To evaluate, we use the standard ROUGE metric (Lin, 2004) and report ROUGE-1, ROUGE-2, and ROUGE-L. To generate summaries, we follow standard practice in tuning the min- imum output length and disallow repeating the same trigram (Paulus et al., 2017). Table 5 shows results of FAIRSEQ. We also consider a configura- tion where we input pre-trained language model representations to the encoder network and this language model was trained on newscrawl and CNN-Dailymail, totalling 193M sentences. # 5 Conclusion We presented FAIRSEQ, a fast, extensible toolkit for sequence modeling that is scalable and suit- able for many applications. In the future, we will continue the development of the toolkit to enable further research advances. # Acknowledgements We thank Jonas Gehring for writing the original Lua/Torch version of fairseq. # References Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted transformer network for machine translation. arxiv, 1711.02132. Mikel Artetxe and Holger Schwenk. 2018. Mas- sively multilingual sentence embeddings for zero- arXiv, shot cross-lingual abs/1812.10464. Alexei Baevski and Michael Auli. 2019. Adaptive in- put representations for neural language modeling. In Proc. of ICLR. Yun Chen, Victor OK Li, Kyunghyun Cho, and Samuel R Bowman. 2018. A stable and effec- tive learning strategy for trainable greedy decoding. arXiv, abs/1804.07915. Shamil Chollampatt and Hwee Tou Ng. 2018. A mul- tilayer convolutional encoder-decoder neural net- arXiv, work for grammatical error correction. abs/1801.08831. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proc. of ICML. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proc. of ICLR. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018a. Understanding back-translation at scale. In Conference of the Association for Compu- tational Linguistics (ACL). Sergey Edunov, Myle Ott, Michael Auli, David Grang- ier, et al. 2018b. Classical structured prediction losses for sequence to sequence learning. In Proc. of NAACL. Angela Fan, David Grangier, and Michael Auli. 2018a. In ACL Controllable abstractive summarization. Workshop on Neural Machine Translation and Gen- eration. Angela Fan, Mike Lewis, and Yann Dauphin. 2018b. In Proc. of Hierarchical neural story generation. ACL. Angela Fan, Mike Lewis, and Yann Dauphin. 2019. Strategies for structuring story generation. arXiv, abs/1902.01109. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. 
Convolutional Sequence to Sequence Learning. In Proc. of ICML. Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization. arXiv, abs/1808.10792. Edouard Grave, Armand Joulin, Moustapha Ciss´e, David Grangier, and Herv´e J´egou. 2017. Efficient softmax approximation for gpus. In Proc. of ICML. Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a 2016. continuous cache. arXiv, abs/1612.04426. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. In NIPS. Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. Sockeye: A Toolkit for Neural Machine Translation. arXiv, abs/1712.05690. Hieu Hoang, Philipp Koehn, Ulrich Germann, Kenneth Heafield, and Barry Haddow. 2006. multi-bleu.perl. https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ generic/multi-bleu.perl. Rafal J´ozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the lim- its of language modeling. arXiv, abs/1602.02410. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proc. of ACL 2018, System Demonstrations. Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M Rush. 2016. Character-aware neural language models. In Proc. of AAAI. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Open- NMT: Open-source toolkit for neural machine trans- lation. In Proc. ACL. Oleksii Kuchaiev, Boris Ginsburg, Igor Gitman, Vi- taly Lavrukhin, Carl Case, and Paulius Micikevicius. 2018. OpenSeq2Seq: Extensible Toolkit for Dis- tributed and Mixed Precision Training of Sequence- to-Sequence Models. In Proc. of Workshop for NLP Open Source Software. Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine trans- lation. In Proc. of EMNLP. Chin-Yew Lin. 2004. Rouge: a package for automatic evaluation of summaries. In ACL Workshop on Text Summarization Branches Out. Yizhu Liu, Zhiyi Luo, and Kenny Zhu. 2018. Con- trolling length in abstractive summarization using a convolutional neural network. In Proc. of EMNLP. Ilya Loshchilov and Frank Hutter. 2016. Stochastic gradient descent with warm restarts. Proc. of ICLR. Sgdr: In Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- In Proc. of based neural machine translation. EMNLP. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language mod- eling at multiple scales. arXiv, abs/1803.08240. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David Gar- cia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed Precision Training. In Proc. of ICLR. A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Par- lai: A dialog research software platform. arXiv, abs/1705.06476. Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, and Satoshi Nakamura. 2017. An empirical study of mini-batch creation strategies for neural machine translation. In Proc. of WMT. 
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence rnns and beyond. In SIGNLL Conference on Com- putational Natural Language Learning. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. arXiv, abs/1808.08745. Myle Ott, Michael Auli, David Grangier, and MarcAu- relio Ranzato. 2018a. Analyzing uncertainty in neu- ral machine translation. In Proc. of ICML. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018b. Scaling neural machine trans- lation. In Proc. of WMT. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. arXiv preprint arXiv:1705.04304. Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv, abs/1804.08771. Jack W. Rae, Chris Dyer, Peter Dayan, and Timothy P. Lillicrap. 2018. Fast parametric learning with acti- vation memorization. arXiv, abs/1803.10049. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In ACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. In Proc. of NAACL. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural net- works: The sparsely-gated mixture-of-experts layer. arXiv, abs/1701.06538. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235. and Marc’Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. arXiv, abs/1902.07816. Kaitao Song, Xu Tan, Di He, Jianfeng Lu, Tao Qin, and Tie-Yan Liu. 2018. Double path net- works for sequence to sequence learning. arXiv, abs/1806.04856. A. Vaswani, S. Bengio, E. Brevdo, F. Chollet, A. N. Gomez, S. Gouws, L. Jones, Ł. Kaiser, N. Kalch- brenner, N. Parmar, R. Sepassi, N. Shazeer, and J. Uszkoreit. 2018. Tensor2Tensor for Neural Ma- chine Translation. arXiv, abs/1803.07416. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Proc. of NIPS. Ashwin K Vijayakumar, Michael Cogswell, Ram- prasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural se- quence models. arXiv preprint arXiv:1610.02424. Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, and Michael Auli. 2019. Pay less atten- tion with lightweight and dynamic convlutions. In Proc. of ICLR.
{ "id": "1705.04304" }
1904.00962
Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
Training large deep neural networks on massive datasets is computationally very challenging. There has been recent surge in interest in using large batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which by employing layerwise adaptive learning rates trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1). The LAMB implementation is available at https://github.com/tensorflow/addons/blob/master/tensorflow_addons/optimizers/lamb.py
http://arxiv.org/pdf/1904.00962
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, Cho-Jui Hsieh
cs.LG, cs.AI, cs.CL, stat.ML
Published as a conference paper at ICLR 2020
null
cs.LG
20190401
20200103
0 2 0 2 n a J 3 ] G L . s c [ 5 v 2 6 9 0 0 . 4 0 9 1 : v i X r a Published as a conference paper at ICLR 2020 # LARGE BATCH OPTIMIZATION FOR DEEP LEARNING: TRAINING BERT IN 76 MINUTES Yang You2, Jing Li1, Sashank Reddi1, Jonathan Hseu1, Sanjiv Kumar1, Srinadh Bhojanapalli1 Xiaodan Song1, James Demmel2, Kurt Keutzer2, Cho-Jui Hsieh1,3 Yang You was a student researcher at Google Brain. This project was done when he was at Google Brain. Google1, UC Berkeley2, UCLA3 {youyang, demmel, keutzer}@cs.berkeley.edu, {jingli, sashank, jhseu, sanjivk, bsrinadh, xiaodansong, chojui}@google.com # ABSTRACT Training large deep neural networks on massive datasets is computationally very challenging. There has been recent surge in interest in using large batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which by employing layerwise adaptive learning rates trains RESNET on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and RESNET-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1). The LAMB implementation is available online1. # INTRODUCTION With the advent of large scale datasets, training large deep neural networks, even using computation- ally efficient optimization methods like Stochastic gradient descent (SGD), has become particularly challenging. For instance, training state-of-the-art deep learning models like BERT and ResNet-50 takes 3 days on 16 TPUv3 chips and 29 hours on 8 Tesla P100 gpus respectively (Devlin et al., 2018; He et al., 2016). Thus, there is a growing interest to develop optimization solutions to tackle this critical issue. The goal of this paper is to investigate and develop optimization techniques to accelerate training large deep neural networks, mostly focusing on approaches based on variants of SGD. Methods based on SGD iteratively update the parameters of the model by moving them in a scaled (negative) direction of the gradient calculated on a minibatch. However, SGD’s scalability is limited by its inherent sequential nature. Owing to this limitation, traditional approaches to improve SGD training time in the context of deep learning largely resort to distributed asynchronous setup (Dean et al., 2012; Recht et al., 2011). However, the implicit staleness introduced due to the asynchrony limits the parallelization of the approach, often leading to degraded performance. The feasibility of computing gradient on large minibatches in parallel due to recent hardware advances has seen the resurgence of simply using synchronous SGD with large minibatches as an alternative to asynchronous SGD. 
However, naïvely increasing the batch size typically results in degradation of generalization performance and reduces computational benefits (Goyal et al., 2017). Synchronous SGD on large minibatches benefits from reduced variance of the stochastic gradients used in SGD. This allows one to use much larger learning rates in SGD, typically of the order square root of the minibatch size. Surprisingly, recent works have demonstrated that up to certain minibatch sizes, linear scaling of the learning rate with minibatch size can be used to further speed up the # 1https://github.com/tensorflow/addons/blob/master/tensorflow_addons/ optimizers/lamb.py 1 Published as a conference paper at ICLR 2020 training Goyal et al. (2017). These works also elucidate two interesting aspects to enable the use of linear scaling in large batch synchronous SGD: (i) linear scaling of learning rate is harmful during the initial phase; thus, a hand-tuned warmup strategy of slowly increasing the learning rate needs to be used initially, and (ii) linear scaling of learning rate can be detrimental beyond a certain batch size. Using these tricks, Goyal et al. (2017) was able to drastically reduce the training time of ResNet-50 model from 29 hours to 1 hour using a batch size of 8192. While these works demonstrate the feasibility of this strategy for reducing the wall time for training large deep neural networks, they also highlight the need for an adaptive learning rate mechanism for large batch learning. Variants of SGD using layerwise adaptive learning rates have been recently proposed to address this problem. The most successful in this line of research is the LARS algorithm (You et al., 2017), which was initially proposed for training RESNET. Using LARS, ResNet-50 can be trained on ImageNet in just a few minutes! However, it has been observed that its performance gains are not consistent across tasks. For instance, LARS performs poorly for attention models like BERT. Furthermore, theoretical understanding of the adaptation employed in LARS is largely missing. To this end, we study and develop new approaches specially catered to the large batch setting of our interest. Contributions. More specifically, we make the following main contributions in this paper. • Inspired by LARS, we investigate a general adaptation strategy specially catered to large batch learning and provide intuition for the strategy. • Based on the adaptation strategy, we develop a new optimization algorithm (LAMB) for achieving adaptivity of learning rate in SGD. Furthermore, we provide convergence analysis for both LARS and LAMB to achieve a stationary point in nonconvex settings. We highlight the benefits of using these methods for large batch settings. • We demonstrate the strong empirical performance of LAMB across several challenging tasks. Using LAMB we scale the batch size in training BERT to more than 32k without degrading the performance; thereby, cutting the time down from 3 days to 76 minutes. Ours is the first work to reduce BERT training wall time to less than couple of hours. • We also demonstrate the efficiency of LAMB for training state-of-the-art image classification models like RESNET. To the best of our knowledge, ours is first adaptive solver that can achieve state-of-the-art accuracy for RESNET-50 as adaptive solvers like Adam fail to obtain the accuracy of SGD with momentum for these tasks. 
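As a point of reference for the rest of the paper, the hand-tuned recipe that layerwise adaptation seeks to avoid, linear learning-rate scaling combined with a warmup phase (Goyal et al., 2017), can be written as a small schedule function. All constants below are illustrative of ImageNet-style training rather than prescribed values.

```python
def linearly_scaled_lr(epoch, step_in_epoch, steps_per_epoch,
                       base_lr=0.1, base_batch=256, batch_size=8192,
                       warmup_epochs=5):
    """Linear-scaling-with-warmup heuristic (Goyal et al., 2017), sketched.
    All constants are illustrative."""
    peak_lr = base_lr * batch_size / base_batch          # linear scaling rule
    if epoch < warmup_epochs:                            # gradual warmup
        done = epoch * steps_per_epoch + step_in_epoch
        return peak_lr * done / (warmup_epochs * steps_per_epoch)
    # regular policy afterwards: multiply by 0.1 at epochs 30, 60 and 80
    decay = sum(epoch >= milestone for milestone in (30, 60, 80))
    return peak_lr * (0.1 ** decay)
```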
1.1 RELATED WORK The literature on optimization for machine learning is vast and hence, we restrict our attention to the most relevant works here. Earlier works on large batch optimization for machine learning mostly focused on convex models, benefiting by a factor of square root of batch size using appropriately large learning rate. Similar results can be shown for nonconvex settings wherein using larger minibatches improves the convergence to stationary points; albeit at the cost of extra computation. However, several important concerns were raised with respect to generalization and computational performance in large batch nonconvex settings. It was observed that training with extremely large batch was difficult (Keskar et al., 2016; Hoffer et al., 2017). Thus, several prior works carefully hand-tune training hyper-parameters, like learning rate and momentum, to avoid degradation of generalization performance (Goyal et al., 2017; Li, 2017; You et al., 2018; Shallue et al., 2018). (Krizhevsky, 2014) empirically found that simply scaling the learning rate linearly with respect to batch size works better up to certain batch sizes. To avoid optimization instability due to linear scaling of learning rate, Goyal et al. (2017) proposed a highly hand-tuned learning rate which involves a warm-up strategy that gradually increases the LR to a larger value and then switching to the regular LR policy (e.g. exponential or polynomial decay). Using LR warm-up and linear scaling, Goyal et al. (2017) managed to train RESNET-50 with batch size 8192 without loss in generalization performance. However, empirical study (Shallue et al., 2018) shows that learning rate scaling heuristics with the batch size do not hold across all problems or across all batch sizes. More recently, to reduce hand-tuning of hyperparameters, adaptive learning rates for large batch training garnered significant interests. Several recent works successfully scaled the batch size to large values using adaptive learning rates without degrading the performance, thereby, finishing RESNET- 50 training on ImageNet in a few minutes (You et al., 2018; Iandola et al., 2016; Codreanu et al., 2017; Akiba et al., 2017; Jia et al., 2018; Smith et al., 2017; Martens & Grosse, 2015; Devarakonda 2 Published as a conference paper at ICLR 2020 et al., 2017; Mikami et al., 2018; Osawa et al., 2018; You et al., 2019; Yamazaki et al., 2019). To the best of our knowledge, the fastest training result for RESNET-50 on ImageNet is due to Ying et al. (2018), who achieve 76+% top-1 accuracy. By using the LARS optimizer and scaling the batch size to 32K on a TPUv3 Pod, Ying et al. (2018) was able to train RESNET-50 on ImageNet in 2.2 minutes. However, it was empirically observed that none of these performance gains hold in other tasks such as BERT training (see Section 4). # 2 PRELIMINARIES Notation. For any vector x; € R?¢, either T,5 OF [xz] j are used to denote its je coordinate where Jj € (dj. Let I be the d x d identity matrix, and let I = [Ij, ly, ..., I,] be its decomposition into column submatrices I; = d x dp. For x € R%, let x be the block of variables corresponding to the columns of Iie, c =I} a € R* fori = {1,2,--- ,h}. For any function f : R¢ > R, we use Vif (x) to denote the gradient with respect to x“). For any vectors u,v € R4, we use u? and u/v to denote elementwise square and division operators respectively. We use |].|| and ||.||; to denote /)-norm and l,-norm of a vector respectively. 
We start our discussion by formally stating the problem setup. In this paper, we study nonconvex stochastic optimization problems of the form Xr min f() == Essel, 8)] + 5 lle?, a) where ¢ is a smooth (possibly nonconvex) function and P is a probability distribution on the domain S CR*. Here, x corresponds to model parameters, is the loss function and P is an unknown data distribution. We assume function ¢(2) is L;-smooth with respect to a, ive., there exists a constant L; such that |Vie(z, s) — Villy, s)|| < Lilja — yl, Va,y eR, ands €S, (2) for all i € [h]. We use L = (L1,--+ , L;,)' to denote the h-dimensional vector of Lipschitz constants. We use L. and Layg to denote max; L; and Y: ue respectively. We assume the following bound on the variance in stochastic gradients: E||V;0(x, s) — Vi f(2)||? < 0? for all « € R¢ andi € [A]. Furthermore, we also assume E||[V (x, s)]; — [Vf (x)]i||? < &? for all 2 € R¢ and i € [d]. We use o =(01,°+: ,on)' and& = (6),--- ,4)! to denote the vectors of standard deviations of stochastic gradient per layer and per dimension respectively. Finally, we assume that the gradients are bounded ie., [VU(x, s)]; < G for all i € [d], « € R4 and s € S. Note that such assumptions are typical in the analysis of stochastic first-order methods (cf. (Ghadimi & Lan, 2013a; Ghadimi et al., 2014)). Stochastic gradient descent (SGD) is one of the simplest first-order algorithms for solving problem in Equation 1. The update at the tth iteration of SGD is of the following form: 1 Tip = Lp — "Sq Ss VE(a1, 81) + AX, (SGD) seeSt st∈St where St is set of b random samples drawn from the distribution P. For very large batch settings, the following is a well-known result for SGD. Theorem 1 ((Ghadimi & Lan, 2013b)). With large batch b = T and using appropriate learning rate, we have the following for the iterates of SGD: (en) = Fee, Hl) E[||Vf(«a)||?] < O liv (ea) |F] <0 (LON a where x∗ is an optimal solution to the problem in equation 1 and xa is an iterate uniformly randomly chosen from {x1, · · · , xT }. However, tuning the learning rate ηt in SGD, especially in large batch settings, is difficult in practice. Furthermore, the dependence on L∞ (the maximum of smoothness across dimension) can lead to significantly slow convergence. In the next section, we discuss algorithms to circumvent this issue. 3 Published as a conference paper at ICLR 2020 # 3 ALGORITHMS In this section, we first discuss a general strategy to adapt the learning rate in large batch settings. Using this strategy, we discuss two specific algorithms in the later part of the section. Since our primary focus is on deep learning, our discussion is centered around training a h-layer neural network. General Strategy. Suppose we use an iterative base algorithm A (e.g. SGD or ADAM) in the small batch setting with the following layerwise update rule: xt+1 = xt + ηtut, where ut is the update made by A at time step t. We propose the following two changes to the update for large batch settings: 1. The update is normalized to unit /2-norm. This is ensured by modifying the update to the form u,/||uz||. Throughout this paper, such a normalization is done layerwise i.e., the update for each layer is ensured to be unit /2-norm. 2. The learning rate is scaled by ¢(||2xz||) for some function ¢ : R+ — Rt. Similar to the normalization, such a scaling is done layerwise. Suppose the base algorithm A is SGD, then the modification results in the following update rule: (i) i é P(x of, = 2)? 
—n SED, ma (3) for all layers i ∈ [h] and where x(i) are the parameters and the gradients of the ith layer at t time step t. The normalization modification is similar to one typically used in normalized gradient descent except that it is done layerwise. Note that the modification leads to a biased gradient update; however, in large-batch settings, it can be shown that this bias is small. It is intuitive that such a normalization provides robustness to exploding gradients (where the gradient can be arbitrarily large) and plateaus (where the gradient can be arbitrarily small). Normalization of this form essentially ignores the size of the gradient and is particularly useful in large batch settings where the direction of the gradient is largely preserved. The scaling term involving ¢ ensures that the norm of the update is of the same order as that of the parameter. We found that this typically ensures faster convergence in deep neural networks. In practice, we observed that a simple function of ¢(z) = min{max{z, yi}, yu} works well. It is instructive to consider the case where $(z) = z. In this scenario, the overall change in the learning Ea lige” I gradient (see equation 2). We now discuss different instantiations of the strategy discussed above. In particular, we focus on two algorithms: LARS (3.1) and the proposed method, LAMB (3.2). rate is , which can also be interpreted as an estimate on the inverse of Lipschitz constant of the 3.1 LARS ALGORITHM The first instantiation of the general strategy is LARS algorithm (You et al., 2017), which is obtained by using momentum optimizer as the base algorithm A in the framework. LARS was earlier proposed for large batch learning for RESNET on ImageNet. In general, it is observed that the using (heavy-ball) momentum, one can reduce the variance in the stochastic gradients at the cost of little bias. The pseudocode for LARS is provide in Algorithm 1. We now provide convergence analysis for LARS in general nonconvex setting stated in this paper. For the sake of simplicity, we analyze the case where β1 = 0 and λ = 0 in Algorithm 1. However, our analysis should extend to the general case as well. We will defer all discussions about the convergence rate to the end of the section. Theorem 2. Let ηt = η = where αl, αu > 0. Then for xt generated using LARS (Algorithm 1), we have the following bound te ; (f(e1) — f(e*)) Lavy, lle (2 Jed ivuren]) <o( a ' a). where x∗ is an optimal solution to the problem in equation 1 and xa is an iterate uniformly randomly chosen from {x1, · · · , xT }. 4 Published as a conference paper at ICLR 2020 # Algorithm 2 LAMB Input: 71 € R¢, learning rate {neki parameters 0 < 61, B2 < 1, scaling function ¢, « > 0 Set mo = 0, vo = 0 Algorithm 1 LARS for t = 1 toT'do Draw b samples S; from P. Compute g: = TSit Vsrese VE(xt, 81). Input: 7 € R?, learning rate {nye parameter 0 < 61 < 1, scaling function ¢, « > 0 Set mo =0 me = Bime-1 + (1 — Bi) ge for t = 1toT do vt = Bove + (1 — Ba) 9? Draw b samples S; from P mz = m/(1 — Bt) Compute g; = aT Dajes, VE(as, 82) wy =u /(L—- 2) me = Bame-1 + (1 — B1)(ge + Ave) Compute ratio re = Fie a), = xh) — nN be Ds for all i € [h] a, = xh) — nN ES j (7 + dal?) end for end for 3.2 LAMB ALGORITHM The second instantiation of the general strategy is obtained by using ADAM as the base algorithm A. ADAM optimizer is popular in deep learning community and has shown to have good performance for training state-of-the-art language models like BERT. 
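For comparison with what follows, the per-layer LARS update of Algorithm 1 reduces to the short sketch below. Weight decay is folded into the gradient as in the pseudocode; the clipping bounds used for φ and the small ε added to the denominator are illustrative choices, not values prescribed by the algorithm.

```python
import torch


def lars_layer_update(param, grad, momentum_buf, lr,
                      beta1=0.9, weight_decay=0.0, eps=1e-6,
                      phi=lambda z: z.clamp(min=1e-3, max=10.0)):
    """One LARS step for a single layer (Algorithm 1), simplified."""
    update = grad + weight_decay * param                    # g_t + lambda * x_t
    momentum_buf.mul_(beta1).add_(update, alpha=1 - beta1)  # heavy-ball momentum
    # layerwise trust ratio: phi(||x||) / ||m||, with eps for numerical safety
    trust_ratio = phi(param.norm()) / (momentum_buf.norm() + eps)
    param.add_(momentum_buf, alpha=-(lr * trust_ratio.item()))
    return param, momentum_buf
```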
Unlike LARS, the adaptivity of LAMB is two-fold: (i) per dimension normalization with respect to the square root of the second moment used in ADAM and (ii) layerwise normalization obtained due to layerwise adaptivity. The pseudocode for LAMB is provided in Algorithm 2. When β1 = 0 and β2 = 0, the algorithm reduces to be Sign SGD where the learning rate is scaled by square root of the layer dimension (Bernstein et al., 2018). The following result provides convergence rate for LAMB in general nonconvex settings. Similar to the previous case, we focus on the setting where β1 = 0 and λ = 0. As before, our analysis extends to the general case; however, the calculations become messy. for all t ∈ [T ], b = T , di = d/h for all i ∈ [h], and Theorem 3. Let ηt = η = αl ≤ φ(v) ≤ αu for all v > 0 where αl, αu > 0. Then for xt generated using LAMB (Algorithm 2), we have the following bounds: 1. When β2 = 0, we have (= [pivreaii]) <0 (Ler fe tou, Hel), , 2. When β2 > 0, we have BlIvsto i <0 ( - Gd “| af) = Fea it). (1 — B2) T VT where x∗ is an optimal solution to the problem in equation 1 and xa is an iterate uniformly randomly chosen from {x1, · · · , xT }. Discussion on convergence rates. We first start our discussion with the comparison of convergence rate of LARS with that of SGD (Theorem 1). The convergence rates of LARS and SGD differ in two ways: (1) the convergence criterion is (E[)7""_, ||Vif'l])? as opposed to E[|| Vf ||?] in SGD and (2) the dependence on L and a in the convergence rate. Briefly, the convergence rate of LARS is better than SGD when the gradient is denser than curvature and stochasticity. This convergence rate comparison is similar in spirit to the one obtained in (Bernstein et al., 2018). Assuming that the convergence criterion in Theorem | and Theorem 2 is of similar order (which happens when gradients are fairly dense), convergence rate of LARS and LAMB depend on Layg instead of L.. and are thus, significantly better than that of SGD. A more quantitative comparison is provided in Section C of the Appendix. The comparison of LAMB (with 32 = 0) with SGD is along similar lines. We obtain slightly worse rates for the case where 3) > 0; although, we believe that its behavior should be better than the case 82 = 0. We leave this investigation to future work. 5 , Published as a conference paper at ICLR 2020 # 4 EXPERIMENTS We now present empirical results comparing LAMB with existing optimizers on two important large batch training tasks: BERT and RESNET-50 training. We also compare LAMB with existing optimizers for small batch size (< 1K) and small dataset (e.g. CIFAR, MNIST) (see Appendix). Experimental Setup. To demonstrate its robustness, we use very minimal hyperparameter tuning for the LAMB optimizer. Thus, it is possible to achieve better results by further tuning the hyperparameters. The parameters β1 and β2 in Algorithm 2 are set to 0.9 and 0.999 respectively in all our experiments; we only tune the learning rate. We use a polynomially decaying learning rate of ηt = η0 ×(1−t/T ) in Algorithm 2), which is the same as in BERT baseline. This setting also works for all other applications in this paper. Furthermore, for BERT and RESNET-50 training, we did not tune the hyperparameters of LAMB while increasing the batch size. We use the square root of LR scaling rule to automatically adjust learning rate and linear-epoch warmup scheduling. We use TPUv3 in all the experiments. 
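For concreteness, the per-layer LAMB step of Algorithm 2 with these default β1 and β2 can be sketched as follows. This is a PyTorch-style illustration only; the implementation actually used in the experiments is the released TensorFlow one referenced earlier, and the ε, weight decay and φ clipping bounds below are illustrative.

```python
import torch


def lamb_layer_update(param, grad, m, v, step, lr,
                      beta1=0.9, beta2=0.999, eps=1e-6, weight_decay=0.01,
                      phi=lambda z: z.clamp(min=1e-3, max=10.0)):
    """One LAMB step for a single layer (Algorithm 2), simplified.
    `step` is 1-indexed so the bias correction is well defined."""
    m.mul_(beta1).add_(grad, alpha=1 - beta1)                # first moment
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)      # second moment
    m_hat = m / (1 - beta1 ** step)                          # bias correction
    v_hat = v / (1 - beta2 ** step)
    r = m_hat / (v_hat.sqrt() + eps) + weight_decay * param  # Adam-style direction
    trust_ratio = phi(param.norm()) / (r.norm() + 1e-12)     # layerwise scaling
    param.add_(r, alpha=-(lr * trust_ratio.item()))
    return param, m, v
```

The elementwise Adam-style normalization and the per-layer trust ratio correspond to the two forms of adaptivity described at the start of Section 3.2.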
A TPUv3 Pod has 1024 chips and can provide more than 100 petaflops performance for mixed precision computing. To make sure we are comparing with solid baselines, we use grid search to tune the hyper-parameters for ADAM, ADAGRAD, ADAMW (ADAM with weight decay), and LARS. We also tune weight decay for ADAMW. All the hyperparameter tuning settings are reported in the Appendix. Due to space constraints, several experimental details are relegated to the Appendix. 4.1 BERT TRAINING We first discuss empirical results for speeding up BERT training. For this experiment, we use the same dataset as Devlin et al. (2018), which is a concatenation of Wikipedia and BooksCorpus with 2.5B and 800M words respectively. We specifically focus on the SQuAD task2 in this paper. The F1 score on SQuAD-v1 is used as the accuracy metric in our experiments. All our comparisons are with respect to the baseline BERT model by Devlin et al. (2018). To train BERT, Devlin et al. (2018) first train the model for 900k iterations using a sequence length of 128 and then switch to a sequence length of 512 for the last 100k iterations. This results in a training time of around 3 days on 16 TPUv3 chips. The baseline BERT model3 achieves a F1 score of 90.395. To ensure a fair comparison, we follow the same SQuAD fine-tune procedure of Devlin et al. (2018) without modifying any configuration (including number of epochs and hyperparameters). As noted earlier, we could get even better results by changing the fine-tune configuration. For instance, by just slightly changing the learning rate in the fine-tune stage, we can obtain a higher F1 score of 91.688 for the batch size of 16K using LAMB. We report a F1 score of 91.345 in Table 1, which is the score obtained for the untuned version. Below we describe two different training choices for training BERT and discuss the corresponding speedups. For the first choice, we maintain the same training procedure as the baseline except for changing the training optimizer to LAMB. We run with the same number of epochs as the baseline but with batch size scaled from 512 to 32K. The choice of 32K batch size (with sequence length 512) is mainly due to memory limits of TPU Pod. Our results are shown in Table 1. By using the LAMB optimizer, we are able to achieve a F1 score of 91.460 in 15625 iterations for a batch size of 32768 (14063 iterations for sequence length 128 and 1562 iterations for sequence length 512). With 32K batch size, we reduce BERT training time from 3 days to around 100 minutes. We achieved 49.1 times speedup by 64 times computational resources (76.7% efficiency). We consider the speedup is great because we use the synchronous data-parallelism. There is a communication overhead coming from transferring of the gradients over the interconnect. For RESNET-50, researchers are able to achieve 90% scaling efficiency because RESNET-50 has much fewer parameters (# parameters is equal to #gradients) than BERT (25 million versus 300 million). To obtain further improvements, we use the Mixed-Batch Training procedure with LAMB. Recall that BERT training involves two stages: the first 9/10 of the total epochs use a sequence length of 128, while the last 1/10 of the total epochs use a sequence length of 512. For the second stage training, which involves a longer sequence length, due to memory limits, a maximum batch size of only 32768 can be used on a TPUv3 Pod. However, we can potentially use a larger batch size for the first stage because of a shorter sequence length. 
In particular, the batch size can be increased to 131072 for the first stage. However, we did not observe any speedup by increasing the batch size from 65536 to 131072 for the first stage, thus, we restrict the batch size to 65536 for this stage. By using this strategy, we are able to make full utilization of the hardware resources throughout the training 2https://rajpurkar.github.io/SQuAD-explorer/ 3Pre-trained BERT model can be downloaded from https://github.com/google-research/bert 6 Published as a conference paper at ICLR 2020 Table 1: We use the F1 score on SQuAD-v1 as the accuracy metric. The baseline F1 score is the score obtained by the pre-trained model (BERT-Large) provided on BERT’s public repository (as of February 1st, 2019). We use TPUv3s in our experiments. We use the same setting as the baseline: the first 9/10 of the total epochs used a sequence length of 128 and the last 1/10 of the total epochs used a sequence length of 512. All the experiments run the same number of epochs. Dev set means the test data. It is worth noting that we can achieve better results by manually tuning the hyperparameters. The data in this table is collected from the untuned version. batch size Solver steps F1 score on dev set TPUs Time Baseline LAMB LAMB LAMB LAMB LAMB LAMB LAMB 512 512 1k 2k 4k 8k 16k 32k 1000k 1000k 500k 250k 125k 62500 31250 15625 90.395 91.752 91.761 91.946 91.137 91.263 91.345 91.475 16 16 32 64 128 256 512 1024 81.4h 82.8h 43.2h 21.4h 693.6m 390.5m 200.0m 101.2m LAMB 64k/32k 8599 90.584 1024 76.19m procedure. Increasing the batch size is able to warm-up and stabilize the optimization process (Smith et al., 2017), but decreasing the batch size brings chaos to the optimization process and can cause divergence. In our experiments, we found a technique that is useful to stabilize the second stage optimization. Because we switched to a different optimization problem, it is necessary to re-warm-up the optimization. Instead of decaying the learning rate at the second stage, we ramp up the learning rate from zero again in the second stage (re-warm-up). As with the first stage, we decay the learning rate after the re-warm-up phase. With this method, we only need 8599 iterations and finish BERT training in 76 minutes (100.2% efficiency). Comparison with ADAMW and LARS. To ensure that our approach is compared to a solid baseline for the BERT training, we tried three different strategies for tuning ADAMW: (1) ADAMW with default hyperparameters (see Devlin et al. (2018)) (2) ADAMW with the same hyperparameters as LAMB, and (3) ADAMW with tuned hyperparameters. ADAMW stops scaling at the batch size of 16K because it is not able to achieve the target F1 score (88.1 vs 90.4). The tuning information of ADAMW is shown in the Appendix. For 64K/32K mixed-batch training, even after extensive tuning of the hyperparameters, we fail to get any reasonable result with ADAMW optimizer. We conclude that ADAMW does not work well in large-batch BERT training or is at least hard to tune. We also observe that LAMB performs better than LARS for all batch sizes (see Table 2). Table 2: LAMB achieves a higher performance (F1 score) than LARS for all the batch sizes. The baseline achieves a F1 score of 90.390. Thus, LARS stops scaling at the batch size of 16K. 2K Batch Size 512 1K 4K 8K 16K 32K LARS LAMB 90.717 91.752 90.369 91.761 90.748 91.946 90.537 91.137 90.548 91.263 89.589 91.345 diverge 91.475 IMAGENET TRAINING WITH RESNET-50. 
ImageNet training with ResNet-50 is an industry standard metric that is being used in MLPerf4. The baseline can get 76.3% top-1 accuracy in 90 epochs (Goyal et al., 2017). All the successful implementations are based on momentum SGD (He et al., 2016; Goyal et al., 2017) or LARS optimizer (Ying et al., 2018; Jia et al., 2018; Mikami et al., 2018; You et al., 2018; Yamazaki et al., 2019). Before our study, we did not find any paper reporting a state-of-the-art accuracy achieved by ADAM, 4https://mlperf.org/ 7 Published as a conference paper at ICLR 2020 ADAGRAD, or ADAMW optimizer. In our experiments, even with comprehensive hyper-parameter tuning, ADAGRAD/ADAM/ADAMW (with batch size 16K) only achieves 55.38%/66.04%/67.27% top-1 accuracy. After adding learning rate scheme of Goyal et al. (2017), the top-1 accuracy of ADAGRAD/ADAM/ADAMW was improved to 72.0%/73.48%/73.07%. However, they are still much lower than 76.3%. The details of the tuning information are in the Appendix. Table 3 shows that LAMB can achieve the target accuracy. Beyond a batch size of 8K, LAMB’s accuracy is higher than the momentum. LAMB’s accuracy is also slightly better than LARS. At a batch size of 32K, LAMB achieves 76.4% top-1 accuracy while LARS achieves 76.3%. At a batch size of 2K, LAMB is able to achieve 77.11% top-1 accuracy while LARS achieves 76.6%. Table 3: Top-1 validation accuracy of ImageNet/RESNET-50 training at the batch size of 16K (90 epochs). The performance of momentum was reported by (Goyal et al., 2017). + means adding the learning rate scheme of Goyal et al. (2017) to the optimizer: (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). All the adaptive solvers were comprehensively tuned. The tuning information was in the Appendix. adagrad/adagrad+ optimizer adam/adam+ adamw/adamw+ momentum lamb Accuracy 0.5538/0.7201 0.6604/0.7348 0.6727/0.7307 0.7520 0.7666 4.3 HYPERPARAMETERS FOR SCALING THE BATCH SIZE For BERT and ImageNet training, we did not tune the hyperparameters of LAMB optimizer when increasing the batch size. We use the square root LR scaling rule and linear-epoch warmup scheduling to automatically adjust learning rate. The details can be found in Tables 4 and 5 Table 4: Untuned LAMB for BERT training across different batch sizes (fixed #epochs). We use square root LR scaling and linear-epoch warmup. For example, batch size 32K needs to finish 15625 iterations. It uses 0.2×15625 = 3125 iterations for learning rate warmup. BERT’s baseline achieved a F1 score of 90.395. We can achieve an even higher F1 score if we manually tune the hyperparameters. Batch Size Learning Rate Warmup Ratio F1 score Exact Match 512 5 23.0×103 1 320 91.752 85.090 1K 5 22.5×103 1 160 91.761 85.260 2K 5 22.0×103 1 80 91.946 85.355 4K 5 21.5×103 1 40 91.137 84.172 8K 5 21.0×103 1 20 91.263 84.901 16K 5 20.5×103 1 10 91.345 84.816 32K 5 20.0×103 1 5 91.475 84.939 Table 5: Untuned LAMB for ImageNet training with RESNET-50 for different batch sizes (90 epochs). We use square root LR scaling and linear-epoch warmup. The baseline Goyal et al. (2017) gets 76.3% top-1 accuracy in 90 epochs. Stanford DAWN Bench (Coleman et al., 2017) baseline achieves 93% top-5 accuracy. LAMB achieves both of them. LAMB can achieve an even higher accuracy if we manually tune the hyperparameters. 
Batch Size Learning Rate Warmup Epochs Top-5 Accuracy Top-1 Accuracy 512 4 23.0×100 0.3125 0.9335 0.7696 1K 4 22.5×100 0.625 0.9349 0.7706 2K 4 22.0×100 1.25 0.9353 0.7711 4K 4 21.5×100 2.5 0.9332 0.7692 8K 4 21.0×100 5 0.9331 0.7689 16K 4 20.5×100 10 0.9322 0.7666 32K 4 20.0×100 20 0.9308 0.7642 # 5 CONCLUSION Large batch techniques are critical to speeding up deep neural network training. In this paper, we propose the LAMB optimizer, which supports adaptive elementwise updating and layerwise learning 8 Published as a conference paper at ICLR 2020 rates. Furthermore, LAMB is a general purpose optimizer that works for both small and large batches. We also provided theoretical analysis for the LAMB optimizer, highlighting the cases where it performs better than standard SGD. LAMB achieves a better performance than existing optimizers for a wide range of applications. By using LAMB, we are able to scale the batch size of BERT pre-training to 64K without losing accuracy, thereby, reducing the BERT training time from 3 days to around 76 minutes. LAMB is also the first large batch adaptive solver that can achieve state-of-the-art accuracy on ImageNet training with RESNET-50. 6 ACKNOWLEDGEMENT We want to thank the comments from George Dahl and Jeff Dean. We want to thank Michael Banfield, Dehao Chen, Youlong Cheng, Sameer Kumar, and Zak Stone for TPU Pod support. # REFERENCES Takuya Akiba, Shuji Suzuki, and Keisuke Fukuda. Extremely large minibatch sgd: Training resnet-50 on imagenet in 15 minutes. arXiv preprint arXiv:1711.04325, 2017. Yoshua Bengio. Practical recommendations for gradient-based training of deep architectures. In Neural networks: Tricks of the trade, pp. 437–478. Springer, 2012. Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Anima Anandkumar. signsgd: compressed optimisation for non-convex problems. CoRR, abs/1802.04434, 2018. Valeriu Codreanu, Damian Podareanu, and Vikram Saletore. Scale out for large minibatch sgd: Residual network training on imagenet-1k with improved accuracy and reduced time to train. arXiv preprint arXiv:1711.04291, 2017. Cody Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. Dawnbench: An end-to-end deep learning bench- mark and competition. Training, 100(101):102, 2017. Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior, Paul Tucker, Ke Yang, Quoc V Le, et al. Large scale distributed deep networks. In Advances in neural information processing systems, pp. 1223–1231, 2012. Aditya Devarakonda, Maxim Naumov, and Michael Garland. Adabatch: Adaptive batch sizes for training deep neural networks. arXiv preprint arXiv:1712.02029, 2017. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Timothy Dozat. Incorporating nesterov momentum into adam. 2016. Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013a. doi: 10.1137/ 120880811. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013b. Saeed Ghadimi, Guanghui Lan, and Hongchao Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1-2):267–305, 2014. 
Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Elad Hoffer, Itay Hubara, and Daniel Soudry. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. arXiv preprint arXiv:1705.08741, 2017. 9 Published as a conference paper at ICLR 2020 Forrest N Iandola, Matthew W Moskewicz, Khalid Ashraf, and Kurt Keutzer. Firecaffe: near-linear acceleration of deep neural network training on compute clusters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2592–2600, 2016. Xianyan Jia, Shutao Song, Wei He, Yangzihao Wang, Haidong Rong, Feihu Zhou, Liqiang Xie, Zhenyu Guo, Yuanzhou Yang, Liwei Yu, et al. Highly scalable deep learning training system with mixed-precision: Training imagenet in four minutes. arXiv preprint arXiv:1807.11205, 2018. Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836, 2016. Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014. Mu Li. Scaling Distributed Machine Learning with System and Algorithm Co-design. PhD thesis, Intel, 2017. James Martens and Roger Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pp. 2408–2417, 2015. Hiroaki Mikami, Hisahiro Suganuma, Yoshiki Tanaka, Yuichi Kageyama, et al. Imagenet/resnet-50 training in 224 seconds. arXiv preprint arXiv:1811.05233, 2018. Yurii E Nesterov. A method for solving the convex programming problem with convergence rate o (1/kˆ 2). In Dokl. akad. nauk Sssr, volume 269, pp. 543–547, 1983. Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Rio Yokota, and Satoshi Matsuoka. Second-order optimization method for large mini-batch: Training resnet-50 on imagenet in 35 epochs. arXiv preprint arXiv:1811.12019, 2018. Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in neural information processing systems, pp. 693–701, 2011. Christopher J Shallue, Jaehoon Lee, Joe Antognini, Jascha Sohl-Dickstein, Roy Frostig, and George E Dahl. Measuring the effects of data parallelism on neural network training. arXiv preprint arXiv:1811.03600, 2018. Samuel L Smith, Pieter-Jan Kindermans, and Quoc V Le. Don’t decay the learning rate, increase the batch size. arXiv preprint arXiv:1711.00489, 2017. Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139–1147, 2013. Masafumi Yamazaki, Akihiko Kasagi, Akihiro Tabuchi, Takumi Honda, Masahiro Miwa, Naoto Fukumoto, Tsuguchika Tabaru, Atsushi Ike, and Kohta Nakashima. Yet another accelerated sgd: Resnet-50 training on imagenet in 74.7 seconds. arXiv preprint arXiv:1903.12650, 2019. Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng. Image classification at supercomputer scale. 
arXiv preprint arXiv:1811.06992, 2018. Yang You, Igor Gitman, and Boris Ginsburg. Scaling sgd batch size to 32k for imagenet training. arXiv preprint arXiv:1708.03888, 2017. Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. Imagenet training in minutes. In Proceedings of the 47th International Conference on Parallel Processing, pp. 1. ACM, 2018. Yang You, Jonathan Hseu, Chris Ying, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large-batch training for lstm and beyond. arXiv preprint arXiv:1901.08256, 2019. 10 Published as a conference paper at ICLR 2020 # APPENDIX # A PROOF OF THEOREM 2 Proof. We analyze the convergence of LARS for general minibatch size here. Recall that the update of LARS is the following @ _,@ gh Try = Uy — mo(\lk >) ly Oy for all i ∈ [h]. For simplicity of notation, we reason the Since the function f is L-smooth, we have the following: P(tesr) S flee) + (Wife) eth = 21) + y Fal), — af IP h da (i) h ; ¢ PaO) = Fle) —m YOY) (lla II) x (se mn Pye Enel fa) i=1 j=1 i=1 hod (i) fs . n)- sje oy (2 Watlools , (WaFlod]y)) nba fe a ' x (tsteon (i IviFtee)l Rca) a i (3) («,) — (2 |) («)|| - Ii5 _ Wif(@)]; ()— male «sled —m Se (teste x (HE — eaneale)) (4) (4) The first inequality follows from the lipschitz continuous nature of the gradient. Let ∆(i) ∇if (xt). Then the above inequality can be rewritten in the following manner: # t = g(i) h F (tens) < Flee) — me D> ole PIVif ol i=l ho di ; . (A + [Vi f(a)];) [Vif (e)]; 22 _ ald) id (@4)|5 bd J i t)\j ; mY, "aM el) x (iw. Mi x ( JA + Vif eal eh) 5 leh = F(0) —m Yr (kat? IDIVet (ee)Ih _ 1) (A? + Vif (ae), Vif ae) — npo2 nal II) x «( ja” + vif@ll iviseo +A lelh f(a) -n ook (let? DlIVif(we)Il s(ix00 ae Sea) _ MeO yp +n Sool 0 ( a + Wotton ea = F(0) —m Yr (lat? DlVi f(a) || + 4 ea if (eo) IAP + Vif (@e)|l = JAY? + Vif @o|? + (AP AP + Vif ee) inde {9px (Hela + Vif (es)|] = Al? + Vite LAP + ViF(ae)) nS ollie 1A? cVifteol (5) 11 t α2 η2 u 2 # yp ) . Published as a conference paper at ICLR 2020 Using Cauchy-Schwarz inequality in the above inequality, we have: h S(@esa) < Flee) = ne > 6(llart? Vif (we) i=l ‘ (i) : (i) Wy) 4. MeO + ne Y> o(llet II) x ([IViF(@)ll — IAP + Vif (ee) + IA! ye ell i=l h h S fer) — me D> d(llart? UVF (ee) I| + 2m Y> (let? 1) x JAP 4% Oe LEI, i=l i=l Taking expectation, we obtain the following: h h Elf(eisa)] S fee) — m D> H(lle! Vif eI] + 2m D> o(fe |) x BAL? |] + Be “Lh i=1 i=l h < (ee) — mou > |ViF (ae) || + 2moul Ze ©) = vo” Summing the above inequality for t = 1 to T and using telescoping sum, we have the following inequality: ie aullolla 7027 Elf(wrs1)] < f(e1) — new 95 SC El||Vif (x2)|l] + nT Vet oh t=1 i=1 Rearranging the terms of the above inequality, and dividing by ηT αl, we have: Ti x1) — x On ||o a2 FD IVF oll < f(e1) = Elf (er+1)] 2 Wlorlla , on Za t=1 i=1 Tay Vbay < flea) = fle"), Rolo nap ~ Tyna ab 2a, # B PROOF OF THEOREM 3 Proof. We analyze the convergence of LAMB for general minibatch size here. Recall that the update of LAMB is the following a Ire of), = at) —mo((2t|I) for all i ∈ [h]. For simplicity of notation, we reason the Since the function f is L-smooth, we have the following: h , . i Li ; Paves) S Flee) + (Vif (we). aths — at) + 0 Flies — 20 |P i=1 hod “ 7) * Eye2n? = flee) —m YO Soller Il) (iw.sen, x =) +S rT i=1 j= i=l Ty 12 Published as a conference paper at ICLR 2020 The above inequality simply follows from the lipschitz continuous nature of the gradient. We bound term T1 in the following manner: ds (i) = <n OYe (\l2t? 
I) x a x | ae >>> ae x [Vif (aa)]) x 9f"}) i=1 j=1 (3) el (\let II) x [Vif (wa)]y + 25) Moia(vistenln # sit i=1j t (8) # T1 ≤ −ηt (8) This follows from the fact that ||r; () I< \/& 5 and \/% < G. If By = 0, then T; can be bounded as follows: hd T<-m Oy) x (o(lla!? WD x [IVs (re)]il) i=1 j=1 h di rf =>>>31C (lle! II) x [Vif (wa)]y FG its) Aeon) # soll) i=1 j=1 # T1 ≤ −ηt The rest of the proof for β2 = 0 is similar to argument for the case β2 > 0, which is shown below. Taking expectation, we have the following: BInl< 023° y Gage [alll x (Ist eos x 9f9)] ao - nde Hee x (ioe te a) Lsign(Vef (e0)]j) # sia) < >>> aeee [( (let I) x Iisa] * 9f9)] =e ate < >>> THe [olla ID x (Wwisleol, x 9)] =e -n eS oul Vif (e)])P(sign(ViS (ea)Jy) # sign(g?)) a E[T1] ≤ −ηt (9) Using the bound on the probability that the signs differ, we get: 32) i, BIT < —mavy “CS Iw peal? I a. i=1 j= Substituting the above bound on T1 in equation 7, we have the following bound: hi o 2 a2 ||L BUs(eres)] < fer) = mov) “Ev (aI? + male + elEh oy 13 Published as a conference paper at ICLR 2020 # Algorithm 4 NN-LAMB # Algorithm 3 N-LAMB Algorithm 3 N-LAMB Input: 7, € R?, learning rate {ne}ien, parame- ters 0 < (1,62 < 1, scaling function ¢, « > 0, parameters 0 < {8{}#.4 <1 Set mo = 0, vo = 0 for t = 1 to T do Draw b samples S; from P. Compute g: = aT Verese VE(xt, 81). me = Bimi-1 + (1 — 81) ge mn Byte (Bi) ge Tone? tar He gt 1-021} 1-H} 8} Input: 71 € R¢, learning rate {neki parameters 0 < fi, G2 < 1, scaling function ¢, € > 0, parame- ters 0 < {BiEy <1 Set mo = 0, vo = 0 for t = 1 to T do Draw b samples S; from P. Compute g; = aT Dajes, VE(at, 81)- me = Bimi-1 + (1 — 81) ge ~ _Bithme (1—8t ge ~ onl a + 1 3 ve = Bovr-1 + (1 — Bo) 9? 2 Ut = Bovr-1 + (1 — Ba) a? ‘eH, ae 5 — Bove oa Pott 4 OA a U= TBE i—mittas | 1-1, 83 ation, — atio 7, — Compute ratio r; = Vine Compute ratio r; = Vine (i) (i) @ _ elles ID (a) @ _ elles ID (a) x = 2° — MD (Tt Att x = 2° — MD (Tt Att t+. t Ir aes mk t ) tl t Ir aes mk t ) end for end for Summing the above inequality for t = 1 to T and using telescoping sum, we have the following inequality: Elf(xr+1)] < f(v1) - mor) RO Ba) SCEIVE (eI) +nTau wa +75 t lel, | Part |Ll|1- Rearranging the terms of the above inequality, and dividing by ηT αl, we have: —B T Var TL Elven s 7 (1) — E[f(xr41)] _ Aull lla Th Tho “avo | 2 Thay Ql Vb mi) = fle"), aullélhy 08) py, 2a , # C COMPARISON OF CONVERGENCE RATES OF LARS AND SGD Inspired by the comparison used by (Bernstein et al., 2018) for comparing SIGN SGD with SGD, we define the following quantities: (x iv. ral) VTAedA VAI? , vod FleIIP i=1 h h vd? ||L 2 [nip < Pe 2 _ Yod|lo||? joug — Veale Then LARS convergence rate can be written in the following manner: (Flos) = fle") Loe be Ill? v2 (E[|VF(x.)|))? <0 ( T eT “): Ifu,< vy and u, < w? then Lars (i.e., gradient is more denser than curvature or stochasticity), we gain over SGD. Otherwise, SGD’s upper bound on convergence rate is better. 14 Published as a conference paper at ICLR 2020 ImageNet/ResNet-50 (Batch Size=32K, 90 epochs) “NI ~ ~N op) N ul NON Nw Top-1 Validation Accuracy ~S Bb ee f 1 * Momentum Nadam “NI Figure 1: This figure shows N-LAMB and NN-LAMB can achieve a comparable accuracy compared to LAMB optimizer. Their performances are much better than momentum solver. The result of momentum optimizer was reported by Goyal et al. (2017). 
For Nadam, we use the learning rate recipe of Goyal et al. (2017): (1) 5-epoch warmup to stabilize the initial stage; and (2) multiply the learning rate by 0.1 at the 30th, 60th, and 80th epochs. The target accuracy is around 0.763 (Goyal et al., 2017). We also tuned the learning rate of Nadam in {1e-4, 2e-4, ..., 9e-4, 1e-3, 2e-3, ..., 9e-3, 1e-2}.

# D N-LAMB: NESTEROV MOMENTUM FOR LAMB

Sutskever et al. (2013) report that Nesterov's accelerated gradient (NAG) proposed by Nesterov (1983) is conceptually and empirically better than the regular momentum method for convex, non-stochastic objectives. Dozat (2016) incorporated Nesterov's momentum into the Adam optimizer and proposed the Nadam optimizer. Specifically, only the first moment of Adam was modified and the second moment of Adam was unchanged. The results on several applications (Word2Vec, Image Recognition, and LSTM Language Model) showed that the Nadam optimizer improves the speed of convergence and the quality of the learned models. We also tried using Nesterov's momentum to replace the regular momentum of the LAMB optimizer's first moment. In this way, we obtained a new algorithm named N-LAMB (Nesterov LAMB). The complete algorithm is in Algorithm 3. We can also use Nesterov's momentum to replace the regular momentum of the LAMB optimizer's second moment. We refer to this algorithm as NN-LAMB (Nesterov's momentum for both the first moment and the second moment). The details of NN-LAMB are shown in Algorithm 4.

Dozat (2016) suggested that the best performance of Nadam was achieved by β1 = 0.975, β2 = 0.999, and ε = 1e-8. We used the same settings for N-LAMB and NN-LAMB. We scaled the batch size to 32K for ImageNet training with ResNet-50. Our experimental results show that N-LAMB and NN-LAMB can achieve an accuracy comparable to the LAMB optimizer. Their performance is much better than the momentum solver (Figure 1).

# E LAMB WITH LEARNING RATE CORRECTION

There are two operations at each iteration in the original Adam optimizer (let us call this adam-correction):

m̂_t = m_t / (1 − β_1^t),  v̂_t = v_t / (1 − β_2^t)

It has an impact on the learning rate by η_t := η_t · √(1 − β_2^t) / (1 − β_1^t). According to our experimental results, adam-correction essentially has the same effect as learning rate warmup (see Figure 2). The warmup function is often already implemented in modern deep learning systems. Thus, we can remove adam-correction from the LAMB optimizer. We did not observe any drop in the test or validation accuracy for BERT and ImageNet training.

Figure 2: The figure compares adam-correction (β1 = 0.9, β2 = 0.999) with gradual learning rate warmup and shows that adam-correction has the same effect as learning rate warmup. We removed adam-correction from the LAMB optimizer. We did not observe any drop in the test or validation accuracy for BERT and ImageNet training.

Figure 3: Validation accuracy of ImageNet/ResNet-50 trained by LAMB (90 epochs, batch size = 32K) with different norms: L1 norm 76.5%/93.2%, L2 norm 76.5%/93.1%, and L-infinity norm 76.4%/93.2% (top-1/top-5). We did not observe a significant difference in the validation accuracy. We use the L2 norm as the default.

# F LAMB WITH DIFFERENT NORMS

We need to compute a matrix/tensor norm for each layer when performing the parameter update in the LAMB optimizer. We tried different norms in the LAMB optimizer.
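To make the role of the norm concrete, the following is a minimal sketch (assuming PyTorch tensors; the function name and the identity choice of the scaling function φ are illustrative, not the exact implementation used here) of how the per-layer trust ratio depends on the chosen norm:

```python
import torch

def trust_ratio(weight: torch.Tensor, update: torch.Tensor, norm_type: str = "l2") -> float:
    """Per-layer trust ratio phi(||w||) / ||update|| under different norm choices.

    `update` is the Adam-style direction plus weight decay (r_t + lambda * w_t)
    computed elsewhere; phi is taken to be the identity. Illustrative sketch only.
    """
    if norm_type == "l1":
        w_norm, u_norm = weight.abs().sum(), update.abs().sum()
    elif norm_type == "linf":
        w_norm, u_norm = weight.abs().max(), update.abs().max()
    else:  # L2 norm, the default choice in these experiments
        w_norm, u_norm = weight.norm(p=2), update.norm(p=2)
    if w_norm > 0 and u_norm > 0:
        return (w_norm / u_norm).item()
    return 1.0  # fall back to no rescaling for all-zero tensors
```

For example, `trust_ratio(w, r + 0.01 * w, "linf")` would rescale the layer's step by the ratio of layerwise L-infinity norms instead of L2 norms.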
However, we did not observe a significant difference in the validation accuracy of ImageNet training with ResNet-50. In our experiments, the difference in validation accuracy is less than 0.1 percent (Figure 3). We use the L2 norm as the default.

# G REGULAR BATCH SIZES FOR SMALL DATASETS: MNIST AND CIFAR-10

According to DAWNBench, DavidNet (a custom 9-layer residual ConvNet) is the fastest model for the CIFAR-10 dataset (as of April 1st, 2019)5. The baseline uses the momentum SGD optimizer. Table 6 and Figure 4 show the test accuracy of CIFAR-10 training with DavidNet. The PyTorch implementation (momentum SGD optimizer) on GPUs was reported on Stanford DAWNBench's website, which achieves 94.06% in 24 epochs. The TensorFlow implementation (momentum SGD optimizer) on TPU achieves a 93.72% accuracy in 24 epochs6. We use the TensorFlow implementation on TPUs. The LAMB optimizer is able to achieve 94.08% test accuracy in 24 epochs, which is better than the other adaptive optimizers and momentum SGD. Even on smaller tasks like MNIST training with LeNet, LAMB is able to achieve a better accuracy than the existing solvers (Table 7).

5 https://dawn.cs.stanford.edu/benchmark/CIFAR10/train.html
6 https://github.com/fenwickslab/dl_tutorials/blob/master/tutorial3_cifar10_davidnet_fix.ipynb

Figure 4: CIFAR-10 with DavidNet (under 1 minute on 1 TPU, 24 epochs): LAMB is better than the existing solvers (batch size = 512). We make sure all the solvers are carefully tuned. The learning rate tuning space of Adam, AdamW, Adagrad and LAMB is {0.0001, 0.0002, 0.0004, 0.0006, 0.0008, 0.001, 0.002, 0.004, 0.006, 0.008, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1, 0.2, 0.4, 0.6, 0.8, 1, 2, 4, 6, 8, 10, 15, 20, 25, 30, 35, 40, 45, 50}. The momentum optimizer was tuned by the baseline implementer. The weight decay term of AdamW was tuned over {0.0001, 0.001, 0.01, 0.1, 1.0}.

Table 6: CIFAR-10 training with DavidNet (batch size = 512). All of the solvers run 24 epochs and finish training under one minute on one cloud TPU. We make sure all the solvers are carefully tuned. The learning rate tuning space of Adam, AdamW, Adagrad and LAMB is {0.0001, 0.0002, 0.0004, 0.0006, 0.0008, 0.001, 0.002, 0.004, 0.006, 0.008, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1, 0.2, 0.4, 0.6, 0.8, 1, 2, 4, 6, 8, 10, 15, 20, 25, 30, 35, 40, 45, 50}. The momentum optimizer was tuned by the baseline implementer. The weight decay term of AdamW was tuned over {0.0001, 0.001, 0.01, 0.1, 1.0}.

Optimizer       ADAGRAD   ADAM     ADAMW    momentum   LAMB
Test Accuracy   0.9074    0.9225   0.9271   0.9372     0.9408

# H IMPLEMENTATION DETAILS AND ADDITIONAL RESULTS

There are several hyper-parameters in the LAMB optimizer. Although users do not need to tune them, we explain them here to give a better understanding. β1 is used for decaying the running average of the gradient. β2 is used for decaying the running average of the square of the gradient. The default settings for the other parameters are: weight decay rate λ = 0.01, β1 = 0.9, β2 = 0.999, ε = 1e-6. We did not tune β1 and β2. However, our experiments show that tuning them may yield a higher accuracy.

Based on our experience, the learning rate is the most important hyper-parameter that affects the learning efficiency and final accuracy. Bengio (2012) suggests that it is often the single most important hyper-parameter and that it always should be tuned.
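To make the default settings above concrete, the following is a minimal NumPy sketch of a single LAMB-style update for one layer; it illustrates the update rule rather than the implementation used in the experiments, and the variable names are only for this example.

```python
import numpy as np

def lamb_step(w, g, m, v, t, lr, beta1=0.9, beta2=0.999, eps=1e-6, wd=0.01):
    """One simplified LAMB-style update for a single layer.

    w, g, m, v are NumPy arrays for the layer's weights, gradient and moment
    estimates; t is the 1-based step count; lr is the learning rate.
    Returns the updated (w, m, v).
    """
    m = beta1 * m + (1.0 - beta1) * g          # first moment
    v = beta2 * v + (1.0 - beta2) * g * g      # second moment
    m_hat = m / (1.0 - beta1 ** t)             # adam-correction (see Appendix E)
    v_hat = v / (1.0 - beta2 ** t)
    r = m_hat / (np.sqrt(v_hat) + eps)         # elementwise Adam-style direction
    update = r + wd * w                        # decoupled weight decay
    w_norm, u_norm = np.linalg.norm(w), np.linalg.norm(update)
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    w = w - lr * trust * update                # layerwise rescaled step
    return w, m, v
```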
Thus, to make sure we have a solid baseline, we carefully tune the learning rate of ADAM, ADAMW, ADAGRAD, and momentum SGD In our experiments, we found that the validation loss is not reliable for large-batch training. A lower validation loss does not necessarily lead to a higher validation accuracy (Figure 5). Thus, we use the test/val accuracy or F1 score on dev set to evaluate the optimizers. # H.0.1 BERT Table 8 shows some of the tuning information from BERT training with ADAMW optimizer. ADAMW stops scaling at the batch size of 16K. The target F1 score is 90.5. LAMB achieves a F1 score of 91.345. The table shows the tuning information of ADAMW. In Table 8, we report the best F1 score we observed from our experiments. The loss curves of BERT training by LAMB for different batch sizes are shown in Figure 6. We observe that the loss curves are almost identical to each other, which means our optimizer scales well with the batch size. The training loss curve of BERT mixed-batch pre-training with LAMB is shown in Figure 7. This figure shows that LAMB can make the training converge smoothly at the batch size of 64K. Figure 8 shows that we can achieve 76.8% scaling efficiency by scaling the batch size (49.1 times speedup by 64 times computational resources) and 101.8% scaling efficiency with mixed-batch (65.2 times speedup by 64 times computational resources) 17 Published as a conference paper at ICLR 2020 Table 7: Test Accuracy by MNIST training with LeNet (30 epochs for Batch Size = 1024). The tuning space of learning rate for all the optimizers is {0.0001, 0.001, 0.01, 0.1}. We use the same learning rate warmup and decay schedule for all of them. Optimizer Momentum Addgrad ADAM ADAMW LAMB Average accuracy over 5 runs 0.9933 0.9928 0.9936 0.9941 0.9945 We can not trust val loss (ImageNet/ResNet-50, Batch=8K) 35} \ ee Validation Accuracy = 76.4% —— Validation Accuracy = 73.9% wv { W 3.0 fo} pa] Cc PS 2.5 oO 3 © > 20 1.5 10 20 30 40 60 70 80 90 50 Epoch Figure 5: Our experiments show that even the validation loss is not reliable in the large-scale training. A lower validation loss may lead to a worse accuracy. Thus, we use the test/val accuracy or F1 score on dev set to evaluate the optimizers. # H.0.2 IMAGENET Figures 9 - 14 show the LAMB trust ratio at different iterations for ImageNet training with ResNet-50. From these figures we can see that these ratios are very different from each other for different layers. LAMB uses the trust ratio to help the slow learners to train faster. H.1 BASELINE TUNING DETAILS FOR IMAGENET TRAINING WITH RESNET-50 If you are not interested in the baseline tuning details, please skip this section. Goyal et al. (2017) suggested a proper learning rate warmup and decay scheme may help improve the ImageNet classification accuracy. We included these techniques in Adam/AdamW/AdaGrad tuning. Specifically, we use the learning rate recipe of Goyal et al. (2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 76.3% (Goyal et al., 2017). There techniques help to im- prove the accuracy of Adam/AdamW/AdaGrad to around 73%. However, even with these techniques, Adam/AdamW/AdaGrad stil can not achieve the target validation accuracy. To make sure our baseline is solid, we carefully tuned the hyper-parameters. Table 9 shows the tuning information of standard Adagrad. Table 10 shows the tuning information of adding the learning rate scheme of Goyal et al. 
(2017) to standard Adagrad. Table 11 shows the tuning information of standard Adam. Table shows the tuning information of adding the learning rate scheme of Goyal et al. (2017) to standard Adam. It is tricky to tune the AdamW optimizer since both the L2 regularization and weight decay have the effect on the performance. Thus we have four tuning sets. The first tuning set is based on AdamW with default L2 regularization. We tune the learning rate and weight decay. The tuning information is in Figures 13, 14, 15, and 16. The second tuning set is based on AdamW with disabled L2 regularization. We tune the learning rate and weight decay. The tuning information is in Figures 17, 18, 19, and 20. 18 Published as a conference paper at ICLR 2020 Table 8: ADAMW stops scaling at the batch size of 16K. The target F1 score is 90.5. LAMB achieves a F1 score of 91.345. The table shows the tuning information of ADAMW. In this table, we report the best F1 score we observed from our experiments. LR Solver batch size warmup steps last step infomation F1 score on dev set ADAMW ADAMW ADAMW ADAMW ADAMW ADAMW ADAMW ADAMW ADAMW 16K 16K 16K 16K 16K 16K 16K 16K 16K 0.05×31250 0.05×31250 0.05×31250 0.10×31250 0.10×31250 0.10×31250 0.20×31250 0.20×31250 0.20×31250 0.0001 0.0002 0.0003 0.0001 0.0002 0.0003 0.0001 0.0002 0.0003 loss=8.04471, step=28126 loss=7.89673, step=28126 loss=8.35102, step=28126 loss=2.01419, step=31250 loss=1.04689, step=31250 loss=8.05845, step=20000 loss=1.53706, step=31250 loss=1.15500, step=31250 loss=1.48798, step=31250 diverged diverged diverged 86.034 88.540 diverged 85.231 88.110 85.653 LAMB Optimizer, Batch Size = 32K, final train loss = 1.342, dev set F1 score = 91.475 train loss LAMB Optimizer, Batch Size = 16K, final train loss = 1.167, dev set Fl score = 91.345 train loss LAMB Optimizer, Batch Size = 8K, final train loss = 1.475, dev set F1 score = 91,263 train loss LAMB Optimizer, Batch Size = 4K, final train loss = 1.055, dev set F1 score = 91.137 train loss 6 0000 40000 000 soceo 00300 ecb00 LAMB Optimizer, Batch Size = 2K, final train loss = 1.430, dev set F1 score = 91.946 rain loss LAME Optimizer, Batch Size = 1K, final train loss = 1.222, dev set Fl score = 91.761 train loss ev UN SON LAMB Optimizer, Batch Size = 512, final train loss = 1.115, dev set Fl score = 91.752 train loss Iterations Figure 6: This figure shows the training loss curve of LAMB optimizer. We just want to use this figure to show that LAMB can make the training converge smoothly. Even if we scale the batch size to the extremely large cases, the loss curves are almost identical to each other. Then we add the learning rate scheme of Goyal et al. (2017) to AdamW and refer to it as AdamW+. The third tuning set is based on AdamW+ with default L2 regularization. We tune the learning rate and weight decay. The tuning information is Figure 21 and 22. The fourth tuning set is based on AdamW+ with disabled L2 regularization. We tune the learning rate and weight decay. The tuning information is in Figures 23, 24, 25. Based on our comprehensive tuning results, we conclude the existing adaptive solvers do not perform well on ImageNet training or at least it is hard to tune them. 19 Published as a conference paper at ICLR 2020 BERT pre-train for 64K/32K batch size 7 —e— loss = 1.33871 at 7037 step Train Loss ie) 1000 2000 3000 4000 5000 6000 7000 steps Figure 7: This figure shows the training loss curve of LAMB optimizer. 
This figure shows that LAMB can make the training converge smoothly at the extremely large batch size (e.g. 64K). Scaling Efficiency 60, SE Perfect Scaling Mami Our Scaling 50 $40 mo} o ® 30 Ww 20 : zi om Bill 32 64 128 256 512 1024 1024-mixed TPUs Figure 8: We achieve 76.8% scaling efficiency (49 times speedup by 64 times computational resources) and 101.8% scaling efficiency with a mixed, scaled batch size (65.2 times speedup by 64 times computational resources). 1024-mixed means the mixed-batch training on 1024 TPUs. ImageNet/ResNet-50 training by LAMB optimizer (1st iteration) Trust Ratio 6 D Fa D za Ey Layer ID Figure 9: The LAMB trust ratio. 20 Published as a conference paper at ICLR 2020 Trust Ratio ImageNet/ResNet-50 training by LAMB optimizer (4th iteration) ous nso ous 100 07s uso 0005 000 4 Layer ID Figure 10: The LAMB trust ratio. Trust Ratio ImageNet/ResNet-50 training by LAMB optimizer (10th iteration) 0035 ous0 oms 020 ous ono 005 000 4 Layer ID Figure 11: The LAMB trust ratio. Trust Ratio ImageNet/ResNet-50 training by LAMB optimizer (50th iteration) 010 008 ovo! Layer ID Figure 12: The LAMB trust ratio. Trust Ratio ImageNet/ResNet-50 training by LAMB optimizer (100th iteration) Layer ID Figure 13: The LAMB trust ratio. 21 Published as a conference paper at ICLR 2020 ImageNet/ResNet-50 training by LAMB optimizer (200th iteration) Trust Ratio 9 D Fa Fy ry Ey Layer ID Figure 14: The LAMB trust ratio. Table 9: The accuracy information of tuning default AdaGrad optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). Learning Rate | Top-1 Validation Accuracy 0.0001 0.0026855469 0.001 0.015563965 0.002 0.022684732 0.004 0.030924479 0.008 0.04486084 0.010 0.054158527 0.020 0.0758667 0.040 0.1262614 0.080 0.24037679 0.100 0.27357993 0.200 0.458313 0.400 0.553833 0.800 0.54103595 1.000 0.5489095 2.000 0.47680664 4.000 0.5295207 6.000 0.36950684 8.000 0.31081137 10.00 0.30670166 12.00 0.3091024 14.00 0.3227946 16.00 0.0063680015 18.00 0.11287435 20.00 0.21602376 30.00 0.08315023 40.00 0.0132039385 0.0001 0.001 0.002 0.004 0.008 0.010 0.020 0.040 0.080 0.100 0.200 0.400 0.800 1.000 2.000 4.000 6.000 8.000 10.00 12.00 14.00 16.00 18.00 20.00 30.00 40.00 50.00 0.0026855469 0.015563965 0.022684732 0.030924479 0.04486084 0.054158527 0.0758667 0.1262614 0.24037679 0.27357993 0.458313 0.553833 0.54103595 0.5489095 0.47680664 0.5295207 0.36950684 0.31081137 0.30670166 0.3091024 0.3227946 0.0063680015 0.11287435 0.21602376 0.08315023 0.0132039385 0.0009969076 22 Published as a conference paper at ICLR 2020 Table 10: The accuracy information of tuning AdaGrad optimizer for ImageNet training with ResNet- 50 (batch size = 16384, 90 epochs, 7038 iterations). We use the learning rate recipe of (Goyal et al., 2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). 
Learning Rate | Top-1 Validation Accuracy 0.0001 0.0011189779 0.001 0.00793457 0.002 0.012573242 0.004 0.019022623 0.008 0.027079264 0.010 0.029012045 0.020 0.0421346 0.040 0.06618246 0.080 0.10970052 0.100 0.13429768 0.200 0.26550293 0.400 0.41918945 0.800 0.5519816 1.000 0.58614093 2.000 0.67252606 4.000 0.70306396 6.000 0.709493 8.000 0.7137858 10.00 0.71797687 12.00 0.7187703 14.00 0.72007245 16.00 0.7194214 18.00 0.7149251 20.00 0.71293133 30.00 0.70458984 40.00 0.69085693 0.0001 0.001 0.002 0.004 0.008 0.010 0.020 0.040 0.080 0.100 0.200 0.400 0.800 1.000 2.000 4.000 6.000 8.000 10.00 12.00 14.00 16.00 18.00 20.00 30.00 40.00 50.00 0.0011189779 0.00793457 0.012573242 0.019022623 0.027079264 0.029012045 0.0421346 0.06618246 0.10970052 0.13429768 0.26550293 0.41918945 0.5519816 0.58614093 0.67252606 0.70306396 0.709493 0.7137858 0.71797687 0.7187703 0.72007245 0.7194214 0.7149251 0.71293133 0.70458984 0.69085693 0.67976886 23 Published as a conference paper at ICLR 2020 Table 11: The accuracy information of tuning default Adam optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). Learning Rate Top-1 Validation Accuracy 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.010 0.5521 0.6089 0.6432 0.6465 0.6479 0.6604 0.6408 0.5687 0.5165 0.4812 0.3673 Table 12: The accuracy information of tuning Adam optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). We use the learning rate recipe of (Goyal et al., 2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). Learning Rate Top-1 Validation Accuracy 24 Published as a conference paper at ICLR 2020 Table 13: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 25 Published as a conference paper at ICLR 2020 Table 14: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 26 Published as a conference paper at ICLR 2020 Table 15: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 27 Published as a conference paper at ICLR 2020 Table 16: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). 
learning rate weight decay L2 regularization Top-1 Validation Accuracy 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.010 0.012 0.014 0.016 0.018 0.020 0.025 0.030 0.040 0.050 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) 0.0009765625 0.0009969076 0.0010172526 0.0009358724 0.0022379558 0.001566569 0.009480794 0.0033569336 0.0029907227 0.0018513998 0.009134929 0.0022176106 0.0040690103 0.0017293295 0.00061035156 0.0022379558 0.0017089844 0.0014241537 0.0020345051 0.0012817383 28 Published as a conference paper at ICLR 2020 Table 17: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 29 Published as a conference paper at ICLR 2020 Table 18: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 30 Published as a conference paper at ICLR 2020 Table 19: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 31 Published as a conference paper at ICLR 2020 Table 20: The accuracy information of tuning default AdamW optimizer for ImageNet training with ResNet-50 (batch size = 16384, 90 epochs, 7038 iterations). The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 32 Published as a conference paper at ICLR 2020 Table 21: The accuracy information of tuning AdamW optimizer for ImageNet training with ResNet- 50 (batch size = 16384, 90 epochs, 7038 iterations). We use the learning rate recipe of (Goyal et al., 2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). 
learning rate weight decay L2 regularization Top-1 Validation Accuracy 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) 0.0009969076 0.0009969076 0.0009969076 0.0009358724 0.0009969076 0.0009765625 0.0010172526 0.0010172526 0.0010172526 0.0010172526 0.0010172526 0.0010172526 0.0010172526 0.0009969076 0.0010172526 0.0010172526 0.0010172526 0.0038452148 0.011881511 0.0061442056 33 Published as a conference paper at ICLR 2020 Table 22: The accuracy information of tuning AdamW optimizer for ImageNet training with ResNet- 50 (batch size = 16384, 90 epochs, 7038 iterations). We use the learning rate recipe of (Goyal et al., 2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) default (0.01) 0.3665975 0.5315755 0.6369222 0.6760457 0.69557697 0.7076009 0.73065186 0.72806805 0.72161865 0.71816 0.49804688 0.6287028 0.6773885 0.67348224 0.6622111 0.6468709 0.5846761 0.4868978 0.34969077 0.31193033 34 Published as a conference paper at ICLR 2020 Table 23: The accuracy information of tuning AdamW optimizer for ImageNet training with ResNet- 50 (batch size = 16384, 90 epochs, 7038 iterations). We use the learning rate recipe of (Goyal et al., 2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 0.001 disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable 0.0010172526 0.0009765625 0.0010172526 0.0009969076 0.0010172526 0.0009765625 0.0009969076 0.0009969076 0.0009765625 0.0010172526 0.0009765625 0.0010172526 0.0010172526 0.0010172526 0.0010172526 0.0009969076 0.0010579427 0.0016886393 0.019714355 0.1329956 35 Published as a conference paper at ICLR 2020 Table 24: The accuracy information of tuning AdamW optimizer for ImageNet training with ResNet- 50 (batch size = 16384, 90 epochs, 7038 iterations). 
We use the learning rate recipe of (Goyal et al., 2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.010 0.012 0.014 0.016 0.018 0.020 0.025 0.030 0.040 0.050 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 0.0001 disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable 0.28515625 0.44055176 0.56815594 0.6234741 0.6530762 0.6695964 0.70048016 0.71698 0.72021484 0.7223918 0.72017413 0.72058105 0.7188924 0.71695966 0.7154134 0.71358234 0.7145386 0.7114258 0.7066447 0.70284015 36 Published as a conference paper at ICLR 2020 Table 25: The accuracy information of tuning AdamW optimizer for ImageNet training with ResNet- 50 (batch size = 16384, 90 epochs, 7038 iterations). We use the learning rate recipe of (Goyal et al., 2017): (1) 5-epoch warmup to stablize the initial stage; and (2) multiply the learning rate by 0.1 at 30th, 60th, and 80th epoch. The target accuracy is around 0.763 (Goyal et al., 2017). learning rate weight decay L2 regularization Top-1 Validation Accuracy 0.0001 0.0002 0.0004 0.0006 0.0008 0.001 0.002 0.004 0.006 0.008 0.010 0.012 0.014 0.016 0.018 0.020 0.025 0.030 0.040 0.050 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 0.00001 disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable disable 0.31247965 0.4534912 0.57765704 0.6277669 0.65321857 0.6682129 0.69938153 0.7095947 0.710612 0.70857745 0.7094116 0.70717365 0.7109375 0.7058309 0.7052409 0.7064412 0.7035319 0.6994629 0.6972656 0.6971232 37
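Finally, to make the scaling rule of Section 4.3 concrete, the following is a minimal sketch of how an untuned learning rate and warmup length could be derived for a new batch size under square root LR scaling and linear-epoch warmup; the base values are illustrative placeholders, not the exact constants used in the tables above.

```python
def untuned_lamb_schedule(batch_size, base_batch=512, base_lr=5e-4,
                          base_warmup_epochs=0.3125, epochs=90):
    """Square-root LR scaling with linear-epoch warmup (illustrative sketch).

    Doubling the batch size multiplies the learning rate by sqrt(2) and the
    number of warmup epochs by 2, while the total number of epochs is fixed.
    Returns the scaled learning rate and the warmup ratio (fraction of
    training iterations used for warmup).
    """
    scale = batch_size / base_batch
    lr = base_lr * scale ** 0.5                 # square root LR scaling
    warmup_epochs = base_warmup_epochs * scale  # linear-epoch warmup
    warmup_ratio = warmup_epochs / epochs
    return lr, warmup_ratio
```

For example, going from a batch size of 512 to 32K multiplies the learning rate by 8 (= sqrt(64)) and the number of warmup epochs by 64, while the total number of training epochs stays fixed.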
{ "id": "1706.02677" }
1904.00420
Single Path One-Shot Neural Architecture Search with Uniform Sampling
We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. Existing one-shot method, however, is hard to train and not yet effective on large scale datasets like ImageNet. This work propose a Single Path One-Shot model to address the challenge in the training. Our central idea is to construct a simplified supernet, where all architectures are single paths so that weight co-adaption problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channel, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves start-of-the-art performance on the large dataset ImageNet.
http://arxiv.org/pdf/1904.00420
Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, Jian Sun
cs.CV
ECCV 2020
null
cs.CV
20190331
20200708
# Single Path One-Shot Neural Architecture Search with Uniform Sampling

Zichao Guo*1, Xiangyu Zhang*1, Haoyuan Mu1,2, Wen Heng1, Zechun Liu1,3, Yichen Wei1, Jian Sun1

1 MEGVII Technology, 2 Tsinghua University, 3 Hong Kong University of Science and Technology

{guozichao, zhangxiangyu, hengwen, weiyichen, sunjian}@megvii.com, [email protected], [email protected]

* Equal contribution. This work is done when Haoyuan Mu and Zechun Liu are interns at MEGVII Technology.
This work is supported by The National Key Research and Development Program of China (No. 2017YFA0700800) and Beijing Academy of Artificial Intelligence (BAAI).

Abstract. We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. Existing one-shot methods, however, are hard to train and not yet effective on large scale datasets like ImageNet. This work proposes a Single Path One-Shot model to address the challenge in the training. Our central idea is to construct a simplified supernet, where all architectures are single paths, so that the weight co-adaptation problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally.

Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channels, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on the large dataset ImageNet.

# 1 Introduction

Deep learning automates feature engineering and solves the weight optimization problem. Neural Architecture Search (NAS) aims to automate architecture engineering by solving one more problem, architecture design. Early NAS approaches [36,32,33,11,16,21] solve the two problems in a nested manner. A large number of architectures are sampled and trained from scratch. The computation cost is unaffordable on large datasets.

Recent approaches [23,4,12,26,15,31,3,2] adopt a weight sharing strategy to reduce the computation. A supernet subsuming all architectures is trained only once. Each architecture inherits its weights from the supernet. Only fine-tuning is performed. The computation cost is greatly reduced.
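To make the weight-sharing idea concrete, the following is a minimal, illustrative PyTorch sketch of a supernet built from choice blocks, where a single candidate operation is sampled uniformly for each training batch, previewing the single-path approach developed in Sec. 3. The class name, candidate operations, and dimensions are only for this example and are not the implementation used in this work.

```python
import random
import torch
import torch.nn as nn

class ChoiceBlock(nn.Module):
    """Supernet block holding several candidate operations (shared weights).

    In the single-path setting, exactly one candidate is sampled uniformly per
    training batch, so only that path's weights are used and updated.
    Illustrative sketch only.
    """

    def __init__(self, channels, candidate_kernels=(3, 5, 7)):
        super().__init__()
        self.ops = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in candidate_kernels]
        )

    def forward(self, x, choice=None):
        if choice is None:                    # uniform path sampling
            choice = random.randrange(len(self.ops))
        return self.ops[choice](x)


blocks = nn.ModuleList([ChoiceBlock(16) for _ in range(4)])
x = torch.randn(2, 16, 32, 32)
arch = [random.randrange(3) for _ in blocks]  # an architecture = one choice per block
for block, c in zip(blocks, arch):
    x = block(x, c)                           # only the sampled path is executed
```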
The architecture search problem is decoupled from the supernet training and addressed in a separate step. Thus, it is sequential. It com- bines the merits of both nested and joint optimization approaches above. The architecture search is both efficient and flexible. The first issue is still problematic. Existing one-shot approaches [3,2] still have coupled weights in the supernet. Their optimization is complicated and involves sensitive hyper parameters. They have not shown competitive results on large datasets. This work revisits the one-shot paradigm and presents a new approach that further eases the training and enhances architecture search. Based on the ob- servation that the accuracy of an architecture using inherited weights should be predictive for the accuracy using optimized weights, we propose that the super- net training should be stochastic. All architectures have their weights optimized simultaneously. This gives rise to a uniform sampling strategy. To reduce the weight coupling in the supernet, a simple search space that consists of single path architectures is proposed. The training is hyperparameter-free and easy to converge. This work makes the following contributions. 1. We present a principled analysis and point out drawbacks in existing NAS approaches that use nested and joint optimization. Consequently, we hope this work will renew interest in the one-shot paradigm, which combines the merits of both via sequential optimization. 2. We present a single path one-shot approach with uniform sampling. It over- comes the drawbacks of existing one-shot approaches. Its simplicity enables a rich search space, including novel designs for channel size and bit width, all addressed in a unified manner. Architecture search is efficient and flexible. Evolutionary algorithm is used to support real world constraints easily, such as low latency. Comprehensive ablation experiments and comparison to previous works on a large dataset (ImageNet) verify that the proposed approach is state-of-the-art in terms of accuracy, memory consumption, training time, architecture search efficiency and flexibility. SPOS: Single Path One-Shot 3 # 2 Review of NAS Approaches Without loss of generality, the architecture search space A is represented by a directed acyclic graph (DAG). A network architecture is a subgraph a ∈ A, denoted as N (a, w) with weights w. Neural architecture search aims to solve two related problems. The first is weight optimization, wa = argmin Ltrain (N (a, w)) , w (1) where Ltrain(·) is the loss function on the training set. The second is architecture optimization. It finds the architecture that is trained on the training set and has the best accuracy on the validation set, as a∗ = argmax ACCval (N (a, wa)) , a∈A (2) where ACCval(·) is the accuracy on the validation set. Early NAS approaches perform the two optimization problems in a nested manner [35,36,32,33,1]. Numerous architectures are sampled from A and trained from scratch as in Eq. (1). Each training is expensive. Only small dataset (e.g., CIFAR 10) and small search space (e.g, a single block) are affordable. Recent NAS approaches adopt a weight sharing strategy [4,12,23,26,2,3,31,15]. The architecture search space A is encoded in a supernet 1, denoted as N (A, W ), where W is the weights in the supernet. The supernet is trained once. All archi- tectures inherit their weights directly from W . Thus, they share the weights in their common graph nodes. 
Fine tuning of an architecture is performed in need, but no training from scratch is incurred. Therefore, architecture search is fast and suitable for large datasets like ImageNet. Most weight sharing approaches convert the discrete architecture search space into a continuous one [23,4,12,26,31]. Formally, space A is relaxed to A(θ), where θ denotes the continuous parameters that represent the distribution of the ar- chitectures in the space. Note that the new space subsumes the original one, A ⊆ A(θ). An architecture sampled from A(θ) could be invalid in A. An advantage of the continuous search space is that gradient based meth- ods [12,4,23,22,26,31] is feasible. Both weights and architecture distribution pa- rameters are jointly optimized, as (θ∗, Wθ∗ ) = argmin Ltrain(N (A(θ), W )). θ,W (3) or perform a bi-level optimization, as θ∗ = argmax ACCval (N (A(θ), W ∗ θ )) θ θ = argmin s.t. W ∗ Ltrain(N (A(θ), W )) W (4) 1 “Supernet” is used as a general concept in this work. It has different names and implementation in previous approaches. # 4 Zichao Guo, et al. After optimization, the best architecture a∗ is sampled from A(θ∗). Optimization of Eq. (3) is challenging. First, the weights of the graph nodes in the supernet depend on each other and become deeply coupled during opti- mization. For a specific architecture, it inherits certain node weights from W . While these weights are decoupled from the others, it is unclear why they are still effective. Second, joint optimization of architecture parameter θ and weights W intro- duces further complexity. Solving Eq. (3) inevitably introduces bias to certain areas in θ and certain nodes in W during the progress of optimization. The bias would leave some nodes in the graph well trained and others poorly trained. With different level of maturity in the weights, different architectures are actu- ally non-comparable. However, their prediction accuracy is used as guidance for sampling in A(θ) (e.g., used as reward in policy gradient [4]). This would further mislead the architecture sampling. This problem is in analogy to the “dilemma of exploitation and exploration” problem in reinforcement learning. To alleviate such problems, existing approaches adopt complicated optimization techniques (see Table 1 for a summary). Task constraints Real world tasks usually have additional requirements on a net- work’s memory consumption, FLOPs, latency, energy consumption, etc. These requirements only depends on the architecture a, not on the weights wa. Thus, they are called architecture constraints in this work. A typical constraint is that the network’s latency is no more than a preset budget, as Latency(a∗) ≤ Latmax. (5) Note that it is challenging to satisfy Eq. (2) and Eq. (5) simultaneously for most previous approaches. Some works augment the loss function Ltrain in Eq. (3) with soft loss terms that consider the architecture latency [4,23,26,22]. However, it is hard, if not impossible, to guarantee a hard constraint like Eq. (5). # 3 Our Single Path One-Shot Approach As analyzed above, the coupling between architecture parameters and weights is problematic. This is caused by joint optimization of both. To alleviate the problem, a natural solution is to decouple the supernet training and architec- ture search in two sequential steps. This leads to the so called one-shot ap- proaches [3,2]. In general, the two steps are formulated as follows. Firstly, the supernet weight is optimized as WA = argmin Ltrain (N (A, W )) . W (6) Compared to Eq. 
(3), the continuous parameterization of search space is absent. Only weights are optimized. SPOS: Single Path One-Shot 5 crop ate 001 > 60.0% | — drop rate 0.05 g trop rate 0.1 § soon | — drop rateos g Sina path # soos e $ so.0% 5 2 200% 8 8 100% 0.0% 030000 60000 80000120000 150000 Training Iter crop ate 001 0.5% 60.0% | — drop rate 0.05 ceeee trop rate 0.1 3 o83 pieces soon | — drop rateos & sar woe pbb Sina path g wll soos § ase | e a sof sees gggiiiiiis so.0% Soom[pibbecrssthibitinaal 200% s PpPLpPeEes 2 flee 100% BI ie 0.0% Random o73% 030000 60000 80000120000 150000 sc Se Snr Sn St SY SEY EV ST Training Iter Evolution iters Fig. 1. Comparison of single path strategy and drop path strategy Fig. 2. Evolutionary vs. random architec- ture search Secondly, architecture searched is performed as a∗ = argmax ACCval (N (a, WA(a))) . a∈A (7) During search, each sampled architecture a inherits its weights from WA as WA(a). The key difference of Eq. (7) from Eq. (1) and (2) is that architecture weights are ready for use. Evaluation of ACCval(·) only requires inference. Thus, the search is very efficient. The search is also flexible. Any adequate search algorithm is feasible. The architecture constraint like Eq. (5) can be exactly satisfied. Search can be re- peated many times on the same supernet once trained, using different constraints (e.g., 100ms latency and 200ms latency). These properties are absent in previous approaches. These make the one-shot paradigm attractive for real world tasks. One problem in Sec. 2 still remains. The graph nodes’ weights in the supernet training in Eq.( 6) are coupled. It is unclear why the inherited weights WA(a) are still good for an arbitrary architecture a. The recent one-shot approach [2] attempts to decouple the weights using a “path dropout” strategy. During an SGD step in Eq. (6), each edge in the supernet graph is randomly dropped. The random chance is controlled via a dropout rate parameter. In this way, the co-adaptation of the node weights is reduced during training. Experiments in [2] indicate that the training is very sensitive to the dropout rate parameter. This makes the supernet training hard. A carefully tuned heat-up strategy is used. In our implementation of this work, we also found that the validation accuracy is very sensitive to the dropout rate parameter. Single Path Supernet and Uniform Sampling. Let us restart to think about the fundamental principle behind the idea of weight sharing. The key to the success of architecture search in Eq. (7) is that, the accuracy of any architecture a on a validation set using inherited weight WA(a) (without extra fine tuning) is highly predictive for the accuracy of a that is fully trained. Ideally, this requires that the weight WA(a) to approximate the optimal weight wa as in Eq. (1). The quality of # 6 Zichao Guo, et al. the approximation depends on how well the training loss Ltrain (N (a, WA(a))) is minimized. This gives rise to the principle that the supernet weights WA should be optimized in a way that all architectures in the search space are optimized simultaneously. This is expressed as WA = argmin Ea∼Γ (A) [Ltrain(N (a, W (a)))] , W (8) where Γ (A) is a prior distribution of a ∈ A. Note that Eq. (8) is an implemen- tation of Eq. (6). In each step of optimization, an architecture a is randomly sampled. Only weights W (a) are activated and updated. So the memory usage is efficient. In this sense, the supernet is no longer a valid network. It behaves as a stochastic supernet [22]. 
This is different from [2]. To reduce the co-adaptation between node weights, we propose a supernet structure that each architecture is a single path, as shown in Fig. 3 (a). Compared to the path dropout strategy in [2], the single path strategy is hyperparameter- free. We compared the two strategies within the same search space (as in this work). Note that the original drop path in [2] may drop all operations in a block, resulting in a short cut of identity connection. In our implementation, it is forced that one random path is kept in this case since our choice block does not have an identity branch. We randomly select sub network and evaluate its valida- tion accuracy during the training stage. Results in Fig.1 show that drop rate parameters matters a lot. Different drop rates make supernet achieve different validation accuracies. Our single path strategy corresponds to using drop rate 1. It works the best because our single path strategy can decouple the weights of different operations. The Fig.1 verifies the benefit of weight decoupling. The prior distribution Γ (A) is important. In this work, we empirically find that uniform sampling is good. This is not much of a surprise. A concurrent work [10] also finds that purely random search based on stochastic supernet is competitive on CIFAR-10. We also experimented with a variant that samples the architectures uniformly according to their constraints, named uniform constraint sampling. Specifically, we randomly choose a range, and then sample the archi- tecture repeatedly until the FLOPs of sampled architecture falls in the range. This is because a real task usually expects to find multiple architectures satisfy- ing different constraints. In this work, we find the uniform constraint sampling method is slightly better. So we use it by default in this paper. We note that sampling a path according to architecture distribution during optimization is already used in previous weight sharing approaches [22,4,31,28,6,20]. The difference is that, the distribution Γ (A) is a fixed priori during our train- ing (Eq. (8)), while it is learnable and updated (Eq. (3)) in previous approaches (e.g. RL [15], policy gradient [22,4], Gumbel Softmax [23,26], APG [31]). As analyzed in Sec. 2, the latter makes the supernet weights and architecture pa- rameters highly correlated and optimization difficult. There is another concur- rent work [10] that also proposed to use random sampling of paths in One-Shot model, and performed random search to find the superior architecture. This paper [10] achieved competitive results to several SOTA NAS approaches on CIFAR-10, but didn’t verify the method on large dataset ImageNet. It didn’t SPOS: Single Path One-Shot 7 Convolutional Kernel Weights Fig. 3. Choice blocks for (a) our single path supernet (b) channel number search (c) mixed-precision quantization search prove the effectiveness of single path sampling compared to the “path dropout” strategy and analyze the correlation of the supernet performance and the final evaluation performance. These questions will be answered in our work, and our experiments also show that random search is not good enough to find superior architecture from the large search space. Comprehensive experiments in Sec. 4 show that our approach achieves better results than the SOTA methods. Note that there is no such theoretical guarantee that using a fixed prior distribution is inherently better than optimizing the distribution during training. 
Supernet Architecture and Novel Choice Block Design. Choice blocks are used to build a stochastic architecture. Fig. 3 (a) illustrates an example case. A choice block consists of multiple architecture choices. For our single path supernet, each choice block has only one choice invoked at a time. A path is obtained by sampling all the choice blocks. The simplicity of our approach enables us to define different types of choice blocks to search various architecture variables. Specifically, we propose two novel choice blocks to support complex search spaces.

Channel Number Search. We propose a new choice block based on weight sharing, as shown in Fig. 3 (b). The main idea is to preallocate a weight tensor with the maximum number of channels; the system randomly selects the channel number and slices out the corresponding subtensor for convolution. With this weight sharing strategy, we found that the supernet converges quickly. In detail, assume the dimensions of the preallocated weights are (max_c_out, max_c_in, ksize). For each batch in supernet training, the number of current output channels c_out is randomly sampled. Then, we slice out the weights for the current batch in the form Weights[: c_out, : c_in, :], which is used to produce the output. The optimal number of channels is determined in the search step.

Mixed-Precision Quantization Search. In this work, we design a novel choice block to search the bit widths of the weights and feature maps, as shown in Fig. 3 (c). We also combine the channel search space discussed earlier with our mixed-precision quantization search space. During supernet training, for each choice block, the feature bit width and weight bit width are randomly sampled. They are determined in the evolutionary step. See Sec. 4 for details.

Evolutionary Architecture Search. For architecture search in Eq. (7), previous one-shot works [3,2] use random search. This is not effective for a large search space. This work uses an evolutionary algorithm. Note that evolutionary search in NAS is used in [16], but it is costly as each architecture is trained from scratch. In our search, each architecture only performs inference, which is very efficient.

Algorithm 1: Evolutionary Architecture Search
1 Input: supernet weights WA, population size P, architecture constraints C, max iteration T, validation dataset Dval
2 Output: the architecture with the highest validation accuracy under the architecture constraints
3 P0 := Initialize population(P, C); Topk := ∅;
4 n := P/2;      // crossover number
5 m := P/2;      // mutation number
6 prob := 0.1;   // mutation probability
7 for i = 1 : T do
8     ACCi−1 := Inference(WA, Dval, Pi−1);
9     Topk := Update Topk(Topk, Pi−1, ACCi−1);
10    Pcrossover := Crossover(Topk, n, C);
11    Pmutation := Mutation(Topk, m, prob, C);
12    Pi := Pcrossover ∪ Pmutation;
13 end
14 Return the architecture with the highest accuracy in Topk;

The algorithm is elaborated in Algorithm 1. For all experiments, the population size is P = 50, the maximum number of iterations is T = 20, and k = 10. For crossover, two randomly selected candidates are crossed to produce a new one. For mutation, a randomly selected candidate mutates each of its choice blocks with probability 0.1 to produce a new candidate. Crossover and mutation are repeated to generate enough new candidates that meet the given architecture constraints.
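As a complement to Algorithm 1, here is a compact Python sketch of the evolutionary search loop. The functions evaluate_acc (inference of the supernet with inherited weights on the validation data) and meets_constraint (e.g., a FLOPs or latency check) are placeholders for problem-specific code, and the loop is a simplification rather than the authors' exact implementation.

```python
import random

def evolutionary_search(num_choices, evaluate_acc, meets_constraint,
                        population=50, iterations=20, topk=10, mut_prob=0.1):
    def random_arch():
        while True:  # rejection-sample until the constraint is met
            arch = tuple(random.randrange(n) for n in num_choices)
            if meets_constraint(arch):
                return arch

    def mutate(arch):
        while True:  # flip each choice with probability mut_prob
            child = tuple(random.randrange(n) if random.random() < mut_prob else c
                          for c, n in zip(arch, num_choices))
            if meets_constraint(child):
                return child

    def crossover(a, b):
        while True:  # pick each choice from one of the two parents
            child = tuple(random.choice(pair) for pair in zip(a, b))
            if meets_constraint(child):
                return child

    pool = [random_arch() for _ in range(population)]
    scores = {}                                   # architecture -> validation accuracy
    for _ in range(iterations):
        for arch in pool:
            if arch not in scores:
                scores[arch] = evaluate_acc(arch)  # inference only, weights inherited from WA
        elite = sorted(scores, key=scores.get, reverse=True)[:topk]
        scores = {a: scores[a] for a in elite}     # keep the current top-k
        pool = ([crossover(*random.sample(elite, 2)) for _ in range(population // 2)] +
                [mutate(random.choice(elite)) for _ in range(population // 2)])
    return max(scores, key=scores.get)
```

Because candidates that violate the constraint are simply re-sampled, the hard constraint of Eq. (5) is satisfied by construction rather than through a soft penalty term.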
Before the inference of an architecture, the statistics of all the Batch Normalization (BN) [9] opera- tions are recalculated on a random subset of training data (20000 images on ImageNet). It takes a few seconds. This is because the BN statistics from the supernet are usually not applicable to the candidate nets. This is also referred in [2]. Fig. 2 plots the validation accuracy over generations, using both evolutionary and random search methods. It is clear that evolutionary search is more effective. Experiment details are in Sec. 4. The evolutionary algorithm is flexible in dealing with different constraints in Eq. (5), because the mutation and crossover processes can be directly controlled to generate proper candidates to satisfy the constraints. Previous RL-based [21] # SPOS: Single Path One-Shot 9 Table 1. Overview and comparison of SOTA weight sharing approaches. Ours is the easiest to train, occupies the smallest memory, best satisfy the architecture (latency) constraint, and easily supports the large dataset. Note that those approaches belong- ing to the joint optimization category (Eq. (3)) have “Supernet optimization” and “Architecture search” columns merged Approach ENAS[15] BSN[22] DARTS[12] Proxyless[4] FBNet[23] Supernet optimization Architecture search Alternative RL and fine tuning Stochastic super networks + policy gradient Gradient-based path dropout Stochastic relaxation of the discrete search + policy gradient Stochastic relaxation of the discrete search to differentiable optimization via Gumbel softmax Hyper parameters in supernet Training Short-time fine tuning setting Weight of cost penalty Path dropout rate. Weight of auxiliary loss Scaling factor of latency loss Temperature parameter in Gumbel softmax. Coefficient in constraint loss Memory consumption in supernet training Single path + RL system Single path Whole supernet Two paths Whole supernet How to satisfy constraint None Soft constraint in training. Not guaranteed None Soft constraint in training. Not guaranteed. Soft constraint in training. Not guaranteed. Experiment on ImageNet No No Transfer Yes Yes SNAS[26] Same as FBNet SMASH[3] Hypernet Random One-Shot[2] Path dropout Random Uniform path sampling Ours Evolution Same as FBNet None Drop rate None Whole supernet Hypernet+Single Path Whole supernet Single path Soft constraint in training. Not guaranteed. None Not investigated Guaranteed in searching. Support multiple constraints. Transfer No Yes Yes and gradient-based [4,23,22] methods design tricky rewards or loss functions to deal with such constraints. For example, [23] uses a loss function CE(a, wa) · α log(LAT(a))β to balance the accuracy and the latency. It is hard to tune the hyper parameter β to satisfy a hard constraint like Eq. (5). Summary. The combination of single path supernet, uniform sampling training strategy, evolutionary architecture search, and rich search space design makes our approach simple, efficient and flexible. Table 1 performs a comprehensive comparison of our approach against previous weight sharing approaches on var- ious aspects. Ours is the easiest to train, occupies the smallest memory, best satisfies the architecture (latency) constraint, and easily supports large datasets. Extensive results in Sec. 4 verify that our approach is the state-of-the-art. # 4 Experiment Results Dataset. All experiments are performed on ImageNet [17]. 
We randomly split the original training set into two parts: 50000 images are for validation (50 images for each class exactly) and the rest as the training set. The original validation set is used for testing, on which all the evaluation results are reported, following [4]. Training. We use the same settings (including data augmentation, learning rate schedule, etc.) as [14] for supernet and final architecture training. Batch size is 1024. Supernet is trained for 120 epochs and the best architecture for 240 epochs (300000 iterations) by using 8 NVIDIA GTX 1080Ti GPUs. 10 Zichao Guo, et al. Search Space: Building Blocks. First, we evaluate our method on the task of building block selection, i.e. to find the optimal combination of building blocks under a certain complexity constraint. Our basic building block design is inspired by a state-of-the-art manually-designed network – ShuffleNet v2 [14]. Table 2 shows the overall architecture of the supernet. The “stride” column represents the stride of the first block in each repeated group. There are 20 choice blocks in total. Each choice block has 4 candidates, namely “choice 3”, “choice 5”, “choice 7” and “choice x” respectively. They differ in kernel sizes and the number of depthwise convolutions. The size of the search space is 420. Table 2. Supernet architecture. CB - choice block. GAP - global average pooling Table 3. Results of building block search. SPS – single path supernet input shape block 2242 × 3 1122 × 16 562 × 64 282 × 160 142 × 320 72 × 640 72 × 1024 1024 channels 3 × 3 conv 16 64 CB 160 CB 320 CB CB 640 1 × 1 conv 1024 GAP fc - 1000 repeat 1 4 4 8 4 1 1 1 stride 2 2 2 2 2 1 - - model all choice 3 all choice 5 all choice 7 all choice x random select (5 times) ∼320M ∼73.7 SPS + random search ours (fully-equipped) FLOPs 324M 73.4 321M 73.5 327M 73.6 326M 73.5 top-1 acc(%) 323M 73.8 319M 74.3 We use FLOPs ≤ 330M as the complexity constraint, as the FLOPs of a plenty of previous networks lies in [300,330], including manually-designed net- works [8,18,30,14] and those obtained in NAS [4,23,21]. Table 3 shows the results. For comparison, we set up a series of baselines as follows: 1) select a certain block choice only (denoted by “all choice *” entries); note that different choices have different FLOPs, thus we adjust the channels to meet the constraint. 2) Randomly select some candidates from the search space. 3) Replace our evolutionary architecture optimization with random search used in [3,2]. Results show that random search equipped with our single path supernet finds an architecture only slightly better that random select (73.8 vs. 73.7). It does no mean that our single path supernet is less effective. This is because the random search is too naive to pick good candidates from the large search space. Using evolutionary search, our approach finds out an architecture that achieves superior accuracy (74.3) over all the baselines. Search Space: Channels. Based on our novel choice block for channel number search, we first evaluate channel search on the baseline structure “all choice 3” (refer to Table 3): for each building block, we search the number of “mid- channels” (output channels of the first 1x1 conv in each building block) varying from 0.2x to 1.6x (with stride 0.2), where “k-x” means k times the number of default channels. Same as building block search, we set the complexity con- straint FLOPs ≤ 330M . Table 4 (first part) shows the result. Our channel search method has higher accuracy (73.9) than the baselines. 
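To illustrate the channel-search choice block of Fig. 3 (b), the sketch below preallocates a convolution weight tensor with the maximum number of channels and slices out a random sub-tensor per batch, in the spirit of Weights[: c_out, : c_in, :]. The ratio grid, initialization, and padding are illustrative assumptions, not the paper's exact settings.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSearchConv(nn.Module):
    """Convolution whose output width is chosen at random from a set of ratios."""
    # Hypothetical width grid, mirroring the 0.2x-1.6x "mid channels" search.
    RATIOS = (0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6)

    def __init__(self, max_in, base_out, ksize=3):
        super().__init__()
        max_out = int(base_out * max(self.RATIOS))
        # Preallocate weights with the maximum channel counts (max_c_out, max_c_in, k, k).
        self.weight = nn.Parameter(torch.randn(max_out, max_in, ksize, ksize) * 0.01)
        self.base_out = base_out

    def forward(self, x, ratio=None):
        if ratio is None:                       # supernet training: sample a width
            ratio = random.choice(self.RATIOS)
        c_in = x.shape[1]
        c_out = max(1, int(self.base_out * ratio))
        w = self.weight[:c_out, :c_in]          # slice Weights[:c_out, :c_in, :, :]
        return F.conv2d(x, w, padding=w.shape[-1] // 2)
```

At search time the ratio is supplied by the evolutionary algorithm instead of being sampled, and only the sliced portion of the shared weight tensor ever receives gradients for a given batch.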
# SPOS: Single Path One-Shot 11 To further boost the accuracy, we search building blocks and channels jointly. There are two alternatives: 1) running channel search on the best building block search result; or 2) searching on the combined search space directly. Our experi- ments show that the first pipeline is slightly better. As shown in Table 4, search- ing in the joint space achieves the best accuracy (74.7% acc.), surpassing the previous state-of-the-art manually-designed [14,18] and automatically-searched models [21,36,11,12,4,23] under complexity of ∼ 300M FLOPs. Table 4. Results of channel search. * Performances are reported in the form “x (y)”, where “x” means the accuracy retrained by us and “y” means accuracy reported by the original paper FLOPs/Params Top-1 acc(%) Model 324M/3.1M all choice 3 ∼ 323M/3.2M ∼ 73.1 rand sel. channels (5 times) 329M/3.4M choice 3 + channel search ∼ 325M/3.2M ∼ 73.4 rand sel. blocks + channels block search 319M/3.3M block search + channel search 328M/3.4M 325M/2.6M MobileNet V1 (0.75x) [8] 300M/3.4M MobileNet V2 (1.0x) [18] 299M/3.5M ShuffleNet V2 (1.5x) [14] 564M/5.3M NASNET-A [36] 588M/5.1M PNASNET [11] 317M/4.2M MnasNet [21] 595M/4.7M DARTS [12] 320M/4.0M Proxyless-R (mobile)* [4] 295M/4.5M FBNet-B* [23] Comparison with State-of-the-arts. Results in Table 4 shows our method is superior. Nevertheless, the comparisons could be unfair because different search spaces and training methods are used in previous works [4]. To make direct comparisons, we benchmark our approach to the same search space of [4,23]. In addition, we retrain the searched models reported in [4,23] under the same settings to guarantee the fair comparison. The search space and supernet architecture in ProxylessNAS [4] is inspired by MobileNet v2 [18] and MnasNet [21]. It contains 21 choice blocks; each choice block has 7 choices (6 different building blocks and one skip layer). The size of the search space is 721. FBNet [23] also uses a similar search space. Table 5 reports the accuracy and complexities (FLOPs and latency on our device) of 5 models searched by [4,23], as the baselines. Then, for each baseline, our search method runs under the constraints of same FLOPs or same latency, respectively. Results shows that for all the cases our method achieves comparable or higher accuracy than the counterpart baselines. 12 Zichao Guo, et al. Table 5. Compared with state-of-the-art NAS methods [23,4] using the same search space. The latency is evaluated on a single NVIDIA Titan XP GPU, with batchsize = 32. Accuracy numbers in the brackets are reported by the original papers; others are trained by us. All our architectures are searched from the same supernet via evolu- tionary architecture optimization baseline network FLOPs/ Params 249M/4.3M 13ms FBNet-A [23] 295M/4.5M 17ms FBNet-B [23] FBNet-C [23] 375M/5.5M 19ms Proxyless-R(mobile) [4] 320M/4.0M 17ms 465M/5.3M 22ms Proxyless(GPU) [4] latency top-1 acc(%) baseline 73.0 (73.0) 74.1 (74.1) 74.9 (74.9) 74.2 (74.6) 74.7 (75.1) top-1 acc(%) (same FLOPs) 73.2 74.2 75.0 74.5 74.8 top-1 acc(%) (same latency) 73.3 74.8 75.1 74.8 75.3 Furthermore, it is worth noting that our architectures under different con- straints in Table 5 are searched on the same supernet, justifying the flexibility and efficiency of our approach to deal with different complexity constraints: su- pernet is trained once and searched multiple times. In contrast, previous methods [23,4] have to train multiple supernets under various constraints. 
According to Table 7, searching is much cheaper than supernet training. Application: Mixed-Precision Quantization. We evaluate our method on ResNet- 18 and ResNet-34 as common practice in previous quantization works (e.g. [5,24,13,34,29]). Following [34,5,24], we only search and quantize the res-blocks, excluding the first convolutional layer and the last fully-connected layer. Choices of weight and feature bit widths include {(1, 2), (2, 2), (1, 4), (2, 4), (3, 4), (4, 4)} in the search space. As for channel search, we search the number of “bottle- neck channels” (i.e. the output channels of the first convolutional layer in each residual block) in {0.5x, 1.0x, 1.5x}, where “k-x” means k times the number of original channels. The size of the search space is (3 × 6)N = 18N , where N is the number of choice blocks (N = 8 for ResNet-18 and N = 16 for ResNet-34). Note that for each building block we use the same bit widths for the two convolutions. We use PACT [5] as the quantization algorithm. Table 6 reports the results. The baselines are denoted as kWkA (k = 2, 3, 4), which means uniform quantization of weights and activations with k-bits. Then, our search method runs under the constraints of the corresponding BitOps. We also compare with a recent mixed-precision quantization search approach [24]. Results shows that our method achieves superior accuracy in most cases. Also note that all our results for ResNet-18 and ResNet-34 are searched on the same supernet. This is very efficient. Search Cost Analysis. The search cost is a matter of concern in NAS methods. So we analyze the search cost of our method and previous methods [23,4] (reim- plemented by us). We use the search space of our building blocks to measure the memory cost of training supernet and overall time cost. All the supernets are trained for 150000 iterations with a batch size of 256. All models are trained with 8 GPUs. The Table 7 shows that our approach clearly uses less memory SPOS: Single Path One-Shot 13 Table 6. Results of mixed-precision quantization search. “kWkA” means k-bit quan- tization for all the weights and activations Method ResNet-18 float point 70.9 65.6 6.32G 2W2A 66.4 6.21G ours 68.3 3W3A 14.21G 68.7 DNAS [24] 15.62G 69.4 13.49G ours 69.3 25.27G 4W4A 70.6 DNAS [24] 25.70G 70.5 24.31G ours BitOPs top1-acc(%) Method BitoPs top1-acc(%) ResNet-34 float point 75.0 13.21G 70.8 2W2A 13.11G 71.5 ours 72.5 3W3A 29.72G 73.2 DNAS [24] 38.64G 73.9 28.78G ours 73.5 52.83G 4W4A 74.0 DNAS [24] 57.31G 74.6 51.92G ours than other two methods because of the single path supernet. And our approach is much more efficient overall although we have an extra search step that costs less than 1 GPU day. Note Table 7 only compares a single run. In practice, our approach is more advantageous and more convenient to use when multiple searches are needed. As summarized in Table 1, it guarantees to find out the architecture satisfying constraints within one search. Repeated search is easily supported. Table 7. Search Cost. Gds - GPU days Method Memory cost (8 GPUs in total) 37G Training time Search time Retrain time Total time Proxyless FBNet Ours 63G 24G 20 Gds 12 Gds 0 <1 Gds 16 Gds 16 Gds 36 Gds 29 Gds 15 Gds 0 16 Gds 31 Gds Correlation Analysis. Recently, the effectiveness of many neural architecture search methods based on weight sharing is questioned because of lacking fair comparison on the same search space and adequate analysis on the correla- tion between the supernet performance and the stand-alone model performance. 
Some papers [27,25,19] even show that several state-of-the-art NAS methods perform similarly to random search. In this work, a fair comparison on the same search space has been shown in Table 5, so we further provide a correlation analysis in this part to evaluate the effectiveness of our method.

Correlation analysis requires the performance of a large number of architectures, but training many architectures from scratch is very time-consuming and requires a large amount of GPU resources, so we use NAS-Bench-201 [7] to analyze our method. NAS-Bench-201 is a cell-based search space which includes 15,625 architectures in total. It provides the performance of each architecture on CIFAR-10, CIFAR-100, and ImageNet-16-120, so the results on it are more credible and comparable. We apply our method on different search spaces and different datasets to verify its effectiveness adequately. The original search space of NAS-Bench-201 consists of 5 possible operations: zeroize, skip connection, 1-by-1 convolution, 3-by-3 convolution, and 3-by-3 average pooling. Based on it, we further design several reduced search spaces, named Reduce-1, Reduce-2, and Reduce-3, by deleting some operations. In detail, we delete 1-by-1 convolution and 3-by-3 average pooling respectively from the original search space to produce the Reduce-1 and Reduce-2 search spaces, and delete both to produce the Reduce-3 search space.

Table 8. Correlation in Different Search Spaces (Kendall Tau τ; columns: Original, Reduce-1, Reduce-2, Reduce-3). CIFAR-10: 0.55, CIFAR-100: 0.56, ImageNet-16-120: 0.54.

As Table 8 shows, we use the Kendall Tau τ metric to measure the correlation between the supernet performance and the stand-alone model performance. It is obvious that our method performs better than random search on different search spaces and different datasets, since the Kendall Tau τ metric of random search should be 0. So the performances of architectures predicted by the supernet can reflect the real ranking of architectures to a certain degree. However, the results in Table 8 also reveal a limitation of our method: the ranking predicted by our supernet is partially, but not perfectly, correlated with the real ranking. Thus our method cannot guarantee finding the true best architecture in the search space, but it is able to find superior architectures close to the best. We also believe that the correlation of the supernet depends on the search space: the simpler the search space, the higher the correlation.

# 5 Conclusion

In this paper, we revisit the one-shot NAS paradigm and analyze the drawbacks of weight coupling in previous weight sharing methods. To alleviate those problems, we propose a single path one-shot approach which is simple but effective. Comprehensive experiments show that our method can achieve better results than others on several different search spaces. We also analyze the search cost and correlation of our method. Our method is more efficient, especially when multiple searches are needed, and it achieves significant correlation on different search spaces derived from NAS-Bench-201, which further verifies its effectiveness. A limitation of our method is that the ranking predicted by the supernet is partially, but not perfectly, correlated with the real ranking; we believe this depends on the search space, and the simpler the search space, the higher the correlation.

# References

1.
Baker, B., Gupta, O., Naik, N., Raskar, R.: Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167 (2016) 2. Bender, G., Kindermans, P.J., Zoph, B., Vasudevan, V., Le, Q.: Understanding and simplifying one-shot architecture search. In: International Conference on Machine Learning. pp. 549–558 (2018) 3. Brock, A., Lim, T., Ritchie, J.M., Weston, N.: Smash: one-shot model architecture search through hypernetworks. arXiv preprint arXiv:1708.05344 (2017) 4. Cai, H., Zhu, L., Han, S.: Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332 (2018) 5. Choi, J., Wang, Z., Venkataramani, S., Chuang, P.I.J., Srinivasan, V., Gopalakr- ishnan, K.: Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085 (2018) 6. Dong, X., Yang, Y.: Searching for a robust neural architecture in four gpu hours. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1761–1770 (2019) 7. Dong, X., Yang, Y.: Nas-bench-102: Extending the scope of reproducible neural architecture search. arXiv preprint arXiv:2001.00326 (2020) 8. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., An- dreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017) 9. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015) 10. Li, L., Talwalkar, A.: Random search and reproducibility for neural architecture search. arXiv preprint arXiv:1902.07638 (2019) 11. Liu, C., Zoph, B., Neumann, M., Shlens, J., Hua, W., Li, L.J., Fei-Fei, L., Yuille, A., Huang, J., Murphy, K.: Progressive neural architecture search. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 19–34 (2018) 12. Liu, H., Simonyan, K., Yang, Y.: Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055 (2018) 13. Liu, Z., Wu, B., Luo, W., Yang, X., Liu, W., Cheng, K.T.: Bi-real net: Enhanc- ing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In: Proceedings of the European Conference on Com- puter Vision (ECCV). pp. 722–737 (2018) 14. Ma, N., Zhang, X., Zheng, H.T., Sun, J.: Shufflenet v2: Practical guidelines for efficient cnn architecture design. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 116–131 (2018) 15. Pham, H., Guan, M.Y., Zoph, B., Le, Q.V., Dean, J.: Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268 (2018) 16. Real, E., Aggarwal, A., Huang, Y., Le, Q.V.: Regularized evolution for image clas- sifier architecture search. arXiv preprint arXiv:1802.01548 (2018) 17. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recog- nition challenge. International journal of computer vision 115(3), 211–252 (2015) 18. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: Mobilenetv2: In- verted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4510–4520 (2018) 19. Sciuto, C., Yu, K., Jaggi, M., Musat, C., Salzmann, M.: Evaluating the search phase of neural architecture search. arXiv preprint arXiv:1902.08142 (2019) 16 Zichao Guo, et al. 20. 
Stamoulis, D., Ding, R., Wang, D., Lymberopoulos, D., Priyantha, B., Liu, J., Marculescu, D.: Single-path nas: Designing hardware-efficient convnets in less than 4 hours. arXiv preprint arXiv:1904.02877 (2019) 21. Tan, M., Chen, B., Pang, R., Vasudevan, V., Le, Q.V.: Mnasnet: Platform-aware neural architecture search for mobile. arXiv preprint arXiv:1807.11626 (2018) 22. V´eniat, T., Denoyer, L.: Learning time/memory-efficient deep architectures with budgeted super networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3492–3500 (2018) 23. Wu, B., Dai, X., Zhang, P., Wang, Y., Sun, F., Wu, Y., Tian, Y., Vajda, P., Jia, Y., Keutzer, K.: Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. arXiv preprint arXiv:1812.03443 (2018) 24. Wu, B., Wang, Y., Zhang, P., Tian, Y., Vajda, P., Keutzer, K.: Mixed preci- sion quantization of convnets via differentiable neural architecture search. arXiv preprint arXiv:1812.00090 (2018) 25. Xie, S., Kirillov, A., Girshick, R., He, K.: Exploring randomly wired neural net- works for image recognition. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1284–1293 (2019) 26. Xie, S., Zheng, H., Liu, C., Lin, L.: Snas: stochastic neural architecture search. arXiv preprint arXiv:1812.09926 (2018) 27. Yang, A., Esperan¸ca, P.M., Carlucci, F.M.: Nas evaluation is frustratingly hard. arXiv preprint arXiv:1912.12522 (2019) 28. Yao, Q., Xu, J., Tu, W.W., Zhu, Z.: Differentiable neural architecture search via proximal iterations. arXiv preprint arXiv:1905.13577 (2019) 29. Zhang, D., Yang, J., Ye, D., Hua, G.: Lq-nets: Learned quantization for highly accurate and compact deep neural networks. In: Proceedings of the European Con- ference on Computer Vision (ECCV). pp. 365–382 (2018) 30. Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolu- tional neural network for mobile devices. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 6848–6856 (2018) 31. Zhang, X., Huang, Z., Wang, N.: You only search once: Single shot neural ar- chitecture search via direct sparse optimization. arXiv preprint arXiv:1811.01567 (2018) 32. Zhong, Z., Yan, J., Wu, W., Shao, J., Liu, C.L.: Practical block-wise neural network architecture generation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2423–2432 (2018) 33. Zhong, Z., Yang, Z., Deng, B., Yan, J., Wu, W., Shao, J., Liu, C.L.: Block- qnn: Efficient block-wise neural network architecture generation. arXiv preprint arXiv:1808.05584 (2018) 34. Zhou, S., Wu, Y., Ni, Z., Zhou, X., Wen, H., Zou, Y.: Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160 (2016) 35. Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578 (2016) 36. Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 8697–8710 (2018)
{ "id": "1606.06160" }
1903.12436
From Variational to Deterministic Autoencoders
Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of VAEs. We observe that sampling a stochastic encoder in a Gaussian VAE can be interpreted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and implicit regularization schemes, can lead to an equally smooth and meaningful latent space without forcing it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data, we introduce an ex-post density estimation step that can be readily applied also to existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regularized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules. \footnote{An implementation is available at: \url{https://github.com/ParthaEth/Regularized_autoencoders-RAE-}}
http://arxiv.org/pdf/1903.12436
Partha Ghosh, Mehdi S. M. Sajjadi, Antonio Vergari, Michael Black, Bernhard Schölkopf
cs.LG, stat.ML
Partha Ghosh and Mehdi S. M. Sajjadi contributed equally to this work
null
cs.LG
20190329
20200529
0 2 0 2 y a M 9 2 ] G L . s c [ 4 v 6 3 4 2 1 . 3 0 9 1 : v i X r a Published as a conference paper at ICLR 2020 # FROM VARIATIONAL TO DETERMINISTIC AUTOENCODERS # Partha Ghosh†∗ # Mehdi S. M. Sajjadi†∗ # Antonio Vergari‡ # Michael Black† # Bernhard Sch¨olkopf† † Max Planck Institute for Intelligent Systems, T¨ubingen, Germany {pghosh,msajjadi,black,bs}@tue.mpg.de ‡ University of California, Los Angeles, USA [email protected] # ABSTRACT Variational Autoencoders (VAEs) provide a theoretically-backed and popular framework for deep generative models. However, learning a VAE from data poses still unanswered theoretical questions and considerable practical challenges. In this work, we propose an alternative framework for generative modeling that is simpler, easier to train, and deterministic, yet has many of the advantages of VAEs. We observe that sampling a stochastic encoder in a Gaussian VAE can be inter- preted as simply injecting noise into the input of a deterministic decoder. We investigate how substituting this kind of stochasticity, with other explicit and im- plicit regularization schemes, can lead to an equally smooth and meaningful latent space without forcing it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism to sample new data, we introduce an ex-post density es- timation step that can be readily applied also to existing VAEs, improving their sample quality. We show, in a rigorous empirical study, that the proposed regular- ized deterministic autoencoders are able to generate samples that are comparable to, or better than, those of VAEs and more powerful alternatives when applied to images as well as to structured data such as molecules. 1 # INTRODUCTION Generative models lie at the core of machine learning. By capturing the mechanisms behind the data generation process, one can reason about data probabilistically, access and traverse the low- dimensional manifold the data is assumed to live on, and ultimately generate new data. It is there- fore not surprising that generative models have gained momentum in applications such as computer vision (Sohn et al., 2015; Brock et al., 2019), NLP (Bowman et al., 2016; Severyn et al., 2017), and chemistry (Kusner et al., 2017; Jin et al., 2018; G´omez-Bombarelli et al., 2018). Variational Autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) cast learning rep- resentations for high-dimensional distributions as a variational inference problem. Learning a VAE amounts to the optimization of an objective balancing the quality of samples that are autoencoded through a stochastic encoder–decoder pair while encouraging the latent space to follow a fixed prior distribution. Since their introduction, VAEs have become one of the frameworks of choice among the different generative models. VAEs promise theoretically well-founded and more stable training than Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and more efficient sam- pling mechanisms than autoregressive models (Larochelle & Murray, 2011; Germain et al., 2015). However, the VAE framework is still far from delivering the promised generative mechanism, as there are several practical and theoretical challenges yet to be solved. A major weakness of VAEs is ∗Equal contribution. 1An implementation is available at: https://github.com/ParthaEth/Regularized_ autoencoders-RAE- 1 Published as a conference paper at ICLR 2020 the tendency to strike an unsatisfying compromise between sample quality and reconstruction qual- ity. 
In practice, this has been attributed to overly simplistic prior distributions (Tomczak & Welling, 2018; Dai & Wipf, 2019) or alternatively, to the inherent over-regularization induced by the KL divergence term in the VAE objective (Tolstikhin et al., 2017). Most importantly, the VAE objective itself poses several challenges as it admits trivial solutions that decouple the latent space from the input (Chen et al., 2017; Zhao et al., 2017), leading to the posterior collapse phenomenon in conjunc- tion with powerful decoders (van den Oord et al., 2017). Furthermore, due to its variational formula- tion, training a VAE requires approximating expectations through sampling at the cost of increased variance in gradients (Burda et al., 2015; Tucker et al., 2017), making initialization, validation, and annealing of hyperparameters essential in practice (Bowman et al., 2016; Higgins et al., 2017; Bauer & Mnih, 2019). Lastly, even after a satisfactory convergence of the objective, the learned aggregated posterior distribution rarely matches the assumed latent prior in practice (Kingma et al., 2016; Bauer & Mnih, 2019; Dai & Wipf, 2019), ultimately hurting the quality of generated samples. All in all, much of the attention around VAEs is still directed towards “fixing” the aforementioned drawbacks associated with them. In this work, we take a different route: we question whether the variational framework adopted by VAEs is necessary for generative modeling and, in particular, to obtain a smooth latent space. We propose to adopt a simpler, deterministic version of VAEs that scales better, is simpler to optimize, and, most importantly, still produces a meaningful latent space and equivalently good or better sam- ples than VAEs or stronger alternatives, e.g., Wasserstein Autoencoders (WAEs) (Tolstikhin et al., 2017). We do so by observing that, under commonly used distributional assumptions, training a stochastic encoder–decoder pair in VAEs does not differ from training a deterministic architecture where noise is added to the decoder’s input. We investigate how to substitute this noise injection mechanism with other regularization schemes in the proposed deterministic Regularized Autoen- coders (RAEs), and we thoroughly analyze how this affects performance. Finally, we equip RAEs with a generative mechanism via a simple ex-post density estimation step on the learned latent space. In summary, our contributions are as follows: i) we introduce the RAE framework for generative modeling as a drop-in replacement for many common VAE architectures; ii) we propose an ex-post density estimation scheme which greatly improves sample quality for VAEs, WAEs and RAEs with- out the need to retrain the models; iii) we conduct a rigorous empirical evaluation to compare RAEs with VAEs and several baselines on standard image datasets and on more challenging structured domains such as molecule generation (Kusner et al., 2017; G´omez-Bombarelli et al., 2018). # 2 VARIATIONAL AUTOENCODERS For a general discussion, we consider a collection of high-dimensional i.i.d. samples X = {xi}N i=1 drawn from the true data distribution pdata(x) over a random variable X taking values in the input space. The aim of generative modeling is to learn from X a mechanism to draw new samples xnew ∼ pdata. Variational Autoencoders provide a powerful latent variable framework to infer such a mechanism. 
The generative process of the VAE is defined as Znew ~ P(Z), — Xnew ~ Po(X| Z = Znew) (ly where p(Z) is a fixed prior distribution over a low-dimensional latent space Z. A stochastic decoder Do(2) = x ~ po(x| 2) = p(X | go(2)) (2) links the latent space to the input space through the likelihood distribution pg, where go is an expres- sive non-linear function parameterized by 6.2 As a result, a VAE estimates Pdata(X) as the infinite mixture model po(x) = | pe(x|z)p(z)dz. At the same time, the input space is mapped to the latent space via a stochastic encoder Eφ(x) = z ∼ qφ(z | x) = q(Z | fφ(x)) (3) where qφ(z | x) is the posterior distribution given by a second function fφ parameterized by φ. Computing the marginal log-likelihood log pθ(x) is generally intractable. One therefore follows a variational approach, maximizing the evidence lower bound (ELBO) for a sample x: log pθ(x) ≥ ELBO(φ, θ, x) = Ez∼qφ(z | x) log pθ(x | z) − KL(qφ(z | x)||p(z)) 2With slight abuse of notation, we use lowercase letters for both random variables and their realizations, e.g., pθ(x | z) instead of p(X | Z = z), when it is clear to discriminate between the two. 2 (4) Published as a conference paper at ICLR 2020 Maximizing Eq. 4 over data X w.r.t. model parameters φ, θ corresponds to minimizing the loss arg min φ,θ Ex∼pdata LELBO = Ex∼pdata LREC + LKL (5) where LREC and LKL are defined for a sample x as follows: LREC = −Ez∼qφ(z | x) log pθ(x | z) LKL = KL(qφ(z | x)||p(z)) (6) Intuitively, the reconstruction loss LREC takes into account the quality of autoencoded samples x through Dθ(Eφ(x)), while the KL-divergence term LKL encourages qφ(z | x) to match the prior p(z) for each z which acts as a regularizer during training (Hoffman & Johnson, 2016). # 2.1 PRACTICE AND SHORTCOMINGS OF VAES To fit a VAE to data through Eq. 5 one has to specify the parametric forms for p(z), qφ(z | x), pθ(x | z), and hence the deterministic mappings fφ and gθ. In practice, the choice for the above distributions is guided by trading off computational complexity with model expressiveness. In the most commonly adopted formulation of the VAE, qφ(z | x) and pθ(x | z) are assumed to be Gaussian: Eφ(x) ∼ N (Z|µφ(x), diag(σφ(x))) Dθ(Eφ(x)) ∼ N (X|µθ(z), diag(σθ(z))) (7) with means µφ, µθ and covariance parameters σφ, σθ given by fφ and gθ. In practice, the covariance of the decoder is set to the identity matrix for all z, i.e., σθ(z) = 1 (Dai & Wipf, 2019). The expec- tation of LREC in Eq. 6 must be approximated via k Monte Carlo point estimates. It is expected that the quality of the Monte Carlo estimate, and hence convergence during learning and sample quality increases for larger k (Burda et al., 2015). However, only a 1-sample approximation is generally carried out (Kingma & Welling, 2014) since memory and time requirements are prohibitive for large k. With the 1-sample approximation, LREC can be computed as the mean squared error between input samples and their mean reconstructions µθ by a decoder that is deterministic in practice: LREC = ||x − µθ(Eφ(x))||2 2 (8) Gradients w.r.t. the encoder parameters ¢ are computed through the expectation of Crec in Eq. 6 via the reparametrization trick (Kingma & Welling, 2014) where the stochasticity of Ey is relegated to an auxiliary random variable € which does not depend on ¢: E4(x) = Me(x) + o4(x) O€, e~N(0,T) (9) where © denotes the Hadamard product. An additional simplifying assumption involves fixing the prior p(z) to be a d-dimensional isotropic Gaussian N(Z | 0,1). 
For this choice, the KL-divergence for a sample x is given in closed form: 2L«. = ||~g(x)||3 +d + 4 o4(x)i — log o4(x)i- While the above assumptions make VAEs easy to implement, the stochasticity in the encoder and decoder are still problematic in practice (Makhzani et al., 2016; Tolstikhin et al., 2017; Dai & Wipf, 2019). In particular, one has to carefully balance the trade-off between the LKL term and LREC during optimization (Dai & Wipf, 2019; Bauer & Mnih, 2019). A too-large weight on the LKL term can dominate LELBO, having the effect of over-regularization. As this would smooth the latent space, it can directly affect sample quality in a negative way. Heuristics to avoid this include manually fine- tuning or gradually annealing the importance of LKL during training (Bowman et al., 2016; Bauer & Mnih, 2019). We also observe this trade-off in a practical experiment in Appendix A. Even after employing the full array of approximations and “tricks” to reach convergence of Eq. 5 for a satisfactory set of parameters, there is no guarantee that the learned latent space is distributed according to the assumed prior distribution. In other words, the aggregated posterior distribution qφ(z) = Ex∼pdata q(z|x) has been shown not to conform well to p(z) after training (Tolstikhin et al., 2017; Bauer & Mnih, 2019; Dai & Wipf, 2019). This critical issue severely hinders the generative mechanism of VAEs (cf. Eq. 1) since latent codes sampled from p(z) (instead of q(z)) might lead to regions of the latent space that are previously unseen to Dθ during training. This results in generating out-of-distribution samples. We refer the reader to Appendix H for a visual demonstration of this phenomenon on the latent space of VAEs. We analyze solutions to this problem in Section 4. 3 Published as a conference paper at ICLR 2020 # 2.2 CONSTANT-VARIANCE ENCODERS Before introducing our fully-deterministic take on VAEs, it is worth investigating intermediate fla- vors of VAEs with reduced stochasticity. Analogous to what is commonly done for decoders as discussed in the previous section, one can fix the variance of qφ(z | x) to be constant for all x. This simplifies the computation of Eφ from Eq. 9 to (10) where σ is a fixed scalar. Then, the KL loss term in a Gaussian VAE simplifies (up to a constant) to LCV 2. We name this variant Constant-Variance VAEs (CV-VAEs). While CV-VAEs have been adopted in some applications such as variational image compression (Ball´e et al., 2017) and adversarial robustness (Ghosh et al., 2019), to the best of our knowledge, there is no systematic study of them in the literature. We will fill this gap in our experiments in Section 6. Lastly, note that now σ in Eq.10 is not learned along the encoder as in Eq. 9. Nevertheless, it can still be fitted as an hyperparameter, e.g., by cross-validation, to maximise the model likelihood. This highlights the possibility to estimate a better parametric form for the latent space distribution after training, or in a outer-loop including training. We address this provide a more complex and flexible solution to deal with the prior structure over Z via ex-post density estimation in Section 4. # 3 DETERMINISTIC REGULARIZED AUTOENCODERS Autoencoding in VAEs is defined in a probabilistic fashion: Eφ and Dθ map data points not to a single point, but rather to parameterized distributions (cf. Eq. 7). However, common implementa- tions of VAEs as discussed in Section 2 admit a simpler, deterministic view for this probabilistic mechanism. 
A glance at the autoencoding mechanism of the VAE is revealing. The encoder deterministically maps a data point x to a mean µφ(x) and variance σφ(x) in the latent space. The input to Dθ is then simply the mean µφ(x) augmented with Gaussian noise scaled by σφ(x) via the reparametrization trick (cf. Eq. 9). In the CV-VAE, this relationship is even more obvious, as the magnitude of the noise is fixed for all data points (cf. Eq. 10). In this light, a VAE can be seen as a deterministic autoencoder where (Gaussian) noise is added to the decoder's input.

We argue that this noise injection mechanism is a key factor in having a regularized decoder. Using random noise injection to regularize neural networks is a well-known technique that dates back several decades (Sietsma & Dow, 1991; An, 1996). It implicitly helps to smooth the function learned by the network at the price of increased variance in the gradients during training. In turn, decoder regularization is a key component of generalization for VAEs, as it improves random sample quality and achieves a smoother latent space. Indeed, from a generative perspective, regularization is motivated by the goal to learn a smooth latent space where similar data points x are mapped to similar latent codes z, and small variations in Z lead to reconstructions by Dθ that vary only slightly.

We propose to substitute noise injection with an explicit regularization scheme for the decoder. This entails the substitution of the variational framework in VAEs, which enforces regularization on the encoder posterior through LKL, with a deterministic framework that applies other flavors of decoder regularization. By removing noise injection from a CV-VAE, we are effectively left with a deterministic autoencoder (AE). Coupled with explicit regularization for the decoder, we obtain a Regularized Autoencoder (RAE). Training a RAE thus involves minimizing the simplified loss

LRAE = LREC + β LRAE_Z + λ LREG (11)

where LREG represents the explicit regularizer for Dθ (discussed in Section 3.1) and LRAE_Z = 1/2 ||z||²₂ (the counterpart of LKL in a constant-variance VAE) is equivalent to constraining the size of the learned latent space, which is still needed to prevent unbounded optimization. Finally, β and λ are two hyperparameters that balance the different loss terms.

Note that for RAEs, no Monte Carlo approximation is required to compute LREC. This relieves the need for more samples from qφ(z | x) to achieve better image quality (cf. Appendix A). Moreover, by abandoning the variational framework and the LKL term, there is no need in RAEs for a fixed prior distribution over Z. Doing so, however, loses a clear generative mechanism for RAEs to sample from Z. We propose a method to regain random sampling ability in Section 4 by performing density estimation on Z ex-post, a step that is otherwise still needed for VAEs to alleviate the posterior mismatch issue.

# 3.1 REGULARIZATION SCHEMES FOR RAES

Among possible choices for LREG, a first obvious candidate is Tikhonov regularization (Tikhonov & Arsenin, 1977), since it is known to be related to the addition of low-magnitude input noise (Bishop, 2006). Training a RAE within this framework thus amounts to adopting LREG = LL2 = ||θ||²₂, which effectively applies weight decay on the decoder parameters θ.
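A minimal sketch of the RAE-L2 objective of Eq. (11) follows, assuming PyTorch modules for the encoder and decoder; the coefficient values and the explicit parameter-norm penalty (which one might equivalently implement as optimizer weight decay) are illustrative choices, not the paper's exact hyperparameters.

```python
import torch

def rae_l2_loss(encoder, decoder, x, beta=1e-4, lam=1e-7):
    """L_RAE = L_REC + beta * L_Z^RAE + lambda * L_L2 (Eq. 11 with Tikhonov regularization)."""
    z = encoder(x)
    x_rec = decoder(z)
    l_rec = ((x - x_rec) ** 2).flatten(1).sum(dim=1).mean()      # L_REC, squared error
    l_z = 0.5 * (z ** 2).flatten(1).sum(dim=1).mean()            # L_Z^RAE = 1/2 ||z||^2
    l_reg = sum((p ** 2).sum() for p in decoder.parameters())    # L_L2 = ||theta||^2
    return l_rec + beta * l_z + lam * l_reg
```

Swapping l_reg for a gradient penalty or replacing the decoder's layers with spectrally normalized ones yields the RAE-GP and RAE-SN variants discussed below.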
Another option comes from the recent GAN literature, where regularization is a hot topic (Kurach et al., 2018) and where injecting noise into the input of the adversarial discriminator has led to improved performance in a technique called instance noise (Sønderby et al., 2017). To enforce Lipschitz continuity on adversarial discriminators, weight clipping has been proposed (Arjovsky et al., 2017), which is however known to significantly slow down training. More successfully, a gradient penalty on the discriminator can be used, similar to Gulrajani et al. (2017); Mescheder et al. (2018), yielding the objective LREG = LGP = ||∇Dθ(Eφ(x))||²₂, which bounds the gradient norm of the decoder w.r.t. its input.

Additionally, spectral normalization (SN) has been successfully proposed as an alternative way to bound the Lipschitz norm of an adversarial discriminator (Miyato et al., 2018). SN normalizes each weight matrix θℓ in the decoder by an estimate of its largest singular value: θℓ^SN = θℓ / s(θℓ), where s(θℓ) is the current estimate obtained through the power method.

In light of the recent successes of deep networks without explicit regularization (Zagoruyko & Komodakis, 2016; Zhang et al., 2017), it is intriguing to question the need for explicit regularization of the decoder in order to obtain a meaningful latent space. The assumption here is that techniques such as dropout (Srivastava et al., 2014), batch normalization (Ioffe & Szegedy, 2015), and adding noise during training (An, 1996) implicitly regularize the networks enough. Therefore, as a natural baseline to the LRAE objectives introduced above, we also consider the RAE framework without LREG and LRAE_Z. To complete our "autopsy" of the VAE loss, we additionally investigate deterministic autoencoders with decoder regularization, but without the LRAE_Z term, as well as possible combinations of different regularizers in our RAE framework (cf. Table 3 in Appendix I).

Lastly, it is worth questioning if it is possible to formally derive our RAE framework from first principles. We answer this affirmatively, and show how to augment the ELBO optimization problem of a VAE with an explicit constraint, while not fixing a parametric form for qφ(z | x). This indeed leads to a special case of the RAE loss in Eq. 11. Specifically, we derive a regularizer like LGP for a deterministic version of the CV-VAE. Note that this derivation legitimates bounding the decoder's gradients, and as such it justifies the spectral norm regularizer as well, since the latter enforces the decoder's Lipschitzness. We accommodate the full derivation in Appendix B.

# 4 EX-POST DENSITY ESTIMATION

By removing stochasticity and, ultimately, the KL divergence term LKL from RAEs, we have simplified the original VAE objective at the cost of detaching the encoder from the prior p(z) over the latent space. This implies that i) we cannot ensure that the latent space Z is distributed according to a simple distribution (e.g., isotropic Gaussian) anymore and, consequently, ii) we lose the simple mechanism provided by p(z) to sample from Z as in Eq. 1. As discussed in Section 2.1, issue i) is compromising the VAE framework in any case, as reported in several works (Hoffman & Johnson, 2016; Rosca et al., 2018; Dai & Wipf, 2019). To fix this, some works extend the VAE objective by encouraging the aggregated posterior to match p(z) (Tolstikhin et al., 2017) or by utilizing more complex priors (Kingma et al., 2016; Tomczak & Welling, 2018; Bauer & Mnih, 2019).

To overcome both i) and ii), we instead propose to employ ex-post density estimation over Z. We fit a density estimator denoted as qδ(z) to {z = Eφ(x) | x ∈ X }. This simple approach not only fits our RAE framework well, but it can also be readily adopted for any VAE or variants thereof, such as the WAE, as a practical remedy to the aggregated posterior mismatch without adding any computational overhead to the costly training phase.
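A sketch of this ex-post step using scikit-learn is shown below: a 10-component, full-covariance GMM is fit to the latent codes of the training set and then sampled to generate new data. The names latent_codes and decoder are placeholders for the trained model's outputs rather than part of any published interface.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ex_post_density(latent_codes, n_components=10):
    """latent_codes: array of shape (N, d) with z = E_phi(x) collected over the training set."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(latent_codes)
    return gmm

def sample_latents(gmm, n_samples=16):
    z_new, _ = gmm.sample(n_samples)          # z_new ~ q_delta(z), shape (n_samples, d)
    return np.asarray(z_new, dtype=np.float32)

# New samples are then obtained by decoding: x_new = decoder(sample_latents(gmm)).
```

Because the density estimator is fit after training, the same procedure can be applied unchanged to the latent space of an already-trained VAE or WAE.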
To overcome both i) and ii), we instead propose to employ ex-post density estimation over Z. We fit a density estimator denoted as qδ(z) to {z = Eφ(x)|x ∈ X }. This simple approach not only fits our RAE framework well, but it can also be readily adopted for any VAE or variants thereof such as the WAE as a practical remedy to the aggregated posterior mismatch without adding any computational overhead to the costly training phase. 5 Published as a conference paper at ICLR 2020 The choice of qδ(z) needs to trade-off expressiveness – to provide a good fit of an arbitrary space for Z – with simplicity, to improve generalization. For example, placing a Dirac distribution on each latent point z would allow the decoder to output only training sample reconstructions which have a high quality, but do not generalize. Striving for simplicity, we employ and compare a full covariance multivariate Gaussian with a 10-component Gaussian mixture model (GMM) in our experiments. # 5 RELATED WORKS Many works have focused on diagnosing the VAE framework, the terms in its objective (Hoffman & Johnson, 2016; Zhao et al., 2017; Alemi et al., 2018), and ultimately augmenting it to solve optimization issues (Rezende & Viola, 2018; Dai & Wipf, 2019). With RAE, we argue that a simpler deterministic framework can be competitive for generative modeling. Deterministic denoising (Vincent et al., 2008) and contractive autoencoders (CAEs) (Rifai et al., 2011) have received attention in the past for their ability to capture a smooth data manifold. Heuristic attempts to equip them with a generative mechanism include MCMC schemes (Rifai et al., 2012; Bengio et al., 2013). However, they are hard to diagnose for convergence, require a considerable effort in tuning (Cowles & Carlin, 1996), and have not scaled beyond MNIST, leading to them being superseded by VAEs. While computing the Jacobian for CAEs (Rifai et al., 2011) is close in spirit to LGP for RAEs, the latter is much more computationally efficient. Approaches to cope with the aggregated posterior mismatch involve fixing a more expressive form for p(z) (Kingma et al., 2016; Bauer & Mnih, 2019) therefore altering the VAE objective and re- quiring considerable additional computational efforts. Estimating the latent space of a VAE with a second VAE (Dai & Wipf, 2019) reintroduces many of the optimization shortcomings discussed for VAEs and is much more expensive in practice compared to fitting a simple qδ(z) after training. Adversarial Autoencoders (AAE) (Makhzani et al., 2016) add a discriminator to a deterministic encoder–decoder pair, leading to sharper samples at the expense of higher computational overhead and the introduction of instabilities caused by the adversarial nature of the training process. RECONSTRUCTIONS RANDOM SAMPLES INTERPOLATIONS GT VAE CV-VAE WAE 2SVAE RAE-GP RAE-L2 RAE-SN RAE AE @ @ & f @ eee 2 'e a a ee if SG SiGe @ Seielele = ivit 2 a SSS isis CSS Sis @ 2 (ee (eee lees dy @ Wf 2 1S (8 i u i & = @ Figure 1: Qualitative evaluation of sample quality for VAEs, WAEs, 2sVAEs, and RAEs on CelebA. RAE provides slightly sharper samples and reconstructions while interpolating smoothly in the latent space. Corresponding qualitative overviews for MNIST and CIFAR-10 are provided in Appendix F. Wasserstein Autoencoders (WAE) (Tolstikhin et al., 2017) have been introduced as a generaliza- tion of AAEs by casting autoencoding as an optimal transport (OT) problem. 
Both stochastic and deterministic models can be trained by minimizing a relaxed OT cost function employing either an adversarial loss term or the maximum mean discrepancy score between p(z) and qφ(z) as a reg- 6 Published as a conference paper at ICLR 2020 ularizer in place of LKL. Within the RAE framework, we look at this problem from a different perspective: instead of explicitly imposing a simple structure on Z that might impair the ability to fit high-dimensional data during training, we propose to model the latent space by an ex-post density estimation step. The most successful VAE architectures for images and audio so far are variations of the VQ- VAE (van den Oord et al., 2017; Razavi et al., 2019). Despite the name, VQ-VAEs are neither stochastic, nor variational, but they are deterministic autoencoders. VQ-VAEs are similar to RAEs in that they adopt ex-post density estimation. However, VQ-VAEs necessitates complex discrete autoregressive density estimators and a training loss that is non-differentiable due to quantizing Z. Lastly, RAEs share some similarities with GLO (Bojanowski et al., 2018). However, differently from RAEs, GLO can be interpreted as a deterministic AE without and encoder, and when the latent space is built “on-demand” by optimization. On the other hand, RAEs augment deterministic decoders as in GANs with deterministic encoders. # 6 EXPERIMENTS Our experiments are designed to answer the following questions: Q1: Are sample quality and latent space structure in RAEs comparable to VAEs? Q2: How do different regularizations impact RAE performance? Q3: What is the effect of ex-post density estimation on VAEs and its variants? MNIST CIFAR CELEBA SAMPLES SAMPLES SAMPLES REC. N GMM Interp. REC. N GMM Interp. REC. N GMM Interp. VAE CV-VAE WAE 2SVAE 18.26 15.15 10.03 20.31 19.21 33.79 20.42 18.81 17.66 17.87 9.39 – 18.21 25.12 14.34 18.35 57.94 37.74 35.97 62.54 106.37 94.75 117.44 109.77 103.78 86.64 93.53 – 88.62 69.71 76.89 89.06 39.12 40.41 34.81 42.04 48.12 48.87 53.67 49.70 45.52 49.30 42.73 – 44.49 44.96 40.93 47.54 RAE-GP RAE-L2 RAE-SN RAE AE AE-L2 14.04 10.53 15.65 11.67 12.95 11.19 22.21 22.22 19.67 23.92 58.73 315.15 11.54 8.69 11.74 9.81 10.66 9.36 15.32 14.54 15.15 14.67 17.12 17.15 32.17 32.24 27.61 29.05 30.52 34.35 83.05 80.80 84.25 83.87 84.74 247.48 76.33 74.16 75.30 76.28 76.47 75.40 64.08 62.54 63.62 63.27 61.57 61.09 39.71 43.52 36.01 40.18 40.79 44.72 116.30 51.13 44.74 48.20 127.85 346.29 45.63 47.97 40.95 44.68 45.10 48.42 47.00 45.98 39.53 43.67 50.94 56.16 Table 1: Evaluation of all models by FID (lower is better, best models in bold). We evaluate each model by REC.: test sample reconstruction; N : random samples generated according to the prior distribution p(z) (isotropic Gaussian for VAE / WAE, another VAE for 2SVAE) or by fitting a Gaus- sian to qδ(z) (for the remaining models); GMM: random samples generated by fitting a mixture of 10 Gaussians in the latent space; Interp.: mid-point interpolation between random pairs of test reconstructions. The RAE models are competitive with or outperform previous models throughout the evaluation. Interestingly, interpolations do not suffer from the lack of explicit priors on the latent space in our models. # 6.1 RAES FOR IMAGE MODELING We evaluate all regularization schemes from Section 3.1: RAE-GP, RAE-L2, and RAE-SN. For a thorough ablation study, we also consider only adding the latent code regularizer LRAE to LREC (RAE), and an autoencoder without any explicit regularization (AE). 
We check the effect of applying one regularization scheme while not including the LRAE # Z As baselines, we employ the regular VAE, constant-variance VAE (CV-VAE), Wasserstein Au- toencoder (WAE) with the MMD loss as a state-of-the-art method, and the recent 2-stage VAE (2sVAE) (Dai & Wipf, 2019) which performs a form of ex-post density estimation via another VAE. For a fair comparison, we use the same network architecture for all models. Further details about the architecture and training are given in Appendix C. We measure the following quantities: held-out sample reconstruction quality, random sample qual- ity, and interpolation quality. While reconstructions give us a lower bound on the best quality 7 Published as a conference paper at ICLR 2020 achievable by the generative model, random sample quality indicates how well the model gener- alizes. Finally, interpolation quality sheds light on the structure of the learned latent space. The evaluation of generative models is a nontrivial research question (Theis et al., 2016; Sajjadi et al., 2017; Lucic et al., 2018a). We report here the ubiquitous Fr´echet Inception Distance (FID) (Heusel et al., 2017) and we provide precision and recall scores (PRD) (Sajjadi et al., 2018) in Appendix E. Table 1 summarizes our main results. All of the proposed RAE variants are competitive with the VAE, WAE and 2sVAE w.r.t. generated image quality in all settings. Sampling RAEs achieve the best FIDs across all datasets when a modest 10-component GMM is employed for ex-post density estimation. Furthermore, even when N is considered as qδ(z), RAEs rank first with the exception of MNIST, where it competes for the second position with a VAE. Our best RAE FIDs are lower than the best results reported for VAEs in the large scale comparison of (Lucic et al., 2018a), challenging even the best scores reported for GANs. While we are employing a slightly different architecture than theirs, our models underwent only modest finetuning instead of an extensive hyperparameter search. A comparison of the different regularization schemes for RAEs (Q2) yields no clear winner across all settings as all perform equally well. Striving for a simpler implementation, one may prefer RAE-L2 over the GP and SN variants. For completeness, we investigate applying multiple regularization schemes to our RAE models. We report the results of all possible combinations in Table 3, Appendix I. There, no significant boost of performance can be spotted when comparing to singly regularized RAEs. Surprisingly, the implicitly regularized RAE and AE models are shown to be able to score impressive FIDs when qδ(z) is fit through GMMs. FIDs for AEs decrease from 58.73 to 10.66 on MNIST and from 127.85 to 45.10 on CelebA – a value close to the state of the art. This is a remarkable result that follows a long series of recent confirmations that neural networks are surprisingly smooth by design (Neyshabur et al., 2017). It is also surprising that the lack of an explicitly fixed structure on the latent space of the RAE does not impede interpolation quality. This is further confirmed by the qualitative evaluation on CelebA as reported in Fig. 1 and for the other datasets in Appendix F, where RAE interpolated samples seem sharper than competitors and transitions smoother. Our results further confirm and quantify the effect of the aggregated posterior mismatch. In Table 1, ex-post density estimation consistently improves sample quality across all settings and models. 
A 10-component GMM halves FID scores from ∼20 to ∼10 for WAE and RAE models on MNIST and from 116 to 46 on CelebA. This is especially striking since this additional step is much cheaper and simpler than training a second-stage VAE as in 2sVAE (Q3). In summary, the results strongly sup- port the conjecture that the simple deterministic RAE framework can challenge VAEs and stronger alternatives (Q1). 6.2 GRAMMARRAE: MODELING STRUCTURED INPUTS We now evaluate RAEs for generating complex structured objects such as molecules and arithmetic expressions. We do this with a twofold aim: i) to investigate the latent space learned by RAE for more challenging input spaces that abide to some structural constraints, and ii) to quantify the gain of replacing the VAE in a state-of-the-art generative model with a RAE. To this end, we adopt the exact architectures and experimental settings of the GrammarVAE (GVAE) (Kusner et al., 2017), which has been shown to outperform other generative alternatives such as the CharacterVAE (CVAE) (G´omez-Bombarelli et al., 2018). As in Kusner et al. (2017), we are interested in traversing the latent space learned by our models to generate samples (molecules or expressions) that best fit some downstream metric. This is done by Bayesian optimization (BO) by considering the log(1 + MSE) (lower is better) for the generated expressions w.r.t. some ground truth points, and the water-octanol partition coefficient (log P ) (Pyzer-Knapp et al., 2015) (higher is better) in the case of molecules. A well-behaved latent space will not only generate molecules or expressions with better scores during the BO step, but it will also contain syntactically valid ones, i.e., , samples abide to a grammar of rules describing the problem. Figure 2 summarizes our results over 5 trials of BO. Our GRAEs (Grammar RAE) achieve better average scores than CVAEs and GVAEs in generating expressions and molecules. This is visible also for the three best samples and their scores for all models, with the exception of the first best expression of GVAE. We include in the comparison also the GCVVAE, the equivalent of a CV-VAE 8 Published as a conference paper at ICLR 2020 PROBLEM MODEL % VALID AVG. SCORE MODEL 1ST 2ND 3RD EXPRESSIONS 1.00 ± 0.00 GRAE GCVVAE 0.99 ± 0.01 0.99 ± 0.01 GVAE 0.82 ± 0.07 CVAE 3.22 ± 0.03 2.85 ± 0.08 3.26 ± 0.20 4.74 ± 0.25 GRAE SCORE 3.74 3.52 3.14 MOLECULES -5.62 ± 0.71 0.72 ± 0.09 GRAE -6.40 ± 0.80 GCVVAE 0.76 ± 0.06 0.28 ± 0.04 -7.89 ± 1.90 GVAE 0.16 ± 0.04 -25.64 ± 6.35 CVAE GCVVAE SCORE 3.22 2.83 2.63 MODEL # EXPRESSION SCORE GRAE 1 sin(3) + x 2 x + 1/ exp(1) 3 x + 1 + 2 ∗ sin(3 + 1 + 2) 0.39 0.39 0.43 GVAE GCVVAE 1 x + sin(3) ∗ 1 0.39 2 x/x/3 + x 0.40 3 sin(exp(exp(1))) + x/2 ∗ 2 0.43 SCORE CVAE 3.13 3.10 2.37 GVAE 1 x/1 + sin(x) + sin(x ∗ x) 2 1/2 + (x) + sin(x ∗ x) 3 x/2 + sin(1) + (x/2) 0.10 0.46 0.52 SCORE 2.75 0.82 0.63 1 x ∗ 1 + sin(x) + sin(3 + x) 0.45 2 x/1 + sin(1) + sin(2 ∗ 2) 0.48 3 1/1 + (x) + sin(1/2) 0.61 Figure 2: Generating structured objects by GVAE, CVAE and GRAE. (Upper left) Percentage of valid samples and their average mean score (see text, Section 6.2). The three best expressions (lower left) and molecules (upper right) and their scores are reported for all models. for structured objects, as an additional baseline. 
We can observe that while the GCVVAE delivers better average scores for the simpler task of generating equations (even though the single three best equations are on par with GRAE), when generating molecules GRAEs deliver samples associated to much higher scores. More interestingly, while GRAEs are almost equivalent to GVAEs for the easier task of generat- ing expressions, the proportion of syntactically valid molecules for GRAEs greatly improves over GVAEs (from 28% to 72%). # 7 CONCLUSION While the theoretical derivation of the VAE has helped popularize the framework for generative modeling, recent works have started to expose some discrepancies between theory and practice. We have shown that viewing sampling in VAEs as noise injection to enforce smoothness can enable one to distill a deterministic autoencoding framework that is compatible with several regularization techniques to learn a meaningful latent space. We have demonstrated that such an autoencoding framework can generate comparable or better samples than VAEs while getting around the practical drawbacks tied to a stochastic framework. Furthermore, we have shown that our solution of fitting a simple density estimator on the learned latent space consistently improves sample quality both for the proposed RAE framework as well as for VAEs, WAEs, and 2sVAEs which solves the mismatch between the prior and the aggregated posterior in VAEs. # ACKNOWLEDGEMENTS We would like to thank Anant Raj, Matthias Bauer, Paul Rubenstein and Soubhik Sanyal for fruitful discussions. 9 Published as a conference paper at ICLR 2020 # REFERENCES Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. Fixing a broken ELBO. In ICML, 2018. Guozhong An. The effects of adding noise during backpropagation training on a generalization performance. In Neural computation, 1996. Martin Arjovsky, Soumith Chintala, and L´eon Bottou. Wasserstein generative adversarial networks. In ICML, 2017. Johannes Ball´e, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. In ICLR, 2017. M. Bauer and A. Mnih. Resampled priors for variational autoencoders. In AISTATS, 2019. Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising auto-encoders as generative models. In NeurIPS, 2013. Christopher M Bishop. Pattern recognition and machine learning. Springer, 2006. Piotr Bojanowski, Armand Joulin, David Lopez-Paz, and Arthur Szlam. Optimizing the latent space of generative networks. In International Conference on Machine Learning, 2018. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Ben- gio. Generating sentences from a continuous space. In CoNLL, 2016. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. In ICLR, 2019. Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015. Xi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. Variational lossy autoencoder. In ICLR, 2017. Mary Kathryn Cowles and Bradley P Carlin. Markov chain Monte Carlo convergence diagnostics: a comparative review. In Journal of the American Statistical Association, 1996. Bin Dai and David Wipf. Diagnosing and enhancing VAE models. In ICLR, 2019. Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. 
In International Conference on Machine Learning, pp. 881–889, 2015. Partha Ghosh, Arpan Losalka, and Michael J Black. Resisting adversarial attacks using Gaussian mixture variational autoencoders. In AAAI, 2019. Rafael G´omez-Bombarelli, Jennifer N Wei, David Duvenaud, Jos´e Miguel Hern´andez-Lobato, Benjam´ın S´anchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D Hirzel, Ryan P Adams, and Al´an Aspuru-Guzik. Automatic chemical design using a data-driven contin- uous representation of molecules. In ACS central science, 2018. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Im- proved training of Wasserstein GANs. In NeurIPS, 2017. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, G¨unter Klambauer, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a Nash equilibrium. In NeurIPS, 2017. Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. Beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. 10 Published as a conference paper at ICLR 2020 Matthew D Hoffman and Matthew J Johnson. Elbo surgery: yet another way to carve up the vari- In Workshop in Advances in Approximate Bayesian Inference, ational evidence lower bound. NeurIPS, 2016. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. arXiv preprint arXiv:1802.04364, 2018. Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014. Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improving variational inference with inverse autoregressive flow. In NeurIPS, 2016. Alex Krizhevsky and Geoffrey Hinton. Learning Multiple Layers of Features from Tiny Images, 2009. Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. The GAN land- scape: Losses, architectures, regularization, and normalization. arXiv preprint arXiv:1807.04720, 2018. Matt J Kusner, Brooks Paige, and Jos´e Miguel Hern´andez-Lobato. Grammar variational autoen- coder. In ICML, 2017. Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, 2011. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In IEEE, 1998. Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep Learning Face Attributes in the Wild. In ICCV, 2015. Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are GANs created equal? A large-scale study. In NeurIPS, 2018a. Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study. In Advances in neural information processing systems, pp. 700–709, 2018b. Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. In ICLR, 2016. Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In ICML, 2018. Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. 
Spectral normalization for generative adversarial networks. In ICLR, 2018. Behnam Neyshabur, Ryota Tomioka, Ruslan Salakhutdinov, and Nathan Srebro. Geometry of opti- mization and implicit regularization in deep learning. arXiv preprint arXiv:1705.03071, 2017. Edward O Pyzer-Knapp, Changwon Suh, Rafael G´omez-Bombarelli, Jorge Aguilera-Iparraguirre, and Al´an Aspuru-Guzik. What is high-throughput virtual screening? A perspective from organic materials discovery. Annual Review of Materials Research, 2015. Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. arXiv preprint arXiv:1906.00446, 2019. Danilo Jimenez Rezende and Fabio Viola. Taming VAEs. arXiv preprint arXiv:1810.00597, 2018. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014. 11 Published as a conference paper at ICLR 2020 Salah Rifai, Pascal Vincent, Xavier Muller, Xavier Glorot, and Yoshua Bengio. Contractive auto- encoders: Explicit invariance during feature extraction. In ICML, 2011. Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sampling contractive auto-encoders. In ICML, 2012. Mihaela Rosca, Balaji Lakshminarayanan, and Shakir Mohamed. Distribution matching in varia- tional inference. arXiv preprint arXiv:1802.06847, 2018. Mehdi S. M. Sajjadi, Bernhard Sch¨olkopf, and Michael Hirsch. Enhancenet: Single image super- resolution through automated texture synthesis. In ICCV, 2017. Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. In NeurIPS, 2018. Aliaksei Severyn, Erhardt Barth, and Stanislau Semeniuta. A hybrid convolutional variational au- toencoder for text generation. In Empirical Methods in Natural Language Processing, 2017. Jocelyn Sietsma and Robert JF Dow. Creating artificial neural networks that generalize. In Neural networks. Elsevier, 1991. Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In NeurIPS, 2015. Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, and Ferenc Husz´ar. Amortised MAP Inference for Image Super-resolution. In ICLR, 2017. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014. Lucas Theis, A¨aron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In ICLR, 2016. Andrey N Tikhonov and Vasilii Iakkovlevich Arsenin. Solutions of ill-posed problems, volume 14. Winston, Washington, DC, 1977. Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Sch¨olkopf. Wasserstein auto- encoders. In ICLR, 2017. Jakub Tomczak and Max Welling. VAE with a VampPrior. In AISTATS, 2018. George Tucker, Andriy Mnih, Chris J Maddison, John Lawson, and Jascha Sohl-Dickstein. REBAR: low-variance, unbiased gradient estimates for discrete latent variable models. In NeurIPS, 2017. Aaron van den Oord, Oriol Vinyals, et al. Neural discrete representation learning. In NeurIPS, 2017. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. Sergey Zagoruyko and Nikos Komodakis. Wide Residual Networks. In BMVC, 2016. 
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017. Shengjia Zhao, Jiaming Song, and Stefano Ermon. Towards deeper understanding of variational autoencoding models. arXiv preprint arXiv:1702.08658, 2017. 12 Published as a conference paper at ICLR 2020 APPENDIX # A RECONSTRUCTION AND REGULARIZATION TRADE-OFF We train a VAE on MNIST while monitoring the test set reconstruction quality by FID. Figure 3 (left) clearly shows the impact of more expensive k > 1 Monte Carlo approximations of Eq. 7 on sample quality during training. The commonly used 1-sample approximation is a clear limitation for VAE training. Figure 3 (right) depicts the inherent trade-off between reconstruction and random sample quality in VAEs. Enforcing structure and smoothness in the latent space of a VAE affects random sample quality in a negative way. In practice, a compromise needs to be made, ultimately leading to subpar performance. VAE K-Sample Approximation \VAE Reconstruction / KL Loss Tradeoff Reconstructions ——= Random Samples 0 2 4 6 8 +0 10° 10 10° 10° 10 10 10 Training Batches 405 KL loss weight Figure 3: (Left) Test reconstruction quality for a VAE trained on MNIST with different numbers of samples in the latent space as in Eq. 7 measured by FID (lower is better). Larger numbers of Monte-Carlo samples clearly improve training, however, the increased accuracy comes with larger requirements for memory and computation. In practice, the most common choice is therefore k = 1. (Right) Reconstruction and random sample quality (FID, y-axis, lower is better) of a VAE on MNIST for different trade-offs between LREC and LKL (x-axis, see Eq. 5). Higher weights for LKL improve random samples but hurt reconstruction. This is especially noticeable towards the optimality point (β ≈ 101). This indicates that enforcing structure in the VAE latent space leads to a penalty in quality. # B A PROBABILISTIC DERIVATION OF REGULARIZATION In this section, we propose an alternative view on enforcing smoothness on the output of Dθ by augmenting the ELBO optimization problem for VAEs with an explicit constraint. While we keep the Gaussianity assumptions over a stochastic Dθ and p(z) for convenience, we however are not fixing a parametric form for qφ(z | x) yet. We discuss next how some parametric restrictions over qφ(z | x) lead to a variation of the RAE framework in Eq. 11, specifically the introduction of LGP as a regularizer of a deterministic version of the CV-VAE. To start, we augment Eq. 5 as: arg min φ,θ Ex∼pdata(X) LREC + LKL (12) s.t. ||Do(z1) — Do(Z2)\lp <€ V 21,22 ~ e(Z|x) VX ~ Daata where Dg(z) = 4o(Ey(x)) and the constraint on the decoder encodes that the output has to vary, in the sense of an L, norm, only by a small amount ¢ for any two possible draws from the encoding of x. Let Do(z) : R™@) Rt) be given by a set of dim(x) given by {d;(z) : RU™@) > R!}. Now we can upper bound the quantity ||Dg(z1) —_Do(z2)||p by dim(x) * sup; {||di(z1) —d;(z2)||p}- Using mean value theorem ||d;(z1) — d;(Z2)||p < ||Vedi((1 — t)a1 + tz2)||p * ||Z1 — Z2||p. Hence supi{||di (2a) — di(22)||p} < supe{||Vedi((1 — t)za + t22)|Ip *||21 — 22||p}. Now if we choose the domain of q4(z |x) to be isotopic the contribution of ||z2 — z1||p to the afore mentioned quantity becomes a constant factor. Loosely speaking it is the radios of the bounding ball of domain of q¢(z|x). Hence the above term simplifies to sup;{||Vidi((1 — t)z1 + tz2)||p}. 
Recognizing that here z, and Zz is arbitrary lets us simplify this further to sup; {||V-d;(z)||p} From this form of the smoothness constraint, it is apparent why the choice of a parametric form for qφ(z | x) can be impactful during training. For a compactly supported isotropic PDF qφ(z|x), the 13 Published as a conference paper at ICLR 2020 extension of the support sup{||z1 − z2||p} would depend on its entropy H(qφ(z | x)). through some functional r. For instance, a uniform posterior over a hypersphere in z would ascertain r(H(qφ(z | x))) ∼= eH(qφ(z | x))/n where n is the dimensionality of the latent space. Intuitively, one would look for parametric distributions that do not favor overfitting, e.g., degenerat- ing in Dirac-deltas (minimal entropy and support) along any dimensions. To this end, an isotropic nature of qφ(z|x) would favor such a robustness against decoder over-fitting. We can now rewrite the constraint as r(H(qo(2|))) - sup{||VDo(2)|Ilp} < (13) The Lx. term can be expressed in terms of H(q4(z|x)), by decomposing it as Lut = Lce — Lu, where Ly = H(qy(z|x)) and Lce = H(qa(z| x), p(z)) represents a cross-entropy term. Therefore, the constrained problem in Eq. 12 can be written in a Lagrangian formulation by including Eq. 13: arg min φ,θ Ex∼pdata LREC + LCE − LH + λLLANG (14) where LLANG = r(H(qφ(z | x))) ∗ ||∇Dθ(z)||p. We argue that a reasonable simplifying assumption for qφ(z | x) is to fix H(qφ(z | x)) to a single constant for all samples x. Intuitively, this can be understood as fixing the variance in qφ(z | x) as we did for the CV-VAE in Section 2.2. With this simplification, Eq. 14 further reduces to arg min φ,θ Ex∼pdata(X) LREC + LCE + λ||∇Dθ(z)||p (15) We can see that ||∇Dθ(z)||p results to be the gradient penalty LGP and LCE = ||z||2 LRAE # C NETWORK ARCHITECTURE, TRAINING DETAILS AND EVALUATION We follow the models adopted by Tolstikhin et al. (2017) with the difference that we consistently ap- ply batch normalization (Ioffe & Szegedy, 2015). The latent space dimension is 16 for MNIST (Le- Cun et al., 1998), 128 for CIFAR-10 (Krizhevsky & Hinton, 2009) and 64 for CelebA (Liu et al., 2015). For all experiments, we use the Adam optimizer with a starting learning rate of 10−3 which is cut in half every time the validation loss plateaus. All models are trained for a maximum of 100 epochs on MNIST and CIFAR and 70 epochs on CelebA. We use a mini-batch size of 100 and pad MNIST digits with zeros to make the size 32×32. We use the official train, validation and test splits of CelebA. For MNIST and CIFAR, we set aside 10k train samples for validation. For random sample evaluation, we draw samples from N (0, I) for VAE and WAE-MMD and for all remaining models, samples are drawn from a multivariate Gaussian whose parameters (mean and covariance) are estimated using training set embeddings. For the GMM density estimation, we also utilize the training set embeddings for fitting and validation set embeddings to verify that GMM models are not over fitting to training embeddings. However, due to the very low number of mixture components (10), we did not encounter overfitting at this step. The GMM parameters are estimated by running EM for at most 100 iterations. 
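To make this ex-post density estimation step concrete, a minimal sketch using scikit-learn is shown below; the `encode`/`decode` helpers and variable names are placeholders and not part of any released code.

```python
# Minimal sketch of ex-post density estimation over the latent space (illustrative).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ex_post_gmm(z_train, z_val, n_components=10):
    # Full-covariance GMM fit by EM (at most 100 iterations), as described above.
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", max_iter=100)
    gmm.fit(z_train)
    # Held-out log-likelihood as a simple overfitting check on validation embeddings.
    print("train LL: %.2f  val LL: %.2f" % (gmm.score(z_train), gmm.score(z_val)))
    return gmm

# Usage: fit q_delta(z) on training embeddings, then sample and decode.
# z_train, z_val = encode(x_train), encode(x_val)   # assumed helpers
# gmm = fit_ex_post_gmm(z_train, z_val)
# z_samples, _ = gmm.sample(64)
# x_samples = decode(z_samples)
```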
MNIST x ∈ R32×32 → CONV128 → BN → RELU → CONV256 → BN → RELU → CONV512 → BN → RELU → CONV1024 → BN → RELU → FLATTEN → FC16×M CIFAR 10 x ∈ R32×32 → CONV128 → BN → RELU → CONV256 → BN → RELU → CONV512 → BN → RELU → CONV1024 → BN → RELU → FLATTEN → FC128×M CELEBA x ∈ R64×64 → CONV128 → BN → RELU → CONV256 → BN → RELU → CONV512 → BN → RELU → CONV1024 → BN → RELU → FLATTEN → FC64×M z ∈ R16 → FC8×8×1024 → BN → RELU → CONVT512 → BN → RELU → CONVT256 → BN → RELU → CONVT1 z ∈ R128 → FC8×8×1024 → BN → RELU → CONVT512 → BN → RELU → CONVT256 → BN → RELU → CONVT1 z ∈ R64 → FC8×8×1024 → BN → RELU → CONVT512 → BN → RELU → CONVT256 → BN → RELU → CONVT128 → BN → RELU → CONVT1 Convn represents a convolutional layer with n filters. All convolutions Convn and transposed con- volutions ConvTn have a filter size of 4×4 for MNIST and CIFAR-10 and 5×5 for CELEBA. They 14 Published as a conference paper at ICLR 2020 all have a stride of size 2 except for the last convolutional layer in the decoder. Finally, M = 1 for all models except for the VAE which has M = 2 as the encoder has to produce both mean and variance for each input. # D EVALUATION SETUP We compute the FID of the reconstructions of random validation samples against the test set to evaluate reconstruction quality. For evaluating generative modeling capabilities, we compute the FID between the test data and randomly drawn samples from a single Gaussian that is either the isotropic p(z) fixed for VAEs and WAEs, a learned second stage VAE for 2sVAEs, or a single Gaussian fit to qδ(z) for CV-VAEs and RAEs. For all models, we also evaluate random samples from a 10-component Gaussian Mixture model (GMM) fit to qδ(z). Using only 10 components prevents us from overfitting (which would indeed give good FIDs when compared with the test set)3. For interpolations, we report the FID for the furthest interpolation points resulted by applying spher- ical interpolation to randomly selected validation reconstruction pairs. We use 10k samples for all FID and PRD evaluations. Scores for random samples are evaluated against the test set. Reconstruction scores are computed from validation set reconstructions against the respective test set. Interpolation scores are computed by interpolating latent codes of a pair of randomly chosen validation embeddings vs test set samples. The visualized interpolation samples are interpolations between two randomly chosen test set images. E EVALUATION BY PRECISION AND RECALL MNIST CIFAR-10 CELEBA N GMM N GMM N GMM VAE CV-VAE WAE 0.96 / 0.92 0.84 / 0.73 0.93 / 0.88 0.95 / 0.96 0.96 / 0.89 0.98 / 0.95 0.25 / 0.55 0.31 / 0.64 0.38 / 0.68 0.37 / 0.56 0.42 / 0.68 0.51 / 0.81 0.54 / 0.66 0.25 / 0.43 0.59 / 0.68 0.50 / 0.66 0.32 / 0.55 0.69 / 0.77 0.93 / 0.87 RAE-GP RAE-L2 0.92 / 0.87 RAE-SN 0.89 / 0.95 0.92 / 0.85 RAE 0.90 / 0.90 AE 0.97 / 0.98 0.98 / 0.98 0.98 / 0.97 0.98 / 0.98 0.98 / 0.97 0.36 / 0.70 0.41 / 0.77 0.36 / 0.73 0.45 / 0.73 0.37 / 0.73 0.46 / 0.77 0.57 / 0.81 0.52 / 0.81 0.53 / 0.80 0.50 / 0.80 0.38 / 0.55 0.36 / 0.64 0.54 / 0.68 0.46 / 0.59 0.45 / 0.66 0.44 / 0.67 0.44 / 0.65 0.55 / 0.74 0.52 / 0.69 0.47 / 0.71 Table 2: Evaluation of random sample quality by precision / recall (Sajjadi et al., 2018) (higher numbers are better, best value for each dataset in bold). It is notable that the proposed ex-post density estimation improves not only precision, but also recall throughout the experiment. For example, WAE seems to have a comparably low recall of only 0.88 on MNIST which is raised considerably to 0.95 by fitting a GMM. 
In all cases, GMM gives the best results. Another interesting point is the low precision but high recall of all models on CIFAR-10 – this is also visible upon inspection of the samples in Fig. 9.

3 We note that fitting GMMs with up to 100 components only improved results marginally. Additionally, we provide nearest neighbours from the training set in Appendix G to show that our models are not overfitting.

[Figure 4: three PRD-curve panels — all RAEs; all traditional VAEs; WAE vs. RAE-SN vs. WAE-GMM.]

Figure 4: PRD curves of all RAE methods (left) reflect a similar story to the FID scores: RAE-SN performs best on both precision and recall. PRD curves of all traditional VAE variants (middle): in line with the FID scores, there is no clear winner. PRD curves for the WAE (with isotropic Gaussian prior), the WAE-GMM model with ex-post density estimation by a 10-component GMM, and RAE-SN-GMM (right): this finer-grained view shows how WAE-GMM scores higher recall but lower precision than RAE-SN-GMM while achieving comparable FID scores. Note that ex-post density estimation greatly boosts the WAE model in both PRD and FID scores.

# MNIST

[Figure 5: precision–recall (PRD) curves for VAE, CV-VAE, WAE, RAE-GP, RAE-L2, RAE-SN, RAE and AE on MNIST, each with the fixed prior and the fitted 10-component GMM.]

Figure 5: PRD curves of all methods on MNIST. For each plot, we show the PRD curve obtained when sampling from the fixed prior and from the density fitted by ex-post density estimation (XPDE). XPDE greatly boosts both precision and recall for all models.

# CIFAR 10

[Figure 6: the corresponding PRD curves for all models on CIFAR-10.]

Figure 6: PRD curves of all methods on CIFAR-10. For each plot, we show the PRD curve obtained when sampling from the fixed prior and from the density fitted by ex-post density estimation (XPDE). XPDE greatly boosts both precision and recall for all models.

# CELEBA

[Figure 7: the corresponding PRD curves for all models on CelebA.]

Figure 7: PRD curves of all methods on CelebA. For each plot, we show the PRD curve obtained when sampling from the fixed prior and from the density fitted by ex-post density estimation (XPDE). XPDE greatly boosts both precision and recall for all models.

# F MORE QUALITATIVE RESULTS

[Figure 8: reconstructions, random samples and interpolations on MNIST for GT, VAE, CV-VAE, WAE, 2sVAE, RAE-GP, RAE-L2, RAE-SN, RAE and AE.]

Figure 8: Qualitative evaluation of sample quality for VAEs, WAEs and RAEs on MNIST. Left: reconstructed samples (top row is ground truth). Middle: randomly generated samples. Right: spherical interpolations between two images (first and last column).

[Figure 9: the corresponding reconstructions, random samples and interpolations on CIFAR-10.]

Figure 9: Qualitative evaluation of sample quality for VAEs, WAEs and RAEs on CIFAR-10. Left: reconstructed samples (top row is ground truth). Middle: randomly generated samples. Right: spherical interpolations between two images (first and last column).

# G INVESTIGATING OVERFITTING

[Figure 10: for each model on MNIST, CIFAR-10 and CelebA, a generated sample and its nearest neighbours in the training set.]

Figure 10: Nearest neighbors to generated samples (leftmost image, red box) from the training set. It seems that the models have generalized well, and fitting only 10 Gaussians to the latent space prevents overfitting.

# H VISUALIZING EX-POST DENSITY ESTIMATION

To visualize that ex-post density estimation does indeed help reduce the mismatch between the aggregated posterior and the prior, we train a VAE on MNIST with a 2-dimensional latent space. The unique advantage of this setting is that one can directly visualize the density of test samples in the latent space as a scatterplot. As can be seen from Figure 11, an expressive density estimator effectively fixes the mismatch, which, as reported earlier, results in better sample quality.

[Figure 11: three columns — N(0, I), N(µ, Σ), GMM(k = 10).]

Figure 11: Different density estimates of the 2-dimensional latent space of a VAE learned on MNIST. The blue points are 2000 test set samples while the orange ones are drawn from the estimator indicated in each column: isotropic Gaussian (left), multivariate Gaussian with mean and covariance estimated on the training set (center), and a 10-component GMM (right). This clearly shows the aggregated posterior mismatch w.r.t. the isotropic Gaussian prior imposed by VAEs and how ex-post density estimation can help fix the estimate.

In Figure 12 we perform the same visualization for all the models trained on MNIST in our large evaluation of Table 1. Clearly, every model exhibits a rather large mismatch between aggregated posterior and prior.
Once again the advantage of ex-post density estimate is clearly visible. 23 Published as a conference paper at ICLR 2020 N (0, I) N (µ, Σ) GMM(k = 10) VAE CV-VAE WAE RAE-GP RAE-L2 RAE-SN RAE AE ig ° 50 Figure 12: Different density estimations of the 16-dimensional latent spaces learned by all models on MNIST (see Table 1) here projected in 2d via T-SNE. The blue points are 2000 test set samples while the orange ones are drawn from the estimator indicated in each column: isotropic Gaussian (left), multivariate Gaussian with mean and covariance estimated on the training set (center) and a 10-component GMM (right). Ex-post density estimation greatly improves sampling the latent space. 24 Published as a conference paper at ICLR 2020 MNIST CIFAR CELEBA SAMPLES SAMPLES SAMPLES REC. N GMM Interp. REC. N GMM Interp. REC. N GMM Interp. RAE-GP RAE-L2 RAE-SN RAE AE AE-L2 14.04 10.53 15.65 11.67 12.95 11.19 22.21 22.22 19.67 23.92 58.73 315.15 11.54 8.69 11.74 9.81 10.66 9.36 15.32 14.54 15.15 14.67 17.12 17.15 32.17 32.24 27.61 29.05 30.52 34.35 83.05 80.80 84.25 83.87 84.74 247.48 76.33 74.16 75.30 76.28 76.47 75.40 64.08 62.54 63.62 63.27 61.57 61.09 39.71 43.52 36.01 40.18 40.79 44.72 116.30 51.13 44.74 48.20 127.85 346.29 45.63 47.97 40.95 44.68 45.10 48.42 47.00 45.98 39.53 43.67 50.94 56.16 9.70 RAE-GP-L2 10.67 RAE-L2-SN RAE-SN-GP 17.00 RAE-L2-SN-GP 16.75 72.64 50.63 139.61 144.51 9.07 9.42 13.12 13.93 16.07 15.73 16.62 16.75 33.25 24.17 33.04 29.96 187.07 240.27 284.36 290.34 79.03 74.10 75.23 74.22 62.48 61.71 62.86 61.93 47.06 39.90 63.75 68.86 72.09 180.39 299.69 318.67 51.55 44.39 71.05 75.04 50.28 42.97 68.87 74.29 Table 3: Comparing multiple regularization schemes for RAE models. The improvement in recon- struction, random sample quality and interpolated test samples is generally comparable, but hardly much better. This can be explained with the fact that the additional regularization losses make tuning their hyperparameters more difficult, in practice. # I COMBINING MULTIPLE REGULARIZATION TERMS The rather intriguing facts that AE without explicit decoder regularization performs reasonably well as seen from table 1, indicates that convolutional neural networks when combined with gradient based optimizers inherit some implicit regularization. This motivates us to investigate a few different combinations of regularizations e.g. we regularize the decoder of an auto-encoder while drop the regularization in the z space. The results of this experiment is reported in the row marked AE-L2 in table 3. Further more a recent GAN literature Lucic et al. (2018b) report that often a combination of regular- izations boost performance of neural networks. Following this, we combine multiple regularization techniques in out framework. However note that this rather drastically increases the hyper param- eters and the models become harder to train and goes against the core theme of this work, which strives for simplicity. Hence we perform simplistic effort to tune all the hyper parameters to see if this can provide boost in the performance, which seem not to be the case. These experiments are summarized in the second half of the table 3 25
{ "id": "1802.06847" }
1903.12136
Distilling Task-Specific Knowledge from BERT into Simple Neural Networks
In the natural language processing literature, neural networks are becoming increasingly deeper and complex. The recent poster child of this trend is the deep language representation model, which includes BERT, ELMo, and GPT. These developments have led to the conviction that previous-generation, shallower neural networks for language understanding are obsolete. In this paper, however, we demonstrate that rudimentary, lightweight neural networks can still be made competitive without architecture changes, external training data, or additional input features. We propose to distill knowledge from BERT, a state-of-the-art language representation model, into a single-layer BiLSTM, as well as its siamese counterpart for sentence-pair tasks. Across multiple datasets in paraphrasing, natural language inference, and sentiment classification, we achieve comparable results with ELMo, while using roughly 100 times fewer parameters and 15 times less inference time.
http://arxiv.org/pdf/1903.12136
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin
cs.CL, cs.LG
8 pages, 2 figures; first three authors contributed equally
null
cs.CL
20190328
20190328
9 1 0 2 r a M 8 2 ] L C . s c [ 1 v 6 3 1 2 1 . 3 0 9 1 : v i X r a # Distilling Task-Specific Knowledge from BERT into Simple Neural Networks Raphael Tang∗, Yao Lu∗, Linqing Liu∗, Lili Mou, Olga Vechtomova, and Jimmy Lin University of Waterloo {r33tang, yao.lu, linqing.liu}@uwaterloo.ca [email protected] {ovechtom, jimmylin}@uwaterloo.ca # Abstract In the natural language processing literature, neural networks are becoming increasingly deeper and complex. The recent poster child of this trend is the deep language represen- tation model, which includes BERT, ELMo, and GPT. These developments have led to the conviction that previous-generation, shal- lower neural networks for language under- standing are obsolete. In this paper, however, we demonstrate that rudimentary, lightweight neural networks can still be made competitive without architecture changes, external training data, or additional input features. We propose to distill knowledge from BERT, a state-of- the-art language representation model, into a single-layer BiLSTM, as well as its siamese counterpart for sentence-pair tasks. Across multiple datasets in paraphrasing, natural lan- guage inference, and sentiment classification, we achieve comparable results with ELMo, while using roughly 100 times fewer param- eters and 15 times less inference time. 1 # 1 Introduction In the natural language processing (NLP) litera- ture, the march of the neural networks has been an unending yet predictable one, with new architec- tures constantly surpassing previous ones in not only performance and supposed insight but also complexity and depth. In the midst of all this neural progress, it becomes easy to dismiss ear- lier, “first-generation” neural networks as obso- lete. Ostensibly, this appears to be true: Peters et al. (2018) show that using pretrained deep word representations achieves state of the art on a vari- ety of tasks. Recently, Devlin et al. (2018) have pushed this line of work even further with bidi- rectional encoder representations from transform- ers (BERT), deeper models that greatly improve state of the art on more tasks. More recently, Ope- nAI has described GPT-2, a state-of-the-art, larger transformer model trained on even more data.1 Such large neural networks are, however, prob- lematic in practice. Due to the large number of pa- rameters, BERT and GPT-2, for example, are un- deployable in resource-restricted systems such as mobile devices. They may be inapplicable in real- time systems either, because of low inference-time efficiency. Furthermore, the continued slowdown of Moore’s Law and Dennard scaling (Han, 2017) suggests that there exists a point in time when we must compress our models and carefully evaluate our choice of the neural architecture. In this paper, we propose a simple yet effective approach that transfers task-specific knowledge from BERT to a shallow neural architecture—in particular, a bidirectional long short-term memory network (BiLSTM). Our motivation is twofold: we question whether a simple architecture actually lacks representation power for text modeling, and we wish to study effective approaches to trans- fer knowledge from BERT to a BiLSTM. Con- cretely, we leverage the knowledge distillation approach (Ba and Caruana, 2014; Hinton et al., 2015), where a larger model serves as a teacher and a small model learns to mimic the teacher as a student. 
This approach is model agnostic, making knowledge transfer possible between BERT and a different neural architecture, such as a single-layer BiLSTM, in our case. To facilitate effective knowledge transfer, how- ever, we often require a large, unlabeled dataset. The teacher model provides the probability logits and estimated labels for these unannotated sam- ples, and the student network learns from the teacher’s outputs. In computer vision, unlabeled images are usually easy to obtain through aug- menting the data using rotation, additive noise, ∗Equal contribution. Ordering decided by coin toss. # 1 https://goo.gl/Frmwqe and other distortions. However, obtaining addi- tional, even unlabeled samples for a specific task can be difficult in NLP. Traditional data augmen- tation in NLP is typically task-specific (Wang and Eisner, 2016; Serban et al., 2016) and difficult to extend to other NLP tasks. To this end, we fur- ther propose a novel, rule-based textual data aug- mentation approach for constructing the knowl- edge transfer set. Although our augmented sam- ples are not fluent natural language sentences, ex- perimental results show that our approach works surprisingly well for knowledge distillation. We evaluate our approach on three tasks in sen- tence classification and sentence matching. Exper- iments show that our knowledge distillation pro- cedure significantly outperforms training the orig- inal simpler network alone. To our knowledge, we are the first to explore distilling knowledge from BERT. With our approach, a shallow BiLSTM- based model achieves results comparable to Em- beddings from Language Models (ELMo; Peters et al., 2018), but uses around 100 times fewer pa- rameters and performs inference 15 times faster. Therefore, our model becomes a state-of-the-art “small” model for neural NLP. # 2 Related Work In the past, researchers have developed and ap- plied various neural architectures for NLP, includ- ing convolutional neural networks (Kalchbrenner et al., 2014; Kim, 2014), recurrent neural net- works (Mikolov et al., 2010, 2011; Graves, 2013), and recursive neural networks (Socher et al., 2010, 2011). These generic architectures can be applied to tasks like sentence classification (Zhang et al., 2015; Conneau et al., 2016) and sentence match- ing (Wan et al., 2016; He et al., 2016), but the model is trained only on data of a particular task. Recently, Peters et al. (2018) introduce Em- beddings from Language Models (ELMo), an ap- proach for learning high-quality, deep contextual- ized representations using bidirectional language models. With ELMo, they achieve large improve- ments on six different NLP tasks. Devlin et al. (2018) propose Bidirectional Encoder Represen- tations from Transformers (BERT), a new lan- guage representation model that obtains state-of- the-art results on eleven natural language process- ing tasks. Trained with massive corpora for lan- guage modeling, BERT has strong syntactic abil- ity (Goldberg, 2019) and captures generic lan- guage features. A typical downstream use of BERT is to fine-tune it for the NLP task at hand. This improves training efficiency, but for infer- ence efficiency, these models are still considerably slower than traditional neural networks. Model compression. A prominent line of work is devoted to compressing large neural networks to accelerate inference. Early pioneering works include LeCun et al. (1990), who propose a lo- cal error-based method for pruning unimportant weights. Recently, Han et al. 
(2015) propose a simple compression pipeline, achieving 40 times reduction in model size without hurting accuracy. Unfortunately, these techniques induce irregular weight sparsity, which precludes highly optimized computation routines. Thus, others explore prun- ing entire filters (Li et al., 2016; Liu et al., 2017), with some even targeting device-centric metrics, such as floating-point operations (Tang et al., 2018) and latency (Chen et al., 2018). Still other studies examine quantizing neural networks (Wu et al., 2018); in the extreme, Courbariaux et al. (2016) propose binarized networks with both bi- nary weights and binary activations. Unlike the aforementioned methods, the knowl- edge distillation approach (Ba and Caruana, 2014; Hinton et al., 2015) enables the transfer of knowl- edge from a large model to a smaller, “student” network, which is improved in the process. The student network can use a completely different architecture, since distillation works at the out- put level. This is important in our case, since our research objective is to study the representa- tion power of shallower neural networks for lan- guage understanding, while simultaneously com- pressing models like BERT; thus, we follow this approach in our work. In the NLP literature, it has previously been used in neural machine trans- lation (Kim and Rush, 2016) and language model- ing (Yu et al., 2018). # 3 Our Approach First, we choose the desired teacher and student models for the knowledge distillation approach. Then, we describe our distillation procedure, which comprises two major components: first, the addition of a logits-regression objective, and the construction of a transfer dataset, second, which augments the training set for more effective knowledge transfer. Figure 1: The BiLSTM model for single-sentence classification. The labels are (a) input embeddings, (b) BiLSTM, (c, d) backward and forward hid- den states, respectively, (e, g) fully-connected layer; (e) with ReLU, (f) hidden representation, (h) logit out- puts, (i) softmax activation, and (j) final probabilities. # 3.1 Model Architecture For the teacher network, we use the pretrained, fine-tuned BERT (Devlin et al., 2018) model, a deep, bidirectional transformer encoder that achieves state of the art on a variety of language From an input sentence understanding tasks. (pair), BERT computes a feature vector h ∈ Rd, upon which we build a classifier for the task. For single-sentence classification, we directly build a softmax layer, i.e., the predicted probabilities are y(B) = softmax(W h), where W ∈ Rk×d is the softmax weight matrix and k is the number of la- bels. For sentence-pair tasks, we concatenate the BERT features of both sentences and feed them to a softmax layer. During training, we jointly fine- tune the parameters of BERT and the classifier by maximizing the probability of the correct label, us- ing the cross-entropy loss. In contrast, our student model is a single-layer BiLSTM with a non-linear classifier. After feed- ing the input word embeddings into the BiLSTM, the hidden states of the last step in each direction are concatenated and fed to a fully connected layer with rectified linear units (ReLUs), whose output is then passed to a softmax layer for classifica- tion (Figure 1). For sentence-pair tasks, we share BiLSTM encoder weights in a siamese architec- ture between the two sentence encoders, produc- ing sentence vectors hs; and hs2 (Figure 2). 
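As a point of reference, a minimal sketch of the single-sentence student in Figure 1 could look as follows (the concatenate–compare operation for sentence pairs is described next). The hidden sizes simply echo the ranges reported later in Section 4.2, and all names are illustrative rather than taken from the released implementation.

```python
# Minimal sketch of the single-sentence BiLSTM student (illustrative).
import torch
import torch.nn as nn

class BiLSTMStudent(nn.Module):
    def __init__(self, embed: nn.Embedding, hidden=300, fc=400, n_labels=2):
        super().__init__()
        self.embed = embed                                   # e.g. pretrained word2vec vectors
        self.bilstm = nn.LSTM(embed.embedding_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, fc)                  # concat of fwd/bwd final states
        self.out = nn.Linear(fc, n_labels)

    def forward(self, tokens):                               # tokens: (batch, seq_len) word ids
        emb = self.embed(tokens)
        _, (h_n, _) = self.bilstm(emb)                       # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)              # last forward + backward states
        return self.out(torch.relu(self.fc(h)))              # logits z^(S); softmax in the loss
```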
We then apply a standard concatenate-compare oper- ation (Wang et al., 2018) between the two sen- tence vectors: f(hsi,hs2) = [hs1,hs2,hs1 © ho, |hsi — Ws2|], where © denotes elementwise multiplication. We feed this output to a ReLU- Input #1 Input #2 Figure 2: The siamese BiLSTM model for sentence matching, with shared encoder weights for both sen- The labels are (a) BiLSTM, (b, c) final tences. backward and forward hidden states, respectively, (d) concatenate–compare unit, (e, g) fully connected layer; (e) with ReLU, (f) hidden representation, (h) logit out- puts, (i) softmax activation, and (j) final probabilities. activated classifier. It should be emphasized that we restrict the ar- chitecture engineering to a minimum to revisit the representation power of BiLSTM itself. We avoid any additional tricks, such as attention and layer normalization. # 3.2 Distillation Objective The distillation approach accomplishes knowl- edge transfer at the output level; that is, the student network learns to mimic a teacher network’s be- havior given any data point. In particular, Ba and Caruana (2014) posit that, in addition to a one-hot predicted label, the teacher’s predicted probability is also important. In binary sentiment classifica- tion, for example, some sentences have a strong sentiment polarity, whereas others appear neutral. If we use only the teacher’s predicted one-hot label to train the student, we may lose valuable informa- tion about the prediction uncertainty. The discrete probability output of a neural net- work is given by exp{w; h} yj exp{w h} Yi = softmax(z) = qd) where w; denotes the i" row of softmax weight W, and z is equivalent to w'h. The argument of the softmax function is known as logits. Train- ing on logits makes learning easier for the student model since the relationship learned by the teacher model across all of the targets are equally empha- sized (Ba and Caruana, 2014). The distillation objective is to penalize the mean-squared-error (MSE) loss between the stu- dent network’s logits against the teacher’s logits: Ldistill = ||zzz(B) − zzz(S)||2 2 (2) where zzz(B) and zzz(S) are the teacher’s and student’s logits, respectively. Other measures such as cross entropy with soft targets are viable as well (Hinton et al., 2015); however, in our preliminary experi- ments, we found MSE to perform slightly better. the distilling objective can be used in conjunction with a traditional cross- entropy loss against a one-hot label t, given by L = α · LCE + (1 − α) · Ldistill ti log y(S) L= a-Loe + (1 —a) - Laistin (3) (3) i − (1 − α)||zzz(B) − zzz(S)||2 2 = -a) > ti logy”) -(1- a)||2?) — 2(5)|)2 i When distilling with a labeled dataset, the one-hot target t is simply the ground-truth label. When distilling with an unlabeled dataset, we use the predicted label by the teacher, i.e., ti = 1 if i = argmax y(B) and 0 otherwise. # 3.3 Data Augmentation for Distillation In the distillation approach, a small dataset may not suffice for the teacher model to fully express its knowledge (Ba and Caruana, 2014). Therefore, we augment the training set with a large, unla- beled dataset, with pseudo-labels provided by the teacher, to aid in effective knowledge distillation. Unfortunately, data augmentation in NLP is usually more difficult than in computer vision. First, there exist a large number of homologous images in computer vision tasks. CIFAR-10, for example, is a subset of the 80 million tiny images dataset (Krizhevsky, 2009). 
Second, it is possi- ble to synthesize a near-natural image by rotating, adding noise, and other distortions, but if we man- ually manipulate a natural language sentence, the sentence may not be fluent, and its effect in NLP data augmentation less clear. In our work, we propose a set of heuristics for task-agnostic data augmentation: we use the orig- inal sentences in the small dataset as blueprints, and then modify them with our heuristics, a pro- cess analogous to image distortion. Specifically, we randomly perform the following operations. Masking. With probability pmask, we randomly replace a word with [MASK], which corresponds to an unknown token in our models and the masked word token in BERT. Intuitively, this rule helps to clarify the contribution of each word to- ward the label, e.g., the teacher network produces less confident logits for “I [MASK] the comedy” than for “I loved the comedy.” POS-guided word replacement. With probabil- ity ppos, we replace a word with another of the same POS tag. To preserve the original training distribution, the new word is sampled from the un- igram word distribution re-normalized by the part- of-speech (POS) tag. This rule perturbs the se- mantics of each example, e.g., “What do pigs eat?” is different from “How do pigs eat?” nnn-gram sampling. With probability png, we ran- domly sample an n-gram from the example, where n is randomly selected from {1, 2, . . . , 5}. This rule is conceptually equivalent to dropping out all other words in the example, which is a more ag- gressive form of masking. Our data augmentation procedure is as fol- lows: given a training example {w1, . . . wn}, we iterate over the words, drawing from the uniform distribution Xi ∼ UNIFORM[0, 1] for each wi. If Xi < pmask, we apply masking to wi. If pmask ≤ Xi < pmask + ppos, we apply POS-guided word replacement. We treat masking and POS-guided swapping as mutually exclusive: once one rule is applied, the other is disregarded. After iterating through the words, with probability png, we ap- ply n-gram sampling to this entire synthetic ex- ample. The final synthetic example is appended to the augmented, unlabeled dataset. We apply this procedure niter times per example to generate up to niter samples from a single exam- ple, with any duplicates discarded. For sentence- pair datasets, we cycle through augmenting the first sentence only (holding the second fixed), the second sentence only (holding the first fixed), and both sentences. # 4 Experimental Setup For BERT, we use the large variant BERTLARGE (described below) as the teacher network, starting with the pretrained weights and following the orig- inal, task-specific fine-tuning procedure (Devlin et al., 2018). We fine-tune four models using the Adam optimizer with learning rates {2, 3, 4, 5} × 10−5, picking the best model on the validation set. We avoid data augmentation during fine-tuning. For our models, we feed the original dataset to- gether with the synthesized examples to the task- specific, fine-tuned BERT model to obtain the predicted logits. We denote our distilled BiL- STM trained on soft logit targets as BiLSTMSOFT, which corresponds to choosing α = 0 in Sec- tion 3.2. Preliminary experiments suggest that us- ing only the distillation objective works best. 
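To make the augmentation procedure of Section 3.3 concrete, the sketch below implements the per-word masking / POS-guided replacement decision and the example-level n-gram sampling for a single sentence. It is only a sketch: the `pos_vocab` lookup and the POS tags are assumed inputs, the sentence-pair cycling is omitted, and a uniform choice stands in for sampling from the POS-renormalized unigram distribution.

```python
# Minimal sketch of the rule-based data augmentation heuristics (illustrative).
import random

def augment(words, tags, pos_vocab, p_mask=0.1, p_pos=0.1, p_ng=0.25, n_iter=20):
    synthetic = set()                             # duplicates are discarded via the set
    for _ in range(n_iter):
        new = []
        for w, t in zip(words, tags):
            x = random.random()                   # X_i ~ Uniform[0, 1]
            if x < p_mask:
                new.append("[MASK]")              # masking
            elif x < p_mask + p_pos:
                new.append(random.choice(pos_vocab[t]))   # POS-guided word replacement
            else:
                new.append(w)
        if new and random.random() < p_ng:        # n-gram sampling over the whole example
            n = min(random.randint(1, 5), len(new))
            start = random.randint(0, len(new) - n)
            new = new[start:start + n]
        synthetic.add(" ".join(new))
    return list(synthetic)
```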
# 4.1 Datasets We conduct experiments on the General Language Understanding Evaluation (GLUE; Wang et al., 2018) benchmark, a collection of six natural lan- guage understanding tasks that are classified into three categories: single-sentence tasks, similarity and paraphrase tasks, and inference tasks. Due to restrictions in time and computational resources, we choose the most widely used dataset from each category, as detailed below. SST-2. Stanford Sentiment Treebank 2 (SST-2; Socher et al., 2013) comprises single sentences ex- tracted from movie reviews for binary sentiment classification (positive vs. negative). Following GLUE, we consider sentence-level sentiment only, ignoring the sentiment labels of phrases provided by the original dataset. MNLI. The Multi-genre Natural Language In- ference (MNLI; Williams et al., 2017) corpus is a large-scale, crowdsourced entailment clas- sification dataset. The objective is to predict the relationship between a pair of sentences as one of entailment, neutrality, or contradiction. MNLI-m uses development and test sets that con- tain the same genres from the training set, while MNLI-mm represents development and test sets from the remaining, mismatched genres. QQP. Quora Question Pairs (QQP; Shankar Iyer and Csernai, 2017) consists of pairs of poten- tially duplicate questions collected from Quora, a question-and-answer website. The binary label of each question pair indicates redundancy. # 4.2 Hyperparameters We choose either 150 or 300 hidden units for the BiLSTM, and 200 or 400 units in the ReLU- activated hidden layer, depending on the valida- tion set performance. Following Kim (2014), we use the traditional 300-dimensional word2vec embeddings trained on Google News and multi- channel embeddings. For optimization, we use AdaDelta (Zeiler, 2012) with its default learning rate of 1.0 and ρ = 0.95. For SST-2, we use a batch size of 50; for MNLI and QQP, due to their larger size, we choose 256 for the batch size. For our dataset augmentation hyperparameters, we fix pmask = ppos = 0.1 and png = 0.25 across all datasets. These values have not been tuned at all on the datasets—these are the first values we chose. We choose niter = 20 for SST-2 and niter = 10 for both MNLI and QQP, since they are larger. # 4.3 Baseline Models BERT (Devlin et al., 2018) is a multi-layer, bidi- rectional transformer encoder that comes in two variants: BERTBASE and the larger BERTLARGE. BERTBASE comprises 12 layers, 768 hidden units, 12 self-attention heads, and 110M parameters. BERTLARGE uses 24 layers, 1024 hidden units, 16 self-attention heads, and 340M parameters. OpenAI GPT (Radford et al., 2018) is, like BERT, a generative pretrained transformer (GPT) encoder fine-tuned on downstream tasks. Unlike BERT, however, GPT is unidirectional and only makes use of previous context at each time step. GLUE ELMo baselines. In the GLUE pa- per, Wang et al. (2018) provide a BiLSTM-based model baseline trained on top of ELMo and jointly fine-tuned across all tasks. This model contains 4096 units in the ELMo BiLSTM and more than 93 million total parameters. In the BERT paper, Devlin et al. (2018) provide the same model but a result slightly different from Wang et al. (2018). For fair comparison, we report both results. # 5 Results and Discussion We present the results of our models as well as baselines in Table 1. For QQP, we report both F1 and accuracy, since the dataset is slightly unbal- anced. Following GLUE, we report the average score of each model on the datasets. 
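Putting Sections 3.2 and 4.2 together, a minimal sketch of the logit-matching update used to train BiLSTMSOFT (α = 0) might look as follows. Here `teacher_logits` stands for the pre-computed outputs of the fine-tuned BERT teacher on the original and augmented examples, `student` is any classifier producing logits (e.g., the BiLSTM sketched earlier), and the function names are illustrative.

```python
# Minimal sketch of a distillation training step on pre-computed teacher logits (illustrative).
import torch
import torch.nn.functional as F

def make_optimizer(student):
    # AdaDelta with the settings quoted in Section 4.2.
    return torch.optim.Adadelta(student.parameters(), lr=1.0, rho=0.95)

def distill_step(student, optimizer, tokens, teacher_logits):
    optimizer.zero_grad()
    z_s = student(tokens)                          # student logits z^(S)
    loss = F.mse_loss(z_s, teacher_logits)         # mean-squared error against z^(B)
    loss.backward()
    optimizer.step()
    return loss.item()
```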
# 5.1 Model Quality

To verify the correctness of our implementation, we train the base BiLSTM model on the original labels, without using distillation (row 7). Across all three datasets, we achieve scores comparable with BiLSTMs from previous works (rows 8 and 9), suggesting that our implementation is fair. Note that, on MNLI, the two baselines differ by 4% in accuracy (rows 8 and 9). None of the non-distilled BiLSTM baselines outperform BERT's ELMo baseline (row 4)—our implementation, although attaining a higher accuracy for QQP, falls short in F1 score.

| # | Model | SST-2 (Acc) | QQP (F1/Acc) | MNLI-m (Acc) | MNLI-mm (Acc) |
|---|-------|-------------|--------------|--------------|---------------|
| 1 | BERTLARGE (Devlin et al., 2018) | 94.9 | 72.1/89.3 | 86.7 | 85.9 |
| 2 | BERTBASE (Devlin et al., 2018) | 93.5 | 71.2/89.2 | 84.6 | 83.4 |
| 3 | OpenAI GPT (Radford et al., 2018) | 91.3 | 70.3/88.5 | 82.1 | 81.4 |
| 4 | BERT ELMo baseline (Devlin et al., 2018) | 90.4 | 64.8/84.7 | 76.4 | 76.1 |
| 5 | GLUE ELMo baseline (Wang et al., 2018) | 90.4 | 63.1/84.3 | 74.1 | 74.5 |
| 6 | Distilled BiLSTMSOFT | 90.7 | 68.2/88.1 | 73.0 | 72.6 |
| 7 | BiLSTM (our implementation) | 86.7 | 63.7/86.2 | 68.7 | 68.3 |
| 8 | BiLSTM (reported by GLUE) | 85.9 | 61.4/81.7 | 70.3 | 70.8 |
| 9 | BiLSTM (reported by other papers) | 87.6† | – /82.6‡ | 66.9* | 66.9* |

Table 1: Test results on different datasets. The BiLSTM results reported by other papers are drawn from Zhou et al. (2016),† Wang et al. (2017),‡ and Williams et al. (2017).* All of our test results are obtained from the GLUE benchmark website.

We apply our distillation approach of matching logits using the augmented training dataset, and achieve an absolute improvement of 1.9–4.5 points against our base BiLSTM. On SST-2 and QQP, we outperform the best reported ELMo model (row 4), coming close to GPT. On MNLI, our results trail ELMo's by a few points; however, they still represent a 4.3-point improvement against our BiLSTM, and a 1.8–2.7-point increase over the previous best BiLSTM (row 8). Overall, our distilled model is competitive with two previous implementations of ELMo BiLSTMs (rows 4–5), suggesting that shallow BiLSTMs have greater representation power than previously thought.

We do not, however, outperform the deep transformer models (rows 1–3), doing 4–7 points worse, on average. Nevertheless, our model has much fewer parameters and better efficiency, as detailed in the following section.

# Inference Efficiency

For our inference speed and parameter analysis, we use the open-source PyTorch implementations for BERT2 and ELMo (Gardner et al., 2017). On a single NVIDIA V100 GPU, we perform model inference with a batch size of 512 on all 67350 sentences of the SST-2 training set. As shown in Table 2, our single-sentence model uses 98 and 349 times fewer parameters than ELMo and BERTLARGE, respectively, and is 15 and 434 times faster. At 2.2 million parameters, the variant with 300-dimensional LSTM units is twice as large, though still substantially smaller than ELMo. For sentence-pair tasks, the siamese counterpart uses no pairwise word interactions, unlike previous state of the art (He and Lin, 2016); its runtime thus scales linearly with sentence length.

| | BERTLARGE | ELMo | BiLSTMSOFT |
|---|-----------|------|------------|
| # of Par. (millions) | 335 (349×) | 93.6 (98×) | 0.96 (1×) |
| Inference Time (s) | 1060 (434×) | 36.71 (15×) | 2.44 (1×) |

Table 2: Single-sentence model size and inference speed on SST-2. # of Par. denotes number of millions of parameters, and inference time is in seconds.

2 https://goo.gl/iRPhjP

# 6 Conclusion and Future Work

In this paper, we explore distilling the knowledge from BERT into a simple BiLSTM-based model.
The distilled model achieves comparable results with ELMo, while using much fewer parameters and less inference time. Our results suggest that shallow BiLSTMs are more expressive for natural language tasks than previously thought. One direction of future work is to explore ex- tremely simple architectures in the extreme, such as convolutional neural networks and even sup- port vector machines and logistic regression. An- other opposite direction is to explore slightly more complicated architectures using tricks like pair- wise word interaction and attention. # Acknowledgements This research was enabled in part by resources provided by Compute Ontario and Compute Canada. This research was also supported by the Natural Sciences and Engineering Research Coun- cil (NSERC) of Canada. # References Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654–2662. Changan Chen, Frederick Tung, Naveen Vedula, and Greg Mori. 2018. Constraint-aware deep neural In Proceedings of the Eu- network compression. ropean Conference on Computer Vision (ECCV), pages 400–415. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann Lecun. 2016. Very deep convolutional net- works for text classification. arXiv:1606.01781. Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Bina- rized neural networks: Training deep neural net- works with weights and activations constrained to +1 or -1. arXiv:1602.02830. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv:1810.04805. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform. arXiv:1803.07640. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. arXiv:1901.05287. Alex Graves. 2013. Generating sequences with recur- rent neural networks. arXiv:1308.0850. Song Han. 2017. Efficient methods and hardware for deep learning. Song Han, Huizi Mao, and William J Dally. 2015. Deep compression: Compressing deep neural net- works with pruning, trained quantization and Huff- man coding. arXiv:1510.00149. Hua He and Jimmy Lin. 2016. Pairwise word interac- tion modeling with deep neural networks for seman- In Proceedings of the tic similarity measurement. 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 937–948. Hua He, John Wieting, Kevin Gimpel, Jinfeng Rao, and Jimmy Lin. 2016. UMD-TTIC-UW at SemEval- 2016 task 1: Attention-based multi-perspective con- volutional neural networks for textual similarity measurement. In Proceedings of the 10th Interna- tional Workshop on Semantic Evaluation (SemEval- 2016), pages 1103–1108. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv:1503.02531. Nal Kalchbrenner, Edward Grefenstette, and Phil Blun- som. 2014. A convolutional neural network for modelling sentences. arXiv:1404.2188. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1746–1751. Yoon Kim and Alexander M. Rush. 2016. Sequence- level knowledge distillation. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327. Alex Krizhevsky. 2009. Learning multiple layers of features from tiny images. Technical report, Univer- sity of Toronto. Yann LeCun, John S. Denker, and Sara A. Solla. 1990. Optimal brain damage. In Advances in neural infor- mation processing systems, pages 598–605. Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2016. Pruning filters for effi- cient convnets. arXiv:1608.08710. Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, and Changshui Zhang. 2017. Shoumeng Yan, Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 2736–2744. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech com- munication association. Tom´aˇs Mikolov, Stefan Kombrink, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2011. Exten- sions of recurrent neural network language model. In 2011 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 5528–5531. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 2227–2237. Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language un- derstanding with unsupervised learning. Technical report, Technical report, OpenAI. Iulian Vlad Serban, Alberto Garc´ıa-Dur´an, Caglar Gulcehre, Sungjin Ahn, Sarath Chandar, Aaron Courville, and Yoshua Bengio. 2016. Generat- ing factoid questions with recurrent neural net- works: The 30m factoid question-answer corpus. arXiv:1603.06807. Nikhil Dandekar Shankar Iyer and Kornl Csernai. 2017. First Quora dataset release: Question pairs. Richard Socher, Cliff C. Lin, Chris Manning, and An- drew Y Ng. 2011. Parsing natural scenes and natu- ral language with recursive neural networks. In Pro- ceedings of the 28th international conference on ma- chine learning (ICML-11), pages 129–136. Richard Socher, Christopher D. Manning, and An- drew Y. Ng. 2010. Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop, volume 2010, pages 1–9. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 conference on bank. empirical methods in natural language processing, pages 1631–1642. Raphael Tang, Ashutosh Adhikari, and Jimmy Lin. 2018. FLOPs as a direct optimization objective for learning sparse neural networks. arXiv:1811.03060. Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, and Xueqi Cheng. 2016. A deep ar- chitecture for semantic matching with multiple po- sitional sentence representations. In Thirtieth AAAI Conference on Artificial Intelligence. Alex Wang, Amapreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and anal- ysis platform for natural language understanding. 
arXiv:1804.07461. Dingquan Wang and Jason Eisner. 2016. The galactic dependencies treebanks: Getting more data by syn- thesizing new languages. Transactions of the Asso- ciation for Computational Linguistics, 4:491–505. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural lan- guage sentences. In Proceedings of the 26th Inter- national Joint Conference on Artificial Intelligence, pages 4144–4150. Adina Williams, Nikita Nangia, and Samuel R. Bow- man. 2017. A broad-coverage challenge cor- pus for sentence understanding through inference. arXiv:1704.05426. Shuang Wu, Guoqi Li, Feng Chen, and Luping Shi. 2018. Training and inference with integers in deep In International Conference on neural networks. Learning Representations. Seunghak Yu, Nilesh Kulkarni, Haejun Lee, and Jihie Kim. 2018. On-device neural language model based word prediction. Proceedings of COLING 2018, the 28th International Conference on Computational Linguistics: Technical Papers, page 128. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. arXiv:1212.5701. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- In Advances in neural information pro- sification. cessing systems, pages 649–657. Peng Zhou, Zhenyu Qi, Suncong Zheng, Jiaming Xu, Hongyun Bao, and Bo Xu. 2016. Text classifi- cation improved by integrating bidirectional LSTM with two-dimensional max pooling. In Proceedings of COLING 2016, the 26th International Confer- ence on Computational Linguistics: Technical Pa- pers, pages 3485–3495.
{ "id": "1803.07640" }
1903.11728
AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
We study how to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot solution, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs). Code and models will be available at: https://github.com/JiahuiYu/slimmable_networks
http://arxiv.org/pdf/1903.11728
Jiahui Yu, Thomas Huang
cs.CV, cs.AI
tech report
null
cs.CV
20190327
20190601
# AutoSlim: Towards One-Shot Architecture Search for Channel Numbers

Jiahui Yu, Thomas Huang (University of Illinois at Urbana-Champaign)

# Abstract

We study how to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot solution, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs). Code and models will be available at: https://github.com/JiahuiYu/slimmable_networks.

# 1. Introduction

The channel configuration (a.k.a. filter numbers or channel numbers) of a neural network plays a critical role in its affordability on resource constrained platforms, such as mobile phones, wearables and Internet of Things (IoT) devices. The most common constraints [1, 2, 3, 4, 5], i.e., latency, FLOPs and runtime memory footprint, are all bound to the number of channels. For example, in a single convolution or fully-connected layer, the FLOPs (number of Multiply-Adds) increases linearly with the output channels. The memory footprint can also be reduced [6] by reducing the number of channels in bottleneck convolutions for most vision applications [6, 7, 8, 9].

Despite its importance, the number of channels has been chosen mostly based on heuristics. LeNet-5 [10] selected 6 channels in its first convolution layer, which is then projected to 16 channels after sub-sampling. AlexNet [11] adopted five convolutions with channels equal to 96, 256, 384, 384 and 256. A commonly used heuristic, the "half size, double channel" rule, was introduced in VGG nets [12], if not earlier. The rule is that when the spatial size of the feature map is halved, the number of filters is doubled. This heuristic has been more-or-less used in follow-up network architecture designs including ResNets [13, 14], Inception nets [15, 16, 17], MobileNets [6, 7] and networks for many vision applications [18, 19, 20, 21, 22]. Other heuristics have also been explored. For example, the pyramidal rule [23, 24] suggested to gradually increase the channels in all convolutions layer by layer, regardless of spatial size. Figure 1 visually summarizes these heuristics for setting channel numbers in a neural network.

Beyond the macro-level heuristics across the entire network, recent works [6, 13, 24, 25, 26] have also dug into channel configuration for micro-level building blocks (a network building block is usually composed of several 1 × 1 and 3 × 3 convolutions). These micro-level heuristics have led to better speed-accuracy trade-offs. The first of its kind, the bottleneck residual block, was introduced in ResNet [13]. It is composed of 1 × 1, 3 × 3, and 1 × 1 convolutions, where the 1 × 1 layers are responsible for reducing and then restoring dimensions, leaving the 3 × 3 layer a bottleneck (4× reduction). MobileNet v2 [6], however, argued that the bottleneck design is not efficient and proposed the inverted residual block, where 1 × 1 layers are used for expanding features first (6× expansion) and then projecting back after an intermediate 3 × 3 depthwise convolution. Furthermore, MNasNet [25] and ProxylessNAS nets [26] included a 3× expansion version of the inverted residual block into the search space, and achieved even better accuracy under similar runtime latency.

Figure 1. Various heuristics for setting channel numbers across the entire network ((A)−(B)) [12, 23, 24], and inside network building blocks ((a)−(f)) [6, 13, 23, 24, 25, 26].

Apart from these human-designed heuristics, efforts on automatically optimizing channel configuration have been made explicitly or implicitly. A recent work [27] suggested that many network pruning methods [1, 28, 29, 30, 31, 32] can be thought of as performing network architecture search for channel numbers. Liu et al. [27] showed that training these pruned architectures from scratch leads to similar or even better performance than fine-tuning and pruning from a large model. More recently, MNasNet [25] proposed to directly search network architectures, including filter sizes, using reinforcement learning algorithms [33, 34]. Although the search is performed on the factorized hierarchical search space, massive network samples and computational cost [25] are required for an optimized network architecture.

In this work, we study how to set channel numbers in a neural network to achieve better accuracy under constrained resources. To start, the first and most brute-force approach that comes to mind is exhaustive search: training all possible channel configurations of a deep neural network for full epochs (e.g., MobileNets [6, 7] are trained for approximately 480 epochs on ImageNet). Then we can simply select the best performers that satisfy the efficiency constraints. However, it is undoubtedly impractical since the cost of this brute-force approach is too high. For example, consider an 8-layer convolutional network and a search space limited to 10 candidates of channel numbers (e.g., 32, 64, ..., 320) for each layer. As a result, there are 10^8 candidate network architectures in total.
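To make the two quantitative points above concrete, the fragment below counts Multiply-Adds for a standard convolution and the size of the brute-force search space. The layer sizes are illustrative and not taken from the paper.

```python
# Per-layer Multiply-Adds grow linearly with the number of output channels,
# and exhaustive search over per-layer channel choices explodes combinatorially.

def conv_madds(h_out, w_out, c_in, c_out, k=3):
    """Multiply-Adds of a standard k x k convolution (bias ignored)."""
    return h_out * w_out * c_out * (k * k * c_in)

for c_out in (32, 64, 128):  # doubling c_out doubles the Multiply-Adds
    print(c_out, conv_madds(h_out=56, w_out=56, c_in=64, c_out=c_out))

num_layers, candidates_per_layer = 8, 10
print(candidates_per_layer ** num_layers)  # 10**8 candidate configurations
```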
able as benchmark performance estimators for several rea- sons: (1) Training slimmable models (using the sandwich rule [36]) is much faster than the brute-force approach. (2) A trained slimmable model can execute at arbitrary width, which can be used to approximate relative perfor- mance among different channel configurations. (3) The same trained slimmable model can be applied on search of optimal channels for different resource constraints. In AutoSlim, we first train a slimmable model for a few epochs (e.g., 10% to 20% of full training epochs) to quickly get a benchmark performance estimator. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop on validation set (for Im- ageNet, we randomly hold out 50K samples of training set as validation set). After this single pass, we can obtain the optimized channel configurations under different resource constraints (e.g., network FLOPs limited to 150M, 300M and 600M). Finally we train these optimized architectures individually or jointly (as a single slimmable network) for full training epochs. We experiment with various networks including MobileNet v1, MobileNet v2, ResNet-50 and RL- searched MNasNet on the challenging setting of 1000-class ImageNet classification. We compare our results with two baselines: (1) the default channel configuration of these net- works, and (2) channel pruning methods on same network architectures [29, 30, 37, 38]. To address this challenge, we present a simple and one- shot solution AutoSlim. Our main idea lies in training a slimmable network [35] to approximate the network accu- racy of different channel configurations. Yu et al. [35, 36] introduced slimmable networks that can run at arbitrary width with equally or even better performance than same ar- chitecture trained individually. Although the original moti- vation is to provide instant and adaptive accuracy-efficiency trade-offs, we find slimmable networks are especially suit- Our contributions are summarized as follows: • We present the first one-shot approach on network architecture search for channel numbers with experi- ments on large-scale ImageNet classification. • We demonstrate the importance of channel configura- tion in neural networks and the effectiveness of our ap- proach on addressing this challenging problem. • We achieve the state-of-the-art speed-accuracy trade- offs by setting the optimized channel configurations using AutoSlim. # 2. Related Work # 2.1. Architecture Search for Channel Numbers In this part, we mainly discuss previous methods on au- tomatic architecture search for channel numbers. Human- designed heuristics have been introduced in Section 1 and visually summarized in Figure 1. Channel Pruning. Channel pruning (a.k.a., network slimming) methods [1, 30, 39, 40, 41] aim at reducing effective channels of a large neural network to speedup its inference. Both training-based, inference-time and initialization-time pruning methods have been proposed [1, 30, 39, 40, 41, 42] in the literature. Here we selectively review two methods [1, 30]. He et al. [30] proposed an inference-time approach based on an iterative two-step al- gorithm: the LASSO based channel selection and the least square feature reconstruction. Liu ef al. [1], on the other hand, trained neural networks with a ¢; regularization on the scaling factors in batch normalization (BN) [43]. By pushing the factors towards zero, insignificant channels can be identified and removed. In a recent work [27], Liu et al. 
suggested that many network pruning meth- ods [1, 28, 29, 30, 31, 32] can be thought of as performing network architecture search for channel numbers. In exper- iments, Liu et al. [27] showed that training these pruned architectures from scratch leads to similar or even better performance than iteratively fine-tuning and pruning a large model. Thus, Liu et al. [27] concluded that training a large, over-parameterized model is not necessary to obtain an ef- ficient final model. In our work, we take channel pruning methods [29, 30, 37] as one of baselines. Neural Architecture Search (NAS). Recently there has been a growing interest in automating the neural network architecture design [25, 26, 44, 45, 46, 47, 48, 49, 50, 51]. Significant improvements have been achieved by these au- tomatically searched architectures in many vision and lan- guage tasks [47, 52]. However, most neural architecture search methods [44, 45, 46, 47, 48, 49, 50, 51] did not in- clude channel configuration into search space, and instead applied human-designed heuristics. More recently, the RL- based searching algorithms are also applied to prune chan- nels [37] or search for filter numbers [25] directly. He et al. proposed AutoML for Model Compression (AMC) [37] which leveraged reinforcement learning (deep determinis- tic policy gradient [53]) to provide the model compression policy. MNasNet [25] proposed to directly search network architectures, including filter sizes, for mobile devices. In the search, each sampled model is trained on 5 epochs us- ing an aggressive learning rate schedule, and evaluated on In total, Tan et al. sampled about a 50K validation set. 8, 000 models during architecture search. Further, Proxy- lessNAS [26] proposed to directly learn the architectures for large-scale target tasks and target hardware platforms, based on DARTS [50]. For each residual block, Proxy- lessNAS [26] followed the channel configuration of MNas- Net [25], while inside each block, the choices can be ×3 or ×6 version of inverted residual blocks. The memory con- sumption issue [26, 50] was addressed by binarizing the ar- chitecture parameters and forcing only one path to be active. # 2.2. Slimmable Networks Slimmable networks were firstly introduced in [35]. A general slimmable training algorithm and the switchable batch normalization were introduced to train a single neu- ral network executable at different widths, permitting in- stant and adaptive accuracy-efficiency trade-offs at runtime. However, one drawback of the switchable batch normal- ization is that the width can only be chosen from a pre- defined widths set. The drawback was addressed in [36], where the authors introduced universally slimmable net- works, extending slimmable networks to execute at arbi- trary width, and generalizing to networks both with and without batch normalization layers. Meanwhile, two im- proved training techniques, the sandwich rule and inplace distillation, were proposed [36] to enhance training process and boost testing accuracy. Moreover, with the proposed methods, one can train nonuniform universally slimmable networks, where the width ratio is not uniformly applied to all layers. In other words, each layer in a nonuniform uni- versally slimmable network can adjust its number of chan- nels independently during inference. In this work, we sim- ply refer to nonuniform universally slimmable networks as slimmable networks, if not explicitly noted. 
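As a rough illustration of how such a slimmable model is trained, the sketch below follows the sandwich rule and inplace distillation reviewed above (Yu et al. [36]): each iteration trains the smallest width, the largest width, and a few randomly sampled widths, with the largest sub-network's softened outputs supervising the others. The `model.apply_width` call is a stand-in for however a real implementation activates a subset of channels, and batch-norm handling (switchable or post-calibrated statistics) is omitted; treat this as a schematic, not the reference code.

```python
import random
import torch.nn.functional as F

def sandwich_step(model, optimizer, images, labels,
                  n_random=2, width_range=(0.15, 1.5)):
    """One training iteration of a slimmable network under the sandwich rule."""
    optimizer.zero_grad()

    # Largest width: supervised by the ground-truth labels.
    model.apply_width(width_range[1])          # hypothetical width-selection hook
    logits_max = model(images)
    F.cross_entropy(logits_max, labels).backward()
    soft_targets = logits_max.detach().softmax(dim=-1)

    # Smallest width plus a few random widths: inplace distillation
    # from the largest sub-network's predictions.
    widths = [width_range[0]] + [random.uniform(*width_range) for _ in range(n_random)]
    for w in widths:
        model.apply_width(w)
        log_probs = F.log_softmax(model(images), dim=-1)
        F.kl_div(log_probs, soft_targets, reduction="batchmean").backward()

    optimizer.step()                           # gradients from all widths accumulate
```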
While the original motivation [35, 36] of slimmable networks is to provide instant and adaptive accuracy-efficiency trade-offs at runtime for different devices, we present an approach that uses slimmable networks for searching channel configurations of deep neural networks.

# 3. Network Slimming by Slimmable Networks

In this section, we first present an overview of our proposed approach for searching the channel configuration of neural networks. We then discuss and analyze the difference of our approach compared with other baselines, i.e., network pruning methods and network architecture search methods. Afterwards we present each individual module in our proposed solution and discuss its non-trivial details.

# 3.1. Overview

The goal of channel configuration search is to optimize the number of channels in each layer, such that the network architecture with the optimized channel configuration can achieve better accuracy under constrained resources. The constraints can be FLOPs, latency, memory footprint or model size. Our approach is conceptually simple, and it has two essential steps:

(1) Given a network architecture (e.g., MobileNets, ResNets), we first train a slimmable model for a few epochs (e.g., 10% to 20% of full training epochs). During the training, many different sub-networks with diverse channel configurations have been sampled and trained. Thus, after training one can directly sample its sub-network architectures for instant inference, using the corresponding computational graph and the same trained weights.

(2) Next, we iteratively evaluate the trained slimmable model on the validation set. In each iteration, we decide which layer to slim by comparing their feed-forward evaluation accuracy on the validation set. We greedily slim the layer with minimal accuracy drop, until reaching the efficiency constraints. No training is required in this step.

Figure 2. The flow diagram of our proposed approach AutoSlim.

The flow diagram of our approach is shown in Figure 2. Our approach is also flexible for different resource constraints, since the FLOPs, latency, memory footprint and model size are all deterministic given a channel configuration and a runtime environment. By a single pass of greedy slimming in step (2), we can obtain the (FLOPs, latency, memory footprint, model size, accuracy) tuples of different channel configurations. It is noteworthy that the latency and accuracy are relative values, since the latency may be different across different hardware and the accuracy can be improved by training the network for full epochs. In the setting of optimizing channel numbers, we benefit from these relative values as performance estimators.

Discussion. We compare the flow diagram of our approach with the baselines, i.e., network pruning methods and network architecture search methods.

Figure 3. The flow diagram of network pruning methods [1].

Figure 4. The flow diagram of network architecture search methods [25, 26, 47, 52].

Many network channel pruning methods [1, 4, 29, 32] follow a typical iterative training-pruning-finetuning pipeline, as shown in Figure 3. For example, Liu et al.
[1] trained neural networks with a ¢; regularization on the scal- ing factors in batch normalization (BN). After training, the method obtains channels in which many scaling factors are near zero for pruning. Pruning will temporarily lead to ac- curacy loss, thus the fine-tuning process and a repetitive multi-pass procedure are introduced for enhancement of fi- nal accuracy. Compared with our approach, a notable dif- ference is that most network channel pruning methods are grounded on the importance of trained weights, thus the slimmed layer usually consists channels of discrete index (e.g., the 4th, 7th, 9th channel are left as important chan- nels while all others are pruned). In our approach, after slimmable training, the importance of the weight is implic- itly ranked by its index. Thus our approach focuses more on the importance of channel numbers, and we always keep the lower-index channels (e.g., all 1st to 3rd channels are left while 4th to 10th channels are slimmed in step (2)). We demonstrate the advantage of our approach by empirical evidences on ImageNet classification with various network Many network channel pruning methods [1, 4, 29, architectures. Network architecture search methods [25, 26, 47, 52] commonly consist of three major components: search space, search strategy, and performance estimation strategy. A typical pipeline is shown in Figure 4. First the search space is defined, based on which the search agent samples network architectures. The architecture is then passed to a performance estimator, which returns rewards (e.g., pre- dictive accuracy after training and/or network runtime la- tency) to the search agent. In the process, the search agent learns from the repetitive loop to design better network ar- chitectures. One major drawback of network architecture search methods is their high computational cost and time cost [46, 50]. Although recently differentiable architec- ture search methods [50, 54] were proposed, they cannot be applied on search of channel numbers directly. Most of them [50, 54] were still using human-designed heuristics for setting channel numbers, which may introduce human bias. # 3.2. Training Slimmable Networks Warmup. We warmup by a brief review of training techniques for slimmable networks. More details can be found in [35, 36]. Slimmable networks were firstly intro- duced and trained with switchable batch normalization [43], which employed individual BNs for different sub-networks. During training, features are normalized with current mini- batch mean and variance, thus a simple modification to switchable batch normalization is introduced in [36]: re- calibrating BN statistics after training. With this sim- ple modification, one can train universally slimmable net- works [36] that can run with arbitrary channel numbers. Moreover, two improved training techniques the sandwich rule and inplace distillation were introduced to enhance training process and boost testing accuracy. We use all these techniques in training slimmable models by default. Assumption. Our approach lies in the assumption that the slimmable model is a good accuracy estimator of in- dividually trained models given same channel configura- tion. More specifically, we are interested in the relative ranking of accuracy among networks with different chan- nel configurations. We use the instant inference accuracy of a slimmable model as the performance estimator. We note that assumptions and approximations commonly exist in other related methods. 
For example, in network channel pruning methods [1, 30], one may assume that weights with smaller norm are less informative and can be pruned, which may not be the case as shown in [39]. Recently the Lot- tery Ticket Hypothesis [42] was also introduced. In network architecture search methods [25, 26], one may believe the transferability among different datasets, accuracy approxi- mations using aggressive learning rates and fewer training epochs, and approximation in runtime latency modeling. The Search Space. The executable sub-networks in a slimmable model compose the search space of chan- nel configurations given a network architecture. To train a slimmable model, we simply apply two width multipli- ers [7, 36] as the upper bound and lower bound of channel numbers. For example, for all mobile networks [6, 7, 25, 26], we train a slimmable model that can execute between 0.15× and 1.5×. In each training iteration, we randomly and independently sample the number of channels in each layer. It is noteworthy that in residual networks, we first sample the channel number of residual identity pathway and then randomly and independently sample channel number inside each residual block. Moreover, we make all layers in a neural network slimmable, including the first convo- lution layer and last fully-connected layer. In each layer, we divide the channels into groups evenly (e.g., 10 groups) to reduce the search space. In other words, during training or slimming, we sample or remove an entire group, instead of an individual channel. We note that even with channel grouping, the search space is still large. We implement a distributed training framework with synchronized stochastic gradient descent (SGD) on Py- Torch [55]. We set different random seeds in different pro- cesses such that each GPU samples diverse channel config- urations in each SGD training step. All other techniques introduced in [35] and distributed training techniques intro- duced in [56] are used by default. All code will be released. # 3.3. Greedy Slimming After training a slimmable model, we evaluate it on the validation set (on ImageNet [58] we randomly hold out 50K images in training set as validation set). We start with the largest model (e.g., 1.5×) and compare the network ac- curacy among the architectures where each layer is slimmed by one channel group. We then greedily slim the layer with minimal accuracy drop. During the iterative slimming, we obtain optimized channel configurations under different re- source constraints. We stop until reaching the strictest con- straint (e.g., 50M FLOPs or 30ms CPU latency). Large Batch Size. During greedy slimming, no train- ing is involved. Thus we directly put the model in evalu- ation mode (no gradients are required), which enables us to use a larger batch size (for example during slimming we use mini-batch size 2048 for each GPU with totally 8 V100 GPUs). Large batch size brings two benefits. First, previ- ous work [36] shows that BN statistics will be accurate if it is calibrated with the batch size larger than 2K. Thus post- statistics of BN in our greedy slimming can be computed online without additional cost. Second, with large batch size we can simply use single feed-forward prediction ac- curacy as the performance estimator. In practice we find it speeds up greedy slimming and simplifies implementation without affecting final performance. Training Optimized Networks. Similar to architecture Table 1. ImageNet classification results with various network architectures. 
Blue indicates the network pruning methods [27, 29, 30, 37, 38], Cyan indicates the network architecture search methods [25, 47, 48, 57] and Red indicates our results using AutoSlim. Group Model Parameters Memory CPU Latency FLOPs ShuffleNet v1 1.0× [9] ShuffleNet v2 1.0× [8] MobileNet v1 0.5× [7] MobileNet v2 0.75× [6] 1.8M - 1.3M 2.6M 4.9M - 3.8M 8.5M 46ms - 33ms 71ms 138M 32.6 146M 30.6 150M 36.7 209M 30.2 200M FLOPs AMC-MobileNet v2 [37] 2.3M 7.3M 68ms 211M 29.2 (1.0) MNasNet 0.75× [25] 3.1M 7.9M 65ms 216M 28.5 AutoSlim-MobileNet v1 AutoSlim-MobileNet v2 AutoSlim-MNasNet 1.9M 4.1M 4.0M 4.2M 9.1M 7.5M 33ms 70ms 62ms 150M 32.1 (4.6) 207M 27.0 (3.2) 217M 26.8 (1.7) ShuffleNet v1 1.5× [9] ShuffleNet v2 1.5× [8] MobileNet v1 0.75× [7] MobileNet v2 1.0× [6] 3.4M - 2.6M 3.5M 8.0M - 6.4M 10.2M 60ms - 48ms 81ms 292M 28.5 299M 27.4 325M 31.6 300M 28.2 300M FLOPs NetAdapt-MobileNet v1 [38] AMC-MobileNet v1 [37] - 1.8M - 5.6M - 46ms 285M 29.9 (1.7) 285M 29.5 (2.1) MNasNet 1.0× [25] 4.3M 9.8M 76ms 317M 26.0 AutoSlim-MobileNet v1 AutoSlim-MobileNet v2 AutoSlim-MNasNet 4.0M 5.7M 6.0M 6.8M 10.9M 10.3M 43ms 77ms 71ms 325M 28.5 (3.1) 305M 25.8 (2.4) 315M 25.4 (0.6) ShuffleNet v1 2.0× [9] ShuffleNet v2 2.0× [8] MobileNet v1 1.0× [7] MobileNet v2 1.3× [6] 5.4M - 4.2M 5.3M 11.6M - 9.3M 14.3M 92ms - 64ms 106ms 524M 26.3 591M 25.1 569M 29.1 509M 25.6 500M FLOPs MNasNet 1.3× [25] NASNet-A [47] PNASNet-5 [48, 8] Graph-HyperNetwork [57] 6.8M - - - 14.2M - - - 95ms - - - 535M 24.5 564M 26.0 588M 25.8 569M 27.0 AutoSlim-MobileNet v1 AutoSlim-MobileNet v2 AutoSlim-MNasNet 4.6M 6.5M 8.3M 9.5M 14.8M 14.2M 66ms 103ms 95ms 572M 27.0 (2.1) 505M 24.6 (1.0) 532M 24.6 (-0.1) ResNet-50 [13] ResNet-50 0.75× [13, 35] ResNet-50 0.5× [13, 35] ResNet-50 0.25× [13, 35] 25.5M 14.7M 6.8M 1.9M 36.6M 23.1M 12.5M 4.8M 197ms 133ms 81ms 44ms 4.1G 23.9 2.3G 25.1 1.1G 27.9 278M 35.0 He-ResNet-50 [30, 27] - - - ≈2.0G 27.2 Heavy Models ThiNet-ResNet-50 [29, 27] - - - - - - - ≈2.9G 27.0 - ≈2.1G 28.0 - ≈1.2G 30.6 AutoSlim-ResNet-50 search methods, after the search, we train these optimized network architectures from scratch. By default we search for the network FLOPs at approximately 200M, 300M and 500M, and train a slimmable model. # 4. Experiments # 4.1. Main Results Table 1 summarizes our results on ImageNet [58] classi- fication with various network architectures including Mo- bileNet v1 [7], MobileNet v2 [6], MNasNet [25], and one large model ResNet-50 [13]. We compare our results with their default channel configurations and recent chan- nel pruning methods [29, 30, 37]. The top-1 errors of our baselines are from corresponding works [6, 7, 13, 25, 29, 30, 37]. To have a clear view, we divide the network ar- chitectures into four groups, namely, 200M FLOPs, 300M FLOPs, 500M FLOPs and heavy models (basically ResNet- 50 based models). We evaluate their latency on same hard- ware environment with single-core CPU to ensure fairness. Device memory is reported as a summary of all feature maps and weights. We note that the memory footprint can be largely optimized by improving memory reusing and im- plementation of dedicated operators. For example, the in- verted residual block can be optimized by splitting chan- nels into groups and performing partial execution for mul- tiple times [6]. For all network architectures we train 50 epochs with squeezed learning rate schedule to obtain a slimmable model for greedy slimming. 
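The greedy slimming pass outlined in Sections 3.1 and 3.3 is compact enough to sketch. In the fragment below, `evaluate` (single feed-forward accuracy of the trained slimmable model at a given per-layer configuration), `flops`, and the channel `group_size` are stand-ins rather than functions from the released code; the budgets correspond to the FLOPs targets mentioned in the text.

```python
def greedy_slim(config, evaluate, flops, budgets, group_size):
    """Greedily remove one channel group at a time, keeping the slimming with
    minimal accuracy drop, and snapshot the configuration at each FLOPs budget.

    config: list of channel counts per layer (mutated in place).
    budgets: FLOPs targets, e.g. [600e6, 300e6, 150e6].
    """
    found = {}
    budgets = sorted(budgets, reverse=True)
    while budgets:
        best_acc, best_layer = None, None
        for i, channels in enumerate(config):
            if channels <= group_size:        # keep at least one group per layer
                continue
            trial = list(config)
            trial[i] -= group_size            # slim layer i by one channel group
            acc = evaluate(trial)
            if best_acc is None or acc > best_acc:
                best_acc, best_layer = acc, i
        if best_layer is None:                # nothing left to slim
            break
        config[best_layer] -= group_size      # commit the best single-layer slimming
        while budgets and flops(config) <= budgets[0]:
            found[budgets[0]] = list(config)  # record config for this budget
            budgets.pop(0)
    return found
```

Each recorded configuration is then trained from scratch for full epochs, as described next.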
After search, we train the optimized network architectures for full epochs (300 epochs with linearly decaying learning rate for mobile networks, 100 epochs with step learning rate schedule for ResNet-50 based models) with other training settings following previous works [6, 7, 8, 9, 13, 35, 36] (weight initialization, weight decay, data augmentation, training/testing image resolution, optimizer, hyper-parameters of batch normalization). We exclude the parameters and FLOPs of Batch Normalization layers [43] following common practice, since they can be fused into convolution layers.

As shown in Table 1, our models have better top-1 accuracy compared with the default channel configurations of MobileNet v1, MobileNet v2 and ResNet-50 across different computational budgets. We even have improvements over RL-searched MNasNet [25], where the filter numbers are already included in its search space. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs).

# 4.2. Visualization and Discussion

In this part, we visualize our optimized channel configurations and discuss some insights from the results.

Comparison with Default Channel Numbers. We first compare our results with the default channels in MobileNet v2 [6]. We show the optimized number of channels (left) and the percentage compared with the default channels (right) in Figure 5. Compared with default MobileNet v2, our optimized configuration has fewer channels in shallow layers and more channels in deep ones.

Figure 5. The optimized number of channels (left) and the percentage compared with default channels (right) of MobileNet v2. The channels of depthwise convolutions are ignored in the figure, since their output channels are always equal to the previous 1 × 1 convolution outputs.

Comparison with Width Multiplier Heuristic. Applying a width multiplier [7], a global hyper-parameter across all layers, is a commonly used heuristic to trade off between model accuracy and efficiency [6, 7, 8, 9]. We search optimal channels at 207M, 305M and 505M FLOPs, corresponding to MobileNet v2 0.75×, 1.0× and 1.3×. Figure 6 shows the pattern that, under different budgets, AutoSlim applies different width scaling in each layer.

Comparison with Model Pruning Methods. Next, we compare our optimized channel configuration with the model pruning method AMC [37]. In Figure 7, we show the number of channels in all layers of optimized MobileNet v2. We observe several characteristics of our optimized channel configurations. First, AutoSlim-MobileNet-v2 has many more channels in deep layers, especially for deep depthwise convolutions.
For example, AutoSlim-MobileNet-v2 has 1920 channels in the second last layer, compared with 848 channels in AMC-MobileNet-v2. Second, AutoSlim-MobileNet-v2 has fewer channels in shallow layers. For example, AutoSlim-MobileNet-v2 has only 8 channels in the first convolution layer, while AMC-MobileNet-v2 has 24 channels. It is noteworthy that although shallow layers have a small number of channels, the spatial size of the feature maps is large, so overall these layers take up large computational overheads.

Figure 6. The channel configurations of AutoSlim-MobileNet-v2 at 207M, 305M and 505M FLOPs.

Figure 7. The channel configurations of AutoSlim-MobileNet-v2 compared with AMC-MobileNet-v2 [37].

# 4.3. CIFAR10 Experiments

In addition to the ImageNet dataset, we also conduct experiments on the CIFAR10 [59] dataset. We use the same weight decay hyper-parameter, initial learning rate and learning rate schedule as the ImageNet experiments. We note that these training settings may not be optimal for the CIFAR10 dataset; nevertheless we report an ablative study with the same hyper-parameters and settings. We first report the performance of MobileNet v2 [6] with the default channel configurations. We then search with the proposed AutoSlim to obtain optimized channel configurations at the same FLOPs (we hold out 5K images from the training set as the validation set during the search). Finally we train the optimized architectures individually with the same settings as the baselines. Table 2 shows that AutoSlim models have higher accuracy than the baselines on the CIFAR10 dataset.

| Model | Parameters | FLOPs | Top-1 Err. |
|-------|------------|-------|------------|
| MobileNet v2 1.0× | 2.2M | 88M | 8.1 |
| MobileNet v2 0.75× | 1.3M | 59M | 8.6 |
| MobileNet v2 0.5× | 0.7M | 28M | 10.4 |
| AutoSlim-MobileNet v2 | 1.5M | 88M | 6.8 (1.3) |
| AutoSlim-MobileNet v2 | 0.7M | 59M | 7.0 (1.6) |
| AutoSlim-MobileNet v2 | 0.3M | 28M | 8.0 (2.4) |

Table 2. CIFAR10 classification results with default MobileNet v2 and AutoSlim-MobileNet-v2.

We further study the transferability of the network architectures learned from ImageNet to the CIFAR10 dataset, and compare it with the channel configuration searched on CIFAR10 directly. The results are shown in Table 3. It suggests that the optimized channel configuration on ImageNet cannot generalize to CIFAR10. Compared with the optimized architecture for ImageNet, we observed that the optimized architecture for CIFAR10 has much fewer channels in deep layers, which we guess may lead to better generalization on the test set for small datasets like CIFAR10. It may also be due to inconsistent image resolutions between ImageNet (224 × 224) and CIFAR10 (32 × 32).

| Model | Search On | FLOPs | Top-1 Err. |
|-------|-----------|-------|------------|
| MobileNet v2 0.75× | - | 59M | 8.6 |
| AutoSlim-MobileNet v2 | CIFAR10 | 59M | 7.0 (1.6) |
| AutoSlim-MobileNet v2 | ImageNet | 63M | 9.9 (-1.3) |

Table 3. CIFAR10 results with AutoSlim-MobileNet-v2 searched on CIFAR10 or ImageNet.

# 5. Conclusion

We presented the first one-shot approach on network architecture search for channel numbers, with extensive experiments on large-scale ImageNet classification. Our proposed solution AutoSlim automates the design of efficient network architectures for resource constrained devices.

# References

[1] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C.
Zhang, “Learning efficient convolutional networks through network slimming,” in Computer Vision (ICCV), 2017 IEEE Interna- tional Conference on. IEEE, 2017, pp. 2755–2763. 1, 3, 4, 5 [2] G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K. Q. Weinberger, “Multi-scale dense networks for image classification,” arXiv preprint resource efficient arXiv:1703.09844, 2017. 1 [3] X. Wang, F. Yu, Z.-Y. Dou, and J. E. Gonzalez, “Skipnet: Learning dynamic routing in convolutional networks,” arXiv preprint arXiv:1711.09485, 2017. 1 [4] S. Han, H. Mao, and W. J. Dally, “Deep compres- sion: Compressing deep neural networks with pruning, trained quantization and huffman coding,” arXiv preprint arXiv:1510.00149, 2015. 1, 4 [5] J. Yu, Y. Fan, J. Yang, N. Xu, X. Wang, and T. S. Huang, “Wide activation for efficient and accurate image super- resolution,” arXiv preprint arXiv:1808.08718, 2018. 1 [6] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Inverted residuals and linear bottlenecks: Mobile net- works for classification, detection and segmentation,” arXiv preprint arXiv:1801.04381, 2018. 1, 2, 5, 6, 7, 8 [7] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “Mobilenets: Effi- cient convolutional neural networks for mobile vision appli- cations,” arXiv preprint arXiv:1704.04861, 2017. 1, 2, 5, 6, 7 [8] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “Shufflenet v2: Practical guidelines for efficient cnn architecture design,” in Proceedings of the European Conference on Computer Vi- sion (ECCV), 2018, pp. 116–131. 1, 6, 7 [9] X. Zhang, X. Zhou, M. Lin, and J. Sun, “Shufflenet: An extremely efficient convolutional neural network for mobile devices,” arXiv preprint arXiv:1707.01083, 2017. 1, 6, 7 [10] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner et al., “Gradient- based learning applied to document recognition,” Proceed- ings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. 1 [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105. 1 [12] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014. 1, 2 [13] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE confer- ence on computer vision and pattern recognition, 2016, pp. 770–778. 1, 2, 6, 7 [14] S. Xie, R. Girshick, P. Doll´ar, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in Com- puter Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. [15] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recogni- tion, 2015, pp. 1–9. 1 [16] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2818–2826. 1 [17] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-resnet and the impact of residual connections on learning,” in Thirty-First AAAI Conference on Artificial Intelligence, 2017. 1 [18] J. Yu, Y. Jiang, Z. Wang, Z. Cao, and T. 
Huang, “Unit- box: An advanced object detection network,” in Proceed- ings of the 24th ACM international conference on Multime- dia. ACM, 2016, pp. 516–520. 1 [19] Z. Zhang, S. Qiao, C. Xie, W. Shen, B. Wang, and A. L. Yuille, “Single-shot object detection with enriched seman- tics,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5813–5821. 1 [20] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5505–5514. 1 [21] Z. Shen, Z. Liu, J. Li, Y.-G. Jiang, Y. Chen, and X. Xue, “Dsod: Learning deeply supervised object detectors from scratch,” in Proceedings of the IEEE International Confer- ence on Computer Vision, 2017, pp. 1919–1927. 1 [22] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Free-form image inpainting with gated convolution,” arXiv preprint arXiv:1806.03589, 2018. 1 [23] D. Han, J. Kim, and J. Kim, “Deep pyramidal residual net- works,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5927–5935. 1, 2 [24] K. Zhang, L. Guo, C. Gao, and Z. Zhao, “Pyramidal ror for image classification,” Cluster Computing, pp. 1–11, 2017. 1, 2 [25] M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q. V. Le, “Mnasnet: Platform-aware neural architecture search for mobile,” arXiv preprint arXiv:1807.11626, 2018. 1, 2, 3, 4, 5, 6, 7 [26] H. Cai, L. Zhu, and S. Han, “Proxylessnas: Direct neu- ral architecture search on target task and hardware,” arXiv preprint arXiv:1812.00332, 2018. 1, 2, 3, 4, 5 [27] Z. Liu, M. Sun, T. Zhou, G. Huang, and T. Darrell, “Re- thinking the value of network pruning,” arXiv preprint arXiv:1810.05270, 2018. 1, 2, 3, 6 [28] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, “Pruning filters for efficient convnets,” arXiv preprint arXiv:1608.08710, 2016. 1, 3 [29] J.-H. Luo, J. Wu, and W. Lin, “Thinet: A filter level prun- ing method for deep neural network compression,” arXiv preprint arXiv:1707.06342, 2017. 1, 2, 3, 4, 6, 7 [30] Y. He, X. Zhang, and J. Sun, “Channel pruning for accelerat- ing very deep neural networks,” in Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017, pp. 1398–1406. 1, 2, 3, 5, 6, 7 [31] Z. Huang and N. Wang, “Data-driven sparse structure selec- tion for deep neural networks,” in Proceedings of the Eu- ropean Conference on Computer Vision (ECCV), 2018, pp. 304–320. 1, 3 [32] S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections for efficient neural network,” in Ad- vances in neural information processing systems, 2015, pp. 1135–1143. 1, 3, 4 [33] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017. 2 [34] N. Heess, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang, S. Eslami, M. Riedmiller et al., “Emergence of locomotion behaviours in rich envi- ronments,” arXiv preprint arXiv:1707.02286, 2017. 2 [35] J. Yu, L. Yang, N. Xu, J. Yang, and T. Huang, “Slimmable neural networks,” arXiv preprint arXiv:1812.08928, 2018. 2, 3, 5, 6, 7 “Universally slimmable net- works and improved training techniques,” arXiv preprint arXiv:1903.05134, 2019. 2, 3, 5, 7 [37] Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han, “Amc: Automl for model compression and acceleration on mobile devices,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 
784–800. 2, 3, 6, 7, 8 [38] T.-J. Yang, A. Howard, B. Chen, X. Zhang, A. Go, M. San- dler, V. Sze, and H. Adam, “Netadapt: Platform-aware neural network adaptation for mobile applications,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 285–300. 2, 6 [39] J. Ye, X. Lu, Z. Lin, and J. Z. Wang, “Rethinking the smaller-norm-less-informative assumption in channel prun- ing of convolution layers,” arXiv preprint arXiv:1802.00124, 2018. 3, 5 [40] Q. Huang, K. Zhou, S. You, and U. Neumann, “Learning to prune filters in convolutional neural networks,” in 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). [41] N. Lee, T. Ajanthan, and P. H. Torr, “Snip: Single-shot network pruning based on connection sensitivity,” arXiv preprint arXiv:1810.02340, 2018. 3 [42] J. Frankle and M. Carbin, “The lottery ticket hy- Training pruned neural networks,” CoRR, http: pothesis: vol. abs/1803.03635, 2018. //arxiv.org/abs/1803.03635 3, 5 [Online]. Available: [43] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, 2015. 3, 5, 7 [44] T. Elsken, J. H. Metzen, and F. Hutter, “Neural architecture search: A survey,” arXiv preprint arXiv:1808.05377, 2018. 3 [45] G. Bender, P.-J. Kindermans, B. Zoph, V. Vasudevan, and Q. Le, “Understanding and simplifying one-shot architecture search,” in International Conference on Machine Learning, 2018, pp. 549–558. 3 [46] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean, “Effi- cient neural architecture search via parameter sharing,” arXiv preprint arXiv:1802.03268, 2018. 3, 5 [47] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, “Learning transferable architectures for scalable image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 8697–8710. 3, 4, 5, 6 [48] C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L.-J. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy, “Progressive neural architecture search,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 19–34. 3, 6 [49] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu, “Hierarchical representations for effi- cient architecture search,” arXiv preprint arXiv:1711.00436, 2017. 3 [50] H. Liu, K. Simonyan, and Y. Yang, “Darts: Differentiable ar- chitecture search,” arXiv preprint arXiv:1806.09055, 2018. 3, 5 [51] A. Brock, T. Lim, J. M. Ritchie, and N. Weston, “Smash: one-shot model architecture search through hypernetworks,” arXiv preprint arXiv:1708.05344, 2017. 3 [52] B. Zoph and Q. V. Le, “Neural architecture search with reinforcement learning,” arXiv preprint arXiv:1611.01578, 2016. 3, 4, 5 [53] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous con- learning,” arXiv preprint trol with deep reinforcement arXiv:1509.02971, 2015. 3 [54] R. Luo, F. Tian, T. Qin, E. Chen, and T.-Y. Liu, “Neural ar- chitecture optimization,” in Advances in Neural Information Processing Systems, 2018, pp. 7827–7838. 5 [55] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. De- Vito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Auto- matic differentiation in pytorch,” in NIPS-W, 2017. 5 P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He, large minibatch sgd: Training imagenet in 1 “Accurate, hour,” arXiv preprint arXiv:1706.02677, 2017. 5 [57] C. Zhang, M. Ren, and R. 
Urtasun, “Graph hyper- networks for neural architecture search,” arXiv preprint arXiv:1810.05749, 2018. 6 [58] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei- Fei, “Imagenet: A large-scale hierarchical image database,” in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. Ieee, 2009, pp. 248–255. 5, 7 [59] A. Krizhevsky, “Learning multiple layers of features from tiny images,” Citeseer, Tech. Rep., 2009. 8
{ "id": "1808.05377" }
1903.10676
SciBERT: A Pretrained Language Model for Scientific Text
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.
http://arxiv.org/pdf/1903.10676
Iz Beltagy, Kyle Lo, Arman Cohan
cs.CL
https://github.com/allenai/scibert
EMNLP 2019
cs.CL
20190326
20190910
2019: 9 1 0 2 p e S 0 1 ] L C . s c [ 3 v 6 7 6 0 1 . 3 0 9 1 : v i X r a # SCIBERT: A Pretrained Language Model for Scientific Text # Iz Beltagy Kyle Lo Arman Cohan Allen Institute for Artificial Intelligence, Seattle, WA, USA {beltagy,kylel,armanc}@allenai.org # Abstract Obtaining large-scale annotated data for NLP tasks in the scientific domain is challeng- ing and expensive. We release SCIBERT, a pretrained language model based on BERT (Devlin et al., 2019) to address the lack of high-quality, large-scale labeled scientific SCIBERT leverages unsupervised data. pretraining on a large multi-domain corpus of scientific publications to improve perfor- mance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demon- strate statistically significant improvements over BERT and achieve new state-of-the- art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/. task-specific neural architectures. into minimal Leveraging the success of unsupervised pretrain- ing has become especially important especially when task-specific annotations are difficult to like in scientific NLP. Yet while both obtain, BERT and ELMo have released pretrained models, they are still trained on general domain corpora such as news articles and Wikipedia. In this work, we make the following contribu- tions: (i) We release SCIBERT, a new resource demon- strated to improve performance on a range of NLP tasks in the scientific domain. SCIBERT is a pre- trained language model based on BERT but trained on a large corpus of scientific text. (ii) We perform extensive experimentation to investigate the performance of finetuning ver- sus task-specific architectures atop frozen embed- dings, and the effect of having an in-domain vo- cabulary. # 1 Introduction The exponential increase in the volume of scien- tific publications in the past decades has made NLP an essential tool for large-scale knowledge extraction and machine reading of these docu- ments. Recent progress in NLP has been driven by the adoption of deep neural models, but train- ing such models often requires large amounts of labeled data. In general domains, large-scale train- ing data is often possible to obtain through crowd- sourcing, but in scientific domains, annotated data is difficult and expensive to collect due to the ex- pertise required for quality annotation. (Peters et al., As 2018), and 2018) BERT (Devlin et al., 2019), unsupervised pre- training of language models on large corpora significantly improves performance on many NLP tasks. These models return contextualized embeddings for each token which can be passed (iii) We evaluate SCIBERT on a suite of tasks in the scientific domain, and achieve new state-of- the-art (SOTA) results on many of these tasks. # 2 Methods Background The BERT model architecture (Devlin et al., 2019) is based on a multilayer bidi- rectional Transformer (Vaswani et al., 2017). In- stead of the traditional left-to-right language mod- eling objective, BERT is trained on two tasks: pre- dicting randomly masked tokens and predicting whether two sentences follow each other. SCIB- ERT follows the same architecture as BERT but is instead pretrained on scientific text. Vocabulary BERT uses WordPiece (Wu et al., 2016) for unsupervised tokenization of the input text. 
The vocabulary is built such that it contains the most frequently used words or subword units. We refer to the original vocabulary released with BERT as BASEVOCAB. We construct SCIVOCAB, a new WordPiece vo- cabulary on our scientific corpus using the Sen- tencePiece1 library. We produce both cased and uncased vocabularies and set the vocabulary size to 30K to match the size of BASEVOCAB. The re- sulting token overlap between BASEVOCAB and SCIVOCAB is 42%, illustrating a substantial dif- ference in frequently used words between scien- tific and general domain texts. Corpus We train SCIBERT on a random from Semantic sample This corpus Scholar consists of 18% papers from the computer science domain and 82% from the broad biomedical domain. We use the full text of the papers, not just the abstracts. The average paper length is 154 sentences (2,769 tokens) resulting in a corpus size of 3.17B tokens, similar to the 3.3B tokens on which BERT was trained. We split sentences using ScispaCy (Neumann et al., 2019),2 which is optimized for scientific text. # 3 Experimental Setup # 3.1 Tasks We experiment on the following core NLP tasks: 1. Named Entity Recognition (NER) 2. PICO Extraction (PICO) 3. Text Classification (CLS) 4. Relation Classification (REL) 5. Dependency Parsing (DEP) PICO, like NER, is a sequence labeling task where the model extracts spans describing the Partici- pants, Interventions, Comparisons, and Outcomes in a clinical trial paper (Kim et al., 2011). REL is a special case of text classification where the model predicts the type of relation expressed be- tween two entities, which are encapsulated in the sentence by inserted special tokens. # 3.2 Datasets For brevity, we only describe the newer datasets here, and refer the reader to the references in Ta- ble 1 for the older datasets. EBM-NLP (Nye et al., 2018) annotates PICO spans in clinical trial ab- stracts. SciERC (Luan et al., 2018) annotates en- tities and relations from computer science ab- 1https://github.com/google/sentencepiece 2https://github.com/allenai/SciSpaCy stracts. ACL-ARC (Jurgens et al., 2018) and Sci- Cite (Cohan et al., 2019) assign intent labels (e.g. Comparison, Extension, etc.) to sentences from scientific papers that cite other papers. The Paper Field dataset is built from the Microsoft Academic Graph (Sinha et al., 2015)3 and maps paper titles to one of 7 fields of study. Each field of study (i.e. geography, politics, economics, business, so- ciology, medicine, and psychology) has approxi- mately 12K training examples. # 3.3 Pretrained BERT Variants BERT-Base We use the pretrained weights for BERT-Base (Devlin et al., 2019) released with the original BERT code.4 The vocabulary is BASE- VOCAB. We evaluate both cased and uncased ver- sions of this model. SCIBERT We use the original BERT code to train SCIBERT on our corpus with the same con- figuration and size as BERT-Base. We train 4 different versions of SCIBERT: (i) cased or un- cased and (ii) BASEVOCAB or SCIVOCAB. The two models that use BASEVOCAB are finetuned from the corresponding BERT-Base models. The other two models that use the new SCIVOCAB are trained from scratch. Pretraining BERT for long sentences can be slow. Following the original BERT code, we set a maximum sentence length of 128 tokens, and train the model until the training loss stops decreasing. We then continue training the model allowing sen- tence lengths up to 512 tokens. We use a single TPU v3 with 8 cores. 
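As a rough, hedged illustration of the vocabulary comparison above, the SCIVOCAB/BASEVOCAB token overlap can be estimated by loading the two released tokenizers and intersecting their vocabularies. The snippet assumes the HuggingFace `transformers` package and the public checkpoint names shown below, neither of which is part of this paper's original code.

```python
# Sketch: estimate the token overlap between SCIVOCAB and BASEVOCAB.
# Checkpoint names are assumptions (publicly released models), not from the paper's code.
from transformers import AutoTokenizer

sci_tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
base_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

sci_vocab = set(sci_tok.get_vocab().keys())    # ~30K WordPiece tokens (SCIVOCAB)
base_vocab = set(base_tok.get_vocab().keys())  # ~30K WordPiece tokens (BASEVOCAB)

overlap = len(sci_vocab & base_vocab) / len(base_vocab)
print(f"token overlap: {overlap:.1%}")         # the paper reports roughly 42%
```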
Training the SCIVOCAB models from scratch on our corpus takes 1 week5 (5 days with max length 128, then 2 days with max length 512). The BASEVOCAB models take 2 fewer days of training because they aren’t trained from scratch. All pretrained BERT models are converted to be compatible with PyTorch using the pytorch- transformers library.6 All our models (Sec- tions 3.4 and 3.5) are implemented in PyTorch us- ing AllenNLP (Gardner et al., 2017). Casing We follow Devlin et al. (2019) in using the cased models for NER and the uncased models 3https://academic.microsoft.com/ 4https://github.com/google-research/bert 5BERT’s largest model was trained on 16 Cloud TPUs for 4 days. Expected 40-70 days (Dettmers, 2019) on an 8-GPU machine. # 6https://github.com/huggingface/pytorch-transformers for all other tasks. We also use the cased models for parsing. Some light experimentation showed that the uncased models perform slightly better (even sometimes on NER) than cased models. # 3.4 Finetuning BERT We mostly follow the same architecture, opti- mization, and hyperparameter choices used in Devlin et al. (2019). For text classification (i.e. CLS and REL), we feed the final BERT vector for the [CLS] token into a linear classification layer. For sequence labeling (i.e. NER and PICO), we feed the final BERT vector for each token into a linear classification layer with softmax output. We differ slightly in using an additional condi- tional random field, which made evaluation eas- ier by guaranteeing well-formed entities. For DEP, we use the model from Dozat and Manning (2017) with dependency tag and arc embeddings of size 100 and biaffine matrix attention over BERT vec- tors instead of stacked BiLSTMs. In all settings, we apply a dropout of 0.1 and optimize cross entropy loss using Adam (Kingma and Ba, 2015). We finetune for 2 to 5 epochs using a batch size of 32 and a learning rate of 5e-6, 1e-5, 2e-5, or 5e-5 with a slanted triangu- lar schedule (Howard and Ruder, 2018) which is equivalent to the linear warmup followed by lin- ear decay (Devlin et al., 2019). For each dataset and BERT variant, we pick the best learning rate and number of epochs on the development set and report the corresponding test results. We found the setting that works best across most datasets and models is 2 or 4 epochs and a learning rate of 2e-5. While task-dependent, op- timal hyperparameters for each task are often the same across BERT variants. # 3.5 Frozen BERT Embeddings We also explore the usage of BERT as pre- trained contextualized word embeddings, like ELMo (Peters et al., 2018), by training simple task-specific models atop frozen BERT embed- dings. For text classification, we feed each sentence of BERT vectors into a 2-layer BiLSTM of size 200 and apply a multilayer perceptron (with hid- den size 200) on the concatenated first and last BiLSTM vectors. For sequence labeling, we use the same BiLSTM layers and use a condi- tional random field to guarantee well-formed pre- dictions. For DEP, we use the full model from Dozat and Manning (2017) with dependency tag and arc embeddings of size 100 and the same BiLSTM setup as other tasks. We did not find changing the depth or size of the BiLSTMs to sig- nificantly impact results (Reimers and Gurevych, 2017). We optimize cross entropy loss using Adam, but holding BERT weights frozen and applying a dropout of 0.5. We train with early stopping on the development set (patience of 10) using a batch size of 32 and a learning rate of 0.001. 
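To make the finetuning recipe of Section 3.4 concrete, a minimal sketch is shown below: a linear classifier over the final [CLS] vector, cross-entropy loss, an Adam-style optimizer, and linear warmup followed by linear decay as a stand-in for the slanted triangular schedule. It uses the HuggingFace `transformers` API rather than the original BERT/AllenNLP code, so the class and checkpoint names are assumptions.

```python
# Hedged sketch of sentence-classification finetuning: [CLS] -> linear layer,
# cross-entropy loss, warmup + linear decay (equivalent to the slanted triangular schedule).
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          get_linear_schedule_with_warmup)

name = "allenai/scibert_scivocab_uncased"                  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=7)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5) # lr from the paper's sweep
num_steps = 4 * 1000                                       # e.g. 4 epochs of 1000 batches
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=int(0.1 * num_steps), num_training_steps=num_steps)

batch = tokenizer(["A sentence from a scientific paper."],
                  return_tensors="pt", padding=True, truncation=True)
labels = torch.tensor([3])
out = model(**batch, labels=labels)                        # loss over the [CLS] representation
out.loss.backward()
optimizer.step(); scheduler.step(); optimizer.zero_grad()
```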
We did not perform extensive hyperparameter search, but while optimal hyperparameters are go- ing to be task-dependent, some light experimenta- tion showed these settings work fairly well across most tasks and BERT variants. # 4 Results Table 1 summarizes the experimental results. We observe that SCIBERT outperforms BERT-Base on scientific tasks (+2.11 F1 with finetuning and +2.43 F1 without)8. We also achieve new SOTA results on many of these tasks using SCIBERT. # 4.1 Biomedical Domain We observe that SCIBERT outperforms BERT- Base on biomedical tasks (+1.92 F1 with finetun- ing and +3.59 F1 without). In addition, SCIB- ERT achieves new SOTA results on BC5CDR (Lee et al., 2019), and EBM- and ChemProt NLP (Nye et al., 2018). SCIBERT performs slightly worse than SOTA on 3 datasets. The SOTA model for JNLPBA is a BiLSTM-CRF ensemble trained on multi- ple NER datasets not just JNLPBA (Yoon et al., 2018). The SOTA model for NCBI-disease is BIOBERT (Lee et al., 2019), which is BERT- Base finetuned on 18B tokens from biomedi- cal papers. The SOTA result for GENIA is in Nguyen and Verspoor (2019) which uses the model from Dozat and Manning (2017) with part- of-speech (POS) features, which we do not use. In Table 2, we compare SCIBERT results with reported BIOBERT results on the subset of Interest- datasets included in (Lee et al., 2019). ing, SCIBERT outperforms BIOBERT results on 7The SOTA paper did not report a single score. We compute the average of the reported results for each class weighted by number of examples in each class. 8For rest of this paper, all results reported in this manner are averaged over datasets excluding UAS for DEP since we already include LAS. Field Task Dataset SOTA BERT-Base SCIBERT Frozen Finetune Frozen Finetune Bio NER PICO DEP REL BC5CDR (Li et al., 2016) JNLPBA (Collier and Kim, 2004) NCBI-disease (Dogan et al., 2014) EBM-NLP (Nye et al., 2018) GENIA (Kim et al., 2003) - LAS GENIA (Kim et al., 2003) - UAS ChemProt (Kringelum et al., 2016) 88.857 78.58 89.36 66.30 91.92 92.84 76.68 85.08 74.05 84.06 61.44 90.22 91.84 68.21 86.72 76.09 86.88 71.53 90.33 91.89 79.14 88.73 75.77 86.39 68.30 90.36 92.00 75.03 90.01 77.28 88.57 72.28 90.43 91.99 83.64 CS NER REL CLS SciERC (Luan et al., 2018) SciERC (Luan et al., 2018) ACL-ARC (Jurgens et al., 2018) 64.20 n/a 67.9 63.58 72.74 62.04 65.24 78.71 63.91 65.77 75.25 60.74 67.57 79.97 70.98 Multi CLS Paper Field SciCite (Cohan et al., 2019) n/a 84.0 63.64 84.31 65.37 84.85 64.38 85.42 65.71 85.49 Average 73.58 77.16 76.01 79.27 Table 1: Test performances of all BERT variants on all tasks and datasets. Bold indicates the SOTA result (multiple results bolded if difference within 95% bootstrap confidence interval). Keeping with past work, we report macro F1 scores for NER (span-level), macro F1 scores for REL and CLS (sentence-level), and macro F1 for PICO (token-level), and micro F1 for ChemProt specifically. For DEP, we report labeled (LAS) and unlabeled (UAS) attachment scores (excluding punctuation) for the same model with hyperparameters tuned for LAS. All results are the average of multiple runs with different random seeds. Task Dataset BIOBERT SCIBERT NER REL BC5CDR JNLPBA NCBI-disease ChemProt 88.85 77.59 89.36 76.68 90.01 77.28 88.57 83.64 Cite (Cohan et al., 2019). No prior published SOTA results exist for the Paper Field dataset. # 5 Discussion # 5.1 Effect of Finetuning Table 2: Comparing SCIBERT with the reported BIOBERT results on biomedical datasets. 
BC5CDR and ChemProt, and performs similarly on JNLPBA despite being trained on a substan- tially smaller biomedical corpus. # 4.2 Computer Science Domain We observe that SCIBERT outperforms BERT- Base on computer science tasks (+3.55 F1 with In addition, finetuning and +1.13 F1 without). SCIBERT achieves new SOTA results on ACL- ARC (Cohan et al., 2019), and the NER part of SciERC (Luan et al., 2018). For relations in Sci- ERC, our results are not comparable with those in Luan et al. (2018) because we are performing re- lation classification given gold entities, while they perform joint entity and relation extraction. # 4.3 Multiple Domains We observe that SCIBERT outperforms BERT- Base on the multidomain tasks (+0.49 F1 with finetuning and +0.93 F1 without). In addi- tion, SCIBERT outperforms the SOTA on Sci- We observe improved results via BERT finetuning rather than task-specific architectures atop frozen embeddings (+3.25 F1 with SCIBERT and +3.58 with BERT-Base, on average). For each scientific domain, we observe the largest effects of finetun- ing on the computer science (+5.59 F1 with SCIB- ERT and +3.17 F1 with BERT-Base) and biomed- ical tasks (+2.94 F1 with SCIBERT and +4.61 F1 with BERT-Base), and the smallest effect on mul- tidomain tasks (+0.7 F1 with SCIBERT and +1.14 F1 with BERT-Base). On every dataset except BC5CDR and SciCite, BERT-Base with finetuning outperforms (or performs similarly to) a model us- ing frozen SCIBERT embeddings. # 5.2 Effect of SCIVOCAB We assess the importance of an in-domain sci- entific vocabulary by repeating the finetuning ex- periments for SCIBERT with BASEVOCAB. We find the optimal hyperparameters for SCIBERT- BASEVOCAB often coincide with those of SCIB- ERT-SCIVOCAB. Averaged across datasets, we observe +0.60 F1 when using SCIVOCAB. For each scientific do- main, we observe +0.76 F1 for biomedical tasks, +0.61 F1 for computer science tasks, and +0.11 F1 for multidomain tasks. Given the disjoint vocabularies (Section 2) and the magnitude of improvement over BERT-Base (Section 4), we suspect that while an in-domain vocabulary is helpful, SCIBERT benefits most from the scientific corpus pretraining. # 6 Related Work Recent work on domain adaptation of BERT in- cludes BIOBERT (Lee et al., 2019) and CLIN- ICALBERT (Alsentzer et al., 2019; Huang et al., BIOBERT is trained on PubMed ab- 2019). stracts and PMC full text articles, and CLIN- ICALBERT is trained on clinical text from the MIMIC-III database (Johnson et al., 2016). In contrast, SCIBERT is trained on the full text of 1.14M biomedical and computer science papers from the Semantic Scholar corpus (Ammar et al., 2018). Furthermore, SCIBERT uses an in-domain vocabulary (SCIVOCAB) while the other above- mentioned models use the original BERT vocab- ulary (BASEVOCAB). # 7 Conclusion and Future Work We released SCIBERT, a pretrained language model for scientific text based on BERT. We evalu- ated SCIBERT on a suite of tasks and datasets from scientific domains. SCIBERT significantly outper- formed BERT-Base and achieves new SOTA re- sults on several of these tasks, even compared to some reported BIOBERT (Lee et al., 2019) results on biomedical tasks. For future work, we will release a version of SCIBERT analogous to BERT-Large, as well as ex- periment with different proportions of papers from each domain. Because these language models are costly to train, we aim to build a single resource that’s useful across multiple domains. 
# Acknowledgment We thank the anonymous reviewers for their com- ments and suggestions. We also thank Waleed Ammar, Noah Smith, Yoav Goldberg, Daniel King, Doug Downey, and Dan Weld for their help- ful discussions and feedback. All experiments were performed on beaker.org and supported in part by credits from Google Cloud. # References Emily Alsentzer, John R. Murphy, Willie Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew B. A. McDermott. 2019. Publicly available clini- cal bert embeddings. In ClinicalNLP workshop at NAACL. Waleed Ammar, Dirk Groeneveld, Chandra Bhagavat- ula, Iz Beltagy, Miles Crawford, Doug Downey, Ja- son Dunkelberger, Ahmed Elgohary, Sergey Feld- man, Vu Ha, Rodney Kinney, Sebastian Kohlmeier, Kyle Lo, Tyler Murray, Hsu-Han Ooi, Matthew Pe- ters, Joanna Power, Sam Skjonsberg, Lucy Lu Wang, Chris Wilhelm, Zheng Yuan, Madeleine van Zuylen, and Oren Etzioni. 2018. Construction of the litera- ture graph in semantic scholar. In NAACL. Arman Cohan, Waleed Ammar, Madeleine 2019. van Cady. Structural scaffolds for citation intent classification in scientific publications. In NAACL-HLT, pages 3586–3596, Minneapolis, Minnesota. Association for Computational Linguis- tics. Zuylen, and Field Introduction to the bio-entity recognition task at jnlpba. In NLP- BA/BioNLP. Tim Dettmers. for 2019. TPUs vs (BERT). Transformers GPUs http://timdettmers.com/2018/10/17/tpus-vs-gpus-for-transformers-bert/. Accessed: 2019-02-22. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT. Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. NCBI disease corpus: A resource for dis- ease name recognition and concept normalization. Journal of biomedical informatics, 47:1–10. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. ICLR. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform. In arXiv:1803.07640. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL. Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and pre- dicting hospital readmission. arXiv:1904.05342. Alistair E. W. Johnson, Tom J. Pollard aand Lu Shen, Liwei H. Lehman, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, , and Roger G. Mark. 2016. Leo Anthony Celi, Mimic-iii, a freely accessible critical care database. In Scientific Data, 3:160035. David Jurgens, Srijan Kumar, Raine Hoover, Daniel A. McFarland, and Daniel Jurafsky. 2018. Measuring the evolution of a scientific field through citation frames. TACL, 06:391–406. Jin-Dong Kim, Tomoko Ohta, Yuka Tateisi, and Jun’ichi Tsujii. 2003. GENIA corpus - a semanti- cally annotated corpus for bio-textmining. Bioinfor- matics, 19:i180i182. Su Kim, David Mart´ınez, Lawrence Cavedon, and Lars Yencken. 2011. Automatic classification of sen- tences to support evidence based medicine. In BMC Bioinformatics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR. Jens Kringelum, Sonny Kim Kjærulff, Søren Brunak, Ole Lund, Tudor I. Oprea, and Olivier Taboureau. 2016. ChemProt-3.0: a global chemical biology dis- eases mapping. In Database. Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 
2019. BioBERT: a pre-trained for language representation model biomedical biomedical text mining. In arXiv:1901.08746. Jiao Li, Yueping Sun, Robin J. Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Thomas C. Wiegers, and Zhiyong Lu. 2016. BioCreative V CDR task corpus: a resource for chemical disease relation extraction. Database : the journal of biological databases and curation. Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of enti- ties, relations, and coreference for scientific knowl- edge graph construction. In EMNLP. Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust mod- els for biomedical natural language processing. In arXiv:1902.07669. Dat Quoc Nguyen and Karin M. Verspoor. 2019. From pos tagging to dependency parsing for biomedical event extraction. BMC Bioinformatics, 20:1–13. Benjamin Nye, Junyi Jessy Li, Roma Patel, Yinfei Yang, Iain James Marshall, Ani Nenkova, and By- ron C. Wallace. 2018. A corpus with multi-level an- notations of patients, interventions and outcomes to support language processing for medical literature. In ACL. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In NAACL-HLT. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Nils Reimers and Iryna Gurevych. 2017. Optimal hy- perparameters for deep lstm-networks for sequence labeling tasks. In EMNLP. Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Dar- rin Eide, Bo-June Paul Hsu, and Kuansan Wang. 2015. An overview of microsoft academic service (MAS) and applications. In WWW. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Gregory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and ma- chine translation. abs/1609.08144. Wonjin Yoon, Chan Ho So, Jinhyuk Lee, and Jaewoo Kang. 2018. CollaboNet: collaboration of deep neu- ral networks for biomedical named entity recogni- tion. In DTMBio workshop at CIKM.
{ "id": "1904.05342" }
1903.10972
Simple Applications of BERT for Ad Hoc Document Retrieval
Following recent successes in applying BERT to question answering, we explore simple applications to ad hoc document retrieval. This required confronting the challenge posed by documents that are typically longer than the length of input BERT was designed to handle. We address this issue by applying inference on sentences individually, and then aggregating sentence scores to produce document scores. Experiments on TREC microblog and newswire test collections show that our approach is simple yet effective, as we report the highest average precision on these datasets by neural approaches that we are aware of.
http://arxiv.org/pdf/1903.10972
Wei Yang, Haotian Zhang, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20190326
20190326
9 1 0 2 r a M 6 2 ] R I . s c [ 1 v 2 7 9 0 1 . 3 0 9 1 : v i X r a # Simple Applications of BERT for Ad Hoc Document Retrieval Wei Yang,∗ Haotian Zhang,∗ and Jimmy Lin David R. Cheriton School of Computer Science University of Waterloo # Abstract over the previous state of the art in identifying an- swer spans from a large Wikipedia corpus. Following recent successes in applying BERT to question answering, we explore simple ap- plications to ad hoc document retrieval. This required confronting the challenge posed by documents that are typically longer than the length of input BERT was designed to handle. We address this issue by applying inference on sentences individually, and then aggre- gating sentence scores to produce document scores. Experiments on TREC microblog and newswire test collections show that our ap- proach is simple yet effective, as we report the highest average precision on these datasets by neural approaches that we are aware of. Given the successes in applying BERT to ques- tion answering and the similarities between QA and document retrieval, we naturally wondered: Would it be possible to apply BERT to improve document retrieval as well? In short, the answer is yes. Adapting BERT for document retrieval re- quires overcoming the challenges associated with long documents, both during training and infer- ence. We present a simple yet effective approach, based on the same BERTserini framework, that applies inference over individual sentences in a document and then combines sentence scores into document scores. # 1 Introduction The dominant approach to ad hoc document re- trieval using neural networks today is to de- ploy the neural model as a reranker over an ini- tial list of candidate documents retrieved using a standard bag-of-words term-matching technique. Researchers have proposed many neural ranking models (Mitra and Craswell, 2019), but there has recently been some skepticism about whether they have truly advanced the state of the art (Lin, 2018), at least in the absence of large amounts of log data only available to a few organizations. Our approach is evaluated on standard ad hoc retrieval test collections from the TREC Mi- croblog Tracks (2011–2014) and the TREC 2004 Robust Track. We report the highest average pre- cision on these datasets for neural approaches that we are aware of. The contribution of our work is, to our knowledge, the first successful application of BERT to ad hoc document retrieval, yielding state of the art results. # 2 Background and Related Work One important recent innovation is the use of neural models that make heavy use of pretrain- ing (Peters et al., 2018; Radford et al., 2018), cul- minating in BERT (Devlin et al., 2018), the most popular example of this approach today. Re- searchers have applied BERT to a broad range of NLP tasks and reported impressive gains. Most retrieval, BERT- serini (Yang et al., 2019) integrates passage re- trieval using the open-source Anserini IR toolkit with a BERT-based reader to achieve large gains # ∗ equal contribution In ad hoc document retrieval, the system is given a short query q and the task is to produce the best ranking of documents in a corpus, according to some standard metric such as average precision (AP). Mitra and Craswell (2019) provide a recent overview of many of these models, to which we re- fer interested readers in lieu of a detailed literature review due to space considerations. However, there are aspects of the task worth dis- cussing. 
Researchers have understood for a few years now that relevance matching and semantic matching (for example, paraphrase detection, nat- ural language inference, etc.) are different tasks, despite shared common characteristics (Guo et al., 2016). The first task has a heavier dependence on exact match (i.e., “one-hot”) signals, whereas the second task generally requires models to more ac- curately capture semantics. Question answering has elements of both, but nevertheless remains a different task from document retrieval. Due to these task differences, neural models for docu- ment ranking, for example, DRMM (Guo et al., 2016), are quite different architecturally from neu- ral models for capturing similarity; see, for exam- ple, the survey of Lan and Xu (2018). Another salient fact is that documents can be longer than the length of input texts that BERT was designed for. This creates a problem dur- ing training because relevance judgments are an- notations on documents, not on individual sen- tences or passages. Typically, within a relevant document, only a few passages are relevant, but such fine-grained annotations are not available in most test collections. Thus, it is unclear how ex- actly one would fine-tune BERT given (only) ex- isting document-level relevance judgments. In this paper, we sidestep the training challenge com- pletely and present a simple approach to aggregat- ing sentence-level scores during inference. # 3 Searching Social Media Posts Despite the task mismatch between QA and ad hoc document retrieval, our working hypothesis is that BERT can be fine-tuned to capture rele- vance matching, as long as we can provide ap- propriate training data. To begin, we tackled mi- croblog retrieval—-searching short social media posts—where document length does not pose an issue. Fortunately, test collections from the TREC Microblog Tracks (Lin et al., 2014), from 2011 to 2014, provide data for exactly this task. As with BERTserini, we adopted a simple architecture that uses the Anserini IR toolkit1 for initial retrieval, followed by inference us- ing a BERT model. Building on best practice, query likelihood (QL) with RM3 relevance feed- back (Abdul-Jaleel et al., 2004) provides the ini- tial ranking to depth 1000. The texts of the retrieved documents (posts) are then fed into a BERT classifier, and the BERT scores are com- bined with the retrieval scores via linear inter- polation. We used the BERT-Base model (un- cased, 12-layer, 768-hidden, 12-heads, 110M pa- # 1http://anserini.io/ rameters) described in Devlin et al. (2018). As in- put, we concatenated the query Q and the docu- ment D into a text sequence [[CLS], Q, [SEP], D, [SEP]], and then padded each text sequence in a mini-batch to N tokens, where N is the maximum length in the batch. Following Nogueira and Cho (2019), BERT is used for binary classification (i.e., relevance) by taking the [CLS] vector as input to a single layer neural network. Test collections from the TREC Microblog Tracks were used for fine-tuning the BERT model, using cross-entropy loss. For evaluation on each year’s dataset, we used the remaining years for fine tuning, e.g., tuning on 2011–2013 data, testing on 2014 data. From the training data, we sampled 10% for validation. We fine-tuned BERT with a learning rate of 3 × 10−6 for 10 epochs. The inter- polation weight between the BERT scores and the retrieval scores was tuned on the validation data. 
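A minimal sketch of this scoring pipeline is given below: the query and the post are packed into a single [CLS] Q [SEP] D [SEP] sequence, the finetuned classifier produces a relevance probability, and that probability is linearly interpolated with the first-stage Anserini retrieval score. The checkpoint name and the interpolation weight are placeholders rather than the values used in the experiments.

```python
# Hedged sketch: BERT relevance score for a (query, post) pair, interpolated
# with the first-stage retrieval score (e.g. QL+RM3 from Anserini).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-uncased"          # stand-in for the finetuned relevance classifier
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

def bert_score(query: str, doc: str) -> float:
    # Encodes [CLS] query [SEP] doc [SEP] and returns P(relevant).
    enc = tokenizer(query, doc, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def final_score(query: str, doc: str, retrieval_score: float, alpha: float = 0.5) -> float:
    # Linear interpolation; alpha is tuned on held-out validation data.
    return alpha * retrieval_score + (1.0 - alpha) * bert_score(query, doc)
```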
We only used as training examples the social me- dia posts that appear in our initial ranking (i.e., as opposed to all available relevance judgments). There are a total of 225 topics (50, 60, 60, 55) in the four datasets, which yields 225,000 examples (unjudged posts are treated as not relevant). Experimental results are shown in Table 1, where we present average precision (AP) and pre- cision at rank 30 (P30), the two official metrics of the evaluation (Ounis et al., 2011). The first two blocks of the table are copied from Rao et al. (2019), who compared bag-of-words baselines (QL and RM3) to several popular neural ranking models as well as MP-HCNN, the model they in- troduced. Results for all the neural models include interpolation with the original document scores. Rao et al. (2019) demonstrated that previous neu- ral models are not suitable for ranking short social media posts, and are no better than the RM3 base- line in many cases. In contrast, MP-HCNN was explicitly designed with characteristics of tweets in mind: it significantly outperforms previous neu- ral ranking models (see original paper for compar- isons, not repeated here). We also copied results from Shi et al. (2018), who reported even higher effectiveness than MP-HCNN. These results represent, to our knowledge, the most comprehensive summary of search effective- ness measured on the TREC Microblog datasets. Note that for these comparisons we leave aside many non-neural approaches that take advantage of learning-to-ranking techniques over manually- Model 2011 AP P30 2012 AP P30 2013 AP P30 2014 AP P30 QL RM3 DRMM (Guo et al., 2016) DUET (Mitra et al., 2017) K-NRM (Xiong et al., 2017) PACRR (Hui et al., 2017) MP-HCNN (Rao et al., 2019) 0.3576 0.3824 0.3477 0.3576 0.3576 0.3810 0.4043 0.4000 0.4211 0.4034 0.4000 0.4000 0.4286 0.4293 0.2091 0.2342 0.2213 0.2243 0.2277 0.2311 0.2460 0.3311 0.3452 0.3537 0.3644 0.3520 0.3576 0.3791 0.2532 0.2766 0.2639 0.2779 0.2721 0.2803 0.2896 0.4450 0.4733 0.4772 0.4878 0.4756 0.4944 0.5294 0.3924 0.4480 0.4042 0.4219 0.4137 0.4140 0.4420 0.6182 0.6339 0.6139 0.6467 0.6358 0.6358 0.6394 BiCNN (Shi et al., 2018) 0.4293 0.4728 0.2621 0.4147 0.2990 0.5367 0.4563 0.6806 BERT 0.4697 0.5040 0.3073 0.4356 0.3357 0.5656 0.5176 0.7006 Table 1: Results on test collections from the TREC Microblog Tracks, comparing BERT with selected neural ranking models. The first two blocks of the table contain results copied from Rao et al. (2019). engineered features, as we do not believe they form a fair basis of comparison. In general, such approaches also take advantage of non-textual fea- tures (e.g., social signals), and these additional signals (naturally) allow them to beat approaches that use only the text of the social media posts (like all the models discussed here). The final row of Table 1 reports results using our simple BERT-based technique, showing quite sub- stantial and consistent improvements over previ- ous results. Since we have directly copied results from previous papers, we did not conduct signifi- cance tests. lation). One rationale for this approach comes from Zhang et al. (2018b,a), who found that the “best” sentence or paragraph in a document pro- vides a good proxy for document relevance. This is also consistent with a long thread of work in in- formation retrieval that leverages passage retrieval techniques for document ranking (Callan, 1994; Clarke et al., 2000; Liu and Croft, 2002). 
Generalizing, we could consider the top n scor- ing sentences as follows: # n Scored = a · Sdoc + (1 − a) · X i=1 wi · Si # 4 Searching Newswire Articles Results on the microblog test collections confirm our working hypothesis that BERT can be fine- tuned to capture document relevance, at least for short social media posts. In other words, task dif- ferences between QA and document retrieval do not appear to hinder BERT’s adaptability. Hav- ing demonstrated this, we turn our attention to longer documents. For this, we take advantage of the test collection from the TREC 2004 Ro- bust Track (Voorhees, 2004), which comprises 250 topics over a newswire corpus. We selected this collection for a couple of reasons: it is the largest newswire collection we know of in terms of train- ing data, and Lin (2018) provides well-tuned base- lines that support fair comparisons to recent neural ranking models. Given the success of BERT on microblogs, one simple idea is to apply inference over each sen- tence in a candidate document, select the one with the highest score, and then combine that with the original document score (with linear interpo- where Sdoc is the original document score and Si is the i-th top scoring sentence according to BERT. The hyperparameters a and wi can be tuned via cross-validation. Sentence-level inference seems like a reason- able initial attempt at adapting BERT to document retrieval, but what about fine-tuning? As previ- ously discussed, the issue is that we lack sentence- level relevance judgments. Since our efforts rep- resent an initial exploration, we simply sidestep this challenge (for now) and fine tune on exist- ing sentence-level datasets. Specifically, we used: (1) the microblog data from the previous section and (2) the union of the TrecQA (Yao et al., 2013) and WikiQA (Yang et al., 2015) datasets. This sets up an interesting contrast: the first dataset captures the document retrieval task but on a different do- main, while the second dataset captures a differ- ent task but on corpora that are much closer to newswire. It is an empirical question as to which source is more effective. To support a fair comparison, we adopted the same experimental procedure as Lin (2018). He described two separate data conditions: one based on two-fold cross-validation to compare against “Paper 1” and one based on five-fold cross- validation to compare against “Paper 2”.2 The exact fold settings are provided online, which en- sures a fair comparison.3 In our implementation, documents are first cleaned by stripping all tags and then segmenting the text into sentences using NLTK. If the input to BERT is longer than 512 tokens (BERT’s maximum limit), we further split sentences into fixed sized chunks. Across the 250 topics, each document averages 43 sentences, with 27 tokens per sentence. In our experiments, we considered up to the top four sentences. For up to three sentences, a and wi are tuned via exhaustive grid search in the follow- ing range: a ∈ [0, 1], w1 = 1 (fixed), w2 ∈ [0, 1], and w3 ∈ [0, 1], all with step size 0.1. In the four- sentence condition, to reduce the search space, we started with the best three-sentence parameters and explored w4 ∈ [0, 1] with step size 0.1, along with neighboring regions in a, w2, and w3. We se- lected the parameters with the highest AP score on the training folds. Results of our experiments are shown in Ta- ble 2, divided into two blocks: Paper 1 on the top and Paper 2 on the bottom. 
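Before turning to Table 2, the aggregation and tuning procedure just described can be written compactly as Score_d = a * S_doc + (1 - a) * sum_{i=1..n} w_i * S_i over the top-n sentence scores. The sketch below implements this aggregation and the coarse grid search over a, w2 and w3 (with w1 fixed to 1 and step size 0.1); the AP evaluation on the training folds is left as a caller-supplied function, so this is an outline of the procedure rather than the implementation used in the experiments.

```python
# Hedged sketch of sentence-score aggregation and the grid search described above.
import itertools

def doc_score(s_doc: float, sent_scores: list, a: float, weights: tuple) -> float:
    # Score_d = a * S_doc + (1 - a) * sum_i w_i * S_i over the top-n sentences.
    top = sorted(sent_scores, reverse=True)[:len(weights)]
    return a * s_doc + (1.0 - a) * sum(w * s for w, s in zip(weights, top))

def grid_search(evaluate_ap):
    # evaluate_ap(a, weights) -> AP on the training folds (placeholder supplied by the caller).
    grid = [x / 10.0 for x in range(11)]          # 0.0, 0.1, ..., 1.0
    best_ap, best_params = -1.0, None
    for a, w2, w3 in itertools.product(grid, grid, grid):
        weights = (1.0, w2, w3)                   # w1 is fixed to 1
        ap = evaluate_ap(a, weights)
        if ap > best_ap:
            best_ap, best_params = ap, (a, weights)
    return best_params
```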
The effective- ness of the two papers are directly copied from Lin (2018); all other results are our own runs. The paper aggregation site “Papers With Code” places Lin’s result as the state of the art on Robust04 as of this writing.4 As a point of comparison, in the most recent survey of neural ranking models by Guo et al. (2019), the best AP on Robust04 is in the 0.29 range, consistent with the above site. Therefore, we are quite confident that we are eval- uating against competitive models. In the results table, “FT” indicates the dataset used for fine- tuning and nS indicates inference using the top n scoring sentences of the document. We find that the learned w4 value is zero, indicating that addi- tional sentences do not help beyond the top three (at least according to our tuning procedure); thus, 4S results are omitted from the table. Interestingly, 2Since Lin’s article is critical of neural methods, he anonymized the neural approaches but mentioned that they come from articles published in late 2018 and are represen- tative of the most recent advances in neural approaches to document retrieval. 3https://github.com/castorini/Anserini/blob/master/docs/ experiments-forum2018.md # 4https://paperswithcode.com/sota/ ad-hoc-information-retrieval-trec-robust Model AP P20 Paper 1 (two fold) BM25+RM3 1S: BERT FT(QA) 2S: BERT FT(QA) 3S: BERT FT(QA) 1S: BERT FT(Microblog) 2S: BERT FT(Microblog) 3S: BERT FT(Microblog) 0.2971 0.2987 0.3014 0.3003 0.3003 0.3241 0.3240 0.3244 0.3948 0.3871 0.3928 0.3948 0.3948 0.4217 0.4209 0.4219 Paper 2 (five fold) BM25+RM3 1S: BERT FT(QA) 2S: BERT FT(QA) 3S: BERT FT(QA) 1S: BERT FT(Microblog) 2S: BERT FT(Microblog) 3S: BERT FT(Microblog) 0.272 0.3033 0.3102 0.3090 0.3090 0.3266 0.3278 0.3278 0.386 0.3974 0.4068 0.4064 0.4064 0.4245 0.4267 0.4287 Table 2: Results on Robust04. FT indicates the dataset used for fine tuning; nS indicates inference using the top n scoring sentences of the document. we find that fine-tuning BERT on microblog data is more effective than QA data, suggesting that task (QA vs. relevance matching) is more impor- tant than document genre (tweets vs. newswire). Cognizant of the potential dangers of repeated hy- pothesis testing, we probed the statistical signif- icance of one five-fold setting, BM25+RM3 vs. “3S: BERT FT (Microblog)”. According to a paired t-test, the differences are statistically sig- nificant (p < 10−7). As a summary, we see that a well-tuned BM25+RM3 baseline already outperforms neu- ral ranking approaches (which was Lin’s original point). Our simple BERT-based reranker yields further significant improvements. # 5 Conclusions In this preliminary study, we have adapted BERT for document retrieval in the most obvious man- ner, via sentence-level inference and simple score aggregation. Results show substantial improve- ments in both ranking social media posts and newswire documents—to our knowledge, the highest AP scores reported on the TREC Mi- croblog and Robust04 datasets for neural ap- proaches that we are aware of (although the liter- ature does report non-neural approaches that are even better, for both tasks). We readily con- cede that our techniques are quite simple and that In particu- there are many obvious next steps. lar, we simply sidestepped the issue of not hav- ing sentence-level relevance judgments, although there are some obvious distant supervision tech- niques to “project” relevance labels down to the sentence level that should be explored. We are ac- tively pursuing these and other directions. 
# Acknowledgments This supported by the Natu- ral Sciences and Engineering Research Council (NSERC) of Canada. # References Nasreen Abdul-Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Don- ald Metzler, Mark D. Smucker, Trevor Strohman, Howard Turtle, and Courtney Wade. 2004. UMass at TREC 2004: Novelty and HARD. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004). James P. Callan. 1994. Passage-level evidence in doc- ument retrieval. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’94, pages 302–310. Charles Clarke, Gordon Cormack, and Elizabeth Tudhope. 2000. Relevance ranking for one to three term queries. Information Processing and Manage- ment, 36:291–311. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv:1810.04805. Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model In Proceedings of the 25th for ad-hoc retrieval. ACM International on Conference on Information and Knowledge Management, CIKM ’16, pages 55– 64. Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W. Bruce Croft, and Xueqi Cheng. 2019. A deep look into neural ranking models for information retrieval. arXiv:1903.06902v1. Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017. PACRR: A position-aware neural IR model for relevance matching. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1049–1058. Wuwei Lan and Wei Xu. 2018. Neural network mod- els for paraphrase identification, semantic textual similarity, natural language inference, and ques- In Proceedings of the 27th Inter- tion answering. national Conference on Computational Linguistics, pages 3890–3902. Jimmy Lin. 2018. The neural hype and comparisons against weak baselines. SIGIR Forum, 52(2):40–51. Jimmy Lin, Miles Efron, Yulu Wang, and Garrick Sher- man. 2014. Overview of the TREC-2014 Microblog Track. In Proceedings of the Twenty-Third Text RE- trieval Conference (TREC 2014). Xiaoyong Liu and W. Bruce Croft. 2002. Passage re- trieval based on language models. In Proceedings of the Eleventh International Conference on Informa- tion and Knowledge Management, CIKM ’02, pages 375–382. Bhaskar Mitra and Nick Craswell. 2019. An intro- duction to neural information retrieval. Foundations and Trends in Information Retrieval, 13(1):1–126. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceed- ings of the 26th International Conference on World Wide Web, WWW ’17, pages 1291–1299. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv:1901.04085. Iadh Ounis, Craig Macdonald, Jimmy Lin, and Ian Soboroff. 2011. Overview of the TREC-2011 Mi- croblog Track. In Proceedings of the Twentieth Text REtrieval Conference (TREC 2011). Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- In Proceedings of the 2018 Confer- resentations. ence of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227–2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. 
standing by generative pre-training. Technical re- port. Jinfeng Rao, Wei Yang, Yuhao Zhang, Ferhan Ture, and Jimmy Lin. 2019. Multi-perspective relevance matching with hierarchical ConvNets for social me- dia search. Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI). Peng Shi, Jinfeng Rao, and Jimmy Lin. 2018. Simple attention-based representation learning for ranking short social media posts. arxiv:1811.01013. Ellen M. Voorhees. 2004. Overview of the TREC 2004 Robust Track. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004), pages 52–69. Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural In Proceed- ad-hoc ranking with kernel pooling. ings of the 40th International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, SIGIR ’17, pages 55–64. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. arXiv:1902.01718. Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain ques- In Proceedings of the 2015 Con- tion answering. ference on Empirical Methods in Natural Language Processing, pages 2013–2018. Xuchen Yao, Benjamin Van Durme, Chris Callison- burch, and Peter Clark. 2013. Answer extraction as In Pro- sequence tagging with tree edit distance. ceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 858–867. Haotian Zhang, Mustafa Abualsaud, Nimesh Ghe- lani, Mark D. Smucker, Gordon V. Cormack, and Maura R. Grossman. 2018a. Effective user interac- tion for high-recall retrieval: Less is more. In Pro- ceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM ’18, pages 187–196. Haotian Zhang, Gordon V. Cormack, Maura R. Gross- man, and Mark D. Smucker. 2018b. Evaluating sentence-level relevance feedback for high-recall in- formation retrieval. arXiv:1803.08988.
{ "id": "1902.01718" }
1903.10520
Micro-Batch Training with Batch-Channel Normalization and Weight Standardization
Batch Normalization (BN) has become an out-of-box technique to improve deep network training. However, its effectiveness is limited for micro-batch training, i.e., each GPU typically has only 1-2 images for training, which is inevitable for many computer vision tasks, e.g., object detection and semantic segmentation, constrained by memory consumption. To address this issue, we propose Weight Standardization (WS) and Batch-Channel Normalization (BCN) to bring two success factors of BN into micro-batch training: 1) the smoothing effects on the loss landscape and 2) the ability to avoid harmful elimination singularities along the training trajectory. WS standardizes the weights in convolutional layers to smooth the loss landscape by reducing the Lipschitz constants of the loss and the gradients; BCN combines batch and channel normalizations and leverages estimated statistics of the activations in convolutional layers to keep networks away from elimination singularities. We validate WS and BCN on comprehensive computer vision tasks, including image classification, object detection, instance segmentation, video recognition and semantic segmentation. All experimental results consistently show that WS and BCN improve micro-batch training significantly. Moreover, using WS and BCN with micro-batch training is even able to match or outperform the performances of BN with large-batch training.
http://arxiv.org/pdf/1903.10520
Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, Alan Yuille
cs.CV, cs.LG
null
null
cs.CV
20190325
20200809
0 2 0 2 g u A 9 ] V C . s c [ 2 v 0 2 5 0 1 . 3 0 9 1 : v i X r a JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 # Micro-Batch Training with Batch-Channel Normalization and Weight Standardization Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille, Fellow, IEEE Abstract—Batch Normalization (BN) has become an out-of-box technique to improve deep network training. However, its effectiveness is limited for micro-batch training, i.e., each GPU typically has only 1-2 images for training, which is inevitable for many computer vision tasks, e.g., object detection and semantic segmentation, constrained by memory consumption. To address this issue, we propose Weight Standardization (WS) and Batch-Channel Normalization (BCN) to bring two success factors of BN into micro-batch training: 1) the smoothing effects on the loss landscape and 2) the ability to avoid harmful elimination singularities along the training trajectory. WS standardizes the weights in convolutional layers to smooth the loss landscape by reducing the Lipschitz constants of the loss and the gradients; BCN combines batch and channel normalizations and leverages estimated statistics of the activations in convolutional layers to keep networks away from elimination singularities. We validate WS and BCN on comprehensive computer vision tasks, including image classification, object detection, instance segmentation, video recognition and semantic segmentation. All experimental results consistently show that WS and BCN improve micro-batch training significantly. Moreover, using WS and BCN with micro-batch training is even able to match or outperform the performances of BN with large-batch training. Index Terms—Micro-Batch Training, Group Normalization, Weight Standardization, Batch-Channel Normalization. # 1 INTRODUCTION Deep learning has advanced the state-of-the-arts in many vision tasks [1], [2]. Many deep networks use Batch Normal- ization (BN) [3] in their architectures because BN in most cases is able to accelerate training and help the models to converge to better solutions. BN stabilizes the training by controlling the first two moments of the distributions of the layer outputs in each mini-batch during training and is especially helpful for training very deep networks that have hundreds of layers [4], [5]. Despite its practical success, BN has a shortcoming that it works well only when the batch size is sufficiently large, which prohibits it from being used in micro-batch training. Micro-batch training, i.e., the batch size is small, e.g., 1 or 2, is inevitable for many computer vision tasks, such as object detection and semantic seg- mentation, due to limited GPU memory. This shortcoming draws a lot of attentions from researchers, which urges them to design specific normalization methods for micro-batch training, such as Group Normalization (GN) [6] and Layer Normalization (LN) [7], but they have difficulty matching the performances of BN in large-batch training (Fig. 1). 80 BCN + 40 = 79 cn WS = + ¥ 78 ws 398 < BCN BN & a GN ws GN 38 = e77 + G 3 ws 4 2 x 76 % 2 = 68 € GN ~ 75 74 RN5O RN101 Fig. 1: Comparing BN [3], GN [6], our WS used with GN, and WS used with BCN on ImageNet and COCO. On ImageNet, BN and BCN+WS are trained with large batch sizes while GN and GN+WS are trained with 1 image/GPU. On COCO, BN is frozen for micro-batch training, and BCN uses its micro- batch implementation. GN+WS outperforms both BN and GN comfortably and BCN+WS further improves the performances. 
In this paper, our goal is to bring the success factors of BN into micro-batch training but without relying on large batch sizes during training. This requires good understand- ings of the reasons of BN’s success, among which we focus on two factors: 1) BN’s smoothing effects: [8] proves that BN makes the landscape of the corresponding optimization problem significantly smoother, thus is able to stabilize the training process and accelerate the convergence speed of training deep neural networks. 2) BN avoids elimination singularities: Elimination sin- gularities refer to the points along the training trajectory where neurons in the networks get eliminated. Eliminable neurons waste computations and decrease the effective model complexity. Getting closer to them will harm the training speed and the final performances. By forcing each neuron to have zero mean and unit variance, BN keeps the networks at far distances from elimination singularities caused by non-linear activation functions. • All authors are with the Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218. E-mail: {siyuan.qiao, hwang157, cxliu}@jhu.edu {shenwei1231, alan.l.yuille}@gmail.com • Corresponding author: W. Shen Manuscript received April 19, 2005; revised August 26, 2015. We find that these two success factors are not properly addressed by some methods specifically designed for micro- batch training. For example, channel-based normalizations, e.g., Layer Normalization (LN) [7] and Group Normalization (GN) [6], are unable to guarantee far distances from elimina- 1 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 tion singularities. This might be the reason for their inferior performance compared with BN in large-batch training. To bring the above two success factors into micro-batch train- ing, we propose Weight Standardization (WS) and Batch- Channel Normalization (BCN) to improve network training. WS standardizes the weights in convolutional layers, i.e., making the weights have zero mean and unit variance. BCN uses estimated means and variances of the activations in convolutional layers by combining batch and channel normalization. WS and BCN are able to run in both large- batch and micro-batch settings and accelerate the training and improve the performances. We study WS and BCN from both theoretical and exper- imental viewpoints. The highlights of the results are: 1) Theoretically, we prove that WS reduces the Lipschitz constants of the loss and the gradients. Hence, WS smooths loss landscape and improves training. 2) We empirically show that WS and BCN are able to push the models away from the elimination singularities. 3) Experiments show that on tasks where large-batches are available (e.g. ImageNet [9]), GN [6] + WS with batch size 1 is able to match or outperform the performances of BN with large batch sizes (Fig. 1). 4) For tasks where only micro-batch training is available (e.g. COCO [10]), GN + WS will significantly improve the performances (Fig. 1). 5) Replacing GN with BCN further improves the results in both large-batch and micro-batch training settings. To show that our WS and BCN are applicable to many vision tasks, we conduct comprehensive experiments, in- cluding image classification on CIFAR-10/100 [11] and Im- ageNet dataset [9], object detection and instance segmen- tation on MS COCO dataset [10], video recognition on Something-SomethingV1 dataset [12], and semantic image segmentation on PASCAL VOC [13]. 
The experimental re- sults show that our WS and BCN are able to accelerate training and improve performances. # 2 RELATED WORK Deep neural networks advance state-of-the-arts in many computer vision tasks [1], [5], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23]. But deep networks are hard to train. To speed up training, proper model initialization strategies are widely used as well as data normalization based on the assumption of the data distribution [24], [25]. On top of data normalization and model initialization, Batch Normaliza- tion [3] is proposed to ensure certain distributions so that the normalization effects will not fade away during training. By performing normalization along the batch dimension, Batch Normalization achieves state-of-the-art performances in many tasks in addition to accelerating the training pro- cess. When the batch size decreases, however, the perfor- mances of Batch Normalization drop dramatically since the batch statistics are not representative enough of the dataset statistics. Unlike Batch Normalization that works on the batch dimension, Layer Normalization [7] normalizes data on the channel dimension, Instance Normalization [26] does Batch Normalization for each sample individually. Group Normalization [6] also normalizes features on the channel dimension, but it finds a better middle point between Layer Normalization and Instance Normalization. Batch Normalization, Layer Normalization, Group Nor- malization, and Instance Normalization are all activation- based normalization methods. Besides them, there are also weight-based normalization methods, such as Weight Nor- malization [27] and Centered Weight Normalization [28]. Weight Normalization decouples the length and the direc- tion of the weights, and Centered Weight Normalization also centers the weights to have zero mean. Weight Stan- dardization is similar, but removes the learnable weight length. Instead, the weights are standardized to have zero mean and unit variance, and then directly sent to the con- volution operations. When used with GN, it narrows the performance gap between BN and GN. In this paper, we study normalization from the perspec- tive of elimination singularity [29], [30] and smoothness [8]. There are also other perspectives to understand normaliza- tion methods. For example, from training robustness, BN is able to make optimization trajectories more robust to parameter initialization [31]. [8] shows that normalizations are able to reduce the Lipschitz constants of the loss and the gradients, thus the training becomes easier and faster. From the angle of model generalization, [32] shows that Batch Normalization relies less on single directions of activations, thus has better generalization properties, and [33] studies the regularization effects of Batch Normalization. [34] also explores length-direction decoupling in BN and WN [27]. Other work also approaches normalizations from the gradi- ent explosion issues [35] and learning rate tuning [36]. Our WS is also related to converting constrained optimization to unconstrained optimization [37], [38]. Our BCN uses Batch Normalization and Group Normal- ization at the same time for one layer. Some previous work also uses multiple normalizations or a combined version of normalizations for one layer. For example, SN [39] computes BN, IN, and LN at the same time and uses AutoML [40] to determine how to combine them. SSN [41] uses SparseMax to get sparse SN. 
DN [42] proposes a more flexible form to represent normalizations and finds better normalizations. Unlike them, our method is based on analysis and theoretical understanding instead of searching for solutions through AutoML, and our normalizations are used together as a composite function rather than by linearly adding up the normalization effects in a flat way.

# 3 LIPSCHITZ SMOOTHNESS AND ELIMINATION SINGULARITIES

We first describe Lipschitz smoothness and elimination singularities to provide the background of our analyses.

# 3.1 Lipschitz Smoothness

A function f : A → R^m, A ⊆ R^n, is L-Lipschitz [43] if

∀ a, b ∈ A : ‖f(a) − f(b)‖ ≤ L ‖a − b‖.   (1)

A continuously differentiable function f is β-smooth if its gradient ∇f is β-Lipschitz, i.e.,

∀ a, b ∈ A : ‖∇f(a) − ∇f(b)‖ ≤ β ‖a − b‖.   (2)

Many results show that training smooth functions using gradient descent algorithms is faster than training non-smooth functions [44]. Intuitively, gradient descent based training algorithms can be unstable due to exploding or vanishing gradients. As a result, they are sensitive to the selection of the learning rate and initialization if the loss landscape is not smooth. Using a method (e.g. WS) that smooths the loss landscape makes the gradients more reliable and predictive; thus, larger steps can be taken and the training will be accelerated.

# 3.2 Elimination singularities

Deep neural networks are hard to train partly due to the singularities caused by the non-identifiability of the model [30]. These singularities include overlap singularities, linear dependence singularities, elimination singularities, etc. They cause degenerate manifolds in the loss landscape; getting closer to these manifolds slows down learning and impacts model performance [29]. In this paper, we focus on elimination singularities, which correspond to the points on the training trajectory where neurons in the model become constantly deactivated.

The original definition of elimination singularities is based on weights [30]: if we use ω_c to denote the weights that take the channel c as input, then an elimination singularity is encountered when ω_c = 0. However, this definition is not suitable for real-world deep network training, as most of the ω_c will not be close to 0. For example, in a ResNet-50 [2] well-trained on ImageNet [9], the channel weights ω_c stay far from 0: their norms, averaged over all the L layers l in the network, are around 0.55. Note that weight decay is already used in training this network to encourage weight sparsity. In other words, defining elimination singularities based on weights is not proper for networks trained in real-world settings.

In this paper, we consider elimination singularities for networks that use ReLU as their activation functions. We focus on a basic building element that is widely used in neural networks: a convolutional layer followed by a normalization method (e.g. BN, LN) and ReLU [45], i.e.,

X_out = ReLU(Norm(Conv(X_in))).   (3)

When ReLU is used, ω_c = 0 is no longer necessary for a neuron to be eliminatable. This is because ReLU sets any value below 0 to 0; thus a neuron is constantly deactivated if its maximum value after the normalization layer is below 0. Its gradients will also be 0 because of ReLU, making it hard to revive; hence, a singularity is created.
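To make the criterion above concrete, the following is a small hedged sketch (our own illustration, not code from the paper) that checks, for one Conv-Norm-ReLU block as in Eq. 3, which output channels are deactivated on a given batch; this is only a per-batch proxy for the "constant" deactivation discussed above, and the module choices are assumptions:

```python
import torch
import torch.nn as nn

# The basic building element of Eq. 3: Conv -> Norm (here GN as an example) -> ReLU.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1, bias=False)
norm = nn.GroupNorm(num_groups=8, num_channels=32)
block = nn.Sequential(conv, norm)          # ReLU is applied after the normalization

@torch.no_grad()
def deactivated_channels(block: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Boolean mask of output channels whose maximum pre-ReLU value is <= 0 on this
    input batch; for such channels ReLU outputs, and hence their gradients, are all 0."""
    pre_relu = block(x)                               # shape (B, C, H, W)
    per_channel_max = pre_relu.amax(dim=(0, 2, 3))    # max over batch and spatial dims
    return per_channel_max <= 0

x = torch.randn(8, 16, 32, 32)
mask = deactivated_channels(block, x)
print(f"{int(mask.sum())} of {mask.numel()} channels are deactivated on this batch")
```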
# 4 WEIGHT STANDARDIZATION

In this section, we introduce Weight Standardization, which is inspired by BN. It has been demonstrated that BN influences network training in a fundamental way: it makes the landscape of the optimization problem significantly smoother [8]. Specifically, [8] shows that BN reduces the Lipschitz constants of the loss function and makes the gradients more Lipschitz, too, i.e., the loss will have a better β-smoothness [43].

We notice that BN considers the Lipschitz constants with respect to activations, not the weights that the optimizer is directly optimizing. Therefore, we argue that we can also standardize the weights in the convolutional layers to further smooth the landscape. By doing so, we do not have to worry about transferring smoothing effects from activations to weights; moreover, the smoothing effects on activations and weights are additive. Based on these motivations, we propose Weight Standardization.

Fig. 2: Comparing normalization methods on activations (blue) and Weight Standardization (orange).

# 4.1 Weight Standardization

Here, we show the detailed modeling of Weight Standardization (WS) (Fig. 2). Consider a standard convolutional layer with its bias term set to 0:

y = W ∗ x,   (4)

where W ∈ R^{O×I} denotes the weights in the layer and ∗ denotes the convolution operation. For W ∈ R^{O×I}, O is the number of output channels, and I corresponds to the number of input channels within the kernel region of each output channel. Taking Fig. 2 as an example, O = C_out and I = C_in × Kernel_Size. In Weight Standardization, instead of directly optimizing the loss L on the original weights W, we reparameterize the weights Ŵ as a function of W, i.e., Ŵ = WS(W), and optimize the loss L on W by SGD:

Ŵ = [ Ŵ_{i,j} | Ŵ_{i,j} = (W_{i,j} − μ_{W_{i,·}}) / σ_{W_{i,·}} ],   (5)

y = Ŵ ∗ x,   (6)

where

μ_{W_{i,·}} = (1/I) Σ_{j=1}^{I} W_{i,j},   σ_{W_{i,·}} = sqrt( (1/I) Σ_{j=1}^{I} (W_{i,j} − μ_{W_{i,·}})² + ε ).   (7)

Similar to BN, WS controls the first and second moments of the weights of each output channel individually in convolutional layers. Note that many initialization methods also initialize the weights in similar ways. Different from those methods, WS standardizes the weights in a differentiable way which aims to normalize gradients during back-propagation. Note that we do not have any affine transformation on Ŵ. This is because we assume that normalization layers such as BN or GN will normalize this convolutional layer again, and having an affine transformation would confuse and slow down training. In the following, we first discuss the normalization effects of WS on the gradients.

# 4.2 Comparing WS with WN and CWN

Weight Normalization (WN) and Centered Weight Normalization (CWN) also normalize weights to speed up deep network training. Weight Normalization reparameterizes the weights by separating their direction W/‖W‖ and length g:

Ŵ = g · W / ‖W‖.   (8)

WN is able to train good models on many tasks. But as shown in [46], WN has difficulty matching the performances of models trained with BN on large-scale datasets. Later, CWN adds a centering operation to WN, i.e.,

Ŵ = g · (W − W̄) / ‖W − W̄‖.   (9)

To compare with WN and CWN, we consider the weights of only one output channel and reformulate the corresponding weights output by WS in Eq. 5 as

Ŵ_{c,·} = (W_{c,·} − W̄_{c,·}) / σ_{W_{c,·}},   (10)

which removes the learnable length g from Eq. 9 and divides the weights by their standard deviation instead. Experiments in Sec. 8 show that WS outperforms WN and CWN on large-scale tasks [9].
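To make Eq. 5-7 concrete, here is a minimal PyTorch-style sketch of a WS convolution. This is our own illustrative re-implementation, not the authors' released code; the class name WSConv2d and the epsilon value are our choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d whose weights are standardized (Eq. 5-7) before every forward pass.

    The raw parameter self.weight plays the role of W; the convolution is run with the
    standardized weights W_hat, so SGD updates W while the forward and backward passes
    go through the standardization, as described in Sec. 4.1.
    """

    def __init__(self, *args, eps: float = 1e-5, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps  # the epsilon in Eq. 7 (its exact value is an implementation choice)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight                                   # shape (O, C_in, kH, kW)
        flat = w.view(w.size(0), -1)                      # each row is W_{i,.} with I = C_in*kH*kW
        mean = flat.mean(dim=1, keepdim=True)             # mu_{W_{i,.}}
        var = flat.var(dim=1, unbiased=False, keepdim=True)
        w_hat = ((flat - mean) / torch.sqrt(var + self.eps)).view_as(w)   # Eq. 5 and 7
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Example: a WS conv followed by GN, matching the GN+WS setting used in the paper.
layer = nn.Sequential(WSConv2d(64, 128, 3, padding=1, bias=False),
                      nn.GroupNorm(num_groups=32, num_channels=128))
out = layer(torch.randn(2, 64, 56, 56))
```

Note that, as stated above, no affine transformation is applied to the standardized weights; the following activation normalization layer is expected to rescale the outputs.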
# 5 THE SMOOTHING EFFECTS OF WS

In this section, we discuss the smoothing effects of WS. Sec. 5.1 shows that WS normalizes the gradients. This normalization effect on the gradients lowers the Lipschitz constants of the loss and the gradients, as will be shown in Sec. 5.2, where Sec. 5.2.1 discusses the effects on the loss and Sec. 5.2.2 discusses the effects on the gradients.

5.1 WS normalizes gradients

For convenience, we set ε = 0 in Eq. 7. We first focus on one output channel c. Let y_c denote all the outputs of channel c during one pass of feed-forwarding and back-propagation, and x_c the corresponding inputs. Then, we can rewrite Eq. 5 and 6 as

Ẇ_{c,·} = W_{c,·} − (1/I) ⟨1, W_{c,·}⟩ · 1,   (11)

Ŵ_{c,·} = Ẇ_{c,·} / ( (1/I) ⟨1, Ẇ_{c,·}^{∘2}⟩ )^{1/2},   (12)

y_c = x_c ∗ Ŵ_{c,·},   (13)

where ⟨·,·⟩ denotes the dot product and ∘2 denotes the Hadamard power. Then, the gradients are

∇_{Ẇ_{c,·}} L = (1/σ_{Ẇ_{c,·}}) ( ∇_{Ŵ_{c,·}} L − (1/I) ⟨Ŵ_{c,·}, ∇_{Ŵ_{c,·}} L⟩ Ŵ_{c,·} ),   (14)

∇_{W_{c,·}} L = ∇_{Ẇ_{c,·}} L − (1/I) ⟨1, ∇_{Ẇ_{c,·}} L⟩ · 1.   (15)

Fig. 3: Computation graph for WS in feed-forwarding and back-propagation.

Fig. 3 shows the computation graph. Based on the equations, we observe that, different from the original gradients ∇_{Ŵ_{c,·}} L which are back-propagated through Eq. 13, the gradients are normalized by Eq. 14 & 15.

In Eq. 14, to compute ∇_{Ẇ_{c,·}} L, the gradient ∇_{Ŵ_{c,·}} L is first reduced by a weighted combination of its components, the term (1/I) ⟨Ŵ_{c,·}, ∇_{Ŵ_{c,·}} L⟩ Ŵ_{c,·}, and then divided by σ_{Ẇ_{c,·}}. Note that when BN is used to normalize this convolutional layer, BN will again compute a scaling factor for the outputs, so the effect of dividing the gradients by σ_{Ẇ_{c,·}} will be canceled in both feed-forwarding and back-propagation. As for the subtracted term, its effect will depend on the statistics of ∇_{Ŵ_{c,·}} L and Ŵ_{c,·}. We will later show that this term reduces the gradient norm regardless of the statistics. As for Eq. 15, it zero-centers the gradients from Ẇ_{c,·}. When the mean gradient is large, zero-centering will significantly affect the gradients passed to W_{c,·}.

# 5.2 WS smooths landscape

We will show that WS is able to make the loss landscape smoother. Specifically, we show that optimizing L on W has smaller Lipschitz constants on both the loss and the gradients than optimizing L on Ŵ. The Lipschitz constant of a function f is the value of L such that |f(x₁) − f(x₂)| ≤ L ‖x₁ − x₂‖, ∀ x₁, x₂. For the loss and the gradients, f will be L and ∇_W L, and x will be W. Smaller Lipschitz constants on the loss and the gradients mean that the changes of the loss and the gradients during training are bounded more tightly. They provide more confidence when the optimizer takes a big step in the gradient direction, as the gradient direction will vary less within the range of the step. In other words, the optimizer can take longer steps without worrying about sudden changes of the loss landscape and gradients. Therefore, WS is able to accelerate training.

# 5.2.1 Effects of WS on the Lipschitz constant of the loss

Here, we show that both Eq. 14 and Eq. 15 are able to reduce the Lipschitz constant of the loss. We first study Eq. 14:

‖∇_{Ẇ_{c,·}} L‖² = (1/σ²_{Ẇ_{c,·}}) ( ‖∇_{Ŵ_{c,·}} L‖² + (1/I²) ⟨Ŵ_{c,·}, ∇_{Ŵ_{c,·}} L⟩² ( ⟨Ŵ_{c,·}, Ŵ_{c,·}⟩ − 2I ) ).   (16)

By Eq. 12, we know that ⟨Ŵ_{c,·}, Ŵ_{c,·}⟩ = I. Then,

‖∇_{Ẇ_{c,·}} L‖² = (1/σ²_{Ẇ_{c,·}}) ( ‖∇_{Ŵ_{c,·}} L‖² − (1/I) ⟨Ŵ_{c,·}, ∇_{Ŵ_{c,·}} L⟩² ).   (17)

Since we assume that this convolutional layer is followed by a normalization layer such as BN or GN, the effect of 1/σ²_{Ẇ_{c,·}} will be canceled. Therefore, the real effect on the gradient
8, AUGUST 2015 ResNet50 Train GN GN+Eq.11 60 GN+Eq.12 GN+Eq.11612 Error Rate FS S$ Error Rate FS S$ ResNet50 Val GN GN+Eq.11 GN+Eq.12 GN+Eq.116&12 Percentage (%) Tei, ma) Soa Vii. £) mm of, Vw, £1? 0 20 40 60 80 0 20 Epoch Epoch 60 80 0 20 40 60 80 Epoch Fig. 4: Training ResNet-50 on ImageNet with GN, Eq. 11 and 12. The left and the middle figures show the training dynamics. The right figure shows the reduction percentages on the Lipschitz constant. Note that the y-axis of the right figure is in log scale. norm is the reduction — LW. > Vw. Lipschitz constant of the loss. Ly’, which reduces the Next, we study the effect of Eq. 15. By definition, “ Vw. £|)" = Vw. 4 — ZO By Eq.[14] we rewrite the second term: Vw. £|)" = Vw. 4 — ZO Vw,. 0? 8) ph Yw, 2° = poe ((s 0) © —F(We. Vy, £)- (1, We,)) (1, W..,.) Since = 0, we have £1? = Vw. £1? = Vw, 2 - (Vw, 6). 20) 2 Leow, Summarizing the effects of Eq [14] [14jand[15]on the Lipschitz constant of the loss: ignoring 1/ i "ra pedces it by LW... Vw. Ly, and at 1S|reduces it by + 11, Vyw, L£ L)?. Although both Eq. |14| and [15] reduce the Lipschitz con- stant, their real effects depend on the statistics of the weights and the gradients. For example, the reduction effect of Eq./15 depends on the average gradients on W. As for Eq.|14} note that (1, W. ..) = 0, its effect might be limited when W... is evenly distributed around 0. To understand their real effects, we conduct a case study on ResNet-50 trained on ImageNet to see which one of Eq. [iJand [12] has bigger effects or they contribute similarly to smoothing the landscape. # ∇ ˆWc,·L # Vyw, L£ To compute the two values above, we gather and save the intermediate gradients Vw. £, and the weights for the convolution W.... In total, we train ResNet-50 with GN, Eq. and for 90 epochs, and we save the gradients and the weights of the first training iteration of each epoch. The right figure of 8. shows the average percentages of 1(W..., Vy Ly (1, Vy £), and of. ||Vw. ol From the right ‘figure we can see that = LW... Vy Ly i small compared with other two components (< 0: 02). In other words, although Eq. |1 [12] decreases the gradient norm regardless of the statistics of the weights and gradients, its real effect is limited due to the distribution of W... and Vw. L. Nevertheless, from the left figures we can see that q.[12]still improves the training. Since Eq. [I2]requires very little computations, we will keep it in WS. From the experiments above, we observe that the train- ing speed boost is mainly due to Eq. As the effect of Eq. is limited, in this section, we only study the effect of Eq.|11}on “ Hessian of W,.. and W.... Here, we will show that Eq. |11] decreases the Frobenius norm of the Hessian matrix of the weights, ie., ||Viy, Lille < Vi, Llle- With smaller Frobenius norm, the gradients of W.. are more predictable, thus the loss is smoother and easier to optimize. We use H and H to denote the Hessian matrices of W,.. and ˙Wc,·, respectively, i.e., Hi,j = ∂2 ∂Wc,i∂Wc,j L , ˙Hi,j = ∂2 L ∂ ˙Wc,i∂ ˙Wc,j . (21) 5.2.2 Effects of WS on the Lipschitz constant of gradients Before the Lipschitzness study on the gradients, we first show a case study where we train ResNet-50 models on ImageNet following the conventional training procedure (21. In total, we train four models, including ResNet-50 with GN, ResNet-50 with GN+Eq.|11} ResNet-50 with GN+Eq.|12|and ResNet-50 with GN+Eq. [118412] The training dynamics are shown in Fig. [4] from which we observe that Eq. 
[I2|slightly improves the training speed and performances of models with or without Eq. |11] while the major improvements are from Eq. This observation motivates us to study the real effects of Eq. |11| and |12) on the Lipschitz constant of the loss. To investigate this, we take a look at the values of LW... Vy, Ly, and +(1,V w...£)” during training. # ∇ ˆWc,·L # and w...£)” We first derive the relationship between Hi,j and ˙Hi,j: Hi, «+ Hy;) LEY tye (22) p=lq=l Note that (23) Therefore, Eq. 11 not only zero-centers the feedforwarding outputs and the back-propagated gradients, but also the 5 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 Hessian matrix. Next, we compute its Frobenius norm: It Alp = S00 A, i=1j=1 As shown in Eq. 24, Eq. 11 reduces the Frobenius norm /I 2, of the Hessian matrix by at least which makes the gradients more predictable than directly optimizing on the weights of the convolutional layer. 5.3 Connections to constrained optimization WS imposes constraints to the weight ˆWc,· such that I dW. = 0, i=l I Sow, =1, ve. i=l (25) Therefore, an alternative to the proposed WS is to consider the problem as constrained optimization and uses Projected Gradient Descent (PGD) to find the solution. The update tule for PGD can be written as Wet) = Proi(Wi; — 6: Vy £) (26) where Pro j(-) denotes the projection function and € denotes the learning rate. To satisfy Eq. 25} Proj(-) standardizes its input. We can approximate the right hand side of Eq. [26] by minimizing the Lagrangian of the loss function £, which obtains Wit We, -<(V wi (w. cs Vay, L)YWe (27) 1 - 7h Vw. £)) Different from Eq. 26, the update rule of WS is wit =Proi(Wi, —e€: Vwe L) 1 =Proi(Wé, c ( wef - 7 iW ee (28) Vw. £)Wei — 7h Vw. L)) € » . - Proq (hs (Woo Vw, L)W...)). Eq. 28 is more complex than Eq. 27, but the increased complexity is neglectable compared with training deep net- works. For simplicity, Eq. 28 reuses Proj to denote the standardization process, despite that WS uses Stochastic Gradient Descent instead of Projected Gradient Descent to optimize the weights. # 6 WS’S EFFECTS ON ELIMINATION SINGULARITIES In this section, we will provide the background of BN, GN and LN, discuss the negative correlation between the performance and the distance to elimination singularities, and show LN and GN are unable to keep the networks away from elimination singularities as BN. Next, we will show that WS helps avoiding elimination singularities. # 6.1 Batch- and channel-based normalizations and their effects on elimination singularities 6.1.1 Batch- and channel-based normalizations Based on how activations are normalized, we group the normalization methods into two types: batch-based normal- ization and channel-based normalization, where the batch- based normalization method corresponds to BN and the channel-based normalization methods include LN and GN. Suppose we are going to normalize a 2D feature map RB×C×H×W , where B is the batch size, C is the X number of channels, H and W denote the height and the width. For each channel c, BN normalizes X by Y·c·· = X·c·· − σ·c·· µ·c·· , (29) where µ·c·· and σ·c·· denote the mean and the standard de- viation of all the features of the channel c, X·c··. Throughout the paper, we use in the subscript to denote all the features · along that dimension for convenience. Unlike BN which computes statistics on the batch di- mension in addition to the height and width, channel-based normalization methods (LN and GN) compute statistics on the channel dimension. 
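To make the distinction between the two families concrete before the grouped form is specified precisely next (Eq. 30), here is a small hedged sketch, our own illustration rather than code from the paper, of the batch-based statistics of Eq. 29 next to per-sample group statistics; tensor sizes and the epsilon are assumptions:

```python
import torch

B, C, H, W = 8, 32, 16, 16
G = 8                      # number of channel groups for the channel-based variant
X = torch.randn(B, C, H, W)
eps = 1e-5                 # small constant for numerical stability (our choice)

# Batch-based statistics (Eq. 29): one mean/std per channel c, shared across the batch,
# computed over the batch and spatial dimensions (B, H, W).
mu_bn = X.mean(dim=(0, 2, 3), keepdim=True)                   # shape (1, C, 1, 1)
sigma_bn = X.std(dim=(0, 2, 3), unbiased=False, keepdim=True)
Y_bn = (X - mu_bn) / (sigma_bn + eps)

# Channel-based statistics (GN-style): one mean/std per sample b and group g,
# computed over the group's channels and the spatial dimensions only.
Xg = X.view(B, G, C // G, H, W)
mu_gn = Xg.mean(dim=(2, 3, 4), keepdim=True)                  # shape (B, G, 1, 1, 1)
sigma_gn = Xg.std(dim=(2, 3, 4), unbiased=False, keepdim=True)
Y_gn = ((Xg - mu_gn) / (sigma_gn + eps)).view(B, C, H, W)

# BN guarantees (near-)zero mean and unit variance per channel over the batch; GN does not.
print(Y_bn.mean(dim=(0, 2, 3)).abs().max())   # close to 0 for every channel
print(Y_gn.mean(dim=(0, 2, 3)).abs().max())   # generally not 0
```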
Specifically, they divide the channels into several groups and normalize each group of channels, i.e., X is reshaped as Ẋ ∈ R^{B×G×C/G×H×W} to have G groups of channels, and then

Ẏ_{bg···} = (Ẋ_{bg···} − μ_{bg···}) / σ_{bg···},   (30)

for each sample b of the B samples in a batch and each channel group g out of all G groups. After Eq. 30, the output Ẏ is reshaped back to the shape of X and denoted by Y.

Both batch- and channel-based normalization methods have an optional affine transformation, i.e.,

Z_{·c··} = γ_c Y_{·c··} + β_c.   (31)

6.1.2 BN avoids elimination singularities

Here, we study the effect of BN on elimination singularities. Since the normalization methods all have an optional affine transformation, we focus on the distinct part of BN, which normalizes all channels to zero mean and unit variance, i.e.,

E_{y∈Y_{·c··}}[y] = 0,   E_{y∈Y_{·c··}}[y²] = 1,   ∀c.   (32)

As a result, regardless of the weights and the distribution of the inputs, it guarantees that the activations of each channel are zero-centered with unit variance. Therefore, each channel cannot be constantly deactivated, because there are always some activations that are > 0, nor can it be underrepresented due to having a very small activation scale compared with the others.

Fig. 5: Model accuracy and distance to singularities. Larger circles correspond to higher performances. Red crosses represent failure cases (accuracy < 70%). Circles are farther from singularities/closer to BN if they are closer to the origin.

# 6.1.3 Statistical distance and its effects on performance

BN avoids singularities by normalizing each channel to zero mean and unit variance. What if channels are normalized to other means and variances? We ask this question because this is similar to what happens in channel-normalized models. In the context of activation-based normalizations, BN completely resolves the issue of elimination singularities, as each channel is zero-centered with unit variance. By contrast, channel-based normalization methods, as they do not have batch information, are unable to make sure that all neurons have zero mean and unit variance after normalization. In other words, there are likely some underrepresented channels after training if the model uses channel-based normalizations.

Since BN represents the ideal case, which has the furthest distance to elimination singularities, and any dissimilarity with BN leads to lightly or heavily underrepresented channels and thus brings the models closer to singularities, we use the distance to BN as the distance to singularities for activation-based normalizations. Specifically, in this definition, the model is closer to singularities when it is far from BN.

Fig. 5 shows that this definition is useful; there, we study the relationship between the performance and the distance to singularities (i.e., how far from BN) caused by statistical differences. We conduct experiments on a 4-layer convolutional network, the results of which are shown in Fig. 5. Each convolutional layer has 32 output channels and is followed by an average pooling layer which down-samples the features by a factor of 2. Finally, a global average pooling layer and a fully-connected layer output the logits for Softmax. The experiments are done on CIFAR-10 [11].
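The 4-layer network used in this study is simple enough to sketch; the following is our own hedged reconstruction of the architecture just described, where the kernel size, padding, and the placeholder normalization layer are assumptions (in the paper's study the normalization is replaced by one that fixes each channel to the pre-defined statistics described next):

```python
import torch
import torch.nn as nn

def make_toy_net(norm_layer=lambda c: nn.BatchNorm2d(c), num_classes: int = 10) -> nn.Sequential:
    """4 conv layers with 32 output channels each, each followed by a normalization layer,
    ReLU, and 2x average pooling; then global average pooling and a linear classifier."""
    layers, in_ch = [], 3
    for _ in range(4):
        layers += [nn.Conv2d(in_ch, 32, kernel_size=3, padding=1, bias=False),
                   norm_layer(32),
                   nn.ReLU(inplace=True),
                   nn.AvgPool2d(kernel_size=2)]
        in_ch = 32
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)]
    return nn.Sequential(*layers)

net = make_toy_net()
logits = net(torch.randn(4, 3, 32, 32))   # CIFAR-10-sized inputs -> (4, 10) logits
```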
In the experiment, each channel c will be normalized to a pre-defined mean μ̂_c and a pre-defined variance σ̂_c that are drawn from two distributions, respectively:

μ̂_c ∼ N(0, σ_μ)   and   σ̂_c = e^{σ̇_c} where σ̇_c ∼ N(0, σ_σ).   (33)

The model will be closer to singularities when σ_μ or σ_σ increases. BN corresponds to the case where σ_μ = σ_σ = 0.

Fig. 6: Means and standard deviations of the statistical differences (StatDiff defined in Eq. 35) of all layers in a ResNet-110 trained on CIFAR-10 with GN, GN+WS, LN, and LN+WS.

After getting μ̂_c and σ̂_c for each channel, we compute

Y_{·c··} = γ_c ( σ̂_c · (X_{·c··} − μ_{·c··}) / σ_{·c··} + μ̂_c ) + β_c.   (34)

Note that μ̂_c and σ̂_c are fixed during training while γ_c and β_c are trainable parameters in the affine transformation.

Fig. 5 shows the experimental results. When σ_μ and σ_σ are closer to the origin, the normalization method is closer to BN. When their values increase, we observe performance decreases. For extreme cases, we also observe training failures. These results indicate that although the affine transformation theoretically can find solutions that cancel the negative effects of normalizing channels to different statistics, its capability is limited by the gradient-based training. They show that defining the distance to singularities as the distance to BN is useful. They also raise concerns about channel normalizations regarding their distances.

# 6.1.4 Statistics in Channel-based Normalization

Following our concerns about channel-based normalization and its distance to singularities, we study the statistical differences between channels when they are normalized by a channel-based normalization such as GN or LN.

Statistical differences in GN, LN and WS: We train a ResNet-110 [2] on CIFAR-10 [11] normalized by GN and LN, with and without WS. During training, we keep a record of the running mean μ^r_c and variance σ^r_c of each channel c after the convolutional layers. For each group g of the channels that are normalized together, we compute their channel statistical difference, defined as the standard deviation of their means divided by the mean of their standard deviations, i.e.,

StatDiff(g) = sqrt( E_{c∈g}[(μ^r_c)²] − (E_{c∈g}[μ^r_c])² ) / E_{c∈g}[σ^r_c].   (35)

We plot the average statistical differences of all the groups after every training epoch, as shown in Fig. 6.

In BN, all the channel means within a group are the same, as are their variances, thus StatDiff(g) = 0. As the value of StatDiff(g) goes up, the differences between channels within a group become larger. Since they will be normalized together as in Eq. 30, large differences will inevitably lead to underrepresented channels. Fig. 7 plots 3 examples of 2 channels before and after the normalization in Eq. 30.

Fig. 7: Examples of normalizing two channels in a group when they have different means and variances. Transparent bars mean they are 0 after ReLU. StatDiff is defined in Eq. 35.

Compared with those examples, it is clear that the models in Fig. 6 have many underrepresented channels.
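A hedged sketch of how StatDiff in Eq. 35 can be computed from recorded per-channel running statistics; the tensor shapes and the assumption that channels of a group are contiguous (as in GN) are our own choices:

```python
import torch

def stat_diff(running_mean: torch.Tensor, running_var: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Compute StatDiff(g) of Eq. 35 for each group g.

    running_mean, running_var: per-channel statistics recorded after a convolutional
    layer, both of shape (C,). Channels are assumed to be grouped contiguously.
    """
    C = running_mean.numel()
    mu = running_mean.view(num_groups, C // num_groups)            # (G, C/G)
    sigma = running_var.clamp(min=0).sqrt().view(num_groups, C // num_groups)
    # standard deviation of the channel means within each group ...
    std_of_means = (mu.pow(2).mean(dim=1) - mu.mean(dim=1).pow(2)).clamp(min=0).sqrt()
    # ... divided by the mean of the channel standard deviations.
    return std_of_means / sigma.mean(dim=1)

# Example with made-up statistics for a 32-channel layer split into 4 groups.
mean, var = torch.randn(32), torch.rand(32) + 0.5
print(stat_diff(mean, var, num_groups=4))   # one StatDiff value per group
```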
Why GN performs better than LN: Fig. 6 also provides an explanation of why GN performs better than LN. Comparing GN and LN, the major difference is their numbers of groups for channels: LN has only one group for all the channels in a layer while GN collects them into several groups. A strong benefit of having more than one group is that it guarantees that each group will at least have one neuron that is not suppressed by the others from the same group. Therefore, GN provides a mechanism to prevent the models from getting too close to singularities.

Fig. 6 also shows the statistical differences when WS is used. From the results, we can clearly see that WS makes StatDiff much closer to 0. Consequently, the majority of the channels are not underrepresented in WS: most of them are frequently activated and they are at similar activation scales. This makes training with WS easier and the results better.

# 6.2 WS helps avoiding elimination singularities

The above discussions show that WS helps keep models away from elimination singularities. Here, we discuss why WS is able to achieve this. Recall that WS adds constraints to the weight W ∈ R^{O×I} of a convolutional layer with O output channels and I inputs such that ∀c,

Σ_{i=1}^{I} W_{c,i} = 0,   Σ_{i=1}^{I} W²_{c,i} = 1.   (36)

With the constraints of WS, μ^out_c and σ^out_c become

μ^out_c = Σ_{i=1}^{I} W_{c,i} μ^in_i,   (σ^out_c)² = Σ_{i=1}^{I} W²_{c,i} (σ^in_i)²,   (37)

when we follow the assumptions in Xavier initialization [24]. When the input channels are similar in their statistics, i.e., μ^in_i ≈ μ^in_j, σ^in_i ≈ σ^in_j, ∀i, j,

μ^out_c ≈ μ^in Σ_{i=1}^{I} W_{c,i} = 0,   (38)

(σ^out_c)² ≈ (σ^in)² Σ_{i=1}^{I} W²_{c,i} = (σ^in)².   (39)

In other words, WS can pass the statistical similarities from the input channels to the output channels, all the way from the image space where the RGB channels are properly normalized. This is similar to the objective of Xavier initialization [24] or Kaiming initialization [25], except that WS enforces it by reparameterization throughout the entire training process, and thus is able to reduce the statistical differences a lot, as shown in Fig. 6.

Here, we summarize this subsection. We have shown that channel-based normalization methods, as they do not have batch information, are not able to ensure a far distance from elimination singularities. Without the help of batch information, GN alleviates this issue by assigning channels to more than one group to encourage more activated neurons, and WS adds constraints to pull the channels to be not so statistically different. We notice that the batch information is not hard to collect in reality. This inspires us to equip channel-based normalization with batch information, and the result is Batch-Channel Normalization.
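Before turning to BCN, here is a quick numerical illustration of the statistical-similarity argument in Eq. 37-39. It is a hedged sketch with made-up sizes; the weights are projected to satisfy the constraints of Eq. 36 exactly, and inputs are drawn i.i.d. so that all input channels share the same statistics:

```python
import torch

torch.manual_seed(0)
I, O, N = 256, 64, 100_000          # fan-in, output channels, samples per channel

# Inputs whose channels share (approximately) the same statistics, as assumed above.
mu_in, sigma_in = 0.3, 1.7
x = mu_in + sigma_in * torch.randn(N, I)

# Weights satisfying Eq. 36: zero mean and unit squared sum along each output channel.
W = torch.randn(O, I)
W = W - W.mean(dim=1, keepdim=True)
W = W / W.norm(dim=1, keepdim=True)

y = x @ W.t()                       # y[:, c] collects the responses of output channel c

# Eq. 38-39: every output channel should have mean ~0 and variance ~sigma_in**2.
print(y.mean(dim=0).abs().max())     # close to 0
print(y.var(dim=0, unbiased=False))  # all close to sigma_in**2 = 2.89
```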
# 7 BATCH-CHANNEL NORMALIZATION

The previous section discusses elimination singularities and shows that WS is able to keep models away from them. To fully address the issue of elimination singularities, we propose Batch-Channel Normalization (BCN). This section presents the definition of BCN, discusses why adding batch statistics to channel normalization is not redundant, and shows how BCN runs in large-batch and micro-batch training settings.

# 7.1 Definition

Batch-Channel Normalization (BCN) adds batch information and constraints to channel-based normalization methods. Let X ∈ R^{B×C×H×W} be the features to be normalized. Then, the normalization is done as follows. ∀c,

Ẋ_{·c··} = γ^b_c (X_{·c··} − μ̂_c) / σ̂_c + β^b_c,   (40)

where the purpose of μ̂_c and σ̂_c is to make

E{ (X_{·c··} − μ̂_c) / σ̂_c } = 0   and   E{ ((X_{·c··} − μ̂_c) / σ̂_c)² } = 1.   (41)

Then, Ẋ is reshaped as Ẋ ∈ R^{B×G×C/G×H×W} to have G groups of channels. Next, ∀g, b,

Ẏ_{bg···} = γ^c_g (Ẋ_{bg···} − μ_{bg···}) / σ_{bg···} + β^c_g.   (42)

Finally, Ẏ is reshaped back to Y ∈ R^{B×C×H×W}, which is the output of the Batch-Channel Normalization.

Algorithm 1: Micro-batch BCN
Input: X ∈ R^{B×C×H×W}, the current estimates of μ̂_c and σ̂²_c, and the update rate r.
Output: Normalized Y.
1: Compute μ_c ← (1/(BHW)) Σ_{b,h,w} X_{b,c,h,w};
2: Compute σ²_c ← (1/(BHW)) Σ_{b,h,w} (X_{b,c,h,w} − μ_c)²;
3: Update μ̂_c ← μ̂_c + r (μ_c − μ̂_c);
4: Update σ̂²_c ← σ̂²_c + r (σ²_c − σ̂²_c);
5: Normalize Ẋ_{·c··} = γ^b_c (X_{·c··} − μ̂_c) / σ̂_c + β^b_c;
6: Reshape Ẋ to Ẋ ∈ R^{B×G×C/G×H×W};
7: Normalize Ẏ_{bg···} = γ^c_g (Ẋ_{bg···} − μ_{bg···}) / σ_{bg···} + β^c_g;
8: Reshape Ẏ to Y ∈ R^{B×C×H×W}.

# 7.2 Large- and Micro-batch Implementations

Note that in Eq. 40 and 42, only two statistics need batch information: μ̂_c and σ̂_c, as their values depend on more than one sample. Depending on how we obtain the values of μ̂_c and σ̂_c, we have different implementations for the large-batch and micro-batch training settings.

7.2.1 Large-batch training

When the batch size is large, estimating μ̂_c and σ̂_c is easy: we just use a Batch Normalization layer to achieve the function of Eq. 40 and 41. As a result, the proposed BCN can be written as

BCN(X) = CN(BN(X)).   (43)

Implementing it is also easy with modern deep learning libraries and is omitted here.

# 7.2.2 Micro-batch training

One of the motivations of channel normalization is to allow deep networks to train on tasks where the batch size is limited by the GPU memory. Therefore, it is important for Batch-Channel Normalization to be able to work in the micro-batch training setting.

Algorithm 1 shows the feed-forwarding implementation of the micro-batch Batch-Channel Normalization. The basic idea behind this algorithm is to constantly estimate the values of μ̂_c and σ̂_c, which are initialized as 0 and 1, respectively, and to normalize X based on these estimates. It is worth noting that in the algorithm, μ̂_c and σ̂_c are not updated by the gradients computed from the loss function; instead, they are updated towards more accurate estimates of those statistics. Steps 3 and 4 in Algorithm 1 resemble the update steps in gradient descent; thus, the implementation can also be written in gradient descent form by storing the differences Δμ̂_c and Δσ̂_c as their gradients. Moreover, we set the update rate r to be the learning rate of the trainable parameters.

Algorithm 1 also raises an interesting question: when researchers studied the micro-batch issue of BN before, why not just use the estimates to batch-normalize the features? In fact, [47] tries a similar idea, but does not fully solve the micro-batch issue: it needs a bootstrap phase to make the estimates meaningful, and the performances are usually not satisfactory. The underlying difference between micro-batch BCN and [47] is that BCN has a channel normalization following the estimate-based normalization. This makes the previously unstable estimate-based normalization stable, and the reduction of Lipschitz constants which speeds up training is also done in the channel-based normalization part, which is impossible to do in estimate-based normalization. In summary, channel-based normalization makes estimate-based normalization possible, and estimate-based normalization helps channel-based normalization to keep models away from elimination singularities.

TABLE 1: Top-1 error rates of ResNet-50 on ImageNet. All models except BN are trained with batch size 1 per GPU. BN models are trained with batch size 64 per GPU.
Method | Top-1 | Method | Top-1
LN [7] | 27.22 | LN+WS | 24.60
IN [26] | 29.49 | IN+WS | 28.24
GN [6] | 24.81 | GN+WS | 23.72
BN [3] | 24.30 | BN+WS | 23.76
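Below is a minimal PyTorch-style sketch of the micro-batch variant in Algorithm 1; it is our own illustrative re-implementation, not the authors' released code, and the buffer handling, epsilon, and affine parameterization are simplifying assumptions. The large-batch variant of Eq. 43 can simply chain nn.BatchNorm2d and nn.GroupNorm.

```python
import torch
import torch.nn as nn

class MicroBatchBCN(nn.Module):
    """Estimate-based batch normalization (Eq. 40-41, Algorithm 1) followed by
    group normalization (Eq. 42)."""

    def __init__(self, num_channels: int, num_groups: int = 32,
                 update_rate: float = 0.1, eps: float = 1e-5):
        super().__init__()
        self.r, self.eps = update_rate, eps
        # gamma^b, beta^b of Eq. 40 (per channel).
        self.gamma_b = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta_b = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        # Running estimates mu_hat and sigma_hat^2, initialized to 0 and 1.
        self.register_buffer("mu_hat", torch.zeros(1, num_channels, 1, 1))
        self.register_buffer("var_hat", torch.ones(1, num_channels, 1, 1))
        # The channel-normalization step (Eq. 42) with its own affine parameters.
        self.cn = nn.GroupNorm(num_groups, num_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            with torch.no_grad():   # Steps 1-4: the estimates are not trained by the loss
                mu = x.mean(dim=(0, 2, 3), keepdim=True)
                var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
                self.mu_hat += self.r * (mu - self.mu_hat)
                self.var_hat += self.r * (var - self.var_hat)
        # Step 5: normalize with the estimates, then apply the affine transform.
        x = self.gamma_b * (x - self.mu_hat) / torch.sqrt(self.var_hat + self.eps) + self.beta_b
        # Steps 6-8: channel (group) normalization.
        return self.cn(x)

bcn = MicroBatchBCN(num_channels=64, num_groups=32)
y = bcn(torch.randn(1, 64, 32, 32))   # works with a single image per batch
```

In this sketch the update rate r is a fixed constant; as described above, the paper sets it to the learning rate of the trainable parameters, which would require passing the current learning rate to the module.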
# 7.3 Is Batch-Channel Normalization Redundant? Batch- and channel-based normalizations are similar in many ways. Is BCN thus redundant as it normalizes nor- malized features? Our answer is no. Channel normalizations need batch knowledge to keep the models away from elim- ination singularities; at the same time, it also brings benefits to the batch-based normalization, including: Batch knowledge without large batches. Since BCN runs in both large-batch and micro-batch settings, it provides a way to utilize batch knowledge to normalize activations without relying on large training batch sizes. Additional non-linearity. Batch Normalization is linear in the test mode or when the batch size is large in training. By contrast, channel-based normalization methods, as they normalize each sample individually, are not linear. They will add strong non-linearity and increase the model capacity. Test-time normalization. Unlike BN that relies on estimated statistics on the training dataset for testing, channel nor- malization normalizes testing data again, thus allows the statistics to adapt to different samples. As a result, channel normalization will be more robust to statistical changes and show better generalizability for unseen data. # 8 EXPERIMENTAL RESULTS In this section, we will present the experimental results of using our proposed Weight Standardization and Batch- Channel Normalization, including image classification on CIFAR-10/100 [11] and ImageNet [9], object detection and instance segmentation on COCO [10], video recognition on Something-SomethingV1 dataset [12], and semantic seg- mentation on PASCAL VOC [13]. # 8.1 Image Classification on ImageNet # 8.1.1 Weight Standardization ImageNet is a large-scale image classification dataset. There are about 1.28 million training samples and 50K validation 9 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 Method – Batch Size BN [3] – 64 / 32 SN [39] – 1 GN [6] – 1 BN+WS – 64 / 32 GN+WS – 1 Top-1 Top-5 Top-1 Top-5 Top-1 Top-5 Top-1 Top-5 Top-1 Top-5 ResNet-50 [2] ResNet-101 [2] 24.30 22.44 7.19 6.21 25.00 – – – 24.81 22.87 7.46 6.51 23.76 21.89 7.13 6.01 23.72 22.10 6.99 6.07 TABLE 2: Error rates of ResNet-50 and ResNet-101 on ImageNet. ResNet-50 models with BN are trained with batch size 64 per GPU, and ResNet-101 models with BN are trained with 32 images per GPU. The others are trained with 1 image per GPU. Backbone | WN CWN_ WS | Top-1 ResNet-50 + GN 24.81 ResNet-50 + GN v 25.09 ResNet-50 + GN v 24.23 ResNet-50 + GN v 23.72 Backbone | -mean /std | Top-1 ResNet-50 + GN 24.81 ResNet-50 + GN v 23.96 ResNet-50 + GN v 24.60 ResNet-50 + GN v v 23.72 TABLE 3: Comparing Top-1 error rates between WS, WN and CWN on ImageNet. The backbone is a ResNet-50 normalized by GN and trained with batch size 1 per GPU. TABLE 4: Comparing Top-1 error rates between WS (“- mean”: Eq. 11, and “/ div”: Eq. 12) and its individual effects. The backbone is a ResNet-50-GN trained with batch size 1 per GPU. images. It has 1000 categories, each has roughly 1300 train- ing images and exactly 50 validation samples. Table 1 shows the top-1 error rates of ResNet-50 on ImageNet when it is trained with different normalization methods, including Layer Normalization [7], Instance Nor- malization [26], Group Normalization [6] and Batch Nor- malization. From Table 1, we can see that when the batch size is limited to 1, GN+WS is able to achieve performances comparable to BN with large batch size. 
Therefore, we will use GN+WS for micro-batch training because GN shows the best results among all the normalization methods that can be trained with 1 image per GPU. Table 2 shows our major experimental results of WS on the ImageNet dataset [9]. Note that Table 2 only shows the error rates of ResNet-50 and ResNet-101. This is to compare with the previous work that focus on micro-batch training problem, e.g. Switchable Normalization [39] and Group Normalization [6]. We run all the experiments using the official PyTorch implementations of the layers except for SN [39] which are the performances reported in their paper. This makes sure that all the experimental results are comparable, and our improvements are reproducible. Table 3 compares WS with other weight-based normal- ization methods including WN and CWN. To show the com- parisons, we train the same ResNet-50 normalized by GN on activations with different weight-based normalizations. The code of WN uses the official PyTorch implementation, and the code of CWN is from the official implementation of their github. From the results, we can observe that these normal- ization methods have different effects on the performances of the models. Compared with WN and CWN, the proposed WS achieves lower top-1 error rate. Method GN [6] GN+WS [6] Batch Size = 1 Top-1 Top-5 Top-1 Top-5 ResNeXt-50 [48] ResNeXt-101 [48] 24.24 22.86 7.27 6.51 22.71 21.80 6.38 6.03 TABLE 5: ResNeXt-50 and ResNeXt-101 on ImageNet. All mod- els are trained with batch size 1 per GPU. layers, we use 32 groups for each of them which is the default configuration for ResNet that GN was originally proposed for. ResNeXt-50 and 101 are 32x4d. We train the models for 100 epochs with batch size set to 1 and iteration size set to 32. As the table shows, the performance of GN on training ResNeXt is unsatisfactory: they perform closely to the original ResNets. In the same setting, WS is able to make training ResNeXt a lot easier. # 8.1.2 Batch-Channel Normalization Fig. 8 shows the training dynamics of ResNet-50 with GN, GN+WS and BCN+WS, and Table 6 shows the top-1 and top-5 error rates of ResNet-50 and ResNet-101 trained with different normalization methods. From the results, we observe that adding batch information to channel-based normalizations strongly improves their accuracy. As a result, GN, whose performances are similar to BN when used with WS, now is able to achieve better results than the BN baselines. And we find improvements not only in the final model accuracy, but also in the training speed. As shown in Fig. 8, we see a big drop of training error rates at each epoch. This demonstrates that the model is now farther from elimination singularities, resulting in an easier and faster learning. Table 4 shows the individual effects of Eq. 11 and 12 on training deep neural networks. Consistent with Fig. 4, Eq. 11 is the major component that brings performance improvements. These results are also consistent with the theoretical results we have on the Lipschitz analysis. In Table 5, we also provide the experimental results on ResNeXt [48]. Here, we show the performance comparisons between ResNeXt+GN and ResNeXt+GN+WS. Note that GN originally did not provide results on ResNeXt. Without tuning the hyper-parameters in the Group Normalization # 8.1.3 Experiment settings Here, we list the hyper-parameters used for getting all those results. For all models, the learning rate is set to 0.1 initially, and is multiplied by 0.1 after every 30 epochs. 
We use SGD to train the models, where the weight decay is set to 0.0001 and the momentum is set to 0.9. For ResNet-50 with BN or BN+WS, the training batch is set to 256 for 4 GPUs. Without synchronized BN [49], the effective batch size is 64. For other ResNet-50 where batch size is 1 per GPU, we 10 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 10x - 70 Hf sos RNSO+GN Beene RNSO+GN =-- RN5SO+GN+WS # === RN50+GN+WS, —— RN50+BCN+WS 60 — _ RNSO+BCN+WS w i) Train Error Rate rs S$ w 6 Val Error Rate N 3 0 2 40 60 80 0 20 40 60 80 Epoch Epoch Fig. 8: Training and validation error rates of ResNet-50 on ImageNet. The comparison is between the baselines GN [6], GN + WS, and Batch-Channel Normalization (BCN) with WS. Our method BCN and WS not only significantly improve the training speed, they also lower the error rates of the final models by a comfortable margin. Backbone | GN WS_ BCN | Top-1_ Top-5 ResNet-50 v 24.81 7.46 ResNet-50 v v 23.72 6.99 ResNet-50 v v 23.09 6.55 ResNet-101 v 22.87 6.51 ResNet-101 v v 22.10 6.07 ResNet-101 v v 21.29 5.60 TABLE 6: Top-1/5 error rates of ResNet-50, ResNet-101, and ResNeXt-50 on ImageNet. The test size is 224 × 224 with center cropping. All normalizations are trained with batch size 32 or 64 per GPU without synchronization. set the iteration size to 64, i.e., the gradients are averaged across every 64 iterations and then one step is taken. This is to ensure fair comparisons because by doing so the total numbers of parameter updates are the same even if their batch sizes are different. We train ResNet-50 with different normalization techniques for 90 epochs. For ResNet-101, we set the batch size to 128 because some of the models will use more than 12GB per GPU when setting their batch size to 256. In total, we train all ResNet-101 models for 100 epochs. Similarly, we set the iteration size for models trained with 1 image per GPU to be 32 in order to compensate for the total numbers of parameter updates. # 8.2 Image Classification on CIFAR CIFAR has two image datasets, CIFAR-10 (C10) and CIFAR- 100 (C100). Both C10 and C100 have color images of size 32 32. C10 dataset has 10 categories while C100 dataset has 100 categories. Each of C10 and C100 has 50,000 images for training and 10,000 images for testing and the categories are balanced in terms of the number of samples. In all the experiments shown here, the standard data augmentation schemes are used, i.e., mirroring and shifting, for these two datasets. We also standardizes each channel of the datasets for data pre-processing. Table 7 shows the experimental results that compare our proposed BCN with BN and GN. The results are grouped into 4 parts based on whether the training is large-batch or micro-batch, and whether the dataset is C10 and C100. On C10, our proposed BCN is better than BN on large- batch training, and is better than GN (with or without Model | Micro | BN GN BCN_ WS | Error C10 RN110 v 6.43 C10 RN110 v v 5.90 C10 RN110 v v 7.45 C10 RN110 v v v 6.82 C10 RN110 v v v 6.31 C100 =RN110 v 28.86 C100 =RN110 v ¥ | 28.36 C100 =RN110 v v 32.86 C100 =RN110 v v v | 29.49 C100 =RN110 v v ¥ | 28.28 TABLE 7: Error rates of a 110-layer ResNet [2] on CIFAR- 10/100 [11] trained with BN [3], GN [6], and our BCN and WS. The results are grouped based on dataset and large/micro- batch training. Micro-batch assumes 1 sample per batch while large-batch uses 128 samples in each batch. WS indicates whether WS is used for weights. 
Dataset Model Micro Method Error C10 RNI18 BN 5.20 C10 RNI18 SN 5.60 C10 RNI18 DN 5.02 C10 RNI18 BCN+WS 4.96 C10 RNI18 v BN 8.45 C10 RN18 v SN 7.62 C10 RNI18 v DN 7.55 C10 RN18 v BCN+WS 5.43 TABLE 8: Error rates of ResNet-18 on CIFAR-10 trained with SN [39], DN [42], and our BCN and WS. The results are grouped based on large/micro-batch training. The performances of BN, SN and DN are from [42]. Micro-batch for BN, SN and DN uses 2 images per batch, while BCN uses 1. WS) which is specifically designed for micro-batch training. Here, micro-batch training assumes the batch size is 1, and RN110 is the 110-layer ResNet [2] with basic block as the building block. The number of groups here for GN is min { Table 8 shows comparisons with more recent normal- ization methods, Switchable Normalization (SN) [39] and Dynamic Normalization (DN) [42] which were tested for a variant of ResNet for CIFAR: ResNet-18. To provide readers with direct comparisons, we also evaluate BCN on ResNet- 18 with the group number set to 32 for models that use GN. Again, all the results are organized based on whether they are trained in the micro-batch setting. Based on the results shown in Table 7 and 8, it is clear that BCN is able to outperform the baselines effortlessly in both large-batch and micro-batch training settings. . } # 8.3 Object Detection and Instance Segmentation Unlike image classification on ImageNet where we could afford large batch training when the models are not too big, object detection and segmentation on COCO [10] usually use 1 or 2 images per GPU for training due to the high resolution. Given the good performances of our method on ImageNet which are comparable to the large-batch BN training, we expect that our method is able to significantly improve the performances on COCO because of the training setting. 11 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 Model | GN WS BCN | AP’ AP’; AP’; | AP? AP?, AP? | AP™ AP'S AP, | AP” APy, AP RN5O | v 398 605 434 | 524 429 230| 361 574 387 | 536 386 169 RN50 | vo Vv 408 616 448 | 527 440 235] 365 585 389 | 535 393 16.6 RN5O Y Vv | 414 62.2 45.2 | 547 45.0 24.2] 373 594 398 | 55.0 401 17.9 RN101 | ¥ 415 620 455 |548 450 241 | 370 590 396 | 545 400 17.5 RN1I01| Vv 42.7 636 468 | 56.0 46.0 25.7] 379 604 40.7 | 56.3 406 182 RN101 Y Vv | 436 644 479 | 574 475 25.6 | 391 614 42.2 | 573 421 191 TABLE 9: Object detection and instance segmentation results on COCO val2017 [10] of Mask R-CNN [50] and FPN [51] with ResNet-50 and ResNet-101 [2] as backbone. The models are trained with different normalization methods, which are used in their backbones, bounding box heads, and mask heads. Model |GN WS BCN|AP’ AP’, AP’;;|AP? AP?, AP? RN5O | ¥ 38.0 59.1 41.2 |49.5 40.9 22.4 RN5O |) ¥ Vv 38.9 60.4 42.1 |50.4 424 23.5 RN50O v ¥Y |39.7 60.9 43.1 |51.7 43.2 24.0 RN101| ¥ 39.7 60.9 43.3 |51.9 43.3 23.1 RN101} Y Vv 41.3 62.8 45.1 |53.9 45.2 24.7 RN101 v ¥Y |41.8 63.4 45.8 |54.1 45.6 25.6 Dataset Model |GN BN WS_ BCN | mloU VOC Val RN10 v 74.90 VOC Val RN10 v v 77.20 VOC Val RN10 V 76.49 VOC Val RN10 v v 77.15 VOC Val__RN10 | v v | 78.22 TABLE 10: Object detection results on COCO using Faster R- CNN [52] and FPN with different normalization methods. TABLE 11: Comparisons of semantic segmentation performance of DeepLabV3 [53] trained with different normalizations on PASCAL VOC 2012 [13] validation set. Output stride is 16, without multi-scale or flipping when testing. We use a PyTorch-based Mask R-CNN framework1 for all the experiments. 
We take the models pre-trained on ImageNet, fine-tune them on COCO train2017 set, and test them on COCO val2017 set. To maximize the comparison fairness, we use the models we pre-trained on ImageNet instead of downloading the pre-trained models available online. We use 4 GPUs to train the models and apply the learning rate schedules for all models following the practice used in the Mask R-CNN framework our work is based on. We use 1X learning rate schedule for Faster R-CNN and 2X learning rate schedule for Mask R-CNN. For ResNet-50, we use 2 images per GPU to train the models, and for ResNet- 101, we use 1 image per GPU because the models cannot fit in 12GB GPU memory. We then adapt the learning rates and the training steps accordingly. The configurations we run use FPN [51] and a 4conv1fc bounding box head. All the training procedures strictly follow their original settings. Table 9 reports the Average Precision for bounding box (APb) and instance segmentation (APm) and Table 10 reports the Average Precision (AP) of Faster R-CNN trained with different methods. From the two tables, we can observe results similar to those on ImageNet. GN has limited per- formance improvements when it is used on more com- plicated architectures such as ResNet-101 and ResNet-101. But when we add WS to GN or use BCN, we are able to train the models much better. The improvements become more significant when the network complexity increases. Considering nowadays deep networks are becoming deeper and wider, having a normalization technique such as our WS will ease the training a lot without worrying about the memory and batch size issues. Model #Frame |GN BN WS _ BCN | Top-1_ Top-5 RN50O 8 v 42.07 73.20 RN5O 8 v v 44.26 75.51 RN50O 8 v 44.30 74.53 RN50O 8 v v 46.49 76.46 RN50O 8 | v ¥ | 45.27 75.22 TABLE 12: Comparing video recognition accuracy of TSM [55] on Something-SomethingV1 [12]. DeepLabV3 [53] as the evaluation model for its good perfor- mances and its use of the pre-trained ResNet-101 backbone. Table 11 shows our results on PASCAL VOC, which has 21 different categories with background included. We take the common practice to prepare the dataset, and the training set is augmented by the annotations provided in [54], thus has 10,582 images. We take our ResNet-101 pre- trained on ImageNet and finetune it for the task. Here, we list all the implementation details for easy reproductions of our results: the batch size is set to 16, the image crop size is 513, the learning rate follows polynomial decay with an initial rate 0.007. The model is trained for 30K iterations, and the multi-grid is (1, 1, 1) instead of (1, 2, 4). For testing, the output stride is set to 16, and we do not use multi- scale or horizontal flipping test augmentation. As shown in Table 11, by only changing the normalization methods from BN and GN to our BCN, mIoU increases by about 2%, which is a significant improvement for PASCAL VOC dataset. As we strictly follow the hyper-parameters used in the previous work, there could be even more room of improvements if we tune them to favor BCN or WS, which we do not explore in this paper and leave to future work. # 8.4 Semantic Segmentation on PASCAL VOC After evaluating BCN and WS on classification and detec- tion, we test it on dense prediction tasks. We start with semantic segmentation on PASCAL VOC [13]. We choose # 8.5 Video Recognition on Something-Something 1. 
https://github.com/facebookresearch/maskrcnn-benchmark In this subsection, we show the results of applying our method on video recognition on Something-SomethingV1 12 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 dataset [12]. Something-SomethingV1 is a video dataset which includes a large number of video clips that show humans performing pre-defined basic actions. The dataset has 86,017 clips for training and 11,522 clips for validation. We use the state-of-the-art method TSM [55] for video recognition, which uses a ResNet-50 with BN as its back- bone network. The codes are based on TRN [56] and then adapted to TSM. The reimplementation is different from the original TSM [55]: we use models pre-trained on ImageNet rather than Kinetics dataset [57] as the starting points. Then, we fine-tune the pre-trained models on Something- SomethingV1 for 45 epochs. The batch size is set to 32 for 4 GPUs, and the learning rate is initially set to 0.0125, then divided by 10 at the 26th and the 36th epochs. The batch normalization layers are not fixed during training. With all the changes, the reimplemented TSM-BN achieves top- 1/5 accuracy 44.30/74.53, higher than 43.4/73.2 originally reported in the paper. Then, we compare the performances when different normalization methods are used in training TSM. Table 12 shows the top-1/5 accuracies of TSM when trained with GN, GN+WS, BN and BN+WS. From the table we can see that WS increases the top-1 accuracy about 2% for both GN and BN. The improvements help GN to cache up the performances of BN, and boost BN to even better accuracies, which roughly match the performances of the ensemble TSM with 24 frames reported in the paper. Despite that BCN improves performances of GN, it does not surpass BN. This shows the limitation of BCN. # 9 CONCLUSION In this paper, we proposed two novel normalization meth- ods, Weight Standardization (WS) and Batch-Channel Nor- malization (BCN) to bring the success factors of Batch Nor- malization (BN) into micro-batch training, including 1) the smoothing effects on the loss landscape and 2) the ability to avoid harmful elimination singularities along the training trajectory. WS standardizes the weights in convolutional layers and BCN leverages estimated batch statistics of the activations in convolutional layers. We provided theoretical analysis to show that WS reduces the Lipschitz constants of the loss and the gradients, and thus it smooths the loss landscape. By investigating normalization methods from the perspective of elimination singularities, we found that channel-based normalization methods, such as Layer Nor- malization (LN) and Group Normalization (GN) are unable to keep far distances from elimination singularities, caused by lack of batch knowledge. We showed that WS is able to alleviate this issue and BCN can further push models away from elimination singularities by incorporating esti- mated batch statistics channel-normalized models. Exper- iments on comprehensive computer vision tasks, including image classification, object detection, instance segmentation, video recognition and semantic segmentation, demonstrate 1) WS and BCN improve micro-batch training significantly, 2) WS+GN with batch size 1 is even able to match or outperform the performances of BN with large batch sizes, and 3) replacing GN by BCN leads to further improvement. # REFERENCES [1] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. 
Yuille, “Semantic image segmentation with deep convolutional nets and fully connected crfs,” in International Conference on Learning Repre- sentations (ICLR), 2015. [2] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778. Ioffe and C. Szegedy, “Batch normalization: Accelerating S. deep network training by reducing internal covariate shift,” in Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015. [Online]. Available: http://jmlr.org/proceedings/ papers/v37/ioffe15.html [3] [4] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” European Conference on Computer Vision (ECCV), pp. 630–645, 2016. [5] G. Huang, Z. Liu, and K. Q. Weinberger, “Densely connected convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261–2269. [6] Y. Wu and K. He, “Group normalization,” in European Conference on Computer Vision (ECCV), 2018, pp. 3–19. J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv preprint arXiv:1607.06450, 2016. S. Santurkar, D. Tsipras, A. Ilyas, and A. Madry, “How does batch normalization help optimization?” in Advances in Neural Information Processing Systems (NeurIPS), 2018, pp. 2488–2498. [9] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet Large Scale Visual Recognition Challenge,” International Journal of Computer Vision (IJCV), vol. 115, no. 3, pp. 211–252, 2015. [7] [8] [10] T. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Doll´ar, and C. L. Zitnick, “Microsoft COCO: common objects in context,” in European Conference on Computer Vision (ECCV), 2014, pp. 740–755. [11] A. Krizhevsky and G. Hinton, “Learning multiple layers of fea- tures from tiny images,” Master’s thesis, Department of Computer Science, University of Toronto, 2009. [12] R. Goyal, S. E. Kahou, V. Michalski, J. Materzynska, S. Westphal, I. Fr ¨und, P. Yianilos, M. Mueller- H. Kim, V. Haenel, Freitag, F. Hoppe, C. Thurau, I. Bax, and R. Memisevic, “The ”something something” video database for learning and evaluating visual common sense,” in IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5843–5851. [Online]. Available: https://doi.org/10.1109/ICCV.2017.622 [13] M. Everingham, S. M. A. Eslami, L. J. V. Gool, C. K. I. Williams, J. M. Winn, and A. Zisserman, “The pascal visual object classes Journal of Computer challenge: A retrospective,” International Vision, vol. 111, no. 1, pp. 98–136, 2015. [Online]. Available: https://doi.org/10.1007/s11263-014-0733-5 [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classi- fication with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NeurIPS), 2012, pp. 1097– 1105. [15] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3431–3440. [Online]. Available: https://doi.org/10.1109/CVPR. 2015.7298965 [16] S. Qiao, C. Liu, W. Shen, and A. L. Yuille, “Few-shot image recognition by predicting parameters from activations,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [17] S. Qiao, W. Shen, Z. Zhang, B. Wang, and A. 
Yuille, “Deep co-training for semi-supervised image recognition,” in European Conference on Computer Vision (ECCV), 2018. [18] W. Qiu, F. Zhong, Y. Zhang, S. Qiao, Z. Xiao, T. S. Kim, and Y. Wang, “Unrealcv: Virtual worlds for computer vision,” in Proceedings of the 25th ACM international conference on Multimedia. ACM, 2017, pp. 1221–1224. [19] K. Simonyan and A. Zisserman, “Very deep convolutional net- works for large-scale image recognition,” in International Confer- ence on Learning Representations (ICLR), 2015. [20] Y. Wang, L. Xie, C. Liu, S. Qiao, Y. Zhang, W. Zhang, Q. Tian, and A. Yuille, “SORT: Second-Order Response Transform for Visual Recognition,” IEEE International Conference on Computer Vision, 2017. 13 JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 [21] Y. Wang, L. Xie, S. Qiao, Y. Zhang, W. Zhang, and A. L. Yuille, “Multi-scale spatially-asymmetric recalibration for image classifi- cation,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 509–525. [22] C. Yang, L. Xie, S. Qiao, and A. Yuille, “Knowledge distillation in generations: More tolerant teachers educate better students,” AAAI, 2018. [23] Z. Zhang, S. Qiao, C. Xie, W. Shen, B. Wang, and A. L. Yuille, “Single-shot object detection with enriched semantics,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 5813–5821. [24] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AIS- TATS), 2010, pp. 249–256. [25] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1026–1034. [26] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normaliza- tion: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022, 2016. [27] T. Salimans and D. P. Kingma, “Weight normalization: A sim- ple reparameterization to accelerate training of deep neural networks,” in Advances in Neural Information Processing Systems (NeurIPS), 2016, pp. 901–909. [28] L. Huang, X. Liu, Y. Liu, B. Lang, and D. Tao, “Centered weight normalization in accelerating training of deep neural networks,” in Proceedings of the IEEE International Conference on Computer Vision, 2017. [29] A. E. Orhan and X. Pitkow, “Skip connections eliminate singu- larities,” International Conference on Learning Representations (ICLR), 2018. [30] H. Wei, J. Zhang, F. Cousseau, T. Ozeki, and S.-i. Amari, “Dy- namics of learning near singularities in layered networks,” Neural computation, vol. 20, no. 3, pp. 813–843, 2008. [31] D. J. Im, M. Tao, and K. Branson, “An empirical analysis of deep network loss surfaces,” 2016. [32] A. S. Morcos, D. G. Barrett, N. C. Rabinowitz, and M. Botvinick, “On the importance of single directions for generalization,” arXiv preprint arXiv:1803.06959, 2018. [33] P. Luo, X. Wang, W. Shao, and Z. Peng, “Towards understanding regularization in batch normalization,” in International Conference on Learning Representations (ICLR), 2019. [34] J. Kohler, H. Daneshmand, A. Lucchi, M. Zhou, K. Neymeyr, and T. Hofmann, “Towards a theoretical understanding of batch normalization,” arXiv preprint arXiv:1805.10694, 2018. [35] G. Yang, J. Pennington, V. Rao, J. Sohl-Dickstein, and S. S. 
Schoen- holz, “A mean field theory of batch normalization,” in International Conference on Learning Representations (ICLR), 2019. [36] S. Arora, Z. Li, and K. Lyu, “Theoretical analysis of auto rate- tuning by batch normalization,” in International Conference on Learning Representations (ICLR), 2019. [37] P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization algorithms on matrix manifolds. Princeton University Press, 2009. [38] M. Cho and J. Lee, “Riemannian approach to batch normaliza- tion,” in Advances in Neural Information Processing Systems, 2017, pp. 5225–5235. [39] P. Luo, J. Ren, and Z. Peng, “Differentiable learning-to-normalize via switchable normalization,” arXiv preprint arXiv:1806.10779, 2018. J. Shlens, W. Hua, L. Li, L. Fei-Fei, A. L. Yuille, J. Huang, and K. Murphy, “Progressive neural architecture search,” in European Conference on Computer Vision (ECCV), 2018, pp. 19–35. [Online]. Available: https: //doi.org/10.1007/978-3-030-01246-5 2 [41] W. Shao, T. Meng, J. Li, R. Zhang, Y. Li, X. Wang, and P. Luo, “Ssn: Learning sparse switchable normalization via sparsestmax,” arXiv preprint arXiv:1903.03793, 2019. Jiamin, and W. Lingyun, “Differentiable dynamic normalization for learning deep representation,” in International Conference on Machine Learn- ing, 2019, pp. 4203–4211. [43] Y. Nesterov, Introductory lectures on convex optimization: A basic course. Springer Science & Business Media, 2013, vol. 87. 14 S. Bubeck et al., “Convex optimization: Algorithms and complex- ity,” Foundations and Trends®) in Machine Learning, vol. 8, no. 3-4, pp. 231-357, 2015. [45] V. Nair and G. E. Hinton, “Rectified linear units improve the 27th June [Online]. Available: http: [46] I. Gitman and B. Ginsburg, “Comparison of batch normalization and weight normalization algorithms for the large-scale image classification,” arXiv preprint arXiv:1709.08145, 2017. [47] S. Ioffe, “Batch renormalization: Towards reducing minibatch dependence in batch-normalized models,” in Advances in Neural Information Processing Systems (NeurIPS), 2017, pp. 1945–1953. [48] S. Xie, R. B. Girshick, P. Doll´ar, Z. Tu, and K. He, “Aggregated residual transformations for deep neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5987–5995. [Online]. Available: http://arxiv.org/abs/ 1611.05431 [49] C. Peng, T. Xiao, Z. Li, Y. Jiang, X. Zhang, K. Jia, G. Yu, and J. Sun, “Megdet: A large mini-batch object detector,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 6181– 6189. [50] K. He, G. Gkioxari, P. Doll´ar, and R. Girshick, “Mask r-cnn,” in IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2961–2969. [51] T.-Y. Lin, P. Doll´ar, R. Girshick, K. He, B. Hariharan, and S. Be- longie, “Feature pyramid networks for object detection,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2117–2125. [52] S. Ren, K. He, R. B. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” in Information Processing Systems (NeurIPS), Advances in Neural 2015, pp. 91–99. [Online]. Available: http://papers.nips.cc/paper/ 5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks [53] L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethink- ing atrous convolution for semantic image segmentation,” arXiv preprint arXiv:1706.05587, 2017. [54] B. Hariharan, P. Arbelaez, L. D. Bourdev, S. Maji, and J. 
Malik, “Semantic contours from inverse detectors,” in IEEE International Conference on Computer Vision (ICCV), 2011, pp. 991–998. [Online]. Available: https://doi.org/10.1109/ICCV.2011.6126343 [55] J. Lin, C. Gan, and S. Han, “Temporal shift module for efficient video understanding,” arXiv preprint arXiv:1811.08383, 2018. [56] B. Zhou, A. Andonian, A. Oliva, and A. Torralba, “Temporal relational reasoning in videos,” in European Conference on Computer Vision (ECCV), 2018, pp. 803–818. [57] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijaya- narasimhan, F. Viola, T. Green, T. Back, P. Natsev et al., “The kinet- ics human action video dataset,” arXiv preprint arXiv:1705.06950, 2017. Siyuan Qiao received B.E. in Computer Science at Shanghai Jiao Tong University in 2016. He is currently a Ph.D. student at Johns Hopkins University, where he is advised by Bloomberg Distinguished Professor Alan Yuille. From June 2017 to August 2017, he worked at Baidu IDL as an intern. He interned at Adobe Inc. from June 2018 to August 2018. He has also spent time at University of California, Los Angeles, and YITU Technology. His research interests are computer vision and deep learning. JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2015 Huiyu Wang is a Ph.D. student in Computer Science at Johns Hopkins University, advised by Bloomberg Distinguished Professor Alan Yuille. He received M.S. in Electrical Engineering at University of California, Los Angeles in 2017 and B.S. in Information Engineering at Shanghai Jiao Tong University in 2015. He also spent wonderful summers at Google, Allen Institute for Artificial Intelligence(AI2), and TuSimple. His research in- terests are computer vision and machine learn- ing. Chenxi Liu is a Ph.D. student at Johns Hop- kins University, where his advisor is Bloomberg Distinguished Professor Alan Yuille. Before that, he received M.S. at University of California, Los Angeles and B.E. at Tsinghua University. He has also spent time at Facebook, Google, Adobe, Toyota Technological Institute at Chicago, Uni- versity of Toronto, and Rice University. His re- search lies in computer vision and natural lan- guage processing. Wei Shen received his B.S. and Ph.D. degree both in Electronics and Information Engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2007 and in 2012. From April 2011 to November 2011, he worked in Microsoft Research Asia as an intern. In 2012, he joined the School of Communication and In- formation Engineering, Shanghai University and served as an assistant and associate professor until Oct 2018. He is currently an Assistant Re- search Professor at the Department of Computer Science, Johns Hopkins University. His current research interests in- clude computer vision, deep learning and biomedical image analysis. = Alan Yuille received his B.A. in mathematics from the University of Cambridge in 1976, and completed his Ph.D. in theoretical physics at Cambridge in 1980. He then held a postdoctoral position with the Physics Department, University of Texas at Austin, and the Institute for Theo- retical Physics, Santa Barbara. He then became a research scientists at the Artificial Intelligence Laboratory at MIT (1982-1986) and followed this with a faculty position in the Division of Applied Sciences at Harvard (1986-1995), rising to the position of associate professor. From 1995-2002 he worked as a senior scientist at the Smith-Kettlewell Eye Research Institute in San Francisco. 
From 2002-2016 he was a full professor in the Department of Statistics at UCLA with joint appointments in Psychology, Computer Science, and Psychiatry. In 2016 he became a Bloomberg Distinguished Professor in Cognitive Science and Computer Science at Johns Hopkins University. He has won a Marr prize, a Helmholtz prize, and is a Fellow of IEEE.
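To make the Weight Standardization operation summarized in the conclusion above concrete (standardizing the weights of each convolutional layer before they are applied), here is a minimal PyTorch-style sketch written by analogy to that description. It is not the authors' released code; the epsilon value and the pairing with GroupNorm in the usage line are assumptions.

```python
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d whose weights are standardized (zero mean, unit std per
    output channel) on every forward pass, as Weight Standardization does."""
    def forward(self, x):
        w = self.weight                                   # (out_ch, in_ch, kH, kW)
        flat = w.view(w.size(0), -1)
        mean = flat.mean(dim=1).view(-1, 1, 1, 1)
        std = flat.std(dim=1).view(-1, 1, 1, 1) + 1e-5    # eps is an assumption
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

# Usage sketch: pair WS with a micro-batch-friendly normalization such as GN.
block = nn.Sequential(WSConv2d(64, 128, 3, padding=1),
                      nn.GroupNorm(32, 128),
                      nn.ReLU(inplace=True))
```

A block in this style is the kind of GN+WS configuration that the experiments above compare against large-batch BN.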
{ "id": "1803.06959" }
1903.10391
Local Orthogonal Decomposition for Maximum Inner Product Search
Inverted file and asymmetric distance computation (IVFADC) have been successfully applied to approximate nearest neighbor search and subsequently maximum inner product search. In such a framework, vector quantization is used for coarse partitioning while product quantization is used for quantizing residuals. In the original IVFADC as well as all of its variants, after residuals are computed, the second production quantization step is completely independent of the first vector quantization step. In this work, we seek to exploit the connection between these two steps when we perform non-exhaustive search. More specifically, we decompose a residual vector locally into two orthogonal components and perform uniform quantization and multiscale quantization to each component respectively. The proposed method, called local orthogonal decomposition, combined with multiscale quantization consistently achieves higher recall than previous methods under the same bitrates. We conduct comprehensive experiments on large scale datasets as well as detailed ablation tests, demonstrating effectiveness of our method.
http://arxiv.org/pdf/1903.10391
Xiang Wu, Ruiqi Guo, Sanjiv Kumar, David Simcha
cs.LG, stat.ML
null
null
cs.LG
20190325
20190325
9 1 0 2 r a M 5 2 ] G L . s c [ 1 v 1 9 3 0 1 . 3 0 9 1 : v i X r a # Local Orthogonal Decomposition for Maximum Inner Product Search Xiang Wu # Ruiqi Guo # Sanjiv Kumar Google Research, New York David Simcha # Abstract Inverted file and asymmetric distance compu- tation (IVFADC) have been successfully ap- plied to approximate nearest neighbor search and subsequently maximum inner product search. In such a framework, vector quanti- zation is used for coarse partitioning while product quantization is used for quantizing residuals. In the original IVFADC as well as all of its variants, after residuals are com- puted, the second production quantization step is completely independent of the first vec- tor quantization step. In this work, we seek to exploit the connection between these two steps when we perform non-exhaustive search. More specifically, we decompose a residual vec- tor locally into two orthogonal components and perform uniform quantization and mul- tiscale quantization to each component re- spectively. The proposed method, called lo- cal orthogonal decomposition, combined with multiscale quantization consistently achieves higher recall than previous methods under the same bitrates. We conduct comprehen- sive experiments on large scale datasets as well as detailed ablation tests, demonstrating effectiveness of our method. sampling for speeding up softmax computation [17] and sparse update in end-to-end trainable memory systems [20]. Formally, MIPS solves the following problem. Given a database of vectors X = {xi}[N ] and a query vector q, where both xi, q ∈ Rd, we want to find x∗ q ∈ X such that x∗ Although related, MIPS is different from ¢) nearest neighbor search in that inner product (IP) is not a metric, and triangle inequality does not apply. We discuss this more in Section P] # 1.1 Background We refer to several quantization techniques in this work and we briefly introduce their notations: e Scalar Quantization (SQ): The codebook of SQ Bsq = {Yih ins] contains ngq scalars. A scalar z is quantized into ¢sg(z) = argminyez., |2 — yI- The bitrate per input is Isg = [logy nsq]. • Uniform Quantization (UQ): UQ is a special- ization of SQ, whose codebook is parameterized with only 2 scalars: BU Q = {ai + b}[nU Q]. Though the UQ codebook is restricted to this structure, its major advantage over SQ is that the codebook can be compactly represented with only 2 scalars. # Introduction Maximum inner product search (MIPS) has become a popular paradigm for solving large scale classification and retrieval tasks. For example, in recommendation systems, user queries and documents are embedded into dense vector space of the same dimensionality and MIPS is used to find the most relevant documents given a user query [9]. Similarly in extreme classifica- tion tasks [10], MIPS is used to predict the class label when a large number of classes are involved, often on the order of millions or even billions. Lately it has also been applied to training tasks such as scalable gradi- ent computation in large output spaces [23], efficient e Vector Quantization (VQ): VQ is a natural extension of scalar quantization into vector spaces. Give a codebook C = {ci }{mj with m codewords, an input vector x is quantized into: dya(x) = argmin,cg ||@ — cll2. And the code that we store for vector x is the index of the closest codeword in the VQ codebook: indexy g(x). e Product Quantization (PQ): To apply PQ, we first divide a vector into ng subspaces: x = 2) © x?) 
@---@a'"s), And within each subspace we apply an independent VQ with nw codewords, ie., bpa(x) = Pic{na| P(e). The bitrate per input for PQ is thus ng/[logy nw |. Local Orthogonal Decomposition for Maximum Inner Product Search The IVFADC [12] framework combines VQ for coarse partitioning and PQ for residual quantization: e IVF: An inverted file is generated via a VQ partitioning. Each VQ partition P; contains all database vectors x whose closest VQ center is Gi, ie, Pi) = {x € X|c; = argmin,<g ||x — cll2}- Within each partition P,, residual vectors {r, = x —cCi}cep, are further quantized with PQ and we denote the quantized approximation of the residual Ty aS PpaQ(ra)- • ADC: Asymmetric distance computation refers to an efficient table lookup algorithm that computes the approximate IP. For VQ, ADC(q, φV Q(x)) = lookup({q · c}c∈C, indexV Q(x)). And for PQ with multiple subspaces, we can decompose the dot product as: de © 4° bPQ(Tx) = ADC(G, oraQ(re)) = Vieng) ADCO (@, 1% (r!)) • Non-Exhaustive Search: When processing a query q, we use IVF to determine the top partitions according to q · ci. We select top mADC partitions to search into and then apply ADC to residuals in these top partitions. residual rz = x — c;, where c; is the center of the par- tition P; that x is assigned to. In the non-exhaustive setup, the fact that we search into partition P; re- veals strong information about the local conditional query distribution. Nonetheless, previous methods ap- proximate q-r, by first quantizing r,, independent of q distribution. And a close analysis of the IP q-r, clearly shows that its variance is distributed non-uniformly in different directions. Formally a direction is a unit norm vector ||v||z = 1, and the the projected IP on direction v is defined as: (q-v)(rz-v). Within a par- tition P, we define the projected IP variance along v as Var(v) = ral Vaep((q: v)(rT2-v))?. Note that the empirical first moment Tal Vrep(d: U)(T2 u) = 0 by construction of VQ partitions. We conduct two different analyses with the public Net- flix [7] dataset. In Figure we fix the query g and thus its top partition P* and its center c*. We pick the first direction uy; = cj/||cj|lz and the second di- rection ug orthogonal to u; randomly. We then gen- erate n, = 1000 evenly spaced directions in the sub- space spanned by {u;,u2} as: {v; = cos (2in/n,)uy + sin (2i7/n,)u2}{n,)- We finally plot of the set of points {(Var(v;) cos (2im/n,)), Var(vi) sin (2it/ny))}in,j, Le. the distance between each point and the origin repre- sents the projected IP variance on its direction. The elongated peanut shape demonstrates clearly that vari- ance of projected IPs is more concentrated on some directions than others. There are many variations of the IVFADC setup. For example, the codebooks of VQ partitioning and PQ quantization can be (jointly) learned, and asymmetric distance computation can be implemented with SIMD instructions [22, 8]. We discuss these variations in depth and the relation to this work in the Section 2. In large scale applications, as the database size in- creases, larger m and mpc are generally used in IVF. Auvolat et al. in [2] proposes to use m ~ N!/? for 1-level VQ partitions and m ~ N'/? for 2-level etc. From latest publications [5] [13], the number of parti- tions for large datasets is among 10° — 10°. Hence in the following discussion, we focus on the case where the number of partitions is much larger than the vector dimension, i.e., m > d. 
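To make the VQ/PQ/ADC machinery above concrete, here is a small NumPy sketch of product-quantization encoding and asymmetric distance computation for inner products. It is an illustrative toy with given codebooks (codebook learning is omitted), not the implementation used in the paper, and the function names are ours.

```python
import numpy as np

def pq_encode(x, codebooks):
    """Product quantization: split x into subvectors and store, per subspace,
    the index of the nearest codeword (n_B codes of log2(n_W) bits each)."""
    codes, offset = [], 0
    for cb in codebooks:                                  # cb: (n_W, d_sub)
        sub = x[offset:offset + cb.shape[1]]
        codes.append(int(np.argmin(np.linalg.norm(cb - sub, axis=1))))
        offset += cb.shape[1]
    return codes

def adc_inner_product(q, codes, codebooks):
    """ADC: precompute per-subspace tables of <q_sub, codeword>; the
    approximate inner product is then a sum of n_B table lookups."""
    total, offset = 0.0, 0
    for cb, code in zip(codebooks, codes):
        table = cb @ q[offset:offset + cb.shape[1]]       # (n_W,) lookup table
        total += table[code]
        offset += cb.shape[1]
    return total

# Toy usage: d = 8 split into n_B = 2 subspaces with n_W = 4 codewords each.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 4)) for _ in range(2)]
x, q = rng.normal(size=8), rng.normal(size=8)
approx = adc_inner_product(q, pq_encode(x, codebooks), codebooks)
print(approx, q @ x)   # approximate vs. exact inner product
```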
In Figure[1b] we fix a partition and plot 1) the residuals in the partition and 2) queries that have maximum IPs with the partition center. We project all residuals and queries with maximum IPs onto the 2-dimensional subspace spanned by the partition center direction and the first principal direction of the residuals. Residuals in blue are scattered uniformly in this subspace, but queries in black are much more concentrated along the direction of partition center c/||cll2- # 1.3 Contributions This paper makes following main contributions: The scale of modern MIPS systems is often limited by the cost of storing the quantized vectors in main memory. Therefore, we focus on methods that operate under low bitrate and can still achieve high recall. This is reflected in our experiments in Section 5. # 1.2 Empirical Study of Inner Product Variance • Introduces a novel quantization scheme that di- rectly takes advantage of the non-uniform distri- bution of the variance of projected IPs. • Identifies the optimal direction for projection within each partition and proposes an effective approximation both theoretically and empirically. The overall quality of IP approximation is crucially de- pendent on the joint distribution of the query q and the • Designs complete indexing and search algorithms that achieve higher recall than existing techniques on widely tested public datasets. Xiang Wu, Ruiqi Guo, Sanjiv Kumar, David Simcha (a) (b) Figure 1: Non-uniform distribution of projected IP variance: (a) projected IP variance vs. angle in 2- dimensional subspace spanned by {u1, u2}. The pro- jected IP variance is represented by the distance from the origin to the corresponding blue point at the angle. Variances are linearly scaled so that they fit the aspect ratio. (b) Scatter plot of residuals and queries that have maximum IPs with the partition center. Rotations and codebooks are often applied in IVFADC variations, but there are significant costs associated with them. In the most extreme cases, Locally Opti- mized Product Quantization (LOPQ) [14] learns a sep- arate rotation matrix and codebook for each partition. This leads to an extra memory cost of O(md(d + nW )) and O(mADCd(d + nW )) more multiplications for each query at search time. where mADC is the number of VQ partitions we search. When m and mADC increase, the overhead become quickly noticeable and may become even more expensive than ADC itself. For example, when d = 200, nB = 50, nW = 16, performing the rotation once is as expensive as performing 6,400 ADC computations under an optimized implementation. In practice, it is often desirable to avoid per partition rotation or codebooks, but learn global codebooks and rotation. # 3 Methods # 2 Related Work The MIPS problem is closely related to the £2 nearest neighbor search problem as there are multiple ways to transform MIPS into equivalent instances of £2 nearest neighbor search. For example, Shrivastava and Li 21] roposed augmenting the original vector with a few dimensions. Neyshabur and Srebro proposed another simpler transformation to augment just one dimension o original vector: @ = [#/U;\/1 = (||zl]2/U)?], ¢ = q/|\q\2; 0]. Empirically, the augmentation strategies do not perform strongly against strategies that work in the unaugmented space. Existing approaches based on the IVFADC frame- work mainly focus on minimizing the squared loss when quantizing residuals. Formally, they aim at finding an optimal quantization parameter θ∗ = argminθ 2. 
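The Neyshabur-and-Srebro style transformation mentioned in the related-work discussion above (whose formula is partially garbled in this copy) reduces MIPS to cosine/nearest-neighbor search by appending a single extra dimension. The following sketch is written from the standard form of that reduction, not from this paper's code:

```python
import numpy as np

def augment_database(X):
    """x_tilde = [x / U ; sqrt(1 - (||x||_2 / U)^2)] with U = max_i ||x_i||_2,
    so every augmented database vector has unit norm."""
    U = np.linalg.norm(X, axis=1).max()
    Xs = X / U
    extra = np.sqrt(np.clip(1.0 - np.linalg.norm(Xs, axis=1) ** 2, 0.0, None))
    return np.hstack([Xs, extra[:, None]])

def augment_query(q):
    """q_tilde = [q / ||q||_2 ; 0]; q_tilde . x_tilde is then proportional to
    q . x, so the MIPS winner is preserved."""
    return np.append(q / np.linalg.norm(q), 0.0)

rng = np.random.default_rng(1)
X, q = rng.normal(size=(1000, 16)), rng.normal(size=16)
assert np.argmax(X @ q) == np.argmax(augment_database(X) @ augment_query(q))
```

As noted above, such augmentation strategies do not perform strongly against methods that work in the unaugmented space, which is why the rest of the paper stays with the IVFADC-style formulation.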
As we have dis- cussed in previous sections, our “signal”, i.e., residual IPs q · rx exhibit strong non-uniformity locally within a partition. By directly taking advantage of this skewed distribution, our proposed method achieves higher re- call at the same bitrate when compared with others that are agnostic of this phenomenon. # 3.1 Local Orthogonal Decomposition Learning Rotation and Codebooks. Learning based variations of IVFADC framework have been proposed. One of the focuses is learning a rotation matrix which is applied before vectors are quantized. Such rotation reduces intra-subspace statistical depen- dence as analyzed in OPQ [11, 18] and its variant [14] and thus lead to smaller quantization error. An- other focus is learning codebooks that are additive such as [3, 4, 15, 25, 26, 16]. In these works, codewords are learned in the full vector space instead of a sub- space, and thus are more expressive. Empirically, such additive codebooks perform better than OPQ at low bitrates but the gain diminishes at higher bitrates. ADC Implementation. ADC transforms inner prod- uct computations into a lookup table based operation, which can be implemented in different ways. The origi- nal ADC paper [12] used L1 cache based lookup table. Johnson et al. [13] used an GPU implementation for ADC lookup. A SIMD based approach was also de- veloped by [1, 22]. Again, this is orthogonal to the local decomposition idea of this work, as any ADC implementation can be used in this work. Given a unit norm vector or direction define: ll2 = 1, we Hi! =v", Ht =1- Hu! Hence H! is the projection matrix onto direction v and H+ is projection matrix onto its complement subspace. We can thus decompose a residual as: r, = Hr, + Htr,. Similar to the original IVFADC framework, we first decompose the IP between a query g and a database vector x into q-« =q-c+q-rz. With our new insight of non-uniformity of the distribution of our signal, we propose to further decompose the residual IP with respect to a learned direction v as: q-te = 4° (Hilrz) + q- (Here) We name H|\ rz the projected component of residual Ty, an Htr, the orthogonal component. Note that the projected component resides in a 1-dimensional subspace spanned by v and can also be very efficiently quantized with existing scalar quantization techniques. Local Orthogonal Decomposition for Maximum Inner Product Search # 3.2 Multiscale Quantization of Orthogonal Component need to quantize the difference between the two as zv x = (rx − φM SQ(ov x)) · v. We propose to learn a uniform quantization: We define 0”? = H+r, and 0 = 0” /||0’||2 to simplify no- tation. Multiscale quantization proposed in [22] learns a separate scale and rotation matrix that are multi- plied to the product quantized residual as \,RdpQ(o¥), where R is a learned rotation matrix and ¢pg(-) is the production quantization learned from the normalized orthogonal components. Differently from the original! MSQ, our scale is chosen to preserve the 2 norm o the orthogonal component o%, not the whole residual Tet φU Q(zv x; aP , bP ) = aP round((zv x − bP )/aP ) + bP Whereby: • zmax and zmin are the maximum and minimum of x|x ∈ P }. And lU Q is the the finite input set {zv number of bits for uniform quantization; • aP = (zmax −zmin)/(2lU Q −1) scales the input into the range [−2lU Q−1, +(2lU Q−1 − 1)]. «ilo = llozlle |x Hy ora(o} • bP = (zmax + zmin + aP )/2 centers the input; The rotation R is omitted as it doesn’t affect the C2 norm. 
Another scalar quantization (SQ) is learne on the scales to further reduce the storage cost an speedup ADC. The final MSQ quantized residual is then: φM SQ(ov x) = φSQ(λx)RφP Q( ˆov x) Where φSQ is the non-uniform scalar quantization for partition P learned via a Lloyd algorithm. The number of codewords in the SQ codebook is fixed at 16 for our experiments, hence its storage cost is negligible. # 3.3 Adjustment to Projected Component In general, unlike 0%, dysq(o) is not orthogonal to v anymore. Recall that we want to approximate q-o% in the orthogonal subspace as q- (Hoy, sq(o%)). Now a subtle performance issue arises. A critical improvement to ADC introduced since the original OPQ is to move the rotation multiplication to the query side so that it is done only once globally. Formally with MSQ, we can perform following: q- éusg("«) = =q: ($sQ(Ar)ROPQ(ok)) = $sqQ(Ax)((R9) - drQ(0¥))- With LOD, the extra projection H; in front of émsqQ(o%) prevents us from moving R to the q side as the two matrices H/ and R are not commutative in general. However we have q: 0? © ¢: Hi émse(o’%) = (a: éuse(o¥)) — (¢- (Hl éuse(o2))). We can per- form fast ADC on the term q- @sqQ(o%) as proposed in the orginal MSQ [22] and only multiply matrix RT to q once. The extra term q - (Hl éxrsol (o8)) = (q-v)(¢mse@(o%) - v) can be removed by subtracting it from the projected component before quantization. • round(·) is the function that rounds a floating point number to its nearest integer. round((zv x −bP )/aP ) is the integer code in lU Q bits that we store for each residual. In practice, we may relax zmax to the 99%th quantile of the input to guard against outliers, and similarly 1%th quantile for zmin. We clip rounded outputs to within [−2lU Q−1, +(2lU Q−1 − 1)]. The main advantage of UQ over other scalar quantiza- tion techniques is that its codebook size is independent of the number of bits used for its codes. This is criti- cal as we use lU Q = 256 for our experiments. It also enables fast computation of approximate IP between query and projected component as: (q · v)φU Q(zv x) = ((q · v)aP )round((zv Putting both quantization schemes together, we can ap- proximate the residual IP by replacing each component with its quantized result: qr =q: (Hrs) +g 08% = (q- ovale 22) +q- émsa(o?) And for each term, we can perform efficient ADC. # 3.5 Preserving /. Norms We design the LOD+MSQ framework with the objec- tive of preserving 2 norms of residuals. Note that: *)+¢-omsQ(ok) = ) + Ht éusa(o’)) (q- v)ouel@ ((dve(z zy)u+ H}éxrso(od # q: ((dve(z In the projected subspace, we have: # 3.4 Uniform Quantization of Projected Component Following the procedure above, after projection onto direction v, we have the original residual contribut- ing rx · v and its quantized orthogonal component contributing an extra term φM SQ(ov x) · v. We thus (duQ(z2)u + Hiéuso(o! ztu + Hlléuse(o2) In the orthogonal subspace, we have # o) =re-¥ | oursa( onl = |losqQe) He ora (oz) cH pq (0%) lz = llozlle ll2 Xiang Wu, Ruiqi Guo, Sanjiv Kumar, David Simcha Hence we preserve the ¢2 norm of r,, up to small scalar quantization errors in ¢yeQ(z?) and dgq(Az). Empiri- cally, preserving ¢2 norms improves recall when there is considerable variation in residual norms [22]. # Indexing and Search Algorithms We list all parameters of the overall indexing and search algorithms besides their inputs in Table 1. 
#partitions in the inverted file m #codebooks used for PQ encoding nB #codewords used in each PQ codebook nW #bits for UQ encoding lU Q lSQ #bits for SQ encoding mADC #partitions to apply ADC to Search(q, k) begin input : query g, number k and outputs of Index(X) output: Approximate top k maximum inner products Compute {pi — q°c'}{m] C* & Top({(pi, i) }imj, Mave) qr — Rg for (p;,i) € C* do Compute {ripy < ADC(qr, dba (0%) }rer, Compute {ripy — $$ Q(Az) x rips }eer; Compute {rip!! — (q- vi) x db (%) }xer; Compute {rip, < rip? + rip! cep, // ix: index of x top; < Top({(rip,, tz) }xePr,,k) top; — {(tip, + Pi, tx) }(xip, ie) etop; return Top(∪(pi,i)∈C∗ topi, k) Algorithm 2: Search top k inner products in an in- dexed database with query q. Table 1: Parameters for the overall indexing and search algorithms. # 4 Analysis # Index(X) begin Index(X) begin input : Database X and function ProjDir output: Partitions {P'};,,.; with centers {c'}[m), projection directions {v'}{), uniform quantization {¢jQ(-)}{mj and multiscale quantization {$47 5Q(-)}[m| {P'}imj: {C bi) < IVECX,m) for i+ 1 to mdo v' + ProjDir(P’,c’) Compute {02 — Hyire}eepi | Compute {6 © 0°/llolla}sers R, $paq(-) — OPQ({o% }rex, nB, nw) for i+ 1 to mdo Compute {Ax + llo!llo/l|iibro(d8)lla}ecrs sq(-) + ScalarQuantize({Ax},cpi,lsa) Compute {$irsq(0z) — b8Q (Ac) RopaQ (0%) fer: Compute {22 — (rz — ¢iusq(02)) *U }aeri | ¢ue(-) — UniformQuantize({z2},<pi,lua) return L {Pb ims fe’ }im), {0° im) {Sve()biml, {Oarsa(-) bm) Algorithm 1: Index database X with local orthogo- nal decomposition and multiscale quantization. The projection direction is parameterized with the function ProjDir. We leave the projection direction function as an input to our indexing algorithm in the previous section. In this section, we formally investigate the optimal projection direction given partition P and its center c conditional on the fact that c = argmaxci∈C q · ci. We start by analyzing the error introduced by our quantization scheme to the approximate residual IP. x − φU Q(zv Let eU Q(zv x) = (H ⊥ v q) · (ov x)). Consider the quantization error on the residual IP within partition P as: 2) — 9: dusQle Tal Deer(a “Te — (q-v)buQ(z: TP] vep(eva(2z) + emsa(or))? = TP] rep (eua(2z)? + 2eve(zz)emse(or) ”))? +emse(o )) 1 |P | First, UQ achieves an error bound of O(2−lU Q ) in its 1-dimensional subspace, which is much lower than the error bound that MSQ can achieve in the orthogonal (d − 1)-dimensional subspace. UQ and MSQ are two completely separate quantization steps, and the cross product of their quantization errors are expected to be small. Therefore we shall focus on minimizing the last quantization error term averaging over q and rx: We want to highlight that in memory bandwidth lim- ited large scale MIPS, the search time is well approxi- mated by the number of bits read: O(N 422 (lpg + np {logs nw |)). In our experiments, we fix m4pc/m = 1/10. The bitrate of the original dataset is 32 bits per dimension and we use either 1/2 or 1 bit per dimension in our quantization schemes. Hence we achieve over 2-orders of magnitudes of speedup. Tea Vaep( (Hy q): emsa(oz))? = E,(He9)" (tH Veer emsqorjemsa(or)” Hig # Eq(H ⊥ # Veer If we define Σv = 1 x)T ) and x∈P (eM SQ(ov |P | ov q = H ⊥ v q, we can then rewrite the optimization as: minv Eq(ov q . Notice that the matrix Σv in the middle is also dependent on the direction v, which makes this optimization problem very challenging. 
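Algorithms 1 and 2 above did not survive extraction well, so the following simplified Python sketch restates the same indexing and search flow: VQ partitioning, local orthogonal decomposition of each residual along the partition-center direction, uniform quantization of the projected scalar, and ADC-style scoring at query time. The orthogonal component is kept exact here as a stand-in for the paper's PQ/MSQ step, and all function names are ours.

```python
import numpy as np

def build_index(X, m, l_uq=8, seed=0):
    """Index(X): VQ partitioning (one assignment pass for brevity), then per
    partition decompose residuals along v = c / ||c|| and uniformly quantize
    the projected scalar z = r . v.  Orthogonal parts are stored exactly
    here in place of the PQ/MSQ codes used in the paper."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    index = []
    for i in range(m):
        ids = np.where(assign == i)[0]
        if len(ids) == 0:
            continue
        c = X[ids].mean(axis=0)
        v = c / np.linalg.norm(c)
        r = X[ids] - c
        z = r @ v
        a = (z.max() - z.min()) / (2 ** l_uq - 1) or 1.0   # UQ scale
        b = (z.max() + z.min() + a) / 2.0                  # UQ offset
        codes = np.round((z - b) / a).astype(int)
        index.append(dict(ids=ids, c=c, v=v, a=a, b=b,
                          codes=codes, ortho=r - np.outer(z, v)))
    return index

def search(q, index, m_adc, k):
    """Search(q, k): visit the m_adc partitions with largest q . c and score
    q . x  ~  q . c  +  (q . v) * UQ(z)  +  q . (orthogonal component)."""
    best = sorted(index, key=lambda p: -(q @ p["c"]))[:m_adc]
    hits = []
    for p in best:
        z_hat = p["a"] * p["codes"] + p["b"]
        ips = (q @ p["c"]) + (q @ p["v"]) * z_hat + p["ortho"] @ q
        hits.extend(zip(ips.tolist(), p["ids"].tolist()))
    return sorted(hits, reverse=True)[:k]

rng = np.random.default_rng(1)
X, q = rng.normal(size=(2000, 32)), rng.normal(size=32)
print(search(q, build_index(X, m=50), m_adc=5, k=3))
```

At realistic scale the cost is dominated by reading the compressed codes, which is where the choice of mADC/m = 1/10 and the low per-vector bitrates used in the experiments come from.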
However the learned rotation R in MSQ serves two purposes: 1) it reduces correlation between dimensions x)2) Local Orthogonal Decomposition for Maximum Inner Product Search and 2) it evens variance allocation across PQ subspaces [11]. Hence it is reasonable to expect the errors to be close to isotropic across dimensions assuming the subspace spanned by orthogonal components does not degenerate into a low dimensional subspace. This is to assume: Assumption 1. The empirical covariance matrix of orthogonal component errors {eM SQ(ov x)}x∈P is isotropic. This assumption allows us to approximate L, AI with some constant . Now we arrive at min, E,||Htq|3 = min, Egg? — vv? )q = min, E,qg7q—E,(q-v)?. Let’s introduce a simplfication of the conditional expectation as E,(-|c) = E,(-|c = argmax,.cc q°¢i). We need to solve the maximization problem of: max, E,(q-v)? = max, v E,(qq7 |c)v. The matrix in the middle is the conditional covariance ma- trix of all queries that have maximum IPs with center c. If we can estimate this matrix E,(qq’ |c) accurately, we can simply take its first principal direction as our optimal direction v. c∗ = argmaxc∈C q · c is at least L1(m, δ): Pr{cos(q,c*) > Li(m,d)} > 1-6 Ly(m, 6) = V1 — (mvs 8) a ™m √ In practical settings, we have log(m/(mWVdlog 1/5)) < (d+1)/2. Let a =2(1—e7+) > 1, we can weaken it to a more intuitive form: Ly(m,6) > ie max ( log(m/v/d) = (t log 1/6) ; 0) Lemma 1. If we uniformly sample 2 vectors x and y from the unit sphere Sd−1, we have Ex,y(x · y)2 = 1/d A few comments on these 2 results: • From Theorem 1, we can see that the dependency of the maximum residual IP on the confidence parameter δ is rather weak at log log(1/δ). In real applications, for any partition center, we can only sample a very limited number of queries g such that c = argmax,,¢¢ q° ci. This approach thus can’t scale to large m in the range of 10° — 10°. This makes the estimation of E,(qq7|c) inherently of high variance. To overcome this noisy estimation issue, we provide both theoretical and empirical support of approximating the optimal direction with the partition center direction w=c/llela. e If we choose 6 = 1/2, we can thus show that for at least half of the queries, the largest IP q- c* is at least O(\/log(m/V/d)) larger than the cosine similarity between two randomly sampled vectors. Next, we allow centers to have varying norms: Theorem 2. Suppose the directions of m centers C = {ci}[m] are uniformly sampled from the unit sphere Sd−1, and their sorted norms are h1 ≥ h2 ≥ · · · ≥ hm. With probability at least 1 − δ, the maximum cosine similarity between the query and c∗ = argmaxc∈C q · c is at least L2(m, δ, {hi}[m]): # 4.1 Alignment of Query and Partition Center We first estimate the magnitude of the projected query In component along the partition center direction. the original setup, we have a set of fixed centers and a random query. To facilitate our analysis, we can fix the query and instead rotate centers with respect to the query. We start by studying the case where both centers and query are normalized and later lift the constraint. We consider the scenario where centers after rotation follow a uniform distribution over the unit sphere Sd−1. This provides a more conservative bound than that of real datasets, because real queries tend to be tightly clustered around the “topics” in the database due to formulation of the training objectives and regularizers [27]. Theorem 1. 
Given a normalized query q ∈ Rd and m random centers C = {ci}[m] uniformly sampled from the unit sphere Sd−1, with probability at least 1 − δ, the maximum cosine similarity between the query and L2(m, δ, {hi}[m]) = max i∈[m] hi h1 L1(i, δ) Intuitively, as i increases, the first factor fe decreases, but the second one L;(i,6) increases, thus the maxi- mum is achieved somewhere in the middle. Specifically, we can see that Lo(m, 6, {hi}{m)) = Atm (Fm/2] , 0). This bound is robust to any small outlier near h,,, but it can be influenced by the largest norm hy. However, we remark that when the largest center norm hy is significantly larger than the median hj,/2), the MIPS problem itself becomes much easier. As the rela- ive magnitude of hy increases, its partition becomes more likely to contain the maximum IP than the rest. And furthermore, the gap between the maximum IP in hy’s partition and the maximum IP from other par- itions becomes wider. Both the concentration of the maximum IP in one partition and the large gap con- ribute to better recall. Hence LOD helps adversarial Xiang Wu, Ruiqi Guo, Sanjiv Kumar, David Simcha instances more than easy instances, which explains the consistent recall improvement in our experiments. Ex- act quantification of this behavior is one of our future research directions. # 5 Experiments # 5.1 Datasets We conclude this section with the observation that real queries tend to be more clustered along partition cen- ters than what is suggested by Theorem 1, i.e., the ob- served cos2(q, c∗) is much higher than O(log(m/ d)/d). We hypothesize that this is due to the training pro- cess that aligns query vectors with the natural topical structure in the database vector space. We apply our method along with other state-of-the- art MIPS techniques on public datasets Netflix [7] and Glove [19]. The Netflix dataset is generated with a regularized matrix factorization model simi- lar to [24]. The Glove dataset is downloaded from https://nlp.stanford.edu/projects/glove/. For word embeddings, the cosine similarity between these embeddings reflects their semantic relatedness. Hence we ¢j-normalize the Glove dataset, and then cosine similarity is equal to inner product. # 4.2 Asymptotically Optimal Projection Direction We list details of these datasets in Table 2. Let ou u q be the orthogonal query component in the complement subspace of the partition center. Under the same assumption as Theorem 2, we are ready to state our main result: Theorem 3. Let γ be the ratio between the the largest and smallest non-zero eigenvalues of the matrix Eq(ou q )T |c). The optimal direction is equal to the partition center direction with probability at least 1 − δ if: Dataset #Vectors #Dims Netflix Glove m 20 1000 Table 2: Datasets used for MIPS experiments. γ < (d − 2)L2 2(m, δ, {hi}[m]) # 5.2 Recalls With some positive constant η2, we can rewrite the above into a more intuitive form: We apply following algorithms to both of our datasets: √ γ < η2(log(m/ d) − log(η1 log 1/δ)) • MIPS-PQ: implements the PQ [12] quantization scheme proposed in the original IVFADC frame- work. This theorem states that when the number of partitions m increases above a threshold dependent on the ratio y and 6, the optimal direction is equal to the partition center direction with probability at least 1 — 6. Hence asymptotically, the optimal direction approaches the partition center direction for our LOD+MSQ frame- work as m — oo and m > d. 
• MIPS-OPQ: implements the OPQ [11] quantiza- tion scheme that learns a global rotation matrix. e L2-OPQ: implements the OPQ quantization scheme and also the MIPS to ¢2-NNS conversion proposed in [6]. We do not transform the Glove dataset since (2-NNS retrieves the same set of database vectors as MIPS after normalization. # 4.3 Approximation with Benefits Approximating the optimal direction with the partition center direction also brings practical benefits: implements our proposed method with both LOD and MSQ. The projec- tion direction is set to the partition center as an effective approximation to the optimal direction. • No extra storage cost, as we don’t have to store a separate vector per partition. We set parameters to following values for all our recall experiments: e Free projection at search time, as we have computed all IPs between the query and centers for partition selection. We just need to perform an O(1) operation to divide the IP by the center norm to get the projected component gq - (c/||c||2). • IVF: we keep average partition size at around 1,000 and we always search 10% of the partitions with ADC. This is in-line with other practices reported in benchmarks and industrial applications [5, 13]. Local Orthogonal Decomposition for Maximum Inner Product Search • Product Quantization: we use either nB ∈ {25, 50} codebooks, each of which contains nW = 16 codewords for PQ and OPQ. For LOD+MSQ, we set nB ∈ {23, 48} when lU Q = 8 and nB ∈ {24, 49} when lU Q = 4 to keep the number of bits spent on each database vector the same. The number of codewords is fixed at 16 for efficient SIMD based implementation of in-register table look-up [22, 8]. (a) (b) • UQ: we use lU Q = 8 bits for uniform quantiza- tion for Netflix and lU Q = 4 bits for Glove, which results in 256 and 16 levels in the codebook respec- tively. Figure 4: Ablation study of both LOD and MSQ on Netflix and Glove. All plots are generated with 100 bit per database vector. # 5.3 Ablation • MSQ: we use lSQ = 4 bits and accordingly 16 levels for scalar quantization of scales in MSQ for all experiments. We apply the same technique in [22] to avoid explicitly storing the codes and hence it incurs no extra cost in storage. To systematically investigate the contribution of LOD and MSQ in isolation, we perform ablation study with both datasets. The combination of LOD+MSQ consistently outper- forms other existing techniques under the same bitrate. Its relative improvement is higher on Netflix because the residual norms of the Netflix dataset exhibit larger variance than those of the Glove dataset. Netflix, #Codebooks=50, Bitrate=200 # Netflix, #Codebooks=25, Bitrate=100 (a) (b) Figure 2: Experiments on the Netflix dataset: (a) recall vs k for 100-bit encoding of database vectors and (b) recall vs k for 200-bit encoding. • MIPS-OPQ, MIPS-LOD-MSQ: are repeated from the experiments reported from the previous section. • MIPS-MSQ: implements the MSQ quantization scheme directly on the residuals rx without LOD. • MIPS-LOD-OPQ: first applies LOD and then implements the OPQ quantization scheme on the orthogonal component ov x. The combination of LOD+MSQ consistently outper- forms either one in isolation. Interestingly, LOD per- forms much better than MSQ alone on Netflix and worse on Glove. This is due to the fact that in the normalized Glove dataset, orthogonal components of residuals have larger norms than projected components. 
With LOD only, OPQ is applied to the orthogonal com- onents and it fails to preserve ¢) norms at a low bitrate. And the decrease in recall is fairly discernable from the Figure (a) (b) Figure 3: Experiments on the Glove dataset: a) recall vs k for 100-bit encoding of database vector and (b) recall vs k for 200-bit encoding. # 6 Conclusion In this work, we propose a novel quantization scheme that decomposes a residual into two orthogonal compo- nents with respect to a learned projection direction. We then apply UQ to the projected component and MSQ to the orthogonal component respectively. We provide theoretical and empirical support of approximating the optimal projection direction with the partition center direction, which does not require estimating the noisy conditional covariance matrix. The combination of lo- cal orthogonal decomposition and MSQ consistently outperforms other quantization techniques on widely tested public datasets. Xiang Wu, Ruiqi Guo, Sanjiv Kumar, David Simcha # References [1] F. André, A.-M. Kermarrec, and N. Le Scouarnec. Cache locality is not enough: high-performance nearest neighbor search with product quantization fast scan. Proceedings of the VLDB Endowment, 9(4):288–299, 2015. [2] A. Auvolat, S. Chandar, P. Vincent, H. Larochelle, and Y. Bengio. Clustering is efficient for approx- imate maximum inner product search. CoRR, abs/1507.05910, 2015. [11] T. Ge, K. He, Q. Ke, and J. Sun. Optimized prod- uct quantization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(4):744–755, April 2014. [12] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE transactions on pattern analysis and machine in- telligence, 33(1):117–128, 2011. [13] J. Johnson, M. Douze, and H. Jégou. Billion- scale similarity search with gpus. arXiv preprint arXiv:1702.08734, 2017. [3] A. Babenko and V. Lempitsky. Additive quantiza- tion for extreme vector compression. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 931–938. IEEE, 2014. [4] A. Babenko and V. Lempitsky. Tree quantiza- tion for large-scale similarity search and classifica- tion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4240–4248, 2015. [14] Y. Kalantidis and Y. Avrithis. Locally opti- mized product quantization for approximate near- est neighbor search. In Computer Vision and Pat- tern Recognition (CVPR), 2014 IEEE Conference on, pages 2329–2336. IEEE, 2014. [15] J. Martinez, J. Clement, H. H. Hoos, and J. J. Lit- tle. Revisiting additive quantization. In European Conference on Computer Vision, pages 137–153. Springer, 2016. [5] A. Babenko and V. Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2055–2063, June 2016. [6] Y. Bachrach, Y. Finkelstein, R. Gilad-Bachrach, L. Katzir, N. Koenigstein, N. Nice, and U. Paquet. Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 257–264, 2014. [7] J. Bennett, S. Lanning, and N. Netflix. The netflix prize. In In KDD Cup and Workshop in conjunc- tion with KDD, 2007. [8] D. W. Blalock and J. V. Guttag. Bolt: Accel- erated data mining with fast vector compression. In Proceedings of the 23rd ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining, pages 727–735, 2017. [9] P. Cremonesi, Y. Koren, and R. Turrin. 
Perfor- mance of recommender algorithms on top-n rec- ommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems, pages 39–46, 2010. [10] T. Dean, M. Ruzon, M. Segal, J. Shlens, S. Vi- jayanarasimhan, and J. Yagnik. Fast, accurate detection of 100,000 object classes on a single ma- chine: Technical supplement. In Proceedings of IEEE Conference on Computer Vision and Pat- tern Recognition, 2013. [16] J. Martinez, H. H. Hoos, and J. J. Little. Stacked quantizers for compositional vector compression. CoRR, abs/1411.2173, 2014. [17] S. Mussmann and S. Ermon. Learning and in- ference via maximum inner product search. In Proceedings of The 33rd International Conference on Machine Learning, volume 48, pages 2587–2596, 2016. [18] M. Norouzi and D. J. Fleet. Cartesian k-means. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3017–3024, 2013. [19] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014. [20] A. Pritzel, B. Uria, S. Srinivasan, A. P. Badia, O. Vinyals, D. Hassabis, D. Wierstra, and C. Blun- dell. Neural episodic control. In Proceedings of the 34th International Conference on Machine Learn- ing, volume 70, pages 2827–2836, 2017. [21] A. Shrivastava and P. Li. Asymmetric lsh (alsh) for sublinear time maximum inner product search (mips). In Advances in Neural Information Pro- cessing Systems, pages 2321–2329, 2014. [22] X. Wu, R. Guo, A. T. Suresh, S. Kumar, D. N. Holtmann-Rice, D. Simcha, and F. Yu. Multiscale quantization for fast similarity search. In Ad- vances in Neural Information Processing Systems 30, pages 5745–5755. 2017. Local Orthogonal Decomposition for Maximum Inner Product Search [23] I. E.-H. Yen, S. Kale, F. Yu, D. Holtmann-Rice, S. Kumar, and P. Ravikumar. Loss decomposition for fast learning in large output spaces. In Pro- ceedings of the 35th International Conference on Machine Learning, volume 80, pages 5640–5649, 2018. [24] H.-F. Yu, C.-J. Hsieh, Q. Lei, and I. S. Dhillon. A greedy approach for budgeted maximum inner product search. In Advances in Neural Information Processing Systems 30, pages 5453–5462. 2017. [25] T. Zhang, C. Du, and J. Wang. Composite quan- tization for approximate nearest neighbor search. In ICML, number 2, pages 838–846, 2014. [26] T. Zhang, G.-J. Qi, J. Tang, and J. Wang. Sparse composite quantization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4548–4556, 2015. [27] X. Zhang, F. X. Yu, S. Kumar, and S. Chang. Learning spread-out local feature descriptors. In IEEE International Conference on Computer Vi- sion, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 4605–4613, 2017. Xiang Wu, Ruiqi Guo, Sanjiv Kumar, David Simcha # 7 Appendix # 7.1 Proof of Theorem 1 For 0 ≤ x < 1, we note that 1 − e−x is concave, and it is entirely above the line (1 − e−1)x, i.e., 1 − e−x > (1 − e−1)x. Plug this into RHS, we arrive at: Without loss the generality, we can assume the query is fixed at g = [1,0,--- ,0]. Thus the inner product between the query and a center becomes the value of the first dimension of the center, whose distribution is F(y) = aJ,a —a?)“4-D/2dx, where Zq is the normalization constant. Its value is given by Va/Va_1, where Vy is the volume of the d-dimensional unit hy- perball: 14/?/D(d/2 +1). Ideally, we want to find the maximum h that still satisfies (F (h))m ≤ δ. For any δ > 2−m, it is clear that h > 0. 
And we have: he io max (08H/ YE = lou 1081/8) Where α = 2(1 − e−1). # 7.2 Proof of Theorem 2 Fix an index i, we can divide the centers into two groups with norms {hj}1≤j≤i and {hj}(i+1)≤j≤m. Let j∗ = argmax1≤j≤m q · cj, i.e, j∗ is the index of c∗, we have two cases: 1 ft 2) \m a-7/ (1 =a?) YP dr)" <6 √ Let z = x2 and replace x by z, we have: : 1 Ly 2jlenr 2Za Sn v2 dz <5i/m √ Note that if we replace can replace it with this stronger guarantee: 1 ,-- (1 2)?-D Pde < 5m 2Za Jn2 ~ Which becomes: 1 Zd(d + 1) (1 − h2)(d+1)/2 ≥ 1 − δ1/m • j∗ ≥ (i + 1), i.e., the maximum inner product center is in the second group. We know that its inner product at least the largest among i + 1 centers all with norms ≥ hj∗ . This implies that cos(q, c∗) ≥ L1(i + 1, δ) with probability at least 1 − δ. e j* <i. Now the maximum inner product is in the first group. We generate a new set of centers with dividing every center in the first group by the small- est norm hj, ie., {c1/hi,co/hi,--+ ,ci/hi}. Note hat after division, all new centers still have norms > 1. The maximum inner product between the query and this new set of centers is at least as arge as the maximum when all centers have unit norms. This implies that g-c*/h; > Li(i,6) with probability at least 1 — 6. Since |lc*||2 < hi, we have cos(q,c*) > ELili, 6) with probability at east 1 — 0. Note that: 1 − δ1/m = 1 − exp(−(log 1/δ)/m) < (log 1/δ)/m. So we can replace RHS with this stronger guarantee: (1 − h2)(d+1)/2 ≥ Zd(d + 1)(log 1/δ)/m √ And Zd ≤ η/ d for some positive constant η with suf- ficient large d, based on the two-sided Sterling formula. Plug this stronger guarantee into RHS: Combining these 2, we can conclude that cos(q, c∗) ≥ hi L1(i, δ) for any i with probability at least 1 − δ. h1 L1(i, δ). Hence the overall lower bound is maxi∈[m] # 7.3 Proof of Theorem 3 n(d + 1) log 1/8.) 2/(a+1) Jd m 1—h? > ( And with sufficiently large d, we can increase η slightly to η1 so that: Without loss of generality, we can fix the norm of query at 1. We can decompose the optimal direction v = au+ Bw, where a? + 6? = 1,a,8 > 0, and w is a direction orthogonal to u. We denote A = E,4((q-u)?|c) and B= ,/E((o¥- w)?|c). Then after some manipulation, we can arrive at: √ h< ji _ (mv alos 1/9) at m vT Eq(qqT |c)v = α2A2 + β2B2 + 2αβEq((q · u)(ou Note that: To make RHS more comprehensible, we note that: √ 1 (Vales 1/6 ) ar _ m ~ 1 exp(—2 ons vd) =log(n log 1/9) ) Eq((q·u)(ou q ·w)|c) ≤ Eq((q · u)2|c)Eq((ou q · w)2|c) = AB So we have vT Eq(qqT |c)v ≤ (αA + βB)2 even for the optimal v. When we have A > B or equivalently Local Orthogonal Decomposition for Maximum Inner Product Search A2 > B2, we have vT Eq(qqT |c)v ≤ A2, but we also know that vT Eq(qqT |c)v ≥ uT Eq(qqT |c)u = A2. Hence when A2 > B2, the optimal v is equail to u by setting α = 1. And we have: B2 = Eq((ou q · w)2|c) = wT Eq(ou λmax(Eq(ou q )T |c)) q (ou q (ou q )T |c)w ≤ So when the maximum eigenvalue of the matrix λmax(Eq(ou q )T |c)) < Eq((q · u)2|c), the optimal di- rection is equal to u. Note that: q )T |c)) = trace(Eq(qqT |c))− trace(Eq(ou trace(Eq((q · u)2uuT |c)) = 1 − Eq((q · u)2|c) And the smallest eigenvalue of matrix Eq(ou q )T |c) is 0. Hence it only has (d − 1) non-zero eigenvalues. 
If the ratio between the largest and smallest non-zero eigenvalues of this matrix is γ, then we have: λmax + (d − 2)λmax/γ ≤ 1 − Eq((q · u)2|c) Which gives us an upper bound of λmax as: λmax ≤ 1 − Eq((q · u)2|c) 1 + (d − 2)/γ So when RHS is less than Eq((q · u)2|c), we have the op- timal direction is equal to the partition center direction. After simplification, we arrive at the condition: (d − 2)Eq((q · u)2|c) 1 − 2Eq((q · u)2|c) We can replace RHS with a stronger gurantee and arrive at: γ < (d − 2)Eq((q · u)2|c) Now we can plug in the result from Theorem 2, and conclude the proof.
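The statements proved above are easy to sanity-check numerically. The short snippet below verifies Lemma 1 (E[(x·y)²] = 1/d for independent uniform unit vectors) and illustrates how the maximum cosine similarity between a fixed query and m random unit centers grows with m, in the spirit of Theorem 1; it is a numerical check added here, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 64

def random_unit_vectors(n):
    v = rng.normal(size=(n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Lemma 1: E[(x . y)^2] = 1/d for independent uniform unit vectors.
x, y = random_unit_vectors(100000), random_unit_vectors(100000)
print(np.mean(np.sum(x * y, axis=1) ** 2), "vs", 1.0 / d)

# Maximum cosine similarity between a fixed query and m random centers:
# it grows roughly like sqrt(log(m) / d) as m increases.
q = random_unit_vectors(1)[0]
for m in (10, 100, 1000, 10000):
    centers = random_unit_vectors(m)
    print(m, float(np.max(centers @ q)), np.sqrt(np.log(m) / d))
```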
{ "id": "1702.08734" }
1903.08983
SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)
We present the results and the main findings of SemEval-2019 Task 6 on Identifying and Categorizing Offensive Language in Social Media (OffensEval). The task was based on a new dataset, the Offensive Language Identification Dataset (OLID), which contains over 14,000 English tweets. It featured three sub-tasks. In sub-task A, the goal was to discriminate between offensive and non-offensive posts. In sub-task B, the focus was on the type of offensive content in the post. Finally, in sub-task C, systems had to detect the target of the offensive posts. OffensEval attracted a large number of participants and it was one of the most popular tasks in SemEval-2019. In total, about 800 teams signed up to participate in the task, and 115 of them submitted results, which we present and analyze in this report.
http://arxiv.org/pdf/1903.08983
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, Ritesh Kumar
cs.CL
Proceedings of the International Workshop on Semantic Evaluation (SemEval)
null
cs.CL
20190319
20190427
9 1 0 2 r p A 7 2 ] L C . s c [ 3 v 3 8 9 8 0 . 3 0 9 1 : v i X r a # SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) Marcos Zampieri,1 Shervin Malmasi,2 Preslav Nakov,3 Sara Rosenthal,4 Noura Farra,5 Ritesh Kumar6 1University of Wolverhampton, UK, 2Amazon Research, USA 3Qatar Computing Research Institute, HBKU, Qatar 4IBM Research, USA, 5Columbia University, USA, 6Bhim Rao Ambedkar University, India [email protected] # Abstract We present the results and the main findings of SemEval-2019 Task 6 on Identifying and Cate- gorizing Offensive Language in Social Media (OffensEval). The task was based on a new dataset, the Offensive Language Identification Dataset (OLID), which contains over 14,000 English tweets. It featured three sub-tasks. In sub-task A, the goal was to discriminate be- tween offensive and non-offensive posts. In sub-task B, the focus was on the type of of- fensive content in the post. Finally, in sub-task C, systems had to detect the target of the offen- sive posts. OffensEval attracted a large num- ber of participants and it was one of the most popular tasks in SemEval-2019. In total, about 800 teams signed up to participate in the task, and 115 of them submitted results, which we present and analyze in this report. Interestingly, none of this previous work has stud- ied both the type and the target of the offensive language, which is our approach here. Our task, OffensEval1, uses the Offensive Language Identi- fication Dataset (OLID)2 (Zampieri et al., 2019), which we created specifically for this task. OLID is annotated following a hierarchical three-level annotation schema that takes both the target and the type of offensive content into account. Thus, it can relate to phenomena captured by previous datasets such as the one by Davidson et al. (2017). Hate speech, for example, is commonly under- stood as an insult targeted at a group, whereas cy- berbulling is typically targeted at an individual. We defined three sub-tasks, corresponding to the three levels in our annotation schema:3 Sub-task A: Offensive language identification (104 participating teams) # Introduction Recent years have seen the proliferation of offen- sive language in social media platforms such as Facebook and Twitter. As manual filtering is very time consuming, and as it can cause post-traumatic stress disorder-like symptoms to human annota- tors, there have been many research efforts aim- ing at automating the process. The task is usually modeled as a supervised classification problem, where systems are trained on posts annotated with respect to the presence of some form of abusive or offensive content. Examples of offensive con- tent studied in previous work include hate speech (Davidson et al., 2017; Malmasi and Zampieri, 2017, 2018), cyberbulling (Dinakar et al., 2011), and aggression (Kumar et al., 2018). Moreover, given the multitude of terms and definitions used in the literature, some recent studies have investi- gated the common aspects of different abusive lan- guage detection sub-tasks (Waseem et al., 2017; Wiegand et al., 2018). Sub-task B: Automatic categorization of offense types (71 participating teams) Sub-task C: Offense target identification (66 par- ticipating teams) The remainder of this paper is organized as follows: Section 2 discusses prior work, includ- ing shared tasks related to OffensEval. Section 3 presents the shared task description and the sub- tasks included in OffensEval. 
Section 4 includes a brief description of OLID based on (Zampieri et al., 2019). Section 5 discusses the participating systems and their results in the shared task. Fi- nally, Section 6 concludes and suggests directions for future work. 1http://competitions.codalab.org/ competitions/20011 2http://scholar.harvard.edu/malmasi/ olid 3A total of 800 teams signed up to participate in the task, but only 115 teams ended up submitting results eventually. # 2 Related Work Different abusive and offense language identifica- tion problems have been explored in the literature ranging from aggression to cyber bullying, hate speech, toxic comments, and offensive language. Below we discuss each of them briefly. Aggression identification: The TRAC shared task on Aggression Identification (Kumar et al., 2018) provided participants with a dataset contain- ing 15,000 annotated Facebook posts and com- ments in English and Hindi for training and val- idation. For testing, two different sets, one from Facebook and one from Twitter, were used. The goal was to discriminate between three classes: non-aggressive, covertly aggressive, and overtly aggressive. The best-performing systems in this competition used deep learning approaches based on convolutional neural networks (CNN), recur- rent neural networks, and LSTM (Aroyehun and Gelbukh, 2018; Majumder et al., 2018). Bullying detection: There have been several stud- ies on cyber bullying detection. For example, Xu et al. (2012) used sentiment analysis and topic models to identify relevant topics, and Dadvar et al. (2013) used user-related features such as the frequency of profanity in previous messages. Hate speech identification: This is the most stud- ied abusive language detection task (Kwok and Wang, 2013; Burnap and Williams, 2015; Djuric et al., 2015). More recently, Davidson et al. (2017) presented the hate speech detection dataset with over 24,000 English tweets labeled as non offen- sive, hate speech, and profanity. Offensive language: The GermEval4 (Wiegand et al., 2018) shared task focused on offensive lan- guage identification in German tweets. A dataset of over 8,500 annotated tweets was provided for a course-grained binary classification task in which systems were trained to discriminate between of- fensive and non-offensive tweets. There was also a second task where the offensive class was sub- divided into profanity, insult, and abuse. This is similar to our work, but there are three key differ- ences: (i) we have a third level in our hierarchy, (ii) we use different labels in the second level, and (iii) we focus on English. # 4http://projects.fzai.h-da.de/iggsa/ Toxic comments: The Toxic Comment Classifica- tion Challenge5 was an open competition at Kag- gle, which provided participants with comments from Wikipedia organized in six classes: toxic, severe toxic, obscene, threat, insult, identity hate. The dataset was also used outside of the compe- tition (Georgakopoulos et al., 2018), including as additional training material for the aforementioned TRAC shared (Fortuna et al., 2018). While each of the above tasks tackles a par- ticular type of abuse or offense, there are many commonalities. For example, an insult targeted at an individual is commonly known as cyberbulling and insults targeted at a group are known as hate speech. The hierarchical annotation model pro- posed in OLID (Zampieri et al., 2019) and used in OffensEval aims to capture this. 
We hope that the OLID’s dataset would become a useful resource for various offensive language identification tasks. # 3 Task Description and Evaluation The training and testing material for OffensEval is the aforementioned Offensive Language Identi- fication Dataset (OLID) dataset, which was built specifically for this task. OLID was annotated us- ing a hierarchical three-level annotation model in- troduced in Zampieri et al. (2019). Four examples of annotated instances from the dataset are pre- sented in Table 1. We use the annotation of each of the three layers in OLID for a sub-task in Of- fensEval as described below. # 3.1 Sub-task A: Offensive language identification In this sub-task, the goal is to discriminate be- tween offensive and non-offensive posts. Offen- sive posts include insults, threats, and posts con- taining any form of untargeted profanity. Each in- stance is assigned one of the following two labels. • Not Offensive (NOT): Posts that do not con- tain offense or profanity; • Offensive (OFF): We label a post as offensive if it contains any form of non-acceptable lan- guage (profanity) or a targeted offense, which can be veiled or direct. This category in- cludes insults, threats, and posts containing profane language or swear words. 5 http://kaggle.com/c/jigsaw-toxic-comment-classification-challenge Tweet A B C @USER He is so generous with his offers. IM FREEEEE!!!! WORST EXPERIENCE OF MY FUCKING LIFE @USER Fuk this fat cock sucker @USER Figures! What is wrong with these idiots? Thank God for @USER NOT — — OFF UNT — TIN IND OFF TIN GRP OFF Table 1: Four tweets from the OLID dataset, with their labels for each level of the annotation model. # 3.2 Sub-task B: Automatic categorization of offense types In sub-task B, the goal is to predict the type of offense. Only posts labeled as Offensive (OFF) in sub-task A are included in sub-task B. The two categories in sub-task B are the following: • Targeted Insult (TIN): Posts containing an in- sult/threat to an individual, group, or others (see sub-task C below); • Untargeted (UNT): Posts containing non- targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language. # 3.3 Sub-task C: Offense target identification Confusion Matrix True label Predicted label Figure 1: Example of a confusion matrix provided in the results package for team NULI, which is the best- performing team for sub-task A. Sub-task C focuses on the target of offenses. Only posts that are either insults or threats (TIN) arwe considered in this third layer of annotation. The three labels in sub-task C are the following: • Individual (IND): Posts targeting an individ- It can be a a famous person, a named ual. individual or an unnamed participant in the conversation. Insults/threats targeted at indi- viduals are often defined as cyberbullying. • Group (GRP): The target of these offensive posts is a group of people considered as a unity due to the same ethnicity, gender or sex- ual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech. • Other (OTH): The target of these offensive posts does not belong to any of the previous two categories, e.g., an organization, a situa- tion, an event, or an issue. 
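Taken together, the three sub-tasks form a cascade over the hierarchical labels: sub-task B is trained only on posts labeled OFF at level A, and sub-task C only on posts labeled TIN at level B. A minimal sketch of deriving the three training sets from a single annotated file is shown below; the file name and column names are assumptions about the distributed TSV layout rather than something specified in this report.

```python
# Sketch: deriving the three sub-task datasets from the hierarchical OLID labels.
# The file name and column names are assumed for illustration only.
import pandas as pd

olid = pd.read_csv("olid-training-v1.0.tsv", sep="\t")

# Sub-task A: every post, labeled NOT or OFF.
task_a = olid[["tweet", "subtask_a"]]

# Sub-task B: only offensive posts, labeled TIN or UNT.
task_b = olid.loc[olid["subtask_a"] == "OFF", ["tweet", "subtask_b"]]

# Sub-task C: only targeted insults/threats, labeled IND, GRP, or OTH.
task_c = olid.loc[olid["subtask_b"] == "TIN", ["tweet", "subtask_c"]]

print(len(task_a), len(task_b), len(task_c))
```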
# 3.4 Task Evaluation Given the strong imbalance between the number of instances in the different classes across the three tasks, we used the macro-averaged F1-score as the official evaluation measure for all three sub-tasks. At the end of the competition, we provided the participants with packages containing the results for each of their submissions, including tables and confusion matrices, and tables with the ranks list- ing all teams who competed in each sub-task. For example, the confusion matrix for the best team in sub-task A is shown in Figure 1. # 3.5 Participation The task attracted nearly 800 teams and 115 of them submitted their results. The teams that sub- mitted papers for the SemEval-2019 proceedings are listed in Table 2.6 # 6ASE-CSE is for Amrita School of Engineering - CSE. Team (Rozental and Biton, 2019) (Sridharan and T, 2019) (Kumar et al., 2019) (Wu et al., 2019) (Aglionby et al., 2019) (Yaojie et al., 2019) (Pavlopoulos et al., 2019) (la Pea and Rosso, 2019) (Pedersen, 2019) (Kebriaei et al., 2019) (Pelicon et al., 2019) (Indurthi et al., 2019) (Doostmohammadi et al., 2019) (Bansal et al., 2019) (Oberstrass et al., 2019) (Patras et al., 2019) (Graff et al., 2019) (HaCohen-Kerner et al., 2019) (Han et al., 2019) (Torres and Vaca, 2019) (Mukherjee et al., 2019) (Rani and Ojha, 2019) (Altin et al., 2019) (Aggarwal et al., 2019) (Mahata et al., 2019) Amobee ASE-CSE bhanodaig BNU-HKBU ... CAMsterdam CN-HIT-MI.T ConvAI DA-LD-Hildesheim (Modha et al., 2019) DeepAnalyzer Duluth Emad Embeddia Fermi Ghmerti HAD-T¨ubingen HHU Hope INGEOTEC JCTICOL jhan014 JTML JU ETCE 17 21 KMI Coling LaSTUS/TALN LTL-UDE MIDAS Nikolov-Radivchev (Nikolov and Radivchev, 2019) NIT Agartala NLP Team (Swamy et al., 2019) NLP NLP@UIOWA NLPR@SRPOL nlpUP NULI SINAI SSN NLP Stop PropagHate Pardeep techssn The Titans TUVD T¨uKaSt UBC-NLP UTFPR UHH-LT UM-IU@LING USF UVA Wahoos YNU-HPCC YNUWB Zeyad (Kapil et al., 2019) (Rusert and Srinivasan, 2019) (Seganti et al., 2019) (Mitrovi´c et al., 2019) (Liu et al., 2019) (Plaza-del Arco et al., 2019) (Thenmozhi et al., 2019) (Fortuna et al., 2019) (Singh and Chand, 2019) (S et al., 2019) (Garain and Basu, 2019) (Shushkevich et al., 2019) (Kannan and Stein, 2019) (Rajendran et al., 2019) (Paetzold, 2019) (Wiedemann et al., 2019) (Zhu et al., 2019) (Goel and Sharma, 2019) (Ramakrishnan et al., 2019) (Zhou et al., 2019) (Wang et al., 2019) (El-Zanaty, 2019) Table 2: The teams that participated in OffensEval and submitted system description papers. # 4 Data Below, we briefly describe OLID, the dataset used for our SemEval-2019 task 6. A detailed descrip- tion of the data collection process and annotation is presented in Zampieri et al. (2019). OLID is a large collection of English tweets an- notated using a hierarchical three-layer annotation It contains 14,100 annotated tweets di- model. vided into a training partition of 13,240 tweets and a testing partition of 860 tweets. Additionally, a small trial dataset of 320 tweets was made avail- able before the start of the competition. A B C Train Test Total TIN IND OFF TIN OTH OFF OFF TIN GRP OFF UNT — — NOT — 2,407 395 1,074 524 8,840 100 35 78 27 620 2,507 430 1,152 551 9,460 All 13,240 860 14,100 Table 3: Distribution of label combinations in OLID. The distribution of the labels in OLID is shown in Table 3. 
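Because of the class imbalance visible in Table 3 (the NOT class alone accounts for roughly two thirds of the training partition), plain accuracy would reward always predicting the majority class, which is why Section 3.4 adopts the macro-averaged F1-score. A minimal sketch of how a sub-task A run could be scored with scikit-learn (the gold and predicted label lists are illustrative):

```python
# Sketch: scoring a sub-task A submission with the official metric,
# macro-averaged F1. The gold and predicted labels here are illustrative.
from sklearn.metrics import confusion_matrix, f1_score

gold = ["NOT", "OFF", "OFF", "NOT", "NOT", "OFF"]
pred = ["NOT", "OFF", "NOT", "NOT", "NOT", "OFF"]

print("macro-F1:", f1_score(gold, pred, average="macro"))

# The result packages sent to participants also contained per-submission
# confusion matrices (cf. Figure 1).
print(confusion_matrix(gold, pred, labels=["NOT", "OFF"]))
```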
We annotated the dataset using the crowdsourcing platform Figure Eight.7 We en- sured the quality of the annotation by only hiring experienced annotators on the platform and by us- ing test questions to discard annotators who did not achieve a certain threshold. All the tweets were annotated by two people. In case of dis- agreement, a third annotation was requested, and ultimately we used a majority vote. Examples of tweets from the dataset with their annotation labels are shown in Table 1. # 5 Results The models used in the task submissions ranged from traditional machine learning, e.g., SVM and logistic regression, to deep learning, e.g., CNN, RNN, BiLSTM, including attention mechanism, to state-of-the-art deep learning models such as ELMo (Peters et al., 2018) and BERT (Devlin et al.). Figure 2 shows a pie chart indicating the breakdown by model type for all participating sys- tems in sub-task A. Deep learning was clearly the most popular approach, as were also ensem- ble models. Similar trends were observed for sub- tasks B and C. 7https://www.figure-eight.com/ Sub-task A Models Machine Learning 17% @ Machine Learning @ Other BE @ RNN, GRU @ CNN @ LSTM, BiLSTM mw BERT w@ Ensemble m DL Other Figure 2: Pie chart showing the models used in sub- task A. ‘N/A’ indicates that the system did not have a description. Some teams used additional training data, explor- ing external datasets such as Hate Speech Tweets (Davidson et al., 2017), toxicity labels (Thain et al., 2017), and TRAC (Kumar et al., 2018). Moreover, seven teams indicated that they used sentiment lexicons or a sentiment analysis model for prediction, and two teams reported the use of offensive word lists. Furthermore, several teams used pre-trained word embeddings from FastText (Bojanowski et al., 2016), from GloVe, includ- ing Twitter embeddings from GloVe (Pennington et al., 2014) and from word2vec (Mikolov et al., 2013; Godin et al., 2015). In addition, several teams used techniques for pre-processing the tweets such as normalizing the tokens, hashtags, URLs, retweets (RT), dates, elongated words (e.g., “Hiiiii” to “Hi”, partially hidden words (“c00l” to “cool”). Other techniques include converting emojis to text, removing un- common words, and using Twitter-specific tok- enizers, such as the Ark Tokenizer8 (Gimpel et al., 2011) and the NLTK TweetTokenizer,9 as well as standard tokenizers (Stanford Core NLP (Manning et al., 2014), and the one from Keras.10 Approxi- mately a third of the teams indicated that they used one or more of these techniques. # 8http://www.cs.cmu.edu/˜ark/TweetNLP 9http://www.nltk.org/api/nltk. # tokenize.html # 10http://keras.io/preprocessing/text/ The results for each of the sub-tasks are shown in Table 4. Due to the large number of submissions, we only show the F1-score for the top-10 teams, followed by result ranges for the rest of the teams. We further include the models and the baselines from (Zampieri et al., 2019): CNN, BiLSTM, and SVM. The baselines are choosing all predictions to be of the same class, e.g., all offensive, and all not offensive for sub-task A. Table 5 shows all the teams that participated in the tasks along with their ranks in each task. These two tables can be used together to find the score/range for a particular team. Below, we describe the overall results for each sub-task, and we describe the top-3 systems. # 5.1 Sub-task A Sub-task A was the most popular sub-task with 104 participating teams. Among the top-10 teams, seven used BERT (Devlin et al.) 
with varia- tions in the parameters and in the pre-processing The top-performing team, NULI, used steps. BERT-base-uncased with default-parameters, but with a max sentence length of 64 and trained for 2 epochs. The 82.9% F1 score of NULI is 1.4 points better than the next system, but the differ- ence between the next 5 systems, ranked 2-6, is less than one point: 81.5%-80.6%. The top non- BERT model, MIDAS, is ranked sixth. They used an ensemble of CNN and BLSTM+BGRU, to- gether with Twitter word2vec embeddings (Godin et al., 2015) and token/hashtag normalization. # 5.2 Sub-task B A total of 76 teams participated in sub-task B, and 71 of them had also participated in sub-task A. In contrast to sub-task A, where BERT clearly dominated, here five of the top-10 teams used an ensemble model. Interestingly, the best team, jhan014, which was ranked 76th in sub-task A, used a rule-based approach with a keyword filter based on a Twitter language behavior list, which included strings such as hashtags, signs, etc., achieving an F1-score of 75.5%. The second and the third teams, Amobee and HHU, used ensem- bles of deep learning (including BERT) and non- neural machine learning models. The best team from sub-task A also performed well here, ranked 4th (71.6%), thus indicating that overall BERT works well for sub-task B as well. Sub-task B Team Ranks F1 Range Team Ranks F1 Range Team Ranks F1 Range Sub-task A Sub-task C 1 2 3 4 5 6 7 8 9 CNN 10 11-14 15-24 BiLSTM 25-29 SVM 30-38 39-49 50-62 ALL TIN 63-74 75 76 All UNT 0.755 0.739 0.719 0.716 0.708 0.706 0.700 0.695 0.692 0.690 0.687 .680-.682 .660-.671 0.660 .640-.655 0.640 .600-.638 .553-.595 .500-.546 0.470 .418-.486 0.270 0.121 0.100 1 2 3 4 5 6 7 8 9 10 11-14 15-18 19-23 24-29 30-33 34-40 41-47 CNN BiLSTM SVM 46-60 61-65 All IND All GRP ALL OTH 0.660 0.628 0.626 0.621 0.613 0.613 0.591 0.588 0.587 0.586 .571-.580 .560-.569 .547-.557 .523-.535 .511-.515 .500-.509 .480-.490 0.470 0.470 0.450 .401-.476 .249-.340 0.210 0.180 0.090 Table 4: F1-Macro for the top-10 teams followed by the rest of the teams grouped in ranges for all three sub-tasks. Refer to Table 5 to see the team names associated with each rank. We also include the models (CNN, BiLSTM, and SVM) and the baselines (All NOT and All OFF) from (Zampieri et al., 2019), shown in bold. # 5.3 Sub-task C # 5.4 Description of the Top Teams A total of 66 teams participated in sub-task C, and most of them also participated in sub-tasks A and B. As in sub-task B, ensembles were quite successful and were used by five of the top- 10 teams. However, as in sub-task A, the best team, vradivchev anikolov, used BERT after try- ing many other deep learning methods. They also used pre-processing and pre-trained word embed- dings based on GloVe. The second best team, NLPR@SRPOL, used an ensemble of deep learn- ing models such as OpenAI Finetune, LSTM, Transformer, and non-neural machine learning models such as SVM and Random Forest. The top-3 teams by average rank for all three sub-tasks were NLPR@SRPOL, NULI, and vradi- vchev anikolov. Below, we provide a brief de- scription of their approaches: NLPR@SRPOL was ranked 8th, 9th, and 2nd on sub-tasks A, B, and C, respectively. They used ensembles of OpenAI GPT, Random Forest, the Transformer, Universal encoder, ELMo, and combined embeddings from fast- Text and custom ones. They trained their models on multiple publicly available offen- sive datasets, as well as on their own custom dataset annotated by linguists. 
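Since seven of the top-10 sub-task A systems fine-tuned BERT, it is worth illustrating what such a baseline looks like. The sketch below follows the hyperparameters reported for NULI (bert-base-uncased, maximum sequence length 64, two training epochs); the Hugging Face transformers tooling, the learning rate, and the two example tweets (taken from Table 1) are assumptions for illustration and do not reproduce any team's actual code.

```python
# Sketch: a BERT fine-tuning baseline for sub-task A (NOT vs. OFF), roughly
# following the setup reported for NULI: bert-base-uncased, max length 64,
# two epochs. Tooling and learning rate are assumed; this is not NULI's code.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

texts = [
    "@USER He is so generous with his offers.",         # NOT (Table 1)
    "@USER Figures! What is wrong with these idiots?",   # OFF (Table 1)
]
labels = torch.tensor([0, 1])  # 0 = NOT, 1 = OFF

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

enc = tokenizer(texts, truncation=True, max_length=64,
                padding="max_length", return_tensors="pt")
loader = DataLoader(
    TensorDataset(enc["input_ids"], enc["attention_mask"], labels), batch_size=2
)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(2):  # two epochs, as reported for NULI
    for input_ids, attention_mask, y in loader:
        loss = model(input_ids=input_ids,
                     attention_mask=attention_mask, labels=y).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```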
Team Sub-task A B C Team Sub-task A B C Team Sub-task A B C NoOffense HHU quanzhi TUVD NULI 1 4 18 vradivchev anikolov 2 16 1 UM-IU@LING 3 76 27 4 18 5 Embeddia 5 8 - MIDAS 6 62 39 BNU-HKBU 7 - SentiBERT - NLPR@SRPOL 8 9 2 9 - YNUWB - 10 - 19 LTL-UDE 11 - nlpUP - 12 11 35 ConvAI 13 10 - Vadym 14 21 13 UHH-LT 15 19 20 CAMsterdam - 16 - YNU-HPCC - 17 - nishnik Amobee 18 2 7 19 46 11 himanisoni 20 - samsam - JU ETCE 17 21 21 50 47 DA-LD-Hildesheim 22 28 21 23 12 4 YNU-HPCC 24 - 28 ChenXiuling 25 29 - Ghmerti 26 - safina - 27 17 - Arjun Roy 28 30 22 CN-HIT-MI.T 29 20 15 LaSTUS/TALN 30 3 - HHU 31 26 10 na14 32 37 24 NRC 33 54 52 NLP 34 - JTML - 35 25 31 Arup-Baruah UVA Wahoos 36 42 - NLP@UniBuc 37 73 49 38 40 43 NTUA-ISLab 39 49 - Rohit 79 71 - kroniker 40 43 - resham - aswathyprem 80 - 41 47 29 Xcosmos DeepAnalyzer 81 38 45 jkolis - 42 - - Code Lyoko 82 - NIT Agartala NLP Team 43 5 38 - rowantahseen 83 - 44 - Stop PropagHate - - 84 - ramjib 45 52 26 KVETHZ - OmerElshrief 85 - 46 14 36 christoph.alt 86 56 - desi 47 22 16 TECHSSN 87 31 3 Fermi 48 32 62 USF - 88 - 49 64 33 mkannan Ziv Ben David 89 35 54 mking 50 63 - JCTICOL T¨uKaSt ninab 90 69 - 51 23 50 Gal DD dianalungu725 91 74 65 52 66 25 HAD-T¨ubingen - 92 - Halamulki 53 59 61 93 65 64 SSN NLP 54 - Emad - NLP@UIOWA - UTFPR 94 - 55 27 37 - rogersdepelle 95 - 56 15 12 INGEOTEC Amimul Ihsan 96 - - 57 39 44 Duluth supriyamandal 97 75 - 58 34 34 Zeyad 98 - ramitpahwa - 59 70 58 ShalomRochman 99 33 32 ASE - CSE - 60 - stefaniehegele 100 - kripo - 61 48 46 NLP-CIC 101 44 63 garain 62 67 40 Elyash KMI Coling 102 - NAYEL - 63 45 53 - 103 - magnito60 - 64 - RUG OffenseEval 104 36 48 AyushS 65 41 - jaypee1996 - UBC NLP 6 9 66 55 8 orabia - 57 - bhanodaig 67 58 60 v.gambhir15 - 60 - Panaetius 68 68 42 kerner-jct.ac.il - 61 - 69 - SINAI eruppert - - 72 - 70 13 55 Macporal apalmer - 6 - 71 53 57 ayman - 14 - 72 24 - Geetika - 17 - 73 51 59 Taha - 23 - 74 - justhalf - - 51 - 75 7 41 mmfouad Pardeep 76 1 30 jhan014 - 56 balangheorghe - - 77 - liuxy94 - 78 - ngre1989 Table 5: All the teams that participated in SemEval-2019 Task 6 with their ranks for each sub-task. The symbol ‘-’ indicates that the team did not participate in some of the subtasks. Please, refer to Table 4 to see the scores based on a team’s rank. The top team for each task is in bold, and the second-place team is underlined. Note: ASE - CSE stands for Amrita School of Engineering - CSE, and BNU-HBKU stands for BNU-HKBU UIC NLP Team 2. NULI was ranked 1st, 4th, and 18th on sub-tasks A, B, and C, respectively. They experimented with different models including linear mod- els, LSTM, and pre-trained BERT with fine- tuning on the OLID dataset. Their final submissions for all three subtasks only used BERT, which performed best during devel- opment. They also used a number of pre- processing techniques such as hashtag seg- mentation and emoji substitution. vradivchev anikolov was ranked 2nd, 16th, and 1st on sub-tasks A, B, and C, respectively. They trained a variety of models and com- bined them in ensembles, but their best sub- missions for sub-tasks A and C used BERT only, as the other models overfitted. For sub- task B, BERT did not perform as well, and they used soft voting classifiers. In all cases, they used pre-trained GloVe vectors and they also applied techniques to address the class imbalance in the training data. # 6 Conclusion We have described SemEval-2019 Task 6 on Iden- tifying and Categorizing Offensive Language in Social Media (OffensEval). 
The task used OLID (Zampieri et al., 2019), a dataset of English tweets annotated for offensive language use, following a three-level hierarchical schema that considers (i) whether a message is offensive or not (for sub- task A), (ii) what is the type of the offensive mes- sage (for sub-task B), and (iii) who is the target of the offensive message (for sub-task C). Overall, about 800 teams signed up for Of- fensEval, and 115 of them actually participated in at least one sub-task. The evaluation results have shown that the best systems used ensembles and state-of-the-art deep learning models such as BERT. Overall, both deep learning and traditional machine learning classifiers were widely used. More details about the indvididual systems can be found in their respective system description pa- pers, which are published in the SemEval-2019 proceedings. A list with references to these pub- lications can be found in Table 2; note, however, that only 50 of the 115 participating teams submit- ted a system description paper. As is traditional for SemEval, we have made OLID publicly available to the research commu- nity beyond the SemEval competition, hoping to facilitate future research on this important topic. In fact, the OLID dataset and the SemEval-2019 Task 6 competition setup have already been used in teaching curricula in universities in UK and USA. For example, student competitions based on OffensEval using OLID have been organized as part of Natural Language Processing and Text An- alytics courses in two universities in UK: Impe- rial College London and the University of Leeds. System papers describing some of the students’ work are publicly accessible11 and have also been made available on arXiv.org (Cambray and Pod- sadowski, 2019; Frisiani et al., 2019; Ong, 2019; Sapora et al., 2019; Puiu and Brabete, 2019; Uglow et al., 2019). Similarly, a number of stu- dents in Linguistics and Computer Science at the University of Arizona in USA have been using OLID in their coursework. In future work, we plan to increase the size of the OLID dataset, while addressing issues such as class imbalance and the small size for the test partition, particularly for sub-tasks B and C. We would also like to expand the dataset and the task to other languages. # Acknowledgments We would like to thank the SemEval-2019 orga- nizers for hosting the OffensEval task and for re- plying promptly to all our inquires. We further thank the SemEval-2019 anonymous reviewers for the helpful suggestions and for the constructive feedback, which have helped us improve the text of this report. We especially thank the SemEval-2019 Task 6 participants for their interest in the shared task, for their participation, and for their timely feedback, which have helped us make the shared task a suc- cess. Finally, we would like to thank Lucia Specia from Imperial College London and Eric Atwell from the University of Leeds for hosting the Of- fensEval competition in their courses. We further thank the students who participated in these stu- dent competitions and especially those who wrote papers describing their systems. The research presented in this paper was par- tially supported by an ERAS fellowship, which was awarded to Marcos Zampieri by the Univer- sity of Wolverhampton, UK. 11http://scholar.harvard.edu/malmasi/ offenseval-student-systems # References Piush Aggarwal, Tobias Horsmann, Michael Wojatzki, and Torsten Zesch. 2019. 
LTL-UDE at SemEval- 2019 Task 6: BERT and two-vote classification for In Proceedings of The categorizing offensiveness. 13th International Workshop on Semantic Evalua- tion (SemEval). Guy Aglionby, Chris Davis, Pushkar Mishra, Andrew Caines, Helen Yannakoudakis, Marek Rei, Ekaterina Shutova, and Paula Buttery. 2019. CAMsterdam at SemEval-2019 Task 6: Neural and graph-based feature extraction for the identification of offensive In Proceedings of The 13th International tweets. Workshop on Semantic Evaluation (SemEval). Lutfiye Seda Mut Altin, Alex Bravo Serrano, and Ho- racio Saggion. 2019. LaSTUS/TALN at SemEval- 2019 Task 6: Identification and categorization of offensive language in social media with attention- In Proceedings of The based Bi-LSTM model. 13th International Workshop on Semantic Evalua- tion (SemEval). Flor Miriam Plaza-del Arco, Dolores Molina- Gonz´alez, Teresa Mart´ın-Valdivia, and Alfonso Ure˜na-L´opez. 2019. SINAI at SemEval-2019 Task 6: Incorporating lexicon knowledge into SVM learning to identify and categorize offensive lan- guage in social media. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Segun Taofeek Aroyehun and Alexander Gelbukh. 2018. Aggression detection in social media: Us- ing deep neural networks, data augmentation, and pseudo labeling. In Proceedings of the First Work- shop on Trolling, Aggression and Cyberbullying (TRAC), pages 90–97. Himanshu Bansal, Daniel Nagel, and Anita Soloveva. 2019. HAD-T¨ubingen at SemEval-2019 Task 6: Deep learning analysis of offensive language on In Pro- Twitter: Identification and categorization. ceedings of The 13th International Workshop on Se- mantic Evaluation (SemEval). Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR, abs/1607.04606. Pete Burnap and Matthew L Williams. 2015. Cyber hate speech on Twitter: An application of machine classification and statistical modeling for policy and decision making. Policy & Internet, 7(2):223–242. Aleix Cambray and Norbert Podsadowski. 2019. Bidi- rectional recurrent models for offensive tweet clas- sification. arXiv preprint arXiv:1903.08808. Maral Dadvar, Dolf Trieschnigg, Roeland Ordelman, and Franciska de Jong. 2013. Improving cyberbul- In Advances in lying detection with user context. Information Retrieval, pages 693–696. Springer. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International Conference on We- blogs and Social Media (ICWSM). Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understand- In Proceedings of the Annual Conference of ing. the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nology (NAACL-HLT). Karthik Dinakar, Roi Reichart, and Henry Lieberman. 2011. Modeling the detection of textual cyberbully- ing. In The Social Mobile Web, pages 11–17. Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Gr- bovic, Vladan Radosavljevic, and Narayan Bhamidi- pati. 2015. Hate speech detection with comment embeddings. In Proceedings of the Web Conference (WWW). Ehsan Doostmohammadi, Hossein Sameti, and Ali Saf- far. 2019. Ghmerti at SemEval-2019 Task 6: A deep word- and character-based approach to offen- sive language identification. In Proceedings of The 13th International Workshop on Semantic Evalua- tion (SemEval). 
Zeyad El-Zanaty. 2019. Zeyad at SemEval-2019 Task 6: That’s offensive! An all-out search for an ensem- ble to identify and categorize offense in tweets. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Paula Fortuna, Jos´e Ferreira, Luiz Pires, Guilherme Routar, and S´ergio Nunes. 2018. Merging datasets for aggressive text identification. In Proceedings of the First Workshop on Trolling, Aggression and Cy- berbullying (TRAC), pages 128–139. Paula Fortuna, Juan Soler-Company, and Nunes Srgio. 2019. Stop PropagHate at SemEval-2019 Tasks 5 and 6: Are abusive language classification results reproducible? In Proceedings of The 13th Interna- tional Workshop on Semantic Evaluation (SemEval). Nicol`o Frisiani, Alexis Laignelet, and Batuhan G¨uler. 2019. Combination of multiple deep learning archi- tectures for offensive language detection in tweets. arXiv preprint arXiv:1903.08734. Avishek Garain and Arpan Basu. 2019. The Titans at SemEval-2019 Task 6: Hate speech and target de- In Proceedings of The 13th International tection. Workshop on Semantic Evaluation (SemEval). Spiros V Georgakopoulos, Sotiris K Tasoulis, Aris- tidis G Vrahatis, and Vassilis P Plagianakos. 2018. Convolutional neural networks for toxic comment classification. arXiv preprint arXiv:1802.09957. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Jacob Eisenstein, Dipanjan Das, Daniel Mills, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: annotation, features, and experiments. In Proceedings of the Annual Meeting of the Associ- ation for Computational Linguistics (ACL). Fr´ederic Godin, Baptist Vandersmissen, Wesley De Neve, and Rik Van de Walle. 2015. Multime- dia Lab @ACL WNUT NER Shared Task: Named entity recognition for Twitter microposts using dis- tributed word representations. In Proceedings of the Workshop on Noisy User-generated Text. Bharti Goel and Ravi Sharma. 2019. USF at SemEval- 2019 Task 6: Offensive language detection using In Proceedings of LSTM with word embeddings. The 13th International Workshop on Semantic Eval- uation (SemEval). Mario Graff, Sabino Miranda-Jim´enez, Eric S. Tellez, and Daniela Moctezuma. 2019. INGEOTEC at SemEval-2019 Task 5 and Task 6: A genetic pro- gramming approach for text classification. In Pro- ceedings of The 13th International Workshop on Se- mantic Evaluation (SemEval). Yaakov HaCohen-Kerner, Ziv Ben-David, Gal Didi, Eli Cahn, Shalom Rochman, and Elyashiv Shayovitz. 2019. JCTICOL at SemEval-2019 Task 6: Classi- fying offensive language in social media using deep learning methods, word/character n-gram features, and preprocessing methods. In Proceedings of The 13th International Workshop on Semantic Evalua- tion (SemEval). Jiahui Han, Xinyu Liu, and Shengtan Wu. 2019. jhan014 at SemEval-2019 Task 6: Identifying and categorizing offensive language in social media. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Vijayasaradhi Indurthi, Bakhtiyar Syed, Manish Shri- vastava, Manish Gupta, and Vasudeva Varma. 2019. Fermi at SemEval-2019 Task 6: Identifying and categorizing offensive language in social media us- In Proceedings of The ing sentence embeddings. 13th International Workshop on Semantic Evalua- tion (SemEval). Madeeswaran Kannan and Lukas Stein. 2019. T¨uKaSt at SemEval-2019 Task 6: something old, something neu(ral): Traditional and neural approaches to of- In Proceedings of The fensive text classification. 
13th International Workshop on Semantic Evalua- tion (SemEval). Prashant Kapil, Asif Ekbal, and Dipankar Das. 2019. NLP at SemEval-2019 Task 6: Detecting offensive language using neural networks. In Proceedings of The 13th International Workshop on Semantic Eval- uation (SemEval). Emad Kebriaei, Samaneh Karimi, Nazanin Sabri, and Azadeh Shakery. 2019. Emad at SemEval-2019 Task 6: Offensive language identification using tra- ditional machine learning and deep learning ap- proaches. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Ritesh Kumar, Guggilla Bhanodai, Rajendra Pamula, and Chennuru Maheshwar Reddy. 2019. bhanodaig at SemEval-2019 Task 6: Categorizing offensive In Proceedings of The language in social media. 13th International Workshop on Semantic Evalua- tion (SemEval). Ritesh Kumar, Atul Kr Ojha, Shervin Malmasi, and Marcos Zampieri. 2018. Benchmarking aggression identification in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyber- bullying (TRAC). Irene Kwok and Yuzhou Wang. 2013. Locate the hate: In Proceedings Detecting tweets against blacks. of the AAAI Conference on Artificial Intelligence (AAAI). Ping Liu, Wen Li, and Liang Zou. 2019. NULI at SemEval-2019 Task 6: Transfer learning for offen- sive language detection using bidirectional trans- formers. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Debanjan Mahata, Haimin Zhang, Karan Uppal, Ya- man Kumar, Rajiv Ratn Shah, Simra Shahid, Laiba Mehnaz, and Sarthak Anand. 2019. MIDAS at SemEval-2019 Task 6: Identifying offensive posts and targeted offense from Twitter. In Proceedings of The 13th International Workshop on Semantic Eval- uation (SemEval). Prasenjit Majumder, Thomas Mandl, et al. 2018. Fil- tering aggression from the multilingual social me- In Proceedings of the First Workshop dia feed. on Trolling, Aggression and Cyberbullying (TRAC), pages 199–207. Shervin Malmasi and Marcos Zampieri. 2017. Detect- ing Hate Speech in Social Media. In Proceedings of the Conference on Recent Advances in Natural Lan- guage Processing (RANLP). Shervin Malmasi and Marcos Zampieri. 2018. Chal- lenges in Discriminating Profanity from Hate Speech. Journal of Experimental & Theoretical Ar- tificial Intelligence, 30:1 – 16. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed repre- sentations of words and phrases and their compo- sitionality. In Proceedings of the International Con- ference on Neural Information Processing Systems (NIPS). Jelena Mitrovi´c, Bastian Birkeneder, and Michael Granitzer. 2019. nlpUP at SemEval-2019 Task 6: a deep neural language model for offensive language detection. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Sandip Modha, Prasenjit Majumder, and Daksh Patel. 2019. DA-LD-Hildesheim at SemEval-2019 Task 6: Tracking offensive content with deep learning model using shallow representation. In Proceedings of The 13th International Workshop on Semantic Evalua- tion (SemEval). Preeti Mukherjee, Mainak Pal, Somnath Banerjee, and Sudip Kumar Naskar. 2019. 
JU ETCE 17 21 at SemEval-2019 Task 6: Efficient machine learning and neural network approaches for identifying and In Pro- categorizing offensive language in tweets. ceedings of The 13th International Workshop on Se- mantic Evaluation (SemEval). Alex Nikolov and Victor Radivchev. 2019. Nikolov- Radivchev at SemEval-2019 Task 6: Offensive tweet In Pro- classification with BERT and ensembles. ceedings of The 13th International Workshop on Se- mantic Evaluation (SemEval). Alexander Oberstrass, Julia Romberg, Anke Stoll, and Stefan Conrad. 2019. HHU at SemEval-2019 Task 6: Context does matter - tackling offensive language In identification and categorization with ELMo. Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Ryan Ong. 2019. Offensive language analysis us- arXiv preprint ing deep learning architecture. arXiv:1903.05280. UTFPR at SemEval-2019 Task 6: Relying on compositionality to find offense. In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation (SemEval). Gabriel Florentin Patras, Diana Florina Lungu, Daniela Gifu, and Diana Trandabat. 2019. Hope at SemEval- 2019 Task 6: Mining social media language to dis- In Proceedings of The cover offensive language. 13th International Workshop on Semantic Evalua- tion (SemEval). John Pavlopoulos, Nithum Thain, Lucas Dixon, and Ion Androutsopoulos. 2019. ConvAI at SemEval- 2019 Task 6: Offensive language identification and categorization with perspective and BERT. In Pro- ceedings of The 13th International Workshop on Se- mantic Evaluation (SemEval). Gretel Liz De la Pea and Paolo Rosso. 2019. Deep- Analyzer at SemEval-2019 Task 6: A deep learning- based ensemble method for identifying offensive In Proceedings of The 13th International tweets. Workshop on Semantic Evaluation (SemEval). Ted Pedersen. 2019. Duluth at SemEval-2019 Task 6: Lexical approaches to identify and categorize offen- In Proceedings of The 13th Interna- sive tweets. tional Workshop on Semantic Evaluation (SemEval). Andraˇz Pelicon, Matej Martinc, and Petra Kralj Novak. 2019. Embeddia at SemEval-2019 Task 6: Detect- ing hate with neural network and transfer learning In Proceedings of The 13th Interna- approaches. tional Workshop on Semantic Evaluation (SemEval). Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natu- ral Language Processing (EMNLP). Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the Annual Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technology (NAACL-HLT). Andrei-Bogdan Puiu and Andrei-Octavian Brabete. 2019. Towards NLP with deep learning: Convolu- tional neural networks and recurrent neural networks for offensive language identification in social media. arXiv preprint arXiv:1903.00665. Arun Rajendran, Chiyu Zhang, and Muhammad Abdul-Mageed. 2019. UBC-NLP at SemEval-2019 Task 6: Ensemble learning of offensive content with enhanced training data. In Proceedings of The 13th International Workshop on Semantic Evalua- tion (SemEval). Murugesan Ramakrishnan, Wlodek Zadrozny, and Narges Tabari. 2019. UVA Wahoos at SemEval- 2019 Task 6: Hate speech identification using en- In Proceedings of The semble machine learning. 13th International Workshop on Semantic Evalua- tion (SemEval). Priya Rani and Atul Kr. Ojha. 2019. 
KMIColing at SemEval-2019 Task 6: Exploring n-grams for of- fensive language detection. In Proceedings of The 13th International Workshop on Semantic Evalua- tion (SemEval). Alon Rozental and Dadi Biton. 2019. Amobee at SemEval-2019 Tasks 5 and 6: Multiple choice CNN over contextual embedding. In Proceedings of The 13th International Workshop on Semantic Evalua- tion (SemEval). Jonathan Rusert and Padmini Srinivasan. 2019. NLP@UIOWA at SemEval-2019 Task 6: Classify- ing the classless using multi-windowed CNNs. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Angel Deborah S, Rajalakshmi S, Logesh B, Harshini S, Geetika B, Dyaneswaran S, S Milton Rajendram, and Mirnalinee T T. 2019. TECHSSN at SemEval- 2019 Task 6: Identifying and categorizing offensive language in tweets using deep neural networks. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Silvia Sapora, Bogdan Lazarescu, and Christo Lolov. 2019. Absit invidia verbo: Comparing deep learn- ing methods for offensive language. arXiv preprint arXiv:1903.05929. Iryna Orlova, Tymo- Staniszewski, Jakub Hannam Kim, teusz Krumholc, and Krystian Koziel. 2019. NLPR@SRPOL at SemEval-2019 Task 6 and Task 5: Linguistically enhanced deep learning offensive In Proceedings of The 13th sentence classifier. International Workshop on Semantic Evaluation (SemEval). Elena Shushkevich, John Cardiff, and Paolo Rosso. 2019. TUVD team at SemEval-2019 Task 6: Of- In Proceedings of The fense target identification. 13th International Workshop on Semantic Evalua- tion (SemEval). Pardeep Singh and Satish Chand. 2019. Pardeep at SemEval-2019 Task 6: Identifying and categorizing offensive language in social media using deep learn- ing. In Proceedings of The 13th International Work- shop on Semantic Evaluation (SemEval). Murali Sridharan and Swapna T. 2019. Amrita School of Engineering - CSE at SemEval-2019 Task 6: Manipulating attention with temporal convolutional neural network for offense identification and classi- In Proceedings of The 13th International fication. Workshop on Semantic Evaluation (SemEval). Anupam Jamatia, Bj¨orn Gamb¨ack, 2019. NIT Agartala NLP Team at SemEval-2019 Task 6: An ensemble approach to identifying and catego- rizing offensive language in Twitter social media In Proceedings of the 13th International corpora. Workshop on Semantic Evaluation (SemEval). Nithum Thain, Lucas Dixon, and Ellery Wulczyn. 2017. Wikipedia Talk Labels: Toxicity. D Thenmozhi, Senthil Kumar B, Srinethe Sharavanan, and Aravindan Chandrabose. 2019. SSN NLP at SemEval-2019 Task 6: Offensive language identi- fication in social media using machine learning and In Proceedings of The deep learning approaches. 13th International Workshop on Semantic Evalua- tion (SemEval). JTML at SemEval-2019 Task 6: Offensive tweets identifica- In Pro- tion using convolutional neural networks. ceedings of The 13th International Workshop on Se- mantic Evaluation (SemEval). Harrison Uglow, Martin Zlocha, and Szymon Zmys- lony. 2019. An exploration of state-of-the-art meth- ods for offensive language detection. arXiv preprint arXiv:1903.07445. Bin Wang, Xiaobing Zhou, and Xuejie Zhang. 2019. YNUWB at SemEval-2019 Task 6: K-max pooling cnn with average meta-embedding for identifying offensive language. In Proceedings of The 13th In- ternational Workshop on Semantic Evaluation (Se- mEval). Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. 
Understanding abuse: A typology of abusive language detection subtasks. arXiv preprint arXiv:1705.09899. Gregor Wiedemann, Eugen Ruppert, and Chris Bie- mann. 2019. UHH-LT at SemEval-2019 Task 6: Su- pervised vs. unsupervised transfer learning for of- fensive language detection. In Proceedings of The 13th International Workshop on Semantic Evalua- tion (SemEval). Michael Wiegand, Melanie Siegel, and Josef Ruppen- hofer. 2018. Overview of the GermEval 2018 shared task on the identification of offensive language. In Proceedings of the GermEval 2018 Workshop (Ger- mEval). Zhenghao Wu, Hao Zheng, Jianming Wang, Weifeng Su, and Jefferson Fong. 2019. BNU-HKBU UIC NLP Team 2 at SemEval-2019 Task 6: Detecting offensive language using BERT model. In Proceed- ings of The 13th International Workshop on Seman- tic Evaluation (SemEval). Jun-Ming Xu, Kwang-Sung Jun, Xiaojin Zhu, and Amy Bellmore. 2012. Learning from bullying traces in social media. In Proceedings of the Annual Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technology (NAACL-HLT). Zhang Yaojie, Xu Bing, and Zhao Tiejun. 2019. CN- HIT-MI.T at SemEval-2019 Task6: Offensive lan- guage identification based on BiLSTM with double attention. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technology (NAACL-HLT). Chengjin Zhou, Jin Wang, and Xuejie Zhang. 2019. YNU-HPCC at SemEval-2019 Task 6: Identifying and categorising offensive language on Twitter. In Proceedings of The 13th International Workshop on Semantic Evaluation (SemEval). Jian Zhu, Zuoyu Tian, and Sandra K¨ubler. 2019. UM- IU@LING at SemEval-2019 Task 6: Identifying of- fensive tweets using BERT and SVMs. In Proceed- ings of The 13th International Workshop on Seman- tic Evaluation (SemEval).
{ "id": "1903.08734" }
1903.07666
An Updated Duet Model for Passage Re-ranking
We propose several small modifications to Duet---a deep neural ranking model---and evaluate the updated model on the MS MARCO passage ranking task. We report significant improvements from the proposed changes based on an ablation study.
http://arxiv.org/pdf/1903.07666
Bhaskar Mitra, Nick Craswell
cs.IR, cs.CL
null
null
cs.IR
20190318
20190318
2019: 9 1 0 2 r a M 8 1 ] R I . s c [ 1 v 6 6 6 7 0 . 3 0 9 1 : v i X r a arXiv:1903.07666v1 # AN UPDATED DUET MODEL FOR PASSAGE RE-RANKING # Bhaskar Mitra Microsoft AI & Research Montreal, Canada [email protected] Nick Craswell Microsoft AI & Research Redmond, USA [email protected] # ABSTRACT We propose several small modifications to Duet—a deep neural ranking model—and evaluate the updated model on the MS MARCO passage ranking task. We report significant improvements from the proposed changes based on an ablation study. Keywords Neural information retrieval · Passage ranking · Ad-hoc retrieval · Deep learning # 1 Introduction In information retrieval (IR), traditional learning to rank [Liu, 2009] models estimate the relevance of a document to a query based on hand-engineered features. The input to these models typically includes, among others, fea- tures based on patterns of exact matches of query terms in the document. Recently proposed deep neural IR models [Mitra and Craswell, 2018], in contrast, accept the raw query and document text as input. The input text is represented as one-hot encoding of words (or sub-word components [Kim et al., 2016, Jozefowicz et al., 2016, Sennrich et al., 2015])—and the deep neural models focus primarily on learning latent representations of text that are effective for matching query and document. Mitra et al. [2017] posit that deep neural ranking models should focus on both: (i) rep- resentation learning for text matching, as well as on (ii) feature learning based on patterns of exact matches of query terms in the document. They demonstrate that a neural ranking model called Duet1—with two distinct sub-models that consider both matches in the term space (the local sub-model) and the learned latent space (the distributed sub- model)—is more effective at estimating query-document relevance. In this work, we evaluate a duet model on the MS MARCO passage ranking task [Bajaj et al., 2016]. We propose several simple modifications to the original Duet architecture and demonstrate through an ablation study that incorpo- rating these changes results in significant improvements on the passage ranking task. # 2 Passage re-ranking on MS MARCO The MS MARCO passage ranking task [Bajaj et al., 2016] requires a model to rank approximately thousand passages for each query. The queries are sampled from Bing’s search logs, and then manually annotated to restrict them to questions with specific answers. A BM25 [Robertson et al., 2009] model is employed to retrieve the top thousand candidate passages for each query from the collection. For each query, zero or more candidate passages are deemed relevant based on manual annotations. The ranking model is evaluated on this passage re-ranking task using the mean reciprocal rank (MRR) metric [Craswell, 2009]. Participants are required to submit the ranked list of passages per query for a development (dev) set and a heldout (eval) set. The ground truth annotations for the development set are available publicly, while the corresponding annotations for the evaluation set are heldout to avoid overfitting. A public leaderboard2 presents all submitted runs from different participants on this task. 1 While Mitra et al. [2017] propose a specific neural architecture, they refer more broadly to the family of neural architectures that operate on both term space and learned latent space as duet. We refer to the specific architecture proposed by Mitra et al. 
[2017] as Duet—to distinguish it from the general family of such architectures that we refer to as duet (note the difference in capitilization). 2http://www.msmarco.org/leaders.aspx # 3 The updated Duet model In this section, we briefly describe several modifications to the Duet model. A public implementation of the updated Duet model using PyTorch [Paszke et al., 2017] is available online3. Word embeddings We replace the character level n-graph encoding in the input of the distributed model with word embeddings. We see significant reduction in training time given a fixed number of minibatches and a fixed minibatch size. This change primarily helps us to train on a significantly larger amount of data under fixed training time constraints. We initialize the word embeddings using pre-trained GloVe [Pennington et al., 2014] embeddings before training the Duet model. Inverse document frequency weighting In contrast to some of the other datasets on which the Duet model has been previously evaluated [Mitra et al., 2017, Nanni et al., 2017], the MS MARCO dataset contains a relatively larger percentage of natural language queries and the queries are considerably longer on average. In traditional IR mod- els, the inverse document frequency (IDF) [Robertson, 2004] of a query term provides an effective mechanism for weighting the query terms by their discriminative power. In the original Duet model, the input to the local sub-model corresponding to a query q and a document d is a binary interaction matrix X ∈ R|q|×|d| defined as follows: 1, ifq=d; Xiy=s J 1 J fi otherwise @) We incorporate IDF in the Duet model by weighting the interaction matrix by the IDF of the matched terms. We adopt the Robertson-Walker definition of IDF [Jones et al., 2000] normalized to the range [0, 1]. , _ JIDF(q), if qi = dj Xy = {0 otherwise @) IDF(t) = log(N/nt) log(N ) (3) Where, N is the total number of passages in the collection and nt is the number of passages in which the term t appears at least once. Non-linear combination of local and distributed models Zamani et al. [2018] show that when combining different sub-models in a neural ranking model, it is more effective if each sub-model produce a vector output that are further combined by additional multi-layer perceptrons (MLP). In the original Duet model, the local and the distributed sub- models produce a single score that are linearly combined. In our updated architecture, both models produce a vector that are further combined by an MLP—with two hidden layers—to generate the estimated relevance score. Rectifier Linear Units (ReLU) We replace the Tanh non-linearities in the original Duet model with ReLU [Glorot et al., 2011] activations. Bagging We observe some additional improvements from combining multiple Duet models—trained with different random seeds and on different random sample of the training data—using bagging [Breiman, 1996]. # 4 Experiments The MS MARCO task provides a pre-processed training dataset—called “triples.train.full.tsv”—where each training sample consists of a triple hq, p+, p−i, where q is a query and p+ and p− are a pair of passages, with p+ being more relevant to q than p−. Similar to the original Duet model, we employ the cross-entropy with softmax loss to learn the parameters of our model M: # 3https://github.com/dfcf93/MSMARCO/blob/master/Ranking/Baselines/Duet.ipynb 2 Table 1: Comparison of the different Duet variants and other state-of-the-art approaches from the public MS MARCO leaderboard. 
The update Duet model—referred to as Duet v2—benefits significantly from the modifications proposed in this paper. MRR@10 Dev Model Eval Other approaches BM25 Single CKNRM [Dai et al., 2018] model Ensemble of 8 CKNRM [Dai et al., 2018] models IRNet (a proprietary deep neural model) BERT [Nogueira and Cho, 2019] Duet variants Single Duet v2 w/o IDF weighting for interaction matrix Single Duet v2 w/ Tanh non-linearity (instead of ReLU) Single Duet v2 w/o MLP to combine local and distributed scores Single Duet v2 model Ensemble of 8 Duet v2 models 0.165 0.247 0.290 0.278 0.365 0.163 0.179 0.208 0.243 0.252 0.167 0.247 0.271 0.281 0.359 - - - 0.245 0.253 L = Eq,p+,p−∼θ[ℓ(Mq,p+ − Mq,p−)] (4) where, ℓ(∆) = log(1 + e−σ·∆) (5) Where, Mq,p is the relevance score for the pair hq, pi as estimated by the model M. Note, that by considering a single negative passage per sample, our loss is equivalent to the RankNet loss [Burges et al., 2005]. We use the Adam optimizer with default parameters and a learning rate of 0.001. We set σ in Equation 5 to 0.1 and dropout rate for the model to 0.5. We trim all queries and passages to their first 20 and 200 words, respectively. We restrict our input vocabulary to the 71, 486 most frequent terms in the collection and set the size of all hidden layers to 300. We use minibatches of size 1024 and train the model for 1024 minibatches. Finally, for bagging we train eight different Duet models with different random seeds and on different samples of the training data. We train and evaluate our models using a Tesla K40 GPU—on which it takes a total of only 1.5 hours to train each single Duet model and to evaluate it on both dev and eval sets. # 5 Results Table 1 presents the MRR@10 corresponding to all the Duet variants we evaluated on the dev set. The updated Duet model with all the modifications described in Section 3—referred hereafter as Duet v2—achieves an MRR@10 of 0.243. We perform an ablation study by leaving out one of the three modifications—(i) IDF weighting for interaction matrix, (ii) ReLU non-linearity instead of Tanh, and (iii) LP to combine local and distributed scores,—out at a time. We observe a 33% degradation in MRR by not incorporating the IDF weighting alone. It is interesting to note that the Github implementations4 of the KNRM [Xiong et al., 2017] and CKNRM [Dai et al., 2018] models also indicate that their MS MARCO submissions incorporated IDF term-weighting—potentially indicating the value of IDF weighting across multiple architectures. Similarly, we also observe a 26% degradation in MRR by using Tanh non-linearity instead of ReLU. Using a linear combination of scores from the local and the distributed model instead of combining their vector outputs using an MLP results in 14% degradation in MRR. Finally, we observe a 3% improvement in MRR by ensembling eight Duet v2 models using bagging. We also submit the individual Duet v2 model and the ensemble of eight Duet v2 models for evaluation on the heldout set and observe similar numbers. We include the MRR numbers for other non-Duet based approaches that are available on the public leaderboard in Table 1. As of writing this paper, BERT [Devlin et al., 2018] based approaches—e.g., [Nogueira and Cho, 2019]— are outperforming other approaches by a significant margin. Among the non-BERT based approaches, a proprietary deep neural model—called IRNet—currently demonstrates the best performance on the heldout evaluation set. 
This is followed, among others, by an ensemble of CKNRM [Dai et al., 2018] models and the single CKNRM model. The single Duet v2 model achieves comparable MRR to the single CKNRM model on the eval set. The ensemble of Duet v2 models, however, performs slightly worse than the ensemble of the CKNRM models on the same set. # 4 https://github.com/thunlp/Kernel-Based-Neural-Ranking-Models 3 # 6 Discussion and conclusion In this paper, we describe several simple modifications to the original Duet model that result in significant im- provements over the original architecture on the MS MARCO task. The updated architecture—we call Duet v2— achieves comparable performance to other non-BERT based top performing approaches, as listed on the public MS MARCO leaderboard. We note, that the Duet v2 model we evaluate contains significantly fewer learnable parameters—approximately 33 million—compared to other top performing approaches, such as BERT based models [Nogueira and Cho, 2019] and single CKNRM model [Dai et al., 2018]—both of which contains few hundred million learnable parameters. Comparing the models based on the exact number of learnable parameters, however, may not be meaningful as most of these parameters are due to large vocabulary size in the input embedding layers. It is not clear how significantly the vocabulary size impacts model performance—an aspect we may want to analyse in the future. It is worth emphasizing that compared to other top performing approaches, training the Duet v2 model takes significantly less resource and time—1.5 hours to train a single Duet model and to evaluate it on both dev and eval sets using a Tesla K40 GPU—which may make the model an attractive starting point for new MS MARCO participants. The model performance on the MS MARCO task may be further improved by adding more depth and / or more careful hyperparameter tuning. # References Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016. Leo Breiman. Bagging predictors. Machine learning, 24(2):123–140, 1996. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89–96. ACM, 2005. Nick Craswell. Mean reciprocal rank. In Encyclopedia of Database Systems, pages 1703–1703. Springer, 2009. Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. Convolutional neural networks for soft-matching n- In Proceedings of the eleventh ACM international conference on web search and data grams in ad-hoc search. mining, pages 126–134. ACM, 2018. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315–323, 2011. K Sparck Jones, Steve Walker, and Stephen E. Robertson. A probabilistic model of information retrieval: development and comparative experiments: Part 2. Information processing & management, 36(6):809–840, 2000. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. 
arXiv preprint arXiv:1602.02410, 2016. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. Thirtieth AAAI Conference on Artificial Intelligence, 2016. In Tie-Yan Liu. Learning to rank for information retrieval. Foundation and Trends in Information Retrieval, 3(3):225– 331, March 2009. Bhaskar Mitra and Nick Craswell. An introduction to neural information retrieval. Foundations and Trends®) in Information Retrieval (to appear), 2018. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In Proc. WWW, pages 1291–1299, 2017. Federico Nanni, Bhaskar Mitra, Matt Magnusson, and Laura Dietz. Benchmark for complex answer retrieval. In Proc. ICTIR. ACM, 2017. Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085, 2019. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. Proc. EMNLP, 12:1532–1543, 2014. 4 Stephen Robertson. Understanding inverse document frequency: on theoretical arguments for idf. Journal of docu- mentation, 60(5):503–520, 2004. Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389, 2009. Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015. Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR conference on research and development in information retrieval, pages 55–64. ACM, 2017. Hamed Zamani, Bhaskar Mitra, Xia Song, Nick Craswell, and Saurabh Tiwary. Neural ranking models with multiple document fields. In Proc. WSDM, 2018. 5
{ "id": "1810.04805" }
1903.05662
Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets
Training activation quantized neural networks involves minimizing a piecewise constant function whose gradient vanishes almost everywhere, which is undesirable for standard back-propagation via the chain rule. An empirical way around this issue is to use a straight-through estimator (STE) (Bengio et al., 2013) in the backward pass only, so that the "gradient" through the modified chain rule becomes non-trivial. Since this unusual "gradient" is certainly not the gradient of the loss function, the following question arises: why does searching in its negative direction minimize the training loss? In this paper, we provide a theoretical justification of the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We refer to the unusual "gradient" given by the STE-modified chain rule as the coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (which is not available for training), and its negation is a descent direction for minimizing the population loss. We further show that the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments.
http://arxiv.org/pdf/1903.05662
Penghang Yin, Jiancheng Lyu, Shuai Zhang, Stanley Osher, Yingyong Qi, Jack Xin
cs.LG, math.OC, stat.ML
in International Conference on Learning Representations (ICLR) 2019
null
cs.LG
20190313
20190925
9 1 0 2 p e S 5 2 ] G L . s c [ 4 v 2 6 6 5 0 . 3 0 9 1 : v i X r a Published as a conference paper at ICLR 2019 UNDERSTANDING STRAIGHT-THROUGH ESTIMATOR IN TRAINING ACTIVATION QUANTIZED NEURAL NETS # Penghang Yin,∗ Jiancheng Lyu,† Shuai Zhang,‡ Stanley Osher,∗ Yingyong Qi,‡ Jack Xin† ∗Department of Mathematics, University of California, Los Angeles [email protected], [email protected] †Department of Mathematics, University of California, Irvine [email protected], [email protected] ‡Qualcomm AI Research, San Diego {shuazhan,yingyong}@qti.qualcomm.com # ABSTRACT Training activation quantized neural networks involves minimizing a piecewise constant function whose gradient vanishes almost everywhere, which is undesir- able for the standard back-propagation or chain rule. An empirical way around this issue is to use a straight-through estimator (STE) (Bengio et al., 2013) in the backward pass only, so that the “gradient” through the modified chain rule becomes non-trivial. Since this unusual “gradient” is certainly not the gradient of loss function, the following question arises: why searching in its negative direction minimizes the training loss? In this paper, we provide the theoretical justification of the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We shall refer to the unusual “gradient” given by the STE-modifed chain rule as coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (not available for the training), and its negation is a descent direction for minimizing the population loss. We further show the asso- ciated coarse gradient descent algorithm converges to a critical point of the popu- lation loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments. # INTRODUCTION Deep neural networks (DNN) have achieved the remarkable success in many machine learning ap- plications such as computer vision (Krizhevsky et al., 2012; Ren et al., 2015), natural language processing (Collobert & Weston, 2008) and reinforcement learning (Mnih et al., 2015; Silver et al., 2016). However, the deployment of DNN typically require hundreds of megabytes of memory storage for the trainable full-precision floating-point parameters, and billions of floating-point op- erations to make a single inference. To achieve substantial memory savings and energy efficiency at inference time, many recent efforts have been made to the training of coarsely quantized DNN, meanwhile maintaining the performance of their float counterparts (Courbariaux et al., 2015; Raste- gari et al., 2016; Cai et al., 2017; Hubara et al., 2018; Yin et al., 2018b). Training fully quantized DNN amounts to solving a very challenging optimization problem. It calls for minimizing a piecewise constant and highly nonconvex empirical risk function f (w) subject to a discrete set-constraint w ∈ Q that characterizes the quantized weights. In particular, weight quantization of DNN have been extensively studied in the literature; see for examples (Li et al., 2016; Zhu et al., 2016; Li et al., 2017; Yin et al., 2016; 2018a; Hou & Kwok, 2018; He et al., 2018; Li & Hao, 2018). On the other hand, the gradient ∇f (w) in training activation quantized DNN is almost everywhere (a.e.) 
zero, which makes the standard back-propagation inapplicable. The arguably most effective way around this issue is nothing but to construct a non-trivial search direction by 1 Published as a conference paper at ICLR 2019 properly modifying the chain rule. Specifically, one can replace the a.e. zero derivative of quantized activation function composited in the chain rule with a related surrogate. This proxy derivative used in the backward pass only is referred as the straight-through estimator (STE) (Bengio et al., 2013). In the same paper, Bengio et al. (2013) proposed an alternative approach based on stochastic neurons. In addition, Friesen & Domingos (2017) proposed the feasible target propagation algorithm for learning hard-threshold (or binary activated) networks (Lee et al., 2015) via convex combinatorial optimization. 1.1 RELATED WORKS The idea of STE originates to the celebrated perceptron algorithm (Rosenblatt, 1957; 1962) in 1950s for learning single-layer perceptrons. The perceptron algorithm essentially does not calculate the “gradient” through the standard chain rule, but instead through a modified chain rule in which the derivative of identity function serves as the proxy of the original derivative of binary output function 1{x>0}. Its convergence has been extensive discussed in the literature; see for examples, (Widrow & Lehr, 1990; Freund & Schapire, 1999) and the references therein. Hinton (2012) extended this idea to train multi-layer networks with binary activations (a.k.a. binary neuron), namely, to back- propagate as if the activation had been the identity function. Bengio et al. (2013) proposed a STE variant which uses the derivative of the sigmoid function instead. In the training of DNN with weights and activations constrained to ±1, (Hubara et al., 2016) substituted the derivative of the signum activation function with 1{|x|≤1} in the backward pass, known as the saturated STE. Later the idea of STE was readily employed to the training of DNN with general quantized ReLU activations (Hubara et al., 2018; Zhou et al., 2016; Cai et al., 2017; Choi et al., 2018; Yin et al., 2018b), where some other proxies took place including the derivatives of vanilla ReLU and clipped ReLU. Despite all the empirical success of STE, there is very limited theoretical understanding of it in training DNN with stair-case activations. Goel et al. (2018) considers leaky ReLU activation of a one-hidden-layer network. They showed the convergence of the so-called Convertron algorithm, which uses the identity STE in the backward pass through the leaky ReLU layer. Other similar scenarios, where certain layers are not desirable for back-propagation, have been brought up recently by (Wang et al., 2018) and (Athalye et al., 2018). The former proposed an implicit weighted nonlocal Laplacian layer as the classifier to improve the generalization accuracy of DNN. In the backward pass, the derivative of a pre-trained fully- connected layer was used as a surrogate. To circumvent adversarial defense (Szegedy et al., 2013), (Athalye et al., 2018) introduced the backward pass differentiable approximation, which shares the same spirit as STE, and successfully broke defenses at ICLR 2018 that rely on obfuscated gradients. 1.2 MAIN CONTRIBUTIONS Throughout this paper, we shall refer to the “gradient” of loss function w.r.t. the weight variables through the STE-modified chain rule as coarse gradient. 
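Concretely, implementing an STE amounts to overriding only the backward pass of the hard-threshold activation while leaving the forward pass untouched. The following is a minimal PyTorch sketch (not the exact code used in the experiments of Section 4) with the clipped-ReLU surrogate; swapping the surrogate line changes the STE.

```python
import torch

class BinaryActSTE(torch.autograd.Function):
    """Forward: hard threshold 1{x > 0}. Backward: derivative of the clipped
    ReLU min(max(x, 0), 1), i.e. 1{0 < x < 1}, used in place of the a.e.-zero
    derivative of the threshold."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped-ReLU STE; torch.ones_like(x) would give the identity STE,
        # and (x > 0).to(x.dtype) the vanilla-ReLU STE.
        surrogate = ((x > 0) & (x < 1)).to(x.dtype)
        return grad_output * surrogate
```

The quantity that such a backward pass propagates to the trainable weights is precisely the coarse gradient.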
Since the backward and forward passes do not match, the coarse gradient is certainly not the gradient of loss function, and it is generally not the gradient of any function. Why searching in its negative direction minimizes the training loss, as this is not the standard gradient descent algorithm? Apparently, the choice of STE is non-unique, then what makes a good STE? From the optimization perspective, we take a step towards understanding STE in training quantized ReLU nets by attempting these questions. On the theoretical side, we consider three representative STEs for learning a two-linear-layer net- work with binary activation and Gaussian data: the derivatives of the identity function (Rosenblatt, 1957; Hinton, 2012; Goel et al., 2018), vanilla ReLU and the clipped ReLUs (Cai et al., 2017; Hubara et al., 2016). We adopt the model of population loss minimization (Brutzkus & Globerson, 2017; Tian, 2017; Li & Yuan, 2017; Du et al., 2018). For the first time, we prove that proper choices of STE give rise to training algorithms that are descent. Specifically, the negative expected coarse gradients based on STEs of the vanilla and clipped ReLUs are provably descent directions for the minimizing the population loss, which yield monotonically decreasing energy in the training. In contrast, this is not true for the identity STE. We further prove that the corresponding training algo- rithm can be unstable near certain local minima, because the coarse gradient may simply not vanish there. 2 Published as a conference paper at ICLR 2019 Complementary to the analysis, we examine the empirical performances of the three STEs on MNIST and CIFAR-10 classifications with general quantized ReLU. While both vanilla and clipped ReLUs work very well on the relatively shallow LeNet-5, clipped ReLU STE is arguably the best for the deeper VGG-11 and ResNet-20. In our CIFAR experiments in section 4.2, we observe that the training using identity or ReLU STE can be unstable at good minima and repelled to an inferior one with substantially higher training loss and decreased generalization accuracy. This is an impli- cation that poor STEs generate coarse gradients incompatible with the energy landscape, which is consistent with our theoretical finding about the identity STE. To our knowledge, convergence guarantees of perceptron algorithm (Rosenblatt, 1957; 1962) and Convertron algorithm (Goel et al., 2018) were proved for the identity STE. It is worth noting that Convertron (Goel et al., 2018) makes weaker assumptions than in this paper. These results, however, do not generalize to the network with two trainable layers studied here. As aforementioned, the identity STE is actually a poor choice in our case. Moreover, it is not clear if their analyses can be extended to other STEs. Similar to Convertron with leaky ReLU, the monotonicity of quantized activation function plays a role in coarse gradient descent. Indeed, all three STEs considered here exploit this property. But this is not the whole story. A great STE like the clipped ReLU matches quantized ReLU at the extrema, otherwise the instability/incompatibility issue may arise. Organization. In section 2, we study the energy landscape of a two-linear-layer network with binary activation and Gaussian data. We present the main results and sketch the mathematical analysis for STE in section 3. 
In section 4, we compare the empirical performances of different STEs in 2- bit and 4-bit activation quantization, and report the instability phenomena of the training algorithms associated with poor STEs observed in CIFAR experiments. Due to space limitation, all the technical proofs as well as some figures are deferred to the appendix. Notations. || - || denotes the Euclidean norm of a vector or the spectral norm of a matrix. 0, € R” represents the vector of all zeros, whereas 1,, € IR” the vector of all ones. I, is the identity matrix of order n. For any w, z € R", w'z = (w,z) = >? ; wiz: is their inner product. w © z denotes the Hadamard product whose i“ entry is given by (w © z); = wizi. # 2 LEARNING TWO-LINEAR-LAYER CNN WITH BINARY ACTIVATION We consider a model similar to (Du et al., 2018) that outputs the prediction y(Z, v, w) = a) =v'o(Zw) i=l for some input Z € R™*". Here w € R” and v € R™ are the trainable weights in the first and second linear layer, respectively; ZI denotes the ith row vector of Z; the activation function ¢ acts component-wise on the vector Zw, i.e., o(Zw); = o((Zw);) = o(Z} w). The first layer serves as a convolutional layer, where each row ZI can be viewed as a patch sampled from Z and the weight filter w is shared among all Oyret and the second linear layer is the classifier. The label is generated according to y*(Z) = T¢(Zw*) for some true (non-zero) parameters v* and w*. Moreover, we use the following sauwoa sample loss (v,w: Z) = Deuce ~ y" (Z)) = 5 (v o(Zw) — y"(2))”. a) Unlike in , the activation function o here is not ReLU, but the binary function a(x) = ie We assume “a the entries of Z € R™*” are iid. sampled from the Gaussian distribution \(0, 1) (Zhong ot a1] BOTT forutzkus & Groberson] POT}. § Since ((v, w; Z) = €(v, w/c; Z) for any scalar c > 0, without loss of generality, we take = 1 and cast the learning task as the following population loss minimization problem: min f(v,w) := Ez [e(v, w;Z)], (2) vER™ weR” where the sample loss £(v, w; Z) is given by (ip. 3 Published as a conference paper at ICLR 2019 # 2.1 BACK-PROPAGATION AND COARSE GRADIENT DESCENT With the Gaussian assumption on Z, as will be shown in section 2.2, it is possible to find the analytic expressions of f (v, w) and its gradient The gradient of objective function, however, is not available for the network training. In fact, we can only access the expected sample gradient, namely, oe oe Ez [seco 0:2)] and Ez, [Ftv wit) : # oe [Ftv wit) : 2f(v,w) = Bz (6(o.wiZ)] We remark that Ez [24(v, w; Z)] is not the same as 2f(v,w) = Bz (6(o.wiZ)] back-propagation or chain rule, we readily check that . By the standard 5 (v,w;Z) = (Zw) (v" o(Zw) — y"(Z)) G) and ae ag (Ow w;Z) =Z" (o' (Zw) © v) (v' o(Zw) - y'(Z)); (4) ww Note that 0’ is zero a.e., which makes (4) inapplicable to the training. The idea of STE is to simpl replace the a.e. zero component 0’ in (4) with a related non-trivial function j.’ (Hinton et al. 2013} Hubara et al. 2016} (Cai et al.||2017), which is the derivative of some (sub)differentiable function jz. 
More precisely, back-propagation using the STE jy’ gives the following non-trivial sur- rogate of 55 oe “a (V, w; Z), to which we refer as the coarse (partial) gradient g,.(v, w; Z) = Z" (u/(Zw) © v) (v" o(Zw) - y"(Z)): (5) Using the STE y’ to train the two-linear-layer convolutional neural network (CNN) with binary activation gives rise to the (full-batch) coarse gradient descent described in Algorithm[]] Algorithm 1 Coarse gradient descent for learning two-linear-layer CNN with STE i’. Input: initialization v° € R™, w° € R”, learning rate 7. Input: initialization v° € R™, w° € R”, learning rate fort = 0,1, ... do ott = yt — n Ez [So (v', w';Z)] wit = wl — 7 Ez [gu(v', w'; Z)| end for 2.2 PRELIMINARIES Let us present some preliminaries about the landscape of the population loss function f(v, w). To this end, we define the angle between w and w* as 0(w,w*) := arccos | for any w # On. Recall that the label is given by y*(Z) = (v*)'Zw* from (ip. we elaborate on the analytic expressions of f(v, w) and Vf(v, w). Lemma 1. [fw Â¥ 0,,, the population loss f(v,w) is given by 1 2 * * g 0. (En + Lmdn)e — 207 ((.- =0(w,w )) 1,4 Ip) 0° + (Â¥*)" (Zn + Lind jn)Â¥ | TT (v*)"(L, mt 1n1f,)v # v* for w = On. In addition, f(v,w) (v*)"(L, mt 1n1f,)v v* for w = On. Lemma 2. [fw # 0, O(w, w*) € (0,7), the partial gradients of f(v,w) w.rt. v and w are of 1 1 2 # m of 1 1 2 . vt (v,w) = F(Im + Imndn)e = 5 (1 ~ 26(w,w )) In + Ip) 0° (6) 4 . Published as a conference paper at ICLR 2019 and : T ay (In — wu )w* oF (yw) =~ 2 el wi (7) Ow 7 || ew ww ‘ | - fe) respectively. For any v ∈ Rm, (v, 0m) is impossible to be a local minimizer. The only possible (local) minimizers of the model (2) are located at 1. Stationary points where the gradients given by (6) and (7) vanish simultaneously (which may not be possible), i.e., * -1 2 * * v'v* =Oand v = (Im +1m1),) (1 — =0(w, w )) In+ Int) v*. (8) 2. Non-differentiable points where 0(w, w*) = 0 and v = v*, or 0(w,w*) = mand v = -1 . (Im +1m1f,) m1, — Im)v*. m Among them, {(v,w) : v = v*, 0(w, w*) = 0} are obviously the global minimizers of (2h. We show that the stationary points, if exist, can only be saddle points, and {(v,w) : 0(w,w*) = T,V= (In + 1n1),) nth, — I;,)v*} are the only potential spurious local minimizers. Proposition 1. [f the true parameter v* satisfies (1},v*)? < ™£4||v*||?, then m m − Im)v∗} are the only potential spurious local minimizers. # m mv∗)2 < m+1 < ™£4||v*||?, then 0")? m In 4 Inn) =e")? (m+ 1)|Iv*|? — 0")? m { (v.20) 0 = (In +1m1n)* ( In 4 Inn) vw, z_minivit _} 2 (m + 1)|lv* |? — ne")? O(w, w*) ne")? give the saddle points obeying sh, and {(v,w) : 0(w,w*) =m, v = (Im + Und) m1, - I,,,)v* } are the spurious local minimizers. Otherwise, the model (2) has no saddle points or spurious local minimizers. We further prove that the population gradient V f(v, w) given by (6 and (7h. is Lipschitz continuous when restricted to bounded domains. Lemma 3. For any differentiable points (v,w) and (6, W) with min{||w|, ||w||} = cw > 0 and max{||v]], ||O||} = Cy, there exists a Lipschitz constant L > 0 depending on Cy and Cw, such that |Vf(w, w) — VF(@, w)|| < LE\(v, w) — (8, w)I|- # 3 MAIN RESULTS We are most interested in the complex case where both the saddle points and spurious local mini- mizers are present. Our main results are concerned with the behaviors of the coarse gradient descent summarized in Algorithm [I] when the derivatives of the vanilla and clipped ReLUs as well as the identity function serve as the STE, respectively. 
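For concreteness, Algorithm 1 can be written out directly from Equations 3 and 5. The sketch below approximates the population expectation E_Z with a large i.i.d. Gaussian batch; the sizes and step size are illustrative rather than tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(x):                                   # binary activation 1{x > 0}
    return (x > 0).astype(float)

def mu_prime(x, ste="clipped_relu"):            # derivative of the chosen STE
    if ste == "identity":
        return np.ones_like(x)
    if ste == "relu":                           # derivative of max(x, 0)
        return (x > 0).astype(float)
    return ((x > 0) & (x < 1)).astype(float)    # derivative of min(max(x, 0), 1)

def coarse_gradient_step(v, w, v_star, w_star, eta=0.05, batch=4096, ste="clipped_relu"):
    """One iteration of Algorithm 1; the batch average stands in for E_Z."""
    m, n = len(v), len(w)
    Z = rng.standard_normal((batch, m, n))
    act = sigma(Z @ w)                                        # (batch, m)
    err = act @ v - sigma(Z @ w_star) @ v_star                # residual per sample
    grad_v = (act * err[:, None]).mean(0)                     # Eq. (3), averaged
    g_w = np.einsum('bmn,bm,b->n', Z, mu_prime(Z @ w, ste) * v, err) / batch  # Eq. (5)
    return v - eta * grad_v, w - eta * g_w
```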
We shall prove that Algorithm[I]using the derivative of vanilla or clipped ReLU converges to a critical point, whereas that with the identity STE does not. Theorem 1 (Convergence). Let {(v', w')} be the sequence generated by Algorithm[I] with ReLU h(a) = max{x,0} or clipped ReLU p(x) = min {max{x,0}, 1}. Suppose ||w'|| > cw for all t with some Cy > 0. Then if the learning rate n > 0 is sufficiently small, for any initialization (v°, w°), the objective sequence {f(v',w')} is monotonically decreasing, and {(v',w')} con- verges to a saddle point or a (local) minimizer of the population loss minimization 2). In addi- tion, if L},v* 4 0 and m > 1, the descent and convergence properties do not hold for Algorithm with the identity function .(x) = x near the local minimizers satisfying 0(w,w*) = a and 8 = (Im +11) (mI jn, — In)”. Remark 1. The convergence guarantee for the coarse gradient descent is established under the assumption that there are infinite training samples. When there are only a few data, ina coarse scale, the empirical loss roughly descends along the direction of negative coarse gradient, as illustrated by Figure] As the sample size increases, the empirical loss gains monotonicity and smoothness. This explains why (proper) STE works so well with massive amounts of data as in deep learning. 5 (9) Published as a conference paper at ICLR 2019 Remark 2. The same results hold, if the Gaussian assumption on the input data is weakened to that their rows i.i.d. follow some rotation-invariant distribution. The proof will be substantially similar. In the rest of this section, we sketch the mathematical analysis for the main results. sample size = 10 sample size = 50 sample size = 1000 2 3 O15 9 a ee ae = | fe £05 —, o 0 0 005 01 015 02 learning rate 7 20 8. | 01.5 \ = l\ = \ fe £05} o SN 0 0 005 01 015 02 learning rate 7 2 3 01.5\ = = fe £05 o 0 0 005 01 015 02 learning rate 7 Figure 1: The plots of the empirical loss moving by one step in the direction of negative coarse gradient v.s. the learning rate (step size) η for different sample sizes. 3.1 DERIVATIVE OF THE VANILLA RELU AS STE If we choose the derivative of ReLU y(”) = max{a,0} as the STE in (5), it is easy to see h(x) = a(x), and we have the following expressions of Ez, [ 24(v, w; Z)| ee b, [grota(v, w;Z) for Algorithm|[T] Lemma 4. The expected partial gradient of ¢(v,w;Z) wrt. v is Ba [soto 2)] = Fol (v, w). (10) Let µ(x) = max{x, 0} in (5). The expected coarse gradient w.r.t. w is h(v,v") we cos (A) viv* Tel + w* 2/27 ||w\l 2 Von lear 4 w* w qd) Ez [doein(v, w; Z)| # EZ where h(v, v*) = |lv||? + (1,2)? — (1,v)(dnv") +0". As stated in Lemma below, the key observation is that the coarse partial gradient Ez [grora(v, 3 Z)| has non-negative correlation with the population partial gradient of (v,w), and —Ez|@gretu(v, w; Z)| together with —Ez [3 (v, w; Z)| form a descent direction for minimiz- ing the population loss. Lemma 5. [fw 4 0,, and 0(w, w*) € (0,7), then the inner product between the expected coarse and population gradients w.r.t. w is (Bs [rein (vs w; Z)). of (v, w)) SCS (vl v*)? >0 Moreover, if further ||v|| < Cy and ||w|| > cw, there exists a constant Aye, > 0 depending on C, and Cw, such that [Ez [grotu(v, ws Z)| if < Avetu (oe ° + (Bs [aoutom:2)) $20.0) . (12) Clearly, when (Bz [grata(v, w; Z)). of (v, w)) > 0, Ez [groin (v, Z)| is roughly in the same Ow direction as of (v,w). 
Moreover, since by Lemma|4| Ez [“(v,w;Z)] = of (v, w), we expect 1We redefine the second term as 0n in the case θ(w, w∗) = π, or equivalently, w # Ter +w* =0n. 6 Published as a conference paper at ICLR 2019 that the coarse gradient descent behaves like the gradient descent directly on f (v, w). Here we would like to highlight the significance of the estimate (12) in guaranteeing the descent property of Algorithm 1. By the Lipschitz continuity of ∇f specified in Lemma 3, it holds that of of t4H1 patty — F(ast apt GT (at at) qyttl — at PT Eat apt) ayttl — apt fv, w'"*) — f(v',w') < (Fie ao!).2 vf) + (SE (ol a) a0 w') L 5 + 5 (lot? — vo"? + [leat — w' |) Lr? \ || Of -(n- 2 ) Ov of t —n aw (v' w'), Bz [grora(v’, w' )) (oss) fhe (vA) (thie) (v', w') Lp 2 . OF | [Bz [grem(v", w': Z)} || e ) IA (13) where a) is due to (12). Therefore, if η is small enough, we have monotonically decreasing energy until convergence. Ov Lemma 6. When Algorithm|1\converges, Ez [6 v,W; Z)| and Ez [rota(v, w; Z)| vanish simul- taneously, which only occurs at the 1. Saddle points where (8) is satisfied according to Proposition 1. 2. Minimizers of (2) 12) where v = v*, O(w, w*) = 0, or v = (Im tind) 1m, —Im)v*, O(w,w*) =7. Lemma 6 states that when Algorithm 1 using ReLU STE converges, it can only converge to a critical point of the population loss function. 3.2 DERIVATIVE OF THE CLIPPED RELU AS STE For the STE using clipped ReLU, p(x) = min {max{x,0},1} and p/(x) = lyocx<1}(x). We have results similar to Lemmas [5] Bland 6] That is, the coarse partial gradient using clipped ReLU STE Ez [geraiu(v, w; :2)| generally has positive correlation with the true partial gradient of the population loss 5 of 5 (U, w) (Lemmal7). Moreover, the coarse gradient vanishes and only vanishes at the critical points “(Lemmals}. Lemma 7. If w # 0, and 0(w, w*) € (0,7), then # w 0, w)h(v,v*) w . wy + Ez [gerem(v, w; Z)] = PO who) we — (vt v*) esc(0/2) - 4(0,w) Tal te al Il ear +w* w ~ (oT) (p(0,w) ~ cot(8/2) (0, w)) Pn, where h(v, v*) := | ||? + (Aj)? — (Lf,v)(Lu*) + vl v* same as in Lemma{3| and (Aj)? — ae) (Lf,v)(Lu*) # + vl v* same as in Lemma{3| and nro 2 [moe ae) votnin 2 [oe (a with €(x) = So mee # 0 r2 exp(− r2 2 )dr. The inner product between the expected coarse and true gradients Di 110.20) yr ye (Ba [dean (v. 0s 2)], SA (v.10) ) = SO oT or)? 0. 7 (13) Published as a conference paper at ICLR 2019 Moreover, if further ||v|| < Cy and ||w|| > Cw, there exists a constant Acrely > 0 depending on Cy and Cw, such that 2 (2) + (Bs [geroin(v, 0; Z)| : oF iw, »)) : |2x[aountean 2] <a (| 3f cen Lemma 8. When Algorithm|1|converges, Bz | 24 (v, w; Z)| and Ez [geve1n(, w; Z)| vanish simul- taneously, which only occurs at the 1. Saddle points where (8) is satisfied according to Proposition 1. 2. Minimizers of (2) 12) where v = v*, O(w, w*) = 0, or v = (Ln +1m1),)71 (Amd, —In)v*, O(w,w*) =7. 3.3 DERIVATIVE OF THE IDENTITY FUNCTION AS STE Now we consider the derivative of identity function. Similar results to Lemmas 5 and 6 are not valid anymore. It happens that the coarse gradient derived from the identity STE does not vanish at local minima, and Algorithm 1 may never converge there. Lemma 9. Let µ(x) = x in (5). Then the expected coarse partial gradient w.r.t. w is Bz|gia(v, w;Z)] = se (le ?— wv - (ote) (14) ||| # a # If 0(w, w*) = mand v = (Ip + Unt) If 0(w, w*) = mand v = (Ip + Unt) —I,)v*, 2(m — 1) V2n(m + 1)? [Ez [gia(, w;Z) )| | - = (1f.v*)? 
>0 ’ # [gia(, Ww; Z)| # i.e., EZ ie, Ez [gia(, Ww; Z)| does not vanish at the local minimizers if 1},v* # 0 and m > 1. Lemma 10. [fw # 0,, and 0(w, w*) € (0,7), then the inner product between the expected coarse and true gradients w.r.t. w is (Bz [sia(v. ww: 2], OF (ow) eee >0. as) When 0(w, w*) > 7, 0 > (Im +1m1),)~ ‘andl, —I,)v*, if 1,v* 4 0and m > 1, we have —I,)v*, if 1,v* [2e[ouow:2] 2L(v,w)|) + + (Ez [gia(. w; :D)|. 3. o£ (vy, w)) +o. (16) Lemma [| suggests that if L|,v* 4 0, the coarse gradient descent will never converge near the I,,)v*, because spurious minimizers with 6(w,w*) = 7 and v = (In + Ln) ‘dnl, - Ez|gia(v, w; Z)| does not vanish there. By the positive correlation implied by {15} of Lemma for some proper (v°, w ”), the iterates {(v', w’)} may move towards a local minimizer in the beginning. But when {(v‘,w')} approaches it, the descent property ) does not hold for Ez [gia(v, w; Z)] because of (16} (9, hence the training loss begins to increase and instability arises. # 4 EXPERIMENTS While our theory implies that both vanilla and clipped ReLUs learn a two-linear-layer CNN, their empirical performances on deeper nets are different. In this section, we compare the performances of the identity, ReLU and clipped ReLU STEs on MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky, 2009) benchmarks for 2-bit or 4-bit quantized activations. As an illustration, we plot 8 Published as a conference paper at ICLR 2019 the 2-bit quantized ReLU and its associated clipped ReLU in Figure 3 in the appendix. Intuitively, the clipped ReLU should be the best performer, as it best approximates the original quantized ReLU. We also report the instability issue of the training algorithm when using an improper STE in section 4.2. In all experiments, the weights are kept float. The resolution α for the quantized ReLU needs to be carefully chosen to maintain the full-precision level accuracy. To this end, we follow (Cai et al., 2017) and resort to a modified batch normalization layer (Ioffe & Szegedy, 2015) without the scale and shift, whose output components approximately follow a unit Gaussian distribution. Then the α that fits the input of activation layer the best can be pre-computed by a variant of Lloyd’s algorithm (Lloyd, 1982; Yin et al., 2018a) applied to a set of simulated 1-D half-Gaussian data. After determining the α, it will be fixed during the whole training process. Since the original LeNet-5 does not have batch normalization, we add one prior to each activation layer. We emphasize that we are not claiming the superiority of the quantization approach used here, as it is nothing but the HWGQ (Cai et al., 2017), except we consider the uniform quantization. The optimizer we use is the stochastic (coarse) gradient descent with momentum = 0.9 for all ex- periments. We train 50 epochs for LeNet-5 (LeCun et al., 1998) on MNIST, and 200 epochs for VGG-11 (Simonyan & Zisserman, 2014) and ResNet-20 (He et al., 2016) on CIFAR-10. The pa- rameters/weights are initialized with those from their pre-trained full-precision counterparts. The schedule of the learning rate is specified in Table 2 in the appendix. 4.1 COMPARISON RESULTS The experimental results are summarized in Table 1, where we record both the training losses and validation accuracies. Among the three STEs, the derivative of clipped ReLU gives the best overall performance, followed by vanilla ReLU and then by the identity function. For deeper networks, clipped ReLU is the best performer. 
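As a concrete illustration of the activation quantizers being compared, a k-bit uniform quantized ReLU with the associated clipped-ReLU STE (cf. Figure 3) can be written as a custom autograd function. This is only a sketch: the exact quantization thresholds and the treatment of α may differ from the HWGQ-based implementation used here.

```python
import torch

class QuantReLU(torch.autograd.Function):
    """Forward: uniform k-bit quantized ReLU with levels 0, a, ..., (2^k - 1) * a.
    Backward: derivative of the clipped ReLU min(max(x, 0), (2^k - 1) * a)."""

    @staticmethod
    def forward(ctx, x, alpha, bits):
        n_pos = 2 ** bits - 1                     # e.g. 3 non-zero levels for 2 bits
        ctx.save_for_backward(x)
        ctx.alpha, ctx.n_pos = alpha, n_pos
        return alpha * torch.clamp(torch.round(x / alpha), 0, n_pos)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped-ReLU STE: 1{0 < x < n_pos * alpha}
        surrogate = ((x > 0) & (x < ctx.n_pos * ctx.alpha)).to(grad_output.dtype)
        return grad_output * surrogate, None, None   # no gradient for alpha or bits

# Usage (illustrative): y = QuantReLU.apply(x, alpha, 2)
```

In the experiments only the activations pass through such a quantizer; the weights remain in full precision and α stays fixed after the Lloyd-style pre-computation described above.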
But on the relatively shallow LeNet-5 network, vanilla ReLU exhibits comparable performance to the clipped ReLU, which is somewhat in line with our theoret- ical finding that ReLU is a great STE for learning the two-linear-layer (shallow) CNN. MNIST CIFAR10 Network LeNet5 VGG11 ResNet20 BitWidth 2 4 2 4 2 4 identity 2.6 × 10−2/98.49 6.0 × 10−3/98.98 0.19/86.58 3.1 × 10−2/90.19 1.56/46.52 1.38/54.16 Straight-through estimator vanilla ReLU 5.1 × 10−3/99.24 9.0 × 10−4/99.32 0.10/88.69 1.5 × 10−3/92.01 1.50/48.05 0.25/86.59 clipped ReLU 5.4 × 10−3/99.23 8.8 × 10−4/99.24 0.02/90.92 1.3 × 10−3/92.08 0.24/88.39 0.04/91.24 Table 1: Training loss/validation accuracy (%) on MNIST and CIFAR-10 with quantized activations and float weights, for STEs using derivatives of the identity function, vanilla ReLU and clipped ReLU at bit-widths 2 and 4. 4.2 INSTABILITY We report the phenomenon of being repelled from a good minimum on ResNet-20 with 4-bit acti- vations when using the identity STE, to demonstrate the instability issue as predicted in Theorem 1. By Table 1, the coarse gradient descent algorithms using the vanilla and clipped ReLUs converge to the neighborhoods of the minima with validation accuracies (training losses) of 86.59% (0.25) and 91.24% (0.04), respectively, whereas that using the identity STE gives 54.16% (1.38). Note that the landscape of the empirical loss function does not depend on which STE is used in the training. Then we initialize training with the two improved minima and use the identity STE. To see if the algorithm is stable there, we start the training with a tiny learning rate of 10−5. For both initializations, the training loss and validation error significantly increase within the first 20 epochs; see Figure 4.2. To speedup training, at epoch 20, we switch to the normal schedule of learning rate specified in Table 2 and run 200 additional epochs. The training using the identity STE ends up with a much worse minimum. This is because the coarse gradient with identity STE does not vanish at the good minima in this case (Lemma 9). Similarly, the poor performance of ReLU STE on 2-bit activated ResNet-20 9 Published as a conference paper at ICLR 2019 is also due to the instability of the corresponding training algorithm at good minima, as illustrated by Figure 4 in Appendix C, although it diverges much slower. z= — CReLU 47.48% | 380 —— ReLU 49.73% oO ee | = g 60 5 40 i g 20 0 50 100 150 200 epoch 5 fee CReLU 153 a — RelU 148 wo 2 2 3 c s? 5, 0 0 50 100 150 200 epoch 5 fee CReLU 153 z= — CReLU 47.48% | a — RelU 148 380 —— ReLU 49.73% wo oO ee | 2 = 2 3 g 60 c s? 5 40 5, i 0 g 20 0 50 100 150 200 0 50 100 150 200 epoch epoch Figure 2: When initialized with weights (good minima) produced by the vanilla (orange) and clipped (blue) ReLUs on ResNet-20 with 4-bit activations, the coarse gradient descent using the identity STE ends up being repelled from there. The learning rate is set to 10−5 until epoch 20. # 5 CONCLUDING REMARKS We provided the first theoretical justification for the concept of STE that it gives rise to descent training algorithm. We considered three STEs: the derivatives of the identity function, vanilla ReLU and clipped ReLU, for learning a two-linear-layer CNN with binary activation. 
We derived the explicit formulas of the expected coarse gradients corresponding to the STEs, and showed that the negative expected coarse gradients based on vanilla and clipped ReLUs are descent directions for minimizing the population loss, whereas the identity STE is not since it generates a coarse gradient incompatible with the energy landscape. The instability/incompatibility issue was confirmed in CIFAR experiments for improper choices of STE. In the future work, we aim further understanding of coarse gradient descent for large-scale optimization problems with intractable gradients. ACKNOWLEDGMENTS This work was partially supported by NSF grants DMS-1522383, IIS-1632935, ONR grant N00014- 18-1-2527, AFOSR grant FA9550-18-0167, DOE grant DE-SC0013839 and STROBE STC NSF grant DMR-1548924. # REFERENCES Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. arXiv preprint arXiv:1702.07966, 2017. Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In IEEE Conference on Computer Vision and Pattern Recogni- tion, 2017. Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srini- vasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018. Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, pp. 160–167. ACM, 2008. 10 Published as a conference paper at ICLR 2019 Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123–3131, 2015. Simon S. Du, Jason D. Lee, Yuandong Tian, Barnabas Poczos, and Aarti Singh. Gradient de- scent learns one-hidden-layer CNN: Don’t be afraid of spurious local minimum. arXiv preprint arXiv:1712.00779, 2018. Yoav Freund and Robert E Schapire. Large margin classification using the perceptron algorithm. Machine learning, 37(3):277–296, 1999. Abram L Friesen and Pedro Domingos. Deep learning as a mixed convex-combinatorial optimiza- tion problem. arXiv preprint arXiv:1710.11573, 2017. Surbhi Goel, Adam Klivans, and Raghu Meka. Learning one convolutional layer with overlapping patches. arXiv preprint arXiv:1802.02547, 2018. Juncai He, Lin Li, Jinchao Xu, and Chunyue Zheng. ReLU deep neural networks and linear finite elements. arXiv preprint arXiv:1807.03973, 2018. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Geoffrey Hinton. Neural networks for machine learning, coursera. Coursera, video lectures, 2012. Lu Hou and James T Kwok. Loss-aware weight quantization of deep networks. arXiv preprint arXiv:1802.08635, 2018. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 
Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016. Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18:1–30, 2018. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Alex Krizhevsky. Learning multiple layers of features from tiny images. Tech Report, 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo- lutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012. Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, and Yoshua Bengio. Difference target propagation. In Joint european conference on machine learning and knowledge discovery in databases, pp. 498–515. Springer, 2015. Fengfu Li, Bo Zhang, and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016. Hao Li, Soham De, Zheng Xu, Christoph Studer, Hanan Samet, and Tom Goldstein. Training quantized nets: A deeper understanding. In Advances in Neural Information Processing Systems, pp. 5811–5821, 2017. Qianxiao Li and Shuji Hao. An optimal control approach to deep learning and applications to discrete-weight neural networks. In International Conference on Machine Learning, 2018. Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with relu activation. In Advances in Neural Information Processing Systems, pp. 597–607, 2017. 11 Published as a conference paper at ICLR 2019 Stuart P. Lloyd. Least squares quantization in PCM. IEEE Trans. Info. Theory, 28:129–137, 1982. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Belle- mare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pp. 525–542. Springer, 2016. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing systems, pp. 91–99, 2015. Frank Rosenblatt. The perceptron, a perceiving and recognizing automaton Project Para. Cornell Aeronautical Laboratory, 1957. Frank Rosenblatt. Principles of neurodynamics. Spartan Book, 1962. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484, 2016. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. Yuandong Tian. 
An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. arXiv preprint arXiv:1703.00560, 2017. Bao Wang, Xiyang Luo, Zhen Li, Wei Zhu, Zuoqiang Shi, and Stanley J Osher. Deep neural nets with interpolating function as output activation. In Advances in Neural Information Processing Systems, 2018. Bernard Widrow and Michael A Lehr. 30 years of adaptive neural networks: perceptron, madaline, and backpropagation. Proceedings of the IEEE, 78(9):1415–1442, 1990. Penghang Yin, Shuai Zhang, Yingyong Qi, and Jack Xin. Quantization and training of low bit-width convolutional neural networks for object detection. arXiv preprint arXiv:1612.06052, 2016. Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, and Jack Xin. Bina- ryrelax: A relaxation approach for training deep neural networks with quantized weights. arXiv preprint arXiv:1801.06313, 2018a. Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, and Jack Xin. Blended coarse gradient descent for full quantization of deep neural networks. arXiv preprint arXiv:1808.05240, 2018b. Kai Zhong, Zhao Song, Prateek Jain, Peter L Bartlett, and Inderjit S Dhillon. Recovery guarantees for one-hidden-layer neural networks. arXiv preprint arXiv:1706.03175, 2017. Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Train- ing low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016. 12 Published as a conference paper at ICLR 2019 # APPENDIX THE PLOTS OF QUANTIZED AND CLIPPED RELUS quantized ReLU clipped ReLU o(a) 3a 2a} — a O ai da 3a ea G(x) 3a 2a a O ai da 3a ea Figure 3: The plots of 2-bit quantized ReLU σα(x) (with 22 = 4 quantization levels including 0) and the associated clipped ReLU ˜σα(x). α is the resolution determined in advance of the network training. B. # THE SCHEDULE OF LEARNING RATE Network LeNet5 VGG11 ResNet20 # epochs Batch size 50 200 200 64 128 128 initial 0.1 0.01 0.01 Learning rate decay rate milestone [20,40] [80,140] [80,140] 0.1 0.1 0.1 Table 2: The schedule of the learning rate. C. INSTABILITY OF RELU STE ON RESNET-20 WITH 2-BIT ACTIVATIONS 0.8 a fe} 0.6 Ss S £ 0.4 0 50 100 150 200 epoch = > 85 ig i 3 80 © < £75 ® z $70 ie} 50 100 150 200 epoch = 0.8 > 85 ig a i fe} 3 80 0.6 © Ss < S £75 £ 0.4 ® z $70 ie} 50 100 150 200 0 50 100 150 200 epoch epoch Figure 4: When initialized with the weights produced by the clipped ReLU STE on ResNet-20 with 2-bit activations (88.38% validation accuracy), the coarse gradient descent using the ReLU STE with 10−5 learning rate is not stable there, and both classification and training errors begin to increase. D. ADDITIONAL SUPPORTING LEMMAS Lemma 11. Let z ∈ Rn be a Gaussian random vector with entries i.i.d. sampled from N (0, 1). Given nonzero vectors w, ˜w ∈ Rn with the angle θ, we have 1 E [p27 w>0}] = > E [Vy2Tw>0,27H>0}1 = nm—0 Qn? , 13 Published as a conference paper at ICLR 2019 and 1 cos(6/2) Tan + teen] = aeiag Beer wooar wool = “Yae e TL Proof of Lemma[II] The third identity was proved in Lemma A.1 of ors}. To with wy show the first one, without loss of generality we assume w = [w1,0,!_, > 0, then E [1gztws0}] = Plas > 0) = § We further assume w = [t1, t2,0,!_»]'. It is easy to see that t2,0,!_»]'. It is easy to see that 20 E[ltetw>0,27w>0}] = P(z'w > 0, 2'w > 0) = . 
To prove the last identity, we use polar representation of two-dimensional Gaussian random vari- ables, where r is the radius and ¢ is the angle with dP,. = r exp(—r?/2)dr and dP, = xd. Then E [2:1 ¢27 w>0,27w>0}| = 0 for i > 3. Moreover, 1s r? 2 _ i+ cos(6 E [211 27 w>0,27 w>0}] = Pal r” exp (-5) ar [ exo cos(¢)d¢ pa ) -3 and 1° r2 z ; sin(@ E [221 {27 w>0,27w>0}] = Pal 7” exp (-5) ar [ a sin($)d@ = oe 72 Therefore, al;,+ _ _ cos(0/2) cos sin + 7 _ cos(0/2) Ter + TSI E [21 (27 w>0,27>0}] ie [cos(8/2), sin(/2), 0,2)" = To =a qe ae + P w 1 Ts where the last equality holds because Teor and a are two unit-normed vectors with ang Tet + Per 6/2. 8 a le ES Lemma 12. Let z ∈ Rn be a Gaussian random vector with entries i.i.d. sampled from N (0, 1). Given nonzero vectors w, ˜w ∈ Rn with the angle θ, we have E [zl pocgt wei} = p(0, w) Teo’ E [21 oceTw<1,27H>0}| = ((p(0, w)) — cot(0/2) - q(0, w)) [el + esc(0/2) - q(0, w) Ter + ical Here p(8,w) = aed ga costo)6 (Tr) ae al) = 35 I. sn(os (Ta) with E(x) = for’ 2 exp(—5)dr satisfying €(+-00) = p(t, w) = q(0, w) = q(7,w) = 0. In addition, >] Same as in Lemmaf4] we redefine E [Zl pet w>o, 2Tw>oy] = 0,, in the case 0(w, w) = 7. 14 . # dφ , Published as a conference paper at ICLR 2019 Proof of Lemma{[i2} Following the proof of Lemma|I]| we let w = [w1,0,!_,]' with w, > 0, w = [ti, w2,0,'_]", and use the polar representation of two-dimensional Gaussian distribution. Then sec() 1 3 Ter 2 ; E[algocetwei}] = 5 | _ cos(#) [ 7? exp (-5) drdé = p(0,w) 72 and sec(p) 2 sin(o) [ rl r? exp (-5) drd¢ = q(0, w) = 0, 0 wa 1 E [221 ocnTw<1}] = =|. z since the integrand above is an odd function in ¢. Moreover, E [Zilpocet we1}] = 0 fori > 3. So the first identity holds. For the second one, we have x sec($) 3 1 2 ; w r2 j E [211 oceTw<iz7 w>0}) = on | ; cos(o) [ r? exp (-5) drd¢ = p(0, w), -3+ and similarly, E [221 ,o<¢7 w<1,27w>0}| = 9(9; w). Therefore, Elz1 oc2t weiztw>o}] = [p(O, w), (8, w), 0,9] '- . Tett Ter iti Since poy = [1 01_,]' and Tes) = = [cos(6/2), sin(@/2),0,'_4]', it is easy to check the second identity is also true. Lemma 13. Let p(θ, w) and q(θ, w) be defined in Lemma 12. Then for θ ∈ [ π 2 , π], we have 1. p(0,w) < q(0,w). 2. (1 - 2) p(0, w) < p(0, w) < q(0, w). π # π Proof of Lemma 13. 1. Let θ ∈ [ π 2 , π], since ξ ≥ 0, wir = coo 8) ae 3 ff an(S)e( a 2 = f* snl 0 (2) ao < p stay (222) a6 = (00, IA 2n |[2o|| 5 ia] where the last inequality is due to the rearrangement inequality since both sin(φ) and ξ are increasing in φ on [0, π cos(¢)& (3) is even, we have Teel] wis) m2 [te ef sno (ap) za [800 (Far) Ea 3 ie [ <s 0 2. Since cos(φ)ξ The first inequality is due to part 1 which gives p(π/2, w) ≤ q(π/2, w), whereas the second one holds because sin(φ)ξ Lemma 14. For any nonzero vectors w and w with ||w|| > ||w|| = c > 0, we have 15 (a3 7) Published as a conference paper at ICLR 2019 1. |0(w, w*) — 0(w, w*)| < F|lw — w}. ww l * ww! * I,-2Â¥y }w In e w 1 Pel IC. ester ra | (e225 Je < allw — all. 2. Proof of Lemma[I4] 1. Since by Cauchy-Schwarz inequality, (ww - =) = iw —elaill <0, ||| we have c cw 2 c 2 i» ||? tee [Oped Cte) = 1 ae) Fei ||| |[2o| ||| |[2o| ~ 4/2 > |w- 5 ae ° (17) [al |[2o| Therefore, w w O(w, w*) — 0(w, w*)| < O(w, w (i a) | (ew S60) =O Teal ral eg OC rer) ) ew <msin < 7 Iw — wll, 2 ||| rl =e where we used the fact sin(x) ≥ 2x π for x ∈ [0, π 2 ] and the estimate in (17). 2. 
Since (In- Teo 7) w* is the projection of w* onto the complement space of w, and likewise for (In - iar) w* , the angle between [( I, — an )w* and (In - iar )w* is equal to the angle Trey] al between w and w. Therefore, — a (2, (1. — tz) w" | and thus 1 (In _ qty) w* 1 (In _ we )w* Wei |e er] P(E ase) | jar - gare fone Sale [wll feo? J] feel lleo = The second equality above holds because w |? 1 1 2(w, to) \|w — wl]? lier / felP [[eo||? © eo||? feo]? em]? [lew ew?” E. MAIN PROOFS Lemma 1. [fw # 0), the population loss f(v,w) is given by 1 2 :| "(Im +1m1),)v — 20° ((.- 2 4(w,10")) In 4 Ind) 0° + (v*)" (Im + Int, In addition, f(v,w) = a(v (In + Ln1j,)v v* for w = On. 16 Published as a conference paper at ICLR 2019 # Proof of Lemma 1. We notice that f(v,w) = 3 (& Bala (Zww)o(Zev) "ov — 2uEz[o(Zw)o(Zw*) "|v* + (v*)"Ez[o(Zw*)o(Zw*) "Jv*). Let Z) be the i‘ row vector of Z. Since w # On, using Lemmalt]| we have Ez [o(Zw)o(Zw) "] ,, a =E [o(Z} w)o(Z; w)| =E [lize w>0)| _ > , and for i 4 j, 1 4 . Ez [o(Zw)o(Zw)"],, =E[o(Z! w)o(Zj w)] =E [127 w>09| E [127 w>03] = [o(Zw)o(Zw)"],, =E[o(Z! [o(Zw)o(Zw)"| = Ez . w>09| + (Im +1,1,,). w>03] = Furthermore, Therefore, Ez [o(Zw)o(Zw)"| = Ez [o(Zw*)o(Zw*)"] = + (Im +1,1,,). Furthermore, _ 1 — 0(w,w*) Ez [o(Zw)o(Zw*)"],, =E [Liar w>0,21 w>0) On , # and EZ [o(Zw)o(Zw*)"] ij = 1 4 . So, Ez [o(Zw)o(Zw*) "| ‘ ((: mee) In n): t In} qT Then it is easy to validate the first claim. Moreover, if w = 0n, then f(v,w) = 3(v*) "Bz [o(Zav")o(Zaw*) "Jo" = 50") Um + ApLh)or. Lemma 2. [fw # 0,, and 0(w, w*) € (0,7), the partial gradients of f(v,w) w.rt. v and w are of 1 1 2 of 1 1 2 af (v, w) qlln + Im1j,)v i (1 - ce) In + Inn) v* and + f ote (In = itz w aw”) =~ Feleo] wut | - fe) respectively. Proof of Lemma 2. The first claim is trivial, and we only show the second one. Since θ(w, w∗) = arccos of (v, w) vl v* 00 (w,w") v!v* ||w||2w* — (wl w*)w Ow” 2r Ow” Qn \jew|[3/1 — (w' w*)? {Jel? + ote (In = itz w* 2r|\w TY ll “Wel (ts — fie) # ∂f ∂w 17 Published as a conference paper at ICLR 2019 mv∗)2 < m+1 mv∗)2 Proposition 1. [f the true parameter v* satisfies ™#+||v*||?, then Proposition 1. [f the true parameter v* satisfies (1,,v*)” < ™#+||v*||?, then - —(1ne"*)? T (v,w):v=(Imt1 13) ( aE alin + 1m, |v", { m+ Imln (mF Del? = Genetn ttm tn O(w, w”) T (m+ \)|lv*|? \ 2 (m+ 1| *|? — 1,0")? m give the saddle points obeying {3}, and {(v,w) : 0(w,w*) =m, v = (Im + Ind.) (mL, - I,,)v* } are the spurious local minimizers. Otherwise, the model (2) has no saddle points or spurious local minimizers. Proof of Proposition(I] Suppose v'v* = 0 and Proof of Proposition(I] Suppose v'v* = 0 and av, w) = 0, then by Lemma||| av, w) = 0, then by Lemma||| 2 * * - 2 * * 0 =v! v* = (v*) "(In +1mln) 4 (1 — =0(w, w )) In + Int) vo (18) From (18) it follows that 2 iy" =(w, w")(v")" Lm +L Lyn m \ote* = (v*) "(Lin t+Umlin)) (Lin + Umljh) v* = ||v*||?. 9) m vig On the other hand, from it also follows that On the other hand, from it also follows that 2 * * “1s (Zacw. 0 )- 1) (v*)" (Lin + Umln)710* 2 * * “1s * - * 1'v")? (Zacw. 0 )- 1) (v*)" (Lin + Umln)710* = (v*)" (Im + m1) 711m (1i0*) = oe where we used (Ip, + Lin1 1m = (m+ 1)1m. Taking the difference of the two equalities above gives ney? m+1~ By (19), we have 6(w, w*) = Soot which requires (0°) (In + Im1n) 0" = |Iv" |? - (20) m (m+ Dllv*|? 2 (m+ 1)||v*|? — (1,0*) m T m m+1 2 vy< |v" |)?. 3 <7, or equivalently, (1 Furthermore, since ∂f ∂v (v, w) = 0, we have 2 v= (Im +1m1n)! (1 - 240,10") In + Int) v = (Im +1m1),)7! =e"? 
Next, we check the local optimality of the stationary points. Ignoring the scaling and constant terms, we rewrite the objective function as
$$\tilde f(v,\theta) = v^\top (I_m + \mathbf{1}_m\mathbf{1}_m^\top)v - 2v^\top\Big(\big(1-\tfrac{2\theta}{\pi}\big)I_m + \mathbf{1}_m\mathbf{1}_m^\top\Big)v^*, \qquad \theta\in[0,\pi].$$
It is easy to check that its Hessian matrix
$$\nabla^2 \tilde f(v,\theta) = \begin{bmatrix} 2(I_m + \mathbf{1}_m\mathbf{1}_m^\top) & \tfrac{4}{\pi}v^* \\[2pt] \tfrac{4}{\pi}(v^*)^\top & 0\end{bmatrix}$$
is indefinite. Therefore, the stationary points are saddle points.

Moreover, if $(\mathbf{1}_m^\top v^*)^2 < \tfrac{m+1}{2}\|v^*\|^2$, at the point $(v,\theta) = \big((I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^*,\ \pi\big)$ we have
$$v^\top v^* = (v^*)^\top (I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^* = \frac{2(\mathbf{1}_m^\top v^*)^2}{m+1} - \|v^*\|^2 < 0, \qquad (21)$$
where we used (20) in the last identity. We consider an arbitrary point $(v+\Delta v,\ \pi + \Delta\theta)$ in the neighborhood of $(v,\pi)$ with $\Delta\theta \le 0$. The perturbed objective value is
$$\tilde f(v+\Delta v,\ \pi+\Delta\theta) = (v+\Delta v)^\top(I_m+\mathbf{1}_m\mathbf{1}_m^\top)(v+\Delta v) - 2(v+\Delta v)^\top(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^* + \frac{4\Delta\theta}{\pi}(v+\Delta v)^\top v^*.$$
On the right-hand side, since $v = (I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^*$ is the unique minimizer of the quadratic function $\tilde f(\cdot,\pi)$, we have, if $\Delta v \neq 0_m$,
$$(v+\Delta v)^\top(I_m+\mathbf{1}_m\mathbf{1}_m^\top)(v+\Delta v) - 2(v+\Delta v)^\top(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^* > \tilde f(v,\pi).$$
Moreover, for sufficiently small $\|\Delta v\|$, it holds that $\Delta\theta\cdot(v+\Delta v)^\top v^* > 0$ for $\Delta\theta < 0$ because of (21). Therefore, $\tilde f(v+\Delta v,\ \pi+\Delta\theta) > \tilde f(v,\pi)$ whenever $(\Delta v,\Delta\theta)$ is small and non-zero, and $\big((I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^*,\ \pi\big)$ is a local minimizer of $\tilde f$.

To prove the second claim, suppose $(\mathbf{1}_m^\top v^*)^2 > \tfrac{m+1}{2}\|v^*\|^2$; then either $\frac{\partial f}{\partial w}(v,w)$ does not exist, or $\frac{\partial f}{\partial v}(v,w)$ and $\frac{\partial f}{\partial w}(v,w)$ do not vanish simultaneously, and thus there is no stationary point. At the point $(v,\theta) = \big((I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^*,\ \pi\big)$, we have
$$v^\top v^* = \frac{2(\mathbf{1}_m^\top v^*)^2}{m+1} - \|v^*\|^2 > 0.$$
If $v^\top v^* > 0$, since $\nabla\tilde f(v,\theta) = \big[0_m^\top,\ \tfrac{4}{\pi}v^\top v^*\big]^\top$, a small perturbation $[0_m^\top,\ \Delta\theta]^\top$ with $\Delta\theta < 0$ gives a strictly decreased objective value, so $\big((I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^*,\ \pi\big)$ is not a local minimizer. If $v^\top v^* = 0$, then $\nabla\tilde f(v,\theta) = 0_{m+1}$, and the same conclusion can be reached by examining the second-order necessary condition.

Lemma 3. For any differentiable points $(v,w)$ and $(\tilde v,\tilde w)$ with $\min\{\|w\|,\|\tilde w\|\} = c_w > 0$ and $\max\{\|v\|,\|\tilde v\|\} = C_v$, there exists a Lipschitz constant $L>0$ depending on $C_v$ and $c_w$ such that
$$\|\nabla f(v,w) - \nabla f(\tilde v,\tilde w)\| \le L\,\|(v,w)-(\tilde v,\tilde w)\|.$$

Proof of Lemma 3. It is easy to check that $\|I_m + \mathbf{1}_m\mathbf{1}_m^\top\| = m+1$. Then
$$\Big\|\frac{\partial f}{\partial v}(v,w)-\frac{\partial f}{\partial v}(\tilde v,\tilde w)\Big\| = \frac{1}{4}\Big\|(I_m+\mathbf{1}_m\mathbf{1}_m^\top)(v-\tilde v)+\frac{2}{\pi}\big(\theta(w,w^*)-\theta(\tilde w,w^*)\big)v^*\Big\| \le \frac{m+1}{4}\|v-\tilde v\|+\frac{\|v^*\|}{2\pi}\big|\theta(w,w^*)-\theta(\tilde w,w^*)\big| \le L_1\,\|(v,w)-(\tilde v,\tilde w)\|,$$
where the last inequality is due to Lemma 14.1 (on $\{\|w\|\ge c_w\}$ the angle $\theta(\cdot,w^*)$ is Lipschitz in $w$ with a constant proportional to $1/c_w$), and $L_1$ depends only on $m$, $\|v^*\|$ and $c_w$. Writing $\overline{u}$ for $u/\|u\|$, we further have
$$\Big\|\frac{\partial f}{\partial w}(v,w)-\frac{\partial f}{\partial w}(\tilde v,\tilde w)\Big\| = \frac{1}{2\pi}\Big\|\frac{v^\top v^*}{\|w\|}\,\overline{\Big(I_n-\tfrac{ww^\top}{\|w\|^2}\Big)w^*}-\frac{\tilde v^\top v^*}{\|\tilde w\|}\,\overline{\Big(I_n-\tfrac{\tilde w\tilde w^\top}{\|\tilde w\|^2}\Big)w^*}\Big\| \le \frac{\|v^*\|}{2\pi c_w}\|v-\tilde v\| + \frac{C_v\|v^*\|}{2\pi}\Big\|\frac{1}{\|w\|}\,\overline{\Big(I_n-\tfrac{ww^\top}{\|w\|^2}\Big)w^*}-\frac{1}{\|\tilde w\|}\,\overline{\Big(I_n-\tfrac{\tilde w\tilde w^\top}{\|\tilde w\|^2}\Big)w^*}\Big\| \le L_2\,\|(v,w)-(\tilde v,\tilde w)\|,$$
where the last inequality is due to Lemma 14.2, and $L_2$ depends only on $C_v$, $c_w$ and $\|v^*\|$. Combining the two inequalities above validates the claim. $\square$

Lemma 4. The expected partial gradient of $\ell(v,w;Z)$ w.r.t. $v$ is
$$\mathbb{E}_Z\Big[\frac{\partial \ell}{\partial v}(v,w;Z)\Big] = 2\,\frac{\partial f}{\partial v}(v,w).$$
Let $\mu(x) = \max\{x,0\}$ in (5). The expected coarse gradient w.r.t. $w$ is
$$\mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big] = \frac{h(v,v^*)}{2\sqrt{2\pi}}\,\frac{w}{\|w\|} - \frac{\cos\big(\theta(w,w^*)/2\big)}{\sqrt{2\pi}}\,(v^\top v^*)\,\frac{\frac{w}{\|w\|}+w^*}{\big\|\frac{w}{\|w\|}+w^*\big\|},^3$$
where $h(v,v^*) = \|v\|^2 + (\mathbf{1}_m^\top v)^2 - (\mathbf{1}_m^\top v)(\mathbf{1}_m^\top v^*) + v^\top v^*$.

3 We redefine the second term as $0_n$ in the case $\theta(w,w^*) = \pi$, or equivalently, $\frac{w}{\|w\|} + w^* = 0_n$.

Proof of Lemma 4. The first claim is true because $\frac{\partial\ell}{\partial v}(v,w;Z)$ is linear in $v$. By (5),
$$g_{\mathrm{relu}}(v,w;Z) = Z^\top\big(\mu'(Zw)\odot v\big)\big(v^\top\sigma(Zw) - (v^*)^\top\sigma(Zw^*)\big).$$
Using the fact that $\mu' = \sigma = \mathbf{1}_{\{x>0\}}$, we have
$$\mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big] = \mathbb{E}_Z\Big[\Big(\sum_{i=1}^m v_i Z_i\,\mathbf{1}_{\{Z_i^\top w>0\}}\Big)\Big(\sum_{i=1}^m v_i\,\mathbf{1}_{\{Z_i^\top w>0\}} - \sum_{i=1}^m v_i^*\,\mathbf{1}_{\{Z_i^\top w^*>0\}}\Big)\Big].$$
Invoking Lemma 11, we have
$$\mathbb{E}\big[Z_i\,\mathbf{1}_{\{Z_i^\top w>0,\,Z_j^\top w>0\}}\big] = \begin{cases}\dfrac{1}{\sqrt{2\pi}}\dfrac{w}{\|w\|}, & i=j,\\[6pt] \dfrac{1}{2\sqrt{2\pi}}\dfrac{w}{\|w\|}, & i\neq j,\end{cases} \qquad\text{and}\qquad \mathbb{E}\big[Z_i\,\mathbf{1}_{\{Z_i^\top w>0,\,Z_j^\top w^*>0\}}\big] = \begin{cases}\dfrac{\cos(\theta(w,w^*)/2)}{\sqrt{2\pi}}\,\dfrac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|}, & i=j,\\[6pt] \dfrac{1}{2\sqrt{2\pi}}\dfrac{w}{\|w\|}, & i\neq j.\end{cases}$$
Therefore,
$$\mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big] = \sum_{i=1}^m v_i^2\,\mathbb{E}\big[Z_i\mathbf{1}_{\{Z_i^\top w>0\}}\big] + \sum_{i=1}^m\sum_{j\neq i} v_iv_j\,\mathbb{E}\big[Z_i\mathbf{1}_{\{Z_i^\top w>0,\,Z_j^\top w>0\}}\big] - \sum_{i=1}^m v_iv_i^*\,\mathbb{E}\big[Z_i\mathbf{1}_{\{Z_i^\top w>0,\,Z_i^\top w^*>0\}}\big] - \sum_{i=1}^m\sum_{j\neq i} v_iv_j^*\,\mathbb{E}\big[Z_i\mathbf{1}_{\{Z_i^\top w>0,\,Z_j^\top w^*>0\}}\big]$$
$$= \frac{1}{\sqrt{2\pi}}\Big[\frac{\|v\|^2 + (\mathbf{1}_m^\top v)^2}{2}\,\frac{w}{\|w\|} - \cos\Big(\frac{\theta(w,w^*)}{2}\Big)(v^\top v^*)\,\frac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|} - \frac{(\mathbf{1}_m^\top v)(\mathbf{1}_m^\top v^*) - v^\top v^*}{2}\,\frac{w}{\|w\|}\Big],$$
and the result follows. $\square$

Lemma 5. If $w\neq 0_n$ and $\theta(w,w^*)\in(0,\pi)$, then the inner product between the expected coarse and true gradients w.r.t. $w$ is
$$\Big\langle \mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle = \frac{\sin\big(\theta(w,w^*)\big)}{4\sqrt{2\pi}\,\pi\,\|w\|}\,(v^\top v^*)^2 \ge 0.$$
Moreover, if further $\|v\|\le C_v$ and $\|w\|\ge c_w$, there exists a constant $A_{\mathrm{relu}}>0$ depending on $C_v$ and $c_w$ such that
$$\big\|\mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big]\big\|^2 \le A_{\mathrm{relu}}\Big(\Big\|\frac{\partial f}{\partial v}(v,w)\Big\|^2 + \Big\langle \mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle\Big).$$

Proof of Lemma 5. By Lemmas 2 and 4, we have
$$\frac{\partial f}{\partial w}(v,w) = -\frac{v^\top v^*}{2\pi\|w\|}\cdot\frac{\big(I_n - \frac{ww^\top}{\|w\|^2}\big)w^*}{\big\|\big(I_n - \frac{ww^\top}{\|w\|^2}\big)w^*\big\|} \qquad\text{and}\qquad \mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big] = \frac{h(v,v^*)}{2\sqrt{2\pi}}\,\frac{w}{\|w\|} - \frac{\cos(\theta(w,w^*)/2)}{\sqrt{2\pi}}\,(v^\top v^*)\,\frac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|}.$$
Notice that $\big\langle w,\ \big(I_n - \frac{ww^\top}{\|w\|^2}\big)w^*\big\rangle = 0$ and $\|w^*\| = 1$. If $\theta(w,w^*)\neq 0,\pi$, then $\big\|\big(I_n-\frac{ww^\top}{\|w\|^2}\big)w^*\big\| = \sin\theta(w,w^*)$, $\big\|\frac{w}{\|w\|}+w^*\big\| = 2\cos(\theta(w,w^*)/2)$, and
$$\Big\langle \frac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|},\ \frac{\big(I_n-\frac{ww^\top}{\|w\|^2}\big)w^*}{\big\|\big(I_n-\frac{ww^\top}{\|w\|^2}\big)w^*\big\|}\Big\rangle = \frac{\sin^2\theta(w,w^*)}{2\cos(\theta(w,w^*)/2)\,\sin\theta(w,w^*)} = \sin\Big(\frac{\theta(w,w^*)}{2}\Big),$$
so only the second term of $\mathbb{E}_Z[g_{\mathrm{relu}}(v,w;Z)]$ contributes, and
$$\Big\langle \mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle = \frac{\cos(\theta/2)\sin(\theta/2)}{2\sqrt{2\pi}\,\pi\,\|w\|}\,(v^\top v^*)^2 = \frac{\sin\theta(w,w^*)}{4\sqrt{2\pi}\,\pi\,\|w\|}\,(v^\top v^*)^2 \ge 0.$$
To show the second claim, without loss of generality assume $\|w\|=1$ and denote $\theta := \theta(w,w^*)$. By Lemma 1,
$$\frac{\partial f}{\partial v}(v,w) = \frac14(I_m+\mathbf{1}_m\mathbf{1}_m^\top)v - \frac14\Big(\big(1-\tfrac{2\theta}{\pi}\big)I_m + \mathbf{1}_m\mathbf{1}_m^\top\Big)v^*, \qquad(22)$$
and by Lemma 4,
$$h(v,v^*) = v^\top(I_m+\mathbf{1}_m\mathbf{1}_m^\top)v - v^\top\Big(\big(1-\tfrac{2\theta}{\pi}\big)I_m+\mathbf{1}_m\mathbf{1}_m^\top\Big)v^* + 2\Big(1-\frac{\theta}{\pi}\Big)v^\top v^* = 4\,v^\top\frac{\partial f}{\partial v}(v,w) + 2\Big(1-\frac{\theta}{\pi}\Big)v^\top v^*. \qquad(23)$$
Expanding $\big\|\mathbb{E}_Z[g_{\mathrm{relu}}(v,w;Z)]\big\|^2$ with (22) and (23), applying the Cauchy–Schwarz inequality (note that the angle between $\frac{w}{\|w\|}+w^*$ and $w^*$ is $\theta/2\le\frac{\pi}{2}$), and using the elementary estimates $\sin x \ge \frac{2}{\pi}x$, $\cos x\ge 1-\frac{2}{\pi}x$ and
$$\Big(1-\frac{2x}{\pi}-\cos x\Big)^2 \le \Big(\cos x - 1 + \frac{2x}{\pi}\Big)\Big(\cos x + 1 - \frac{2x}{\pi}\Big) \le \sin(x)\cdot 2\cos(x) = \sin(2x) \qquad\text{for all } x\in\Big[0,\frac{\pi}{2}\Big],$$
one obtains
$$\big\|\mathbb{E}_Z[g_{\mathrm{relu}}(v,w;Z)]\big\|^2 \le A_{\mathrm{relu}}\Big(\Big\|\frac{\partial f}{\partial v}(v,w)\Big\|^2 + \frac{\sin\theta}{4\sqrt{2\pi}\,\pi}\,(v^\top v^*)^2\Big)$$
for some $A_{\mathrm{relu}}$ depending only on $C_v$ and $c_w$, which together with the first claim proves the second. $\square$

Lemma 6. When Algorithm 1 converges, $\mathbb{E}_Z\big[\frac{\partial\ell}{\partial v}(v,w;Z)\big]$ and $\mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big]$ vanish simultaneously, which only occurs at the
1. Saddle points where (8) is satisfied, according to Proposition 1.
2. Minimizers of (2), where $v=v^*$, $\theta(w,w^*)=0$, or $v=(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top - I_m)v^*$, $\theta(w,w^*)=\pi$.

Proof of Lemma 6. By Lemma 4, suppose we have
$$\mathbb{E}_Z\Big[\frac{\partial\ell}{\partial v}(v,w;Z)\Big] = \frac12(I_m+\mathbf{1}_m\mathbf{1}_m^\top)v - \frac12\Big(\big(1-\tfrac{2}{\pi}\theta(w,w^*)\big)I_m+\mathbf{1}_m\mathbf{1}_m^\top\Big)v^* = 0_m \qquad(24)$$
and
$$\mathbb{E}_Z\big[g_{\mathrm{relu}}(v,w;Z)\big] = \frac{h(v,v^*)}{2\sqrt{2\pi}}\,\frac{w}{\|w\|} - \frac{\cos(\theta(w,w^*)/2)}{\sqrt{2\pi}}\,(v^\top v^*)\,\frac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|} = 0_n, \qquad(25)$$
where $h(v,v^*) = \|v\|^2+(\mathbf{1}_m^\top v)^2-(\mathbf{1}_m^\top v)(\mathbf{1}_m^\top v^*)+v^\top v^*$. By (25), we must have $\theta(w,w^*)=0$, or $\theta(w,w^*)=\pi$, or $v^\top v^*=0$. If $\theta(w,w^*)=0$, then by (24), $v=v^*$, and (25) is satisfied. If $\theta(w,w^*)=\pi$, then by (24), $v=(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$, and (25) is satisfied. If $v^\top v^*=0$, then by (24) we have the expressions for $v$ and $\theta(w,w^*)$ from Proposition 1, and (8) is satisfied. $\square$
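The closed-form expectations above are easy to sanity-check numerically. The sketch below (Python/NumPy) compares the reconstructed Lemma 4 expression for $\mathbb{E}_Z[g_{\mathrm{relu}}(v,w;Z)]$ against a plain Monte Carlo average of the coarse gradient, under the model assumptions used throughout this appendix (rows of $Z$ i.i.d. standard Gaussian, binary activation $\sigma=\mathbf{1}_{\{x>0\}}$, $\|w^*\|=1$). The function names and sample sizes are ours, not the authors'; this is an illustrative check, not part of the paper.

```python
import numpy as np

def expected_coarse_grad_relu(v, w, v_star, w_star):
    """Closed form of E_Z[g_relu(v, w; Z)] as reconstructed in Lemma 4."""
    w_hat = w / np.linalg.norm(w)
    theta = np.arccos(np.clip(w_hat @ w_star, -1.0, 1.0))
    h = v @ v + np.sum(v) ** 2 - np.sum(v) * np.sum(v_star) + v @ v_star
    first = h / (2.0 * np.sqrt(2.0 * np.pi)) * w_hat
    u = w_hat + w_star                               # assumed nonzero, i.e. theta < pi
    second = np.cos(theta / 2.0) / np.sqrt(2.0 * np.pi) * (v @ v_star) * u / np.linalg.norm(u)
    return first - second

def monte_carlo_coarse_grad_relu(v, w, v_star, w_star, n_samples=200_000, seed=0):
    """Monte Carlo average of g_relu(v, w; Z) = Z^T (1_{Zw>0} * v)(v^T s(Zw) - v*^T s(Zw*))."""
    rng = np.random.default_rng(seed)
    m, n = v.shape[0], w.shape[0]
    Z = rng.standard_normal((n_samples, m, n))       # rows Z_i are i.i.d. N(0, I_n)
    act = (Z @ w > 0).astype(float)                  # sigma(Zw), shape (N, m)
    act_star = (Z @ w_star > 0).astype(float)        # sigma(Zw*)
    err = act @ v - act_star @ v_star                # scalar residual per sample, shape (N,)
    # ReLU STE: mu'(Zw) = 1_{Zw > 0}, which here coincides with act
    return np.einsum('sij,si,s->j', Z, act * v, err) / n_samples

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m, n = 4, 3
    v, v_star = rng.standard_normal(m), rng.standard_normal(m)
    w = rng.standard_normal(n)
    w_star = rng.standard_normal(n)
    w_star /= np.linalg.norm(w_star)                 # the analysis assumes ||w*|| = 1
    print("closed form :", expected_coarse_grad_relu(v, w, v_star, w_star))
    print("monte carlo :", monte_carlo_coarse_grad_relu(v, w, v_star, w_star))
```

With a few hundred thousand samples the two vectors typically agree to two or three decimal places, which is a useful guard against sign or scaling slips when re-deriving the formula.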
Lemma 7. If $w\neq 0_n$ and $\theta(w,w^*)\in(0,\pi)$, then the expected coarse gradient for the clipped ReLU $\mu(x)=\min\{\max\{x,0\},1\}$ admits a closed form (26) spanned by the directions $\frac{w}{\|w\|}$, $w^*$ and $\frac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|}$, with coefficients built from $h(v,v^*) := \|v\|^2+(\mathbf{1}_m^\top v)^2-(\mathbf{1}_m^\top v)(\mathbf{1}_m^\top v^*)+v^\top v^*$ (the same as in Lemma 5), $v^\top v^*$, $\csc(\theta/2)$, $\cot(\theta/2)$, and the nonnegative quantities $p(\theta,w)$ and $q(\theta,w)$ defined in Lemma 12 through one-dimensional integrals of $\cos(\phi)$ and $\sin(\phi)$ against the truncated Gaussian weight $\xi(x)=\int r^2\exp(-r^2/2)\,dr$; they satisfy $q(\theta,w)=0$ only at $\theta\in\{0,\pi\}$ and $p(0,w)\ge p(\theta,w)\ge p(\pi,w)=0$. The inner product between the expected coarse and true gradients w.r.t. $w$ is
$$\Big\langle\mathbb{E}_Z\big[g_{\mathrm{crelu}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle = \frac{q(\theta,w)}{2\pi\|w\|}\,(v^\top v^*)^2 \ge 0.$$
Moreover, if further $\|v\|\le C_v$ and $\|w\|\ge c_w$, there exists a constant $A_{\mathrm{crelu}}>0$ depending on $C_v$ and $c_w$ such that
$$\big\|\mathbb{E}_Z\big[g_{\mathrm{crelu}}(v,w;Z)\big]\big\|^2 \le A_{\mathrm{crelu}}\Big(\Big\|\frac{\partial f}{\partial v}(v,w)\Big\|^2 + \Big\langle\mathbb{E}_Z\big[g_{\mathrm{crelu}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle\Big).$$

Proof of Lemma 7. Denote $\theta := \theta(w,w^*)$. We first compute $\mathbb{E}_Z[g_{\mathrm{crelu}}(v,w;Z)]$. By (5),
$$g_{\mathrm{crelu}}(v,w;Z) = Z^\top\big(\mu'(Zw)\odot v\big)\big(v^\top\sigma(Zw)-(v^*)^\top\sigma(Zw^*)\big),$$
and since $\mu'=\mathbf{1}_{\{0<x<1\}}$ and $\sigma=\mathbf{1}_{\{x>0\}}$,
$$\mathbb{E}_Z\big[g_{\mathrm{crelu}}(v,w;Z)\big] = \mathbb{E}_Z\Big[\Big(\sum_{i=1}^m v_iZ_i\,\mathbf{1}_{\{0<Z_i^\top w<1\}}\Big)\Big(\sum_{i=1}^m v_i\,\mathbf{1}_{\{Z_i^\top w>0\}}-\sum_{i=1}^m v_i^*\,\mathbf{1}_{\{Z_i^\top w^*>0\}}\Big)\Big],$$
which, after invoking Lemma 12, gives the closed form (26). Notice that $\big\langle w,\big(I_n-\frac{ww^\top}{\|w\|^2}\big)w^*\big\rangle=0$ and $\|w^*\|=1$. If $\theta\neq 0,\pi$, only the term of (26) along $\frac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|}$, whose coefficient is $-(v^\top v^*)\csc(\theta/2)\,q(\theta,w)$, has a nonzero component along $\frac{\partial f}{\partial w}(v,w)$, and since this unit vector makes an angle of $\frac{\pi}{2}-\frac{\theta}{2}$ with the normalized projection $\overline{\big(I_n-\frac{ww^\top}{\|w\|^2}\big)w^*}$,
$$\Big\langle\mathbb{E}_Z\big[g_{\mathrm{crelu}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle = \csc(\theta/2)\,q(\theta,w)\,\sin(\theta/2)\,\frac{(v^\top v^*)^2}{2\pi\|w\|} = \frac{q(\theta,w)}{2\pi\|w\|}\,(v^\top v^*)^2 \ge 0.$$
In the last line, $q(\theta,w)>0$ for $\theta\in(0,\pi)$ because the integrand $\sin(\phi)\,\xi(\cdot)$ in Lemma 12 is odd in $\phi$ and positive on $(0,\frac{\pi}{2}]$.

Next, we bound $\|\mathbb{E}_Z[g_{\mathrm{crelu}}(v,w;Z)]\|^2$. Since (23) gives $h(v,v^*)=4\,v^\top\frac{\partial f}{\partial v}(v,w)+2(1-\frac{\theta}{\pi})v^\top v^*$, where according to Lemma 1
$$\frac{\partial f}{\partial v}(v,w) = \frac14(I_m+\mathbf{1}_m\mathbf{1}_m^\top)v - \frac14\Big(\big(1-\tfrac{2\theta}{\pi}\big)I_m+\mathbf{1}_m\mathbf{1}_m^\top\Big)v^*,$$
we can rewrite the coarse partial gradient (26) as a combination of a term proportional to $\big(v^\top\frac{\partial f}{\partial v}(v,w)\big)\frac{w}{\|w\|}$, a term proportional to $\big((1-\frac{2\theta}{\pi})p(0,w)-p(\theta,w)\big)(v^\top v^*)\frac{w}{\|w\|}$, and a term proportional to $\big(\csc(\theta/2)-\cot(\theta/2)\big)\,q(\theta,w)\,(v^\top v^*)\,\frac{\frac{w}{\|w\|}+w^*}{\|\frac{w}{\|w\|}+w^*\|}$. For the last piece, $\big(\csc(\theta/2)-\cot(\theta/2)\big)^2 q(\theta,w)^2 \le q(\theta,w)^2$, since $\csc(x)-\cot(x)=\tan(x/2)\le 1$ on $(0,\frac{\pi}{2}]$. What is left is to bound $\big((1-\frac{2\theta}{\pi})p(0,w)-p(\theta,w)\big)^2$ using a multiple of $q(\theta,w)$. We first show that both $\big((1-\frac{2\theta}{\pi})p(0,w)-p(\theta,w)\big)^2$ and $q(\theta,w)$ are symmetric with respect to $\theta=\frac{\pi}{2}$ on $(0,\pi]$; this follows from the form of the integrands in Lemma 12. Therefore, it suffices to consider $\theta\in[\frac{\pi}{2},\pi]$ only. Then calling Lemma 13 for $\theta\in[\frac{\pi}{2},\pi]$, we have $p(\theta,w)\le q(\theta,w)$ and $\big((1-\frac{2\theta}{\pi})p(0,w)-p(\theta,w)\big)^2\le q(\theta,w)^2$. Combining the estimates above with the Cauchy–Schwarz inequality and the uniform boundedness of $p(\theta,w)$ and $q(\theta,w)$, we obtain
$$\big\|\mathbb{E}_Z[g_{\mathrm{crelu}}(v,w;Z)]\big\|^2 \le A_{\mathrm{crelu}}\Big(\Big\|\frac{\partial f}{\partial v}(v,w)\Big\|^2 + \frac{q(\theta,w)}{2\pi\|w\|}\,(v^\top v^*)^2\Big)$$
for a constant $A_{\mathrm{crelu}}$ depending only on $C_v$ and $c_w$. This completes the proof. $\square$

Lemma 8. When Algorithm 1 converges, $\mathbb{E}_Z\big[\frac{\partial\ell}{\partial v}(v,w;Z)\big]$ and $\mathbb{E}_Z\big[g_{\mathrm{crelu}}(v,w;Z)\big]$ vanish simultaneously, which only occurs at the
1. Saddle points where (8) is satisfied, according to Proposition 1.
2. Minimizers of (2), where $v=v^*$, $\theta(w,w^*)=0$, or $v=(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$, $\theta(w,w^*)=\pi$.

Proof of Lemma 8. The proof of Lemma 8 is similar to that of Lemma 6, and we omit it here. The core part is that $q(\theta,w)$ defined in Lemma 12 is non-negative and equals $0$ only at $\theta=0,\pi$, as well as $p(0,w)\ge p(\theta,w)\ge p(\pi,w)=0$. $\square$

Lemma 9. Let $\mu(x)=x$ in (5). Then the expected coarse partial gradient w.r.t. $w$ is
$$\mathbb{E}_Z\big[g_{\mathrm{id}}(v,w;Z)\big] = \frac{1}{\sqrt{2\pi}}\Big(\|v\|^2\,\frac{w}{\|w\|} - (v^\top v^*)\,w^*\Big).$$
If $\theta(w,w^*)=\pi$ and $v=(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$, then
$$\big\|\mathbb{E}_Z\big[g_{\mathrm{id}}(v,w;Z)\big]\big\| = \frac{2(m-1)(\mathbf{1}_m^\top v^*)^2}{\sqrt{2\pi}\,(m+1)^2} > 0,$$
i.e., $\mathbb{E}_Z[g_{\mathrm{id}}(v,w;Z)]$ does not vanish at the local minimizers if $\mathbf{1}_m^\top v^*\neq 0$ and $m>1$.

Proof of Lemma 9. By (5),
$$g_{\mathrm{id}}(v,w;Z) = Z^\top\big(\mu'(Zw)\odot v\big)\big(v^\top\sigma(Zw)-(v^*)^\top\sigma(Zw^*)\big).$$
Using the facts that $\mu'=1$ and $\sigma=\mathbf{1}_{\{x>0\}}$, we have
$$\mathbb{E}_Z\big[g_{\mathrm{id}}(v,w;Z)\big] = \sum_{i=1}^m\sum_{j=1}^m v_iv_j\,\mathbb{E}\big[Z_i\,\mathbf{1}_{\{Z_j^\top w>0\}}\big] - \sum_{i=1}^m\sum_{j=1}^m v_iv_j^*\,\mathbb{E}\big[Z_i\,\mathbf{1}_{\{Z_j^\top w^*>0\}}\big] = \frac{1}{\sqrt{2\pi}}\Big(\|v\|^2\,\frac{w}{\|w\|}-(v^\top v^*)\,w^*\Big).$$
In the last equality above, we used the third identity in Lemma 11. If $\theta(w,w^*)=\pi$, then $\frac{w}{\|w\|}=-w^*$, and with $v=(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$,
$$\big\|\mathbb{E}_Z[g_{\mathrm{id}}(v,w;Z)]\big\| = \frac{1}{\sqrt{2\pi}}\,\big(\|v\|^2+v^\top v^*\big) = \frac{2(m-1)(\mathbf{1}_m^\top v^*)^2}{\sqrt{2\pi}\,(m+1)^2},$$
where we used the identity $(I_m+\mathbf{1}_m\mathbf{1}_m^\top)\mathbf{1}_m=(m+1)\mathbf{1}_m$ twice to evaluate $\|v\|^2 = \|v^*\|^2-\frac{4(\mathbf{1}_m^\top v^*)^2}{(m+1)^2}$ and $v^\top v^* = \frac{2(\mathbf{1}_m^\top v^*)^2}{m+1}-\|v^*\|^2$. $\square$

Lemma 10. If $w\neq 0_n$ and $\theta(w,w^*)\in(0,\pi)$, then the inner product between the expected coarse and true gradients w.r.t. $w$ is
$$\Big\langle\mathbb{E}_Z\big[g_{\mathrm{id}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle = \frac{\sin\big(\theta(w,w^*)\big)}{2\sqrt{2\pi}\,\pi\,\|w\|}\,(v^\top v^*)^2 \ge 0.$$
When $\theta(w,w^*)\to\pi$ and $v\to(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$, if $\mathbf{1}_m^\top v^*\neq 0$ and $m>1$, we have
$$\Big\|\frac{\partial f}{\partial v}(v,w)\Big\|^2 + \Big\langle\mathbb{E}_Z\big[g_{\mathrm{id}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle \to 0.$$

Proof of Lemma 10. By Lemmas 2 and 9, we have
$$\frac{\partial f}{\partial w}(v,w) = -\frac{v^\top v^*}{2\pi\|w\|}\,\overline{\Big(I_n-\frac{ww^\top}{\|w\|^2}\Big)w^*} \qquad\text{and}\qquad \mathbb{E}_Z\big[g_{\mathrm{id}}(v,w;Z)\big] = \frac{1}{\sqrt{2\pi}}\Big(\|v\|^2\,\frac{w}{\|w\|}-(v^\top v^*)\,w^*\Big).$$
Since $\big\langle w,(I_n-\frac{ww^\top}{\|w\|^2})w^*\big\rangle=0$, $\|w^*\|=1$ and $\big\|(I_n-\frac{ww^\top}{\|w\|^2})w^*\big\|=\sin\theta(w,w^*)$, for $\theta(w,w^*)\neq 0,\pi$ we get
$$\Big\langle\mathbb{E}_Z\big[g_{\mathrm{id}}(v,w;Z)\big],\ \frac{\partial f}{\partial w}(v,w)\Big\rangle = \frac{(v^\top v^*)^2}{2\sqrt{2\pi}\,\pi\,\|w\|}\,\sin\theta(w,w^*) \ge 0.$$
When $\theta(w,w^*)\to\pi$ and $v\to(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$ with $\mathbf{1}_m^\top v^*\neq 0$ and $m>1$, both $\|\frac{\partial f}{\partial v}(v,w)\|$ and $\big\langle\mathbb{E}_Z[g_{\mathrm{id}}(v,w;Z)],\frac{\partial f}{\partial w}(v,w)\big\rangle$ converge to $0$. But by Lemma 9,
$$\big\|\mathbb{E}_Z[g_{\mathrm{id}}(v,w;Z)]\big\| \to \frac{2(m-1)(\mathbf{1}_m^\top v^*)^2}{\sqrt{2\pi}\,(m+1)^2} > 0,$$
which completes the proof. $\square$

Theorem 1. Let $\{(v^t,w^t)\}$ be the sequence generated by Algorithm 1 with the ReLU $\mu(x)=\max\{x,0\}$ or the clipped ReLU $\mu(x)=\min\{\max\{x,0\},1\}$. Suppose $\|w^t\|\ge c_w$ for all $t$ with some $c_w>0$. Then if the learning rate $\eta>0$ is sufficiently small, for any initialization $(v^0,w^0)$, the objective sequence $\{f(v^t,w^t)\}$ is monotonically decreasing, and $\{(v^t,w^t)\}$ converges to a saddle point or a (local) minimizer of the population loss minimization (2). In addition, if $\mathbf{1}_m^\top v^*\neq 0$ and $m>1$, the descent and convergence properties do not hold for Algorithm 1 with the identity function $\mu(x)=x$ near the local minimizers satisfying $\theta(w,w^*)=\pi$ and $v=(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$.

Proof of Theorem 1. We first prove the upper boundedness of $\{v^t\}$. Due to the coerciveness of $f(v,w)$ w.r.t. $v$, there exists $C_v>0$ such that $\|v\|\le C_v$ for any $v\in\{v\in\mathbb{R}^m : f(v,w)\le f(v^0,w^0)\ \text{for some}\ w\}$. In particular, $\|v^0\|\le C_v$. Using induction, suppose we already have $f(v^t,w^t)\le f(v^0,w^0)$ and $\|v^t\|\le C_v$. If $\theta(w^t,w^*)=0$ or $\pi$, then $\theta(w^\tau,w^*)=0$ or $\pi$ for all $\tau\ge t$, and the original problem reduces to a quadratic program in terms of $v$. So $\{v^\tau\}$ will converge to $v^*$ or $(I_m+\mathbf{1}_m\mathbf{1}_m^\top)^{-1}(\mathbf{1}_m\mathbf{1}_m^\top-I_m)v^*$ by choosing a suitable step size $\eta$. In either case, $\big\|\mathbb{E}_Z\big[\frac{\partial\ell}{\partial v}(v^\tau,w^\tau;Z)\big]\big\|$ and $\big\|\mathbb{E}_Z[g_{\mathrm{relu}}(v^\tau,w^\tau;Z)]\big\|$ both converge to $0$.

Otherwise, if $\theta(w^t,w^*)\in(0,\pi)$, we define for any $a\in[0,1]$
$$v^t(a) := v^t + a(v^{t+1}-v^t) = v^t - a\eta\,\mathbb{E}_Z\Big[\frac{\partial\ell}{\partial v}(v^t,w^t;Z)\Big] \qquad\text{and}\qquad w^t(a) := w^t + a(w^{t+1}-w^t) = w^t - a\eta\,\mathbb{E}_Z\big[g_{\mathrm{relu}}(v^t,w^t;Z)\big],$$
which satisfy $v^t(0)=v^t$, $v^t(1)=v^{t+1}$, $w^t(0)=w^t$, $w^t(1)=w^{t+1}$. Let us fix $0<c<c_w$ and $C>C_v$. By the expressions of $\mathbb{E}_Z\big[\frac{\partial\ell}{\partial v}(v^t,w^t;Z)\big]$ and $\mathbb{E}_Z[g_{\mathrm{relu}}(v^t,w^t;Z)]$ given in Lemma 4, and since $\|w^*\|=1$, for sufficiently small $\eta\le\tilde\eta$ (depending on $C_v$ and $c_w$) it holds that $\|v^t(a)\|\le C$ and $\|w^t(a)\|\ge c$ for all $a\in[0,1]$. Possibly at some point $a_0$ where $\theta(w^t(a_0),w^*)=0$ or $\pi$, the partial gradient $\frac{\partial f}{\partial w}(v^t(a_0),w^t(a_0))$ does not exist. Otherwise, $\|\frac{\partial f}{\partial w}(v^t(a),w^t(a))\|$ is uniformly bounded for all $a\in[0,1]\setminus\{a_0\}$, which makes it integrable over the interval $[0,1]$. Then for some constants $L$ and $A_{\mathrm{relu}}$ depending on $C$ and $c$, we have
$$f(v^{t+1},w^{t+1}) = f\big(v^t+(v^{t+1}-v^t),\,w^t+(w^{t+1}-w^t)\big) = f(v^t,w^t) + \int_0^1\Big\langle\frac{\partial f}{\partial v}\big(v^t(a),w^t(a)\big),\,v^{t+1}-v^t\Big\rangle\,da + \int_0^1\Big\langle\frac{\partial f}{\partial w}\big(v^t(a),w^t(a)\big),\,w^{t+1}-w^t\Big\rangle\,da$$
$$= f(v^t,w^t) + \Big\langle\frac{\partial f}{\partial v}(v^t,w^t),\,v^{t+1}-v^t\Big\rangle + \Big\langle\frac{\partial f}{\partial w}(v^t,w^t),\,w^{t+1}-w^t\Big\rangle + \int_0^1\Big\langle\nabla f\big(v^t(a),w^t(a)\big)-\nabla f(v^t,w^t),\,\big(v^{t+1}-v^t,\,w^{t+1}-w^t\big)\Big\rangle\,da$$
$$\le f(v^t,w^t) - 2\eta\Big\|\frac{\partial f}{\partial v}(v^t,w^t)\Big\|^2 - \eta\Big\langle\frac{\partial f}{\partial w}(v^t,w^t),\,\mathbb{E}_Z\big[g_{\mathrm{relu}}(v^t,w^t;Z)\big]\Big\rangle + \frac{L\eta^2}{2}\Big(\Big\|\mathbb{E}_Z\Big[\frac{\partial\ell}{\partial v}(v^t,w^t;Z)\Big]\Big\|^2 + \big\|\mathbb{E}_Z[g_{\mathrm{relu}}(v^t,w^t;Z)]\big\|^2\Big)$$
$$\le f(v^t,w^t) - \Big(2\eta - \frac{(4+A_{\mathrm{relu}})L\eta^2}{2}\Big)\Big\|\frac{\partial f}{\partial v}(v^t,w^t)\Big\|^2 - \Big(\eta-\frac{A_{\mathrm{relu}}L\eta^2}{2}\Big)\Big\langle\frac{\partial f}{\partial w}(v^t,w^t),\,\mathbb{E}_Z\big[g_{\mathrm{relu}}(v^t,w^t;Z)\big]\Big\rangle. \qquad(30)$$
The third equality is due to the fundamental theorem of calculus. In the first inequality, we used the first claim of Lemma 4 and called Lemma 3 for $(v^t,w^t)$ and $(v^t(a),w^t(a))$ with $a\in[0,1]\setminus\{a_0\}$. In the last inequality, we used Lemma 5. So when $\eta\le\bar\eta := \min\big\{\tilde\eta,\ \frac{2}{(4+A_{\mathrm{relu}})L}\big\}$, we have $f(v^{t+1},w^{t+1})\le f(v^t,w^t)\le f(v^0,w^0)$, and thus $\|v^{t+1}\|\le C_v$.

Summing up the inequality (30) over $t$ from $0$ to $\infty$ and using $f\ge 0$, we have
$$\sum_{t=0}^\infty\Big[\Big(2-\frac{(4+A_{\mathrm{relu}})L\eta}{2}\Big)\Big\|\frac{\partial f}{\partial v}(v^t,w^t)\Big\|^2 + \Big(1-\frac{A_{\mathrm{relu}}L\eta}{2}\Big)\Big\langle\frac{\partial f}{\partial w}(v^t,w^t),\,\mathbb{E}_Z\big[g_{\mathrm{relu}}(v^t,w^t;Z)\big]\Big\rangle\Big] \le f(v^0,w^0)/\eta < \infty.$$
Hence
$$\lim_{t\to\infty}\Big\|\frac{\partial f}{\partial v}(v^t,w^t)\Big\| = 0 \qquad\text{and}\qquad \lim_{t\to\infty}\Big\langle\frac{\partial f}{\partial w}(v^t,w^t),\,\mathbb{E}_Z\big[g_{\mathrm{relu}}(v^t,w^t;Z)\big]\Big\rangle = 0.$$
Invoking Lemma 5 again, we further have $\lim_{t\to\infty}\big\|\mathbb{E}_Z[g_{\mathrm{relu}}(v^t,w^t;Z)]\big\| = 0$. Invoking Lemma 6, we have that coarse gradient descent with ReLU $\mu(x)$ (subsequentially) converges to a saddle point or a minimizer. Using Lemmas 7, 8 and similar arguments, we can prove the convergence of coarse gradient descent with the clipped ReLU STE. The second claim follows from Lemmas 9 and 10. $\square$

F. CONVERGENCE TO GLOBAL MINIMIZERS

We prove that if the initialization weights $(v^0,w^0)$ satisfy $(v^0)^\top v^*>0$, $\theta(w^0,w^*)<\frac{\pi}{2}$ and $(\mathbf{1}_m^\top v^*)(\mathbf{1}_m^\top v^0)\le(\mathbf{1}_m^\top v^*)^2$, then we have a convergence guarantee to global optima by using the vanilla or clipped ReLU STE.

Theorem 2. Under the assumptions of Theorem 1, if further the initialization $(v^0,w^0)$ satisfies $(v^0)^\top v^*>0$, $\theta(w^0,w^*)<\frac{\pi}{2}$ and $(\mathbf{1}_m^\top v^*)(\mathbf{1}_m^\top v^0)\le(\mathbf{1}_m^\top v^*)^2$, then by using the vanilla or clipped ReLU STE with a sufficiently small learning rate $\eta>0$, we have $(v^t)^\top v^*>0$ and $\theta(w^t,w^*)<\frac{\pi}{2}$ for all $t\ge 0$, and $\{(v^t,w^t)\}$ converges to a global minimizer.

Proof of Theorem 2. We prove the claim by induction. Suppose $(v^t)^\top v^*>0$, $\theta(w^t,w^*)<\frac{\pi}{2}$ and $(\mathbf{1}_m^\top v^*)(\mathbf{1}_m^\top v^t)\le(\mathbf{1}_m^\top v^*)^2$. Then for small enough $\eta$,
$$(v^{t+1})^\top v^* = \Big(v^t-\frac{\eta}{2}\Big((I_m+\mathbf{1}_m\mathbf{1}_m^\top)v^t-\Big(\big(1-\tfrac{2}{\pi}\theta(w^t,w^*)\big)I_m+\mathbf{1}_m\mathbf{1}_m^\top\Big)v^*\Big)\Big)^\top v^*$$
$$= \Big(1-\frac{\eta}{2}\Big)(v^t)^\top v^* + \frac{\eta}{2}\Big((\mathbf{1}_m^\top v^*)^2-(\mathbf{1}_m^\top v^t)(\mathbf{1}_m^\top v^*)\Big) + \frac{\eta}{2}\Big(1-\frac{2}{\pi}\theta(w^t,w^*)\Big)\|v^*\|^2 > 0,$$
and
$$(\mathbf{1}_m^\top v^*)(\mathbf{1}_m^\top v^{t+1}) = \Big(1-\frac{\eta(m+1)}{2}\Big)(\mathbf{1}_m^\top v^*)(\mathbf{1}_m^\top v^t) + \frac{\eta}{2}\Big(m+1-\frac{2}{\pi}\theta(w^t,w^*)\Big)(\mathbf{1}_m^\top v^*)^2 \le \Big(1-\frac{\eta}{\pi}\theta(w^t,w^*)\Big)(\mathbf{1}_m^\top v^*)^2 \le (\mathbf{1}_m^\top v^*)^2.$$
Moreover, by Lemmas 4, 5 and 7, both $\mathbb{E}_Z[g_{\mathrm{relu}}(v,w;Z)]$ and $\mathbb{E}_Z[g_{\mathrm{crelu}}(v,w;Z)]$ can be written in the form $a_1\big(I_n-\frac{ww^\top}{\|w\|^2}\big)w^* + a_2\,w$, where $a_1\le 0$ and $a_2$ is bounded by a constant depending on $C_v$ and $c_w$. Therefore,
$$(w^{t+1})^\top w^* = \big(w^t-\eta\,\mathbb{E}_Z[g(v^t,w^t;Z)]\big)^\top w^* = (1-\eta a_2)(w^t)^\top w^* - \eta a_1\,(w^*)^\top\Big(I_n-\frac{w^t(w^t)^\top}{\|w^t\|^2}\Big)w^* \ge (1-\eta a_2)(w^t)^\top w^* > 0,$$
and thus $\theta(w^{t+1},w^*)<\frac{\pi}{2}$. Finally, since $\{(v^t,w^t)\}$ converges, it can only converge to a global minimizer. $\square$
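To make Algorithm 1 and Theorems 1–2 concrete, here is a minimal coarse-gradient-descent sketch (Python/NumPy) for the three STEs analyzed above: ReLU, clipped ReLU, and identity. The population loss is evaluated through the closed form implied by (22); that closed form, the factor of 2 in the $v$-gradient (from $\mathbb{E}_Z[\partial\ell/\partial v]=2\,\partial f/\partial v$), the function names and all hyperparameters are our reconstructions and choices, not the authors'. Algorithm 1 as analyzed uses the population coarse gradient; large minibatches stand in for the expectation here.

```python
import numpy as np

def population_loss(v, w, v_star, w_star):
    """f(v, w) reconstructed from (22): f = (1/8)[v^T A v - 2 v^T B v* + v*^T A v*],
    with A = I + 11^T and B = (1 - 2*theta/pi) I + 11^T, theta = theta(w, w*)."""
    m = v.shape[0]
    theta = np.arccos(np.clip(w @ w_star / np.linalg.norm(w), -1.0, 1.0))
    A = np.eye(m) + np.ones((m, m))
    B = (1.0 - 2.0 * theta / np.pi) * np.eye(m) + np.ones((m, m))
    return (v @ A @ v - 2.0 * v @ B @ v_star + v_star @ A @ v_star) / 8.0

def minibatch_coarse_grads(v, w, v_star, w_star, Z, ste):
    """Minibatch estimates of E_Z[dl/dv] and E_Z[g_mu] for STE mu; Z has shape (B, m, n)."""
    s = Z @ w                                       # pre-activations Zw, shape (B, m)
    act = (s > 0).astype(float)                     # binary activations sigma(Zw)
    act_star = ((Z @ w_star) > 0).astype(float)
    err = act @ v - act_star @ v_star               # residuals, shape (B,)
    grad_v = 2.0 * (act * err[:, None]).mean(0)     # dl/dv = 2 sigma(Zw)(v^T sigma(Zw) - v*^T sigma(Zw*))
    if ste == "relu":                               # mu(x) = max(x, 0)
        mu_prime = (s > 0).astype(float)
    elif ste == "crelu":                            # mu(x) = min(max(x, 0), 1)
        mu_prime = ((s > 0) & (s < 1)).astype(float)
    else:                                           # "id": mu(x) = x
        mu_prime = np.ones_like(s)
    grad_w = np.einsum('bij,bi,b->j', Z, mu_prime * v, err) / Z.shape[0]
    return grad_v, grad_w

def coarse_gradient_descent(ste, m=8, n=16, steps=3000, lr=0.02, batch=512, seed=0):
    rng = np.random.default_rng(seed)
    v_star = rng.standard_normal(m)
    w_star = rng.standard_normal(n)
    w_star /= np.linalg.norm(w_star)                # the analysis assumes ||w*|| = 1
    v, w = rng.standard_normal(m), rng.standard_normal(n)
    history = [population_loss(v, w, v_star, w_star)]
    for _ in range(steps):
        Z = rng.standard_normal((batch, m, n))
        gv, gw = minibatch_coarse_grads(v, w, v_star, w_star, Z, ste)
        v, w = v - lr * gv, w - lr * gw
        history.append(population_loss(v, w, v_star, w_star))
    return history

for ste in ("relu", "crelu", "id"):
    hist = coarse_gradient_descent(ste)
    print(f"{ste:5s}  f start = {hist[0]:8.4f}   f end = {hist[-1]:8.4f}")
```

Tracking `history` makes the (near-)monotone decrease of $f$ claimed by Theorem 1 for the ReLU and clipped ReLU STEs visible up to minibatch noise, and lets one probe how the identity STE behaves near the local minimizers where, by Lemmas 9–10, its expected coarse gradient does not vanish.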
{ "id": "1502.03167" }
1903.03862
Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society. This phenomenon is pervasive and consistent across different word embedding models, causing serious concern. Several recent works tackle this problem, and propose methods for significantly reducing this gender bias in word embeddings, demonstrating convincing results. However, we argue that this removal is superficial. While the bias is indeed substantially reduced according to the provided bias definition, the actual effect is mostly hiding the bias, not removing it. The gender bias information is still reflected in the distances between "gender-neutralized" words in the debiased embeddings, and can be recovered from them. We present a series of experiments to support this claim, for two debiasing methods. We conclude that existing bias removal techniques are insufficient, and should not be trusted for providing gender-neutral modeling.
http://arxiv.org/pdf/1903.03862
Hila Gonen, Yoav Goldberg
cs.CL
Accepted to NAACL 2019
null
cs.CL
20190309
20190924
arXiv:1903.03862v2 [cs.CL] 24 Sep 2019

Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

Hila Gonen1 and Yoav Goldberg1,2
1Department of Computer Science, Bar-Ilan University
2Allen Institute for Artificial Intelligence
{hilagnn,yoav.goldberg}@gmail.com

# Abstract

Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society. This phenomenon is pervasive and consistent across different word embedding models, causing serious concern. Several recent works tackle this problem, and propose methods for significantly reducing this gender bias in word embeddings, demonstrating convincing results. However, we argue that this removal is superficial. While the bias is indeed substantially reduced according to the provided bias definition, the actual effect is mostly hiding the bias, not removing it. The gender bias information is still reflected in the distances between "gender-neutralized" words in the debiased embeddings, and can be recovered from them. We present a series of experiments to support this claim, for two debiasing methods. We conclude that existing bias removal techniques are insufficient, and should not be trusted for providing gender-neutral modeling.

# Introduction

Word embeddings have become an important component in many NLP models and are widely used for a vast range of downstream tasks. However, these word representations have been proven to reflect social biases (e.g. race and gender) that naturally occur in the data used to train them (Caliskan et al., 2017; Garg et al., 2018).

In this paper we focus on gender bias. Gender bias was demonstrated to be consistent and pervasive across different word embeddings. Bolukbasi et al. (2016b) show that using word embeddings for simple analogies surfaces many gender stereotypes. For example, the word embedding they use (word2vec embedding trained on the Google News dataset1 (Mikolov et al., 2013)) answer the analogy "man is to computer programmer as woman is to x" with "x = homemaker". Caliskan et al. (2017) further demonstrate association between female/male names and groups of words stereotypically assigned to females/males (e.g. arts vs. science). In addition, they demonstrate that word embeddings reflect actual gender gaps in reality by showing the correlation between the gender association of occupation words and labor-force participation data.

Recently, some work has been done to reduce the gender bias in word embeddings, both as a post-processing step (Bolukbasi et al., 2016b) and as part of the training procedure (Zhao et al., 2018). Both works substantially reduce the bias with respect to the same definition: the projection on the gender direction (i.e. $\vec{he} - \vec{she}$), introduced in the former. They also show that performance on word similarity tasks is not hurt.

We argue that current debiasing methods, which lean on the above definition for gender bias and directly target it, are mostly hiding the bias rather than removing it. We show that even when drastically reducing the gender bias according to this definition, it is still reflected in the geometry of the representation of "gender-neutral" words, and a lot of the bias information can be recovered.2

1 https://code.google.com/archive/p/word2vec/

# 2 Gender Bias in Word Embeddings

In what follows we refer to words and their vectors interchangeably.
Definition and Existing Debiasing Methods Bolukbasi et al. (2016b) define the gender bias of a word $w$ by its projection on the "gender direction": $\vec{w} \cdot (\vec{he} - \vec{she})$, assuming all vectors are normalized. The larger a word's projection is on $\vec{he} - \vec{she}$, the more biased it is. They also quantify the bias in word embeddings using this definition and show it aligns well with social stereotypes.

2 The code for our experiments is available at https://github.com/gonenhila/gender_bias_lipstick.

Both Bolukbasi et al. (2016b) and Zhao et al. (2018) propose methods for debiasing word embeddings, substantially reducing the bias according to the suggested definition.3

Bolukbasi et al. (2016b) use a post-processing debiasing method. Given a word embedding matrix, they make changes to the word vectors in order to reduce the gender bias as much as possible for all words that are not inherently gendered (e.g. mother, brother, queen). They do that by zeroing the gender projection of each word on a predefined gender direction.4 In addition, they also take dozens of inherently gendered word pairs and explicitly make sure that all neutral words (those that are not predefined as inherently gendered) are equally close to each of the two words. This extensive, thoughtful, rigorous and well executed work surfaced the problem of bias in embeddings to the ML and NLP communities, defined the concept of debiasing word embeddings, and established the defacto metric of measuring this bias (the gender direction). It also provides a perfect solution to the problem of removing the gender direction from non-gendered words. However, as we show in this work, while the gender-direction is a great indicator of bias, it is only an indicator and not the complete manifestation of this bias.

Zhao et al. (2018) take a different approach and suggest to train debiased word embeddings from scratch. Instead of debiasing existing word vectors, they alter the loss of the GloVe model (Pennington et al., 2014), aiming to concentrate most of the gender information in the last coordinate of each vector. This way, one can later use the word representations excluding the gender coordinate. They do that by using two groups of male/female seed words, and encouraging words that belong to different groups to differ in their last coordinate. In addition, they encourage the representation of neutral-gender words (excluding the last coordinate) to be orthogonal to the gender direction.5 This work did a step forward by trying to remove the bias during training rather than in post-processing, which we believe to be the right approach. Unfortunately, it relies on the same definition that we show is insufficient.

3 Another work in this spirit is that of Zhang et al. (2018), which uses an adversarial network to debias word embeddings. There, the authors rely on the same definition of gender bias that considers the projection on the gender direction. We expect similar results for this method as well, however, we did not verify that.

4 The gender direction is chosen to be the top principal component (PC) of ten gender pair difference vectors.

These works implicitly define what is good gender debiasing: according to Bolukbasi et al. (2016b), there is no gender bias if each non-explicitly gendered word in the vocabulary is in equal distance to both elements of all explicitly gendered pairs. In other words, if one cannot determine the gender association of a word by looking at its projection on any gendered pair.
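As a concrete illustration of the bias-by-projection definition and of the "neutralize" step of the hard-debiasing method described above, a minimal NumPy sketch follows. It assumes `emb` is a word-to-vector dictionary; the real pipeline of Bolukbasi et al. additionally uses the top principal component of ten gender-pair difference vectors for the gender direction and an "equalize" step for gendered pairs, which are omitted here, and all function names are ours.

```python
import numpy as np

def gender_direction(emb, pairs=(("he", "she"), ("man", "woman"), ("his", "her"))):
    """Estimate the gender direction from gendered pairs. Bolukbasi et al. use the top
    principal component of ten pair-difference vectors; a simple average of a few
    normalized differences is used here to keep the sketch short."""
    diffs = [emb[a] / np.linalg.norm(emb[a]) - emb[b] / np.linalg.norm(emb[b]) for a, b in pairs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def bias_by_projection(emb, word, g):
    """The bias of a word: its projection on the gender direction (w . (he - she))."""
    w = emb[word] / np.linalg.norm(emb[word])
    return float(w @ g)

def neutralize(vec, g):
    """The hard-debiasing 'neutralize' step: zero out the component along g."""
    debiased = vec - (vec @ g) * g
    return debiased / np.linalg.norm(debiased)
```

After `neutralize`, the `bias_by_projection` of a gender-neutral word is numerically zero, which is exactly the sense in which debiasing succeeds under this definition; the experiments below probe what is left in the rest of the geometry.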
In Zhao et al. (2018) the definition is similar, but restricted to projections on the gender-direction.

Remaining bias after using debiasing methods Both works provide very compelling results as evidence of reducing the bias without hurting the performance of the embeddings for standard tasks. However, both methods and their results rely on the specific bias definition. We claim that the bias is much more profound and systematic, and that simply reducing the projection of words on a gender direction is insufficient: it merely hides the bias, which is still reflected in similarities between "gender-neutral" words (i.e., words such as "math" or "delicate" are in principle gender-neutral, but in practice have strong stereotypical gender associations, which reflect on, and are reflected by, neighbouring words).

Our key observation is that, almost by definition, most word pairs maintain their previous similarity, despite their change in relation to the gender direction. The implication of this is that most words that had a specific bias before are still grouped together, and apart from changes with respect to specific gendered words, the word embeddings' spatial geometry stays largely the same.6 In what follows, we provide a series of experiments that demonstrate the remaining bias in the debiased embeddings.

5 The gender direction is estimated during training by averaging the differences between female words and their male counterparts in a predefined set.

6 We note that in the extended arxiv version, Bolukbasi et al. (2016a) do mention this phenomenon and refer to it as "indirect bias". However, they do not quantify its extensiveness before and after debiasing, treat it mostly as a nuance, and do not provide any methods to deal with it.

# 3 Experimental Setup

We refer to the word embeddings of the previous works as HARD-DEBIASED (Bolukbasi et al., 2016b) and GN-GLOVE (gender-neutral GloVe) (Zhao et al., 2018). For each debiased word embedding we quantify the hidden bias with respect to the biased version. For HARD-DEBIASED we compare to the embeddings before applying the debiasing procedure. For GN-GLOVE we compare to embedding trained with standard GloVe on the same corpus.7

Unless otherwise specified, we follow Bolukbasi et al. (2016b) and use a reduced version of the vocabulary for both word embeddings: we take the most frequent 50,000 words and phrases and remove words with upper-case letters, digits, or punctuation, and words longer than 20 characters. In addition, to avoid quantifying the bias of words that are inherently gendered (e.g. mother, father, queen), we remove from each vocabulary the respective set of gendered words as pre-defined in each work.8 This yields a vocabulary of 26,189 words for HARD-DEBIASED and of 47,698 words for GN-GLOVE.

As explained in Section 2 and according to the definition in previous works, we compute the bias of a word by taking its projection on the gender direction: $\vec{w} \cdot (\vec{he} - \vec{she})$.

In order to quantify the association between sets of words, we follow Caliskan et al. (2017) and use their Word Embedding Association Test (WEAT): consider two sets of target words (e.g., male and female professions) and two sets of attribute words (e.g., male and female names). A permutation test estimates the probability that a random permutation of the target words would produce equal or greater similarities to the attribute sets.
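The WEAT procedure just described can be sketched as follows. The inputs are lists of word vectors for the two target sets X, Y and the two attribute sets A, B; Caliskan et al. compute the p-value over the equal-size partitions of X ∪ Y, which this sketch approximates by sampling random permutations. Function names are ours.

```python
import numpy as np

def _cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def _assoc(w, A, B):
    """s(w, A, B): mean cosine similarity to attribute set A minus to attribute set B."""
    return np.mean([_cos(w, a) for a in A]) - np.mean([_cos(w, b) for b in B])

def weat_statistic(X, Y, A, B):
    """Test statistic s(X, Y, A, B) = sum_x s(x, A, B) - sum_y s(y, A, B)."""
    return sum(_assoc(x, A, B) for x in X) - sum(_assoc(y, A, B) for y in Y)

def weat_p_value(X, Y, A, B, n_perm=10_000, seed=0):
    """One-sided permutation test: how often a random equal split of X u Y scores at
    least as high as the observed split (random permutations approximate the full
    enumeration over partitions used by Caliskan et al.)."""
    rng = np.random.default_rng(seed)
    observed = weat_statistic(X, Y, A, B)
    pooled = list(X) + list(Y)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        Xp = [pooled[i] for i in idx[:len(X)]]
        Yp = [pooled[i] for i in idx[len(X):]]
        if weat_statistic(Xp, Yp, A, B) >= observed:
            hits += 1
    return hits / n_perm
```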
# 4 Experiments and Results

Male- and female-biased words cluster together We take the most biased words in the vocabulary according to the original bias (500 male-biased and 500 female-biased words9), and cluster them into two clusters using k-means. For the HARD-DEBIASED embedding, the clusters align with gender with an accuracy of 92.5% (according to the original bias of each word), compared to an accuracy of 99.9% with the original biased version. For the GN-GLOVE embedding, we get an accuracy of 85.6%, compared to an accuracy of 100% with the biased version. These results suggest that indeed much of the bias information is still embedded in the representation after debiasing. Figure 1 shows the tSNE (Maaten and Hinton, 2008) projection of the vectors before and after debiasing, for both models.

Figure 1: Clustering the 1,000 most biased words, before and after debiasing, for both models. (a) Clustering for HARD-DEBIASED embedding, before (left hand-side) and after (right hand-side) debiasing. (b) Clustering for GN-GLOVE embedding, before (left hand-side) and after (right hand-side) debiasing.

7 We use the embeddings provided by Bolukbasi et al. (2016b) in https://github.com/tolga-b/debiaswe and by Zhao et al. (2018) in https://github.com/uclanlp/gn_glove.

8 For HARD-DEBIASED we use the first three lists from: https://github.com/tolga-b/debiaswe/tree/master/data and for GN-GLOVE we use the two lists from: https://github.com/uclanlp/gn_glove/tree/master/wordlist

Bias-by-projection correlates to bias-by-neighbours This clustering of gendered words indicates that while we cannot directly "observe" the bias (i.e. the word "nurse" will no longer be closer to explicitly marked feminine words) the bias is still manifested by the word being close to socially-marked feminine words, for example "nurse" being close to "receptionist", "caregiver" and "teacher". This suggests a new mechanism for measuring bias: the percentage of male/female socially-biased words among the k nearest neighbors of the target word.10

9 Highest on the two lists for HARD-DEBIASED are 'petite', 'mums', 'bra', 'breastfeeding' and 'sassy' for female and 'rookie', 'burly', 'hero', 'training camp' and 'journeyman' for male. Lowest on the two lists are 'watchdogs', 'watercolors', 'sew', 'burqa', 'diets' for female and 'teammates', 'playable', 'grinning', 'knee surgery', 'impersonation' for male.

10 While the social bias associated with a word cannot be observed directly in the new embeddings, we can approximate it using the gender-direction in non-debiased embeddings.
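The two measurements just described, k-means cluster alignment and bias-by-neighbours, can be sketched as follows, assuming `vectors` holds the (debiased) vectors of the most-biased words and the labels come from the original, non-debiased bias. The use of scikit-learn and the particular k are our assumptions; the paper does not specify its tooling beyond k-means, t-SNE and nearest neighbours.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def cluster_alignment_accuracy(vectors, orig_labels, seed=0):
    """K-means the (debiased) vectors of the most-biased words into two clusters and
    report how well the clusters align with the original male/female bias labels."""
    pred = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(vectors)
    acc = (pred == orig_labels).mean()
    return max(acc, 1.0 - acc)                 # cluster ids are arbitrary

def bias_by_neighbours(vectors, orig_is_male, k=100):
    """For each word, the fraction of male-biased words (male/female decided by the
    ORIGINAL, non-debiased bias) among its k nearest neighbours in the given space."""
    nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(vectors)
    _, idx = nn.kneighbors(vectors)            # idx[:, 0] is the word itself
    return orig_is_male[idx[:, 1:]].mean(axis=1)

# Correlation between the neighbour-based measure and the original projection bias:
# np.corrcoef(orig_bias, bias_by_neighbours(debiased_vectors, orig_bias > 0))[0, 1]
```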
For the GN-GLOVE embedding we get a Pearson correlation of 0.736 (compared to 0.773). All these correlations are statistically significant with p-values of 0. Professions We consider the list of professions used in Bolukbasi et al. (2016b) and Zhao et al. (2018)11 in light of the neighbours-based bias def- inition. Figure 2 plots the professions, with axis X being the original bias and axis Y being the num- ber of male neighbors, before and after debiasing. For both methods, there is a clear correlation be- tween the two variables. The first experiment evaluates the association between female/male names and family and ca- reer words. The second one evaluates the associ- ation between female/male concepts and arts and mathematics words. Since the inherently gendered words (e.g. girl, her, brother) in the second ex- periment are handled well by the debiasing mod- els we opt to use female and male names instead. The third one evaluates the association between fe- male/male concepts and arts and science words. Again, we use female and male names instead.12 For the HARD-DEBIASED embedding, we get a p-value of 0 for the first experiment, 0.00016 for the second one, and 0.0467 for the third. For the GN-GLOVE embedding, we get p-values of 7.7 × 10−5, 0.00031 and 0.0064 for the first, second and third experiments, respectively. We observe a Pearson correlation of 0.606 (compared to a correlation of 0.747 when check- ing neighbors according to the biased version) for HARD-DEBIASED and 0.792 (compared to 0.820) for GN-GLOVE. All these correlations are signif- icant with p-values < 1 × 10−30. Association and female/male-stereotyped words We replicate the three gender-related association experiments from Caliskan et al. (2017). For these experiments we use the full vocabulary since some of the words are not included in the reduced one. 11https://github.com/tolga-b/debiaswe/ tree/master/data/professions.json Classifying previously female- and male-biased words Can a classifier learn to generalize from some gendered words to others based only on their 12All word lists are taken from Caliskan et al. (2017): First experiment: Female names: Amy, Joan, Lisa, Sarah, Di- ana, Kate, Ann, Donna. Male names: John, Paul, Mike, Kevin, Steve, Greg, Jeff, Bill. Family words: home, par- ents, children, family, cousins, marriage, wedding, relatives. Career words: executive, management, professional, corpo- ration, salary, office, business, career. Second experiment: Arts Words: poetry, art, dance, literature, novel, symphony, drama, sculpture. Math words: math, algebra, geometry, cal- culus, equations, computation, numbers, addition. Third ex- periment: Arts words: poetry, art, Shakespeare, dance, lit- erature, novel, symphony, drama. Science words: science, technology, physics, chemistry, Einstein, NASA, experiment, astronomy. representations? We consider the 5,000 most bi- ased words according to the original bias (2,500 from each gender), train an RBF-kernel SVM clas- sifier on a random sample of 1,000 of them (500 from each gender) to predict the gender, and evalu- ate its generalization on the remaining 4,000. For the HARD-DEBIASED embedding, we get an ac- curacy of 88.88%, compared to an accuracy of 98.25% with the non-debiased version. For the GN-GLOVE embedding, we get an accuracy of 96.53%, compared to an accuracy of 98.65% with the non-debiased version. 
# 5 Discussion and Conclusion

The experiments described in the previous section reveal a systematic bias found in the embeddings, which is independent of the gender direction. We observe that semantically related words still maintain gender bias both in their similarities, and in their representation. Concretely, we find that:

1. Words with strong previous gender bias (with the same direction) are easy to cluster together.

2. Words that receive implicit gender from social stereotypes (e.g. receptionist, hairdresser, captain) still tend to group with other implicit-gender words of the same gender, similarly to non-debiased word embeddings.

3. The implicit gender of words with prevalent previous bias is easy to predict based on their vectors alone.

The implications are alarming: while suggested debiasing methods work well at removing the gender direction, the debiasing is mostly superficial. The bias stemming from world stereotypes and learned from the corpus is ingrained much more deeply in the embeddings space.

We note that the real concern from biased representations is not the association of a concept with words such as "he", "she", "boy", "girl" nor being able to perform gender-stereotypical word analogies. While these are nice "party tricks", algorithmic discrimination is more likely to happen by associating one implicitly gendered term with other implicitly gendered terms, or picking up on gender-specific regularities in the corpus by learning to condition on gender-biased words, and generalizing to other gender-biased words (i.e., a resume classifier that will learn to favor male over female candidates based on stereotypical cues in an existing, and biased, resume dataset, despite being "oblivious" to gender). Our experiments show that such classifiers would have ample opportunities to pick up on such cues also after debiasing w.r.t. the gender-direction.

The crux of the issue is that the gender-direction provides a way to measure the gender-association of a word, but does not determine it. Debiasing methods which directly target the gender-direction are for the most part merely hiding the gender bias and not removing it. The popular definitions used for quantifying and removing bias are insufficient, and other aspects of the bias should be taken into consideration as well.

# Acknowledgments

This work is supported by the Israeli Science Foundation (grant number 1555/15), and by the Israeli ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation program.

# References

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016a. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. arXiv:1607.06520.

Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016b. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems.

Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186.

Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences, 115(16):E3635–E3644.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP.

Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society.

Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018. Learning gender-neutral word embeddings. In Proceedings of EMNLP, pages 4847–4853.
{ "id": "1607.06520" }