{ "id": "1902.00751", "annotator": "jiangshu", "input": [ "\\documentclass{article}\n", "\\usepackage{microtype}\n", "\\usepackage{graphicx}\n", "\\usepackage{subfigure}\n", "\\usepackage{capt-of}\n", "\\usepackage{booktabs} \n", "\\usepackage{hyperref}\n", "\\usepackage{amssymb}\n", "\\usepackage{bm}\n", "\\usepackage{enumitem}\n", "\\usepackage{pbox}\n", "\\usepackage{adjustbox}\n", "\\usepackage[T1]{fontenc}\n", "\\usepackage{todonotes}\n", "\\usepackage{sidecap}\n", "\\sidecaptionvpos{figure}{c}\n", "\\newcommand{\\theHalgorithm}{\\arabic{algorithm}}\n", "\\usepackage[accepted]{icml2019}\n", "\\icmltitlerunning{Parameter-Efficient Transfer Learning for NLP}\n", "\\begin{document}\n", "\\twocolumn[\n", "\\icmltitle{Parameter-Efficient Transfer Learning for NLP}\n", "\\icmlsetsymbol{equal}{*}\n", "\\begin{icmlauthorlist}\n", "\\icmlauthor{Neil Houlsby}{goo}\n", "\\icmlauthor{Andrei Giurgiu}{goo,equal}\n", "\\icmlauthor{Stanis\\l{}aw Jastrz\\c{e}bski}{jag,equal}\n", "\\icmlauthor{Bruna Morrone}{goo}\n", "\\icmlauthor{Quentin de Laroussilhe}{goo}\n", "\\icmlauthor{Andrea Gesmundo}{goo}\n", "\\icmlauthor{Mona Attariyan}{goo}\n", "\\icmlauthor{Sylvain Gelly}{goo}\n", "\\end{icmlauthorlist}\n", "\\icmlaffiliation{goo}{Google Research}\n", "\\icmlaffiliation{jag}{Jagiellonian University}\n", "\\icmlcorrespondingauthor{Neil Houlsby}{neilhoulsby@google.com}\n", "\\icmlkeywords{NLP, Transfer Learning}\n", "\\vskip 0.3in\n", "]\n", "\\printAffiliationsAndNotice{\\icmlEqualContribution} \n", "\\begin{abstract}\n", "\\end{abstract}\n", "\\section{Introduction}\n", "Transfer from pre-trained models yields strong performance on many NLP tasks~\\citep{dai2015,howard2018universal,radford2018improving}.\n", "BERT, a Transformer network trained on large text corpora with an\n", "unsupervised loss, attained state-of-the-art performance on text classification\n", "and extractive question answering~\\citep{devlin2018bert}.\n", "In this paper we address the online setting, where tasks arrive in a stream.\n", "The goal is to build a system that performs well on all of them, but without training an entire new model for every new task.\n", "A high degree of sharing between tasks is particularly useful for applications such as cloud services,\n", "where models need to be trained to solve many tasks that arrive from customers in sequence.\n", "For this, we propose a transfer learning strategy that yields \\emph{compact} and \\emph{extensible} downstream models.\n", "Compact models are those that solve many tasks using a small number of additional parameters per task.\n", "Extensible models can be trained incrementally to solve new tasks, without forgetting previous ones.\n", "Our method yields a such models without sacrificing performance.\n", "\\begin{figure}[t]\n", "\\centering\n", "\\caption{\n", "\\label{fig:glue_summary_results}\n", "\\end{figure}\n", "The two most common transfer learning techniques in NLP are feature-based transfer and fine-tuning.\n", "Instead, we present an alternative transfer method based on adapter modules~\\citep{rebuffi2017}.\n", "Features-based transfer involves pre-training real-valued embeddings vectors.\n", "These embeddings may be at the word~\\citep{mikolov2013}, sentence~\\citep{cer2019}, or paragraph level~\\citep{le2014}.\n", "The embeddings are then fed to custom downstream models.\n", "Fine-tuning involves copying the weights from a pre-trained network and tuning them on the downstream task.\n", "Recent work shows that fine-tuning often enjoys better performance than 
feature-based transfer~\\citep{howard2018universal}.\n", "Both feature-based transfer and fine-tuning require a new set of weights for each task.\n", "Fine-tuning is more parameter efficient if the lower layers of a network are shared between tasks.\n", "However, our proposed adapter tuning method is even more parameter efficient.\n", "The \\emph{x}-axis shows the number of parameters trained per task;\n", "this corresponds to the marginal increase in the model size required to solve each additional task.\n", "Adapter-based tuning requires training two orders of magnitude fewer parameters to fine-tuning, while attaining similar performance.\n", "Adapters are new modules added between layers of a pre-trained network.\n", "Adapter-based tuning differs from feature-based transfer and fine-tuning in the following way.\n", "Consider a function (neural network) with parameters $\\bm w$: $\\phi_{\\bm w}(\\bm x)$.\n", "Feature-based transfer composes $\\phi_{\\bm w}$ with a new function, $\\chi_{\\bm v}$, to yield $\\chi_{\\bm v}(\\phi_{\\bm w}(\\bm x))$.\n", "Only the new, task-specific, parameters, $\\bm v$, are then trained.\n", "Fine-tuning involves adjusting the original parameters, $\\bm w$, for each new task, limiting compactness.\n", "For adapter tuning, a new function, $\\psi_{\\bm w, \\bm v}(\\bm x)$, is defined, where parameters $\\bm w$ are copied over from pre-training.\n", "The initial parameters $\\bm v_0$ are set such that the new function resembles the original: $\\psi_{\\bm w, \\bm v_0}(\\bm x) \\approx \\phi_{\\bm w}(\\bm x)$.\n", "During training, only $\\bm v$ are tuned.\n", "For deep networks, defining $\\psi_{\\bm w, \\bm v}$ typically involves adding new layers to the original network, $\\phi_{\\bm w}$.\n", "If one chooses $|\\bm v|\\ll|\\bm w|$, the resulting model requires $\\sim|\\bm w|$ parameters for many tasks.\n", "Since $\\bm w$ is fixed, the model can be extended to new tasks without affecting previous ones.\n", "Adapter-based tuning relates to \\emph{multi-task} and \\emph{continual} learning.\n", "Multi-task learning also results in compact models.\n", "However, multi-task learning requires simultaneous access to all tasks, which adapter-based tuning does not require.\n", "Continual learning systems aim to learn from an endless stream of tasks.\n", "This paradigm is challenging because networks forget previous tasks after re-training~\\citep{mccloskey1989catastrophic,french1999catastrophic}.\n", "Adapters differ in that the tasks do not interact and the shared parameters are frozen.\n", "This means that the model has perfect memory of previous tasks using a small number of task-specific parameters.\n", "The key innovation is to design an effective adapter module and its integration with the base model.\n", "We propose a simple yet effective, bottleneck architecture.\n", "but uses only 3\\\n", "In summary, adapter-based tuning yields a single, extensible, model that attains near state-of-the-art performance in text classification.\n", "\\section{Adapter tuning for NLP}\n", "\\begin{SCfigure*}\n", "\\begin{tabular}{cc}\n", " \\includegraphics[width=0.45\\linewidth]{figures/Adapter_insertion.pdf}&\n", " \\includegraphics[width=0.45\\linewidth]{figures/Adapter_arch.pdf}\n", "\\end{tabular}\n", " \\caption{\n", " Architecture of the adapter module and its integration with the Transformer.\n", " after the projection following multi-headed attention and after the two feed-forward layers.\n", " \\textbf{Right:} The adapter consists of a bottleneck which contains few 
parameters relative to the attention and feedforward layers in the original model.\n", " The adapter also contains a skip-connection.\n", " and the final classification layer (not shown in the figure).\n", " \\label{fig:adapters_transformer}}\n", "\\end{SCfigure*}\n", "We present a strategy for tuning a large text model on several downstream tasks.\n", "Our strategy has three key properties:\n", "(i) it attains good performance,\n", "(ii) it permits training on tasks sequentially, that is, it does not require simultaneous access to all datasets,\n", "and (iii) it adds only a small number of additional parameters per task.\n", "These properties are especially useful in the context of cloud services,\n", "where many models need to be trained on a series of downstream tasks, so a high degree of sharing is desirable.\n", "To achieve these properties, we propose a new bottleneck adapter module.\n", "Tuning with adapter modules involves adding a small number of new parameters to a model, which are trained on the downstream task~\\citep{rebuffi2017}.\n", "When performing vanilla fine-tuning of deep networks, a modification is made to the top layer of the network.\n", "This is required because the label spaces and losses for the upstream and downstream tasks differ.\n", "Adapter modules perform more general architectural modifications to re-purpose a pre-trained network for a downstream task.\n", "In particular, the adapter tuning strategy involves injecting new layers into the original network.\n", "The weights of the original network are untouched, whilst the new adapter layers are initialized at random.\n", "In standard fine-tuning, the new top-layer and the original weights are co-trained.\n", "In contrast, in adapter-tuning, the parameters of the original network are frozen and therefore may be shared by many tasks.\n", "Adapter modules have two main features: a small number of parameters, and a near-identity initialization.\n", "The adapter modules need to be small compared to the layers of the original network.\n", "This means that the total model size grows relatively slowly when more tasks are added.\n", "we investigate this empirically in Section~\\ref{sec:discussion}.\n", "During training, the adapters may then be activated to change the distribution of activations throughout the network.\n", "The adapter modules may also be ignored if not required;\n", "\\subsection{Instantiation for Transformer Networks\\label{sec:bottleneckadapter}}\n", "We instantiate adapter-based tuning for text Transformers.\n", "These models attain state-of-the-art performance in many NLP tasks,\n", "including translation, extractive QA, and text classification problems~\\citep{vaswani2017,radford2018improving,devlin2018bert}.\n", "We consider the standard Transformer architecture, as proposed in~\\citet{vaswani2017}.\n", "Adapter modules present many architectural choices.\n", "We provide a simple design that attains good performance.\n", "We experimented with a number of more complex designs, see Section~\\ref{sec:discussion},\n", "Figure~\\ref{fig:adapters_transformer} shows our adapter architecture, and its application it to the Transformer.\n", "Each layer of the Transformer contains two primary sub-layers: an attention layer and a feedforward layer.\n", "Both layers are followed immediately by a projection that maps the features size back to the size of layer's input.\n", "A skip-connection is applied across each of the sub-layers.\n", "The output of each sub-layer is fed into layer normalization.\n", "We 
insert two serial adapters after each of these sub-layers.\n", "The adapter is always applied directly to the output of the sub-layer, after the projection back to the input size,\n", "but before adding the skip connection back.\n", "The output of the adapter is then passed directly into the following layer normalization.\n", "To limit the number of parameters, we propose a bottleneck architecture.\n", "The adapters first project the original $d$-dimensional features into a smaller dimension, $m$, apply a nonlinearity, then project back to $d$ dimensions.\n", "The total number of parameters added per layer, including biases, is $2md+d+m$.\n", "in practice, we use around $0.5-8\\\n", "The adapter module itself has a skip-connection internally.\n", "the module is initialized to an approximate identity function.\n", "This technique, similar to conditional batch normalization~\\citep{de2017modulating},\n", "FiLM~\\citep{perez2018}, and self-modulation~\\citep{chen2019}, also yields parameter-efficient adaptation of a network; with only $2d$ parameters per layer.\n", "see Section~\\ref{sec:param_efficiency}.\n" ], "output": { "What experiments do you suggest doing?": [ "1. Adapter performance on GLUE: The authors should evaluate the adapter performance on common and widely used benchmarks such as GLUE. They should compare the full finetuned base model as the baseline. They can test both using a fixed adapter size (number of units in the bottleneck), and selecting the best size per task from a set of adapter sizes, e.g., {8, 64, 256}. They can re-run multiple times with different random seeds and select the best model on the validation set.", "2. Adapter performance on more tasks: The authors can evaluate the proposed adapter on more tasks that are publicly available. Besides the full finetuned base model, they can also compare with strong baselines such as the models searched by single-task Neural AutoML algorithm.", "3. Parameter/Performance trade-off: The authors should consider different adapter sizes and compare to two baselines: (i) Fine-tuning of only the top k layers of the base model. (ii) Tuning only the layer normalization parameters.", "4. Evaluating adapters on more types of tasks: The authors should evaluate adapters on more types of tasks. For example, if previous tasks are text classification tasks, the authors should also evaluate adapters on another type of tasks such as question answering.", "5. Adapter influence ablation experiments: The authors should remove some trained adapters and re-evaluate the model (without re-training) on tasks. They should report the performance of removing each single layer\u2019s adapters and the performance of removing adapters from different continuous layer spans.", "6. Effect of initialization scale: The authors should report the performance of the model using adapters with different initial weight magnitudes. For example, test standard deviations in a certain interval such as [10^-7, 1]", "7. Robustness of adapters to the number of neurons: The authors should report the mean validation accuracy across the previous tasks when using different adapter sizes." ], "Why do you suggest these experiments?": [ "1. To prove the effectiveness of the proposed adapter module.", "2. To further validate that adapter yields compact, performant, models.", "3. The adapter size controls the parameter efficiency, smaller adapters introduce fewer parameters, at a possible cost to performance. This experiment is to explore this trade-off. 
Additionally, the comparison with the baselines can also show the effectiveness of adapters across a range of sizes fewer than fine-tuning.", "4. To confirm that adapters work on different types of tasks.", "5. To determine which adapters are influential.", "6. To analyze the impact of the initialization scale on the performance.", "7. To investigate robustness of adapters to the number of neurons." ] }, "paper_info": { "title": "Parameter-Efficient Transfer Learning for NLP", "authors": [ "Neil Houlsby", "Andrei Giurgiu", "Stanislaw Jastrzebski", "Bruna Morrone", "Quentin de Laroussilhe", "Andrea Gesmundo", "Mona Attariyan", "Sylvain Gelly" ], "abstract": "Fine-tuning large pre-trained models is an effective transfer mechanism in\nNLP. However, in the presence of many downstream tasks, fine-tuning is\nparameter inefficient: an entire new model is required for every task. As an\nalternative, we propose transfer with adapter modules. Adapter modules yield a\ncompact and extensible model; they add only a few trainable parameters per\ntask, and new tasks can be added without revisiting previous ones. The\nparameters of the original network remain fixed, yielding a high degree of\nparameter sharing. To demonstrate adapter's effectiveness, we transfer the\nrecently proposed BERT Transformer model to 26 diverse text classification\ntasks, including the GLUE benchmark. Adapters attain near state-of-the-art\nperformance, whilst adding only a few parameters per task. On GLUE, we attain\nwithin 0.4% of the performance of full fine-tuning, adding only 3.6% parameters\nper task. By contrast, fine-tuning trains 100% of the parameters per task.", "comments": null }, "raw_data": { "context_before_exp": [ "\\documentclass{article}\n", "\n", "\n", "\\usepackage{microtype}\n", "\\usepackage{graphicx}\n", "\\usepackage{subfigure}\n", "\\usepackage{capt-of}\n", "\\usepackage{booktabs} \n", "\n", "\n", "\n", "\n", "\n", "\\usepackage{hyperref}\n", "\n", "\n", "\\usepackage{amssymb}\n", "\\usepackage{bm}\n", "\\usepackage{enumitem}\n", "\\usepackage{pbox}\n", "\\usepackage{adjustbox}\n", "\\usepackage[T1]{fontenc}\n", "\\usepackage{todonotes}\n", "\\usepackage{sidecap}\n", "\\sidecaptionvpos{figure}{c}\n", "\n", "\n", "\\newcommand{\\theHalgorithm}{\\arabic{algorithm}}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\usepackage[accepted]{icml2019}\n", "\n", "\n", "\n", "\n", "\\icmltitlerunning{Parameter-Efficient Transfer Learning for NLP}\n", "\n", "\\begin{document}\n", "\n", "\\twocolumn[\n", "\\icmltitle{Parameter-Efficient Transfer Learning for NLP}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\icmlsetsymbol{equal}{*}\n", "\n", "\\begin{icmlauthorlist}\n", "\\icmlauthor{Neil Houlsby}{goo}\n", "\\icmlauthor{Andrei Giurgiu}{goo,equal}\n", "\\icmlauthor{Stanis\\l{}aw Jastrz\\c{e}bski}{jag,equal}\n", "\\icmlauthor{Bruna Morrone}{goo}\n", "\\icmlauthor{Quentin de Laroussilhe}{goo}\n", "\\icmlauthor{Andrea Gesmundo}{goo}\n", "\\icmlauthor{Mona Attariyan}{goo}\n", "\\icmlauthor{Sylvain Gelly}{goo}\n", "\\end{icmlauthorlist}\n", "\n", "\\icmlaffiliation{goo}{Google Research}\n", "\\icmlaffiliation{jag}{Jagiellonian University}\n", "\n", "\\icmlcorrespondingauthor{Neil Houlsby}{neilhoulsby@google.com}\n", "\n", "\n", "\n", "\n", "\\icmlkeywords{NLP, Transfer Learning}\n", "\n", "\\vskip 0.3in\n", "]\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\printAffiliationsAndNotice{\\icmlEqualContribution} \n", "\n", "\\begin{abstract}\n", "Fine-tuning large 
pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate adapter's effectiveness, we transfer the recently proposed BERT Transformer model to $26$ diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within $0.4\\\n", "\\end{abstract}\n", "\\section{Introduction}\n", "\n", "Transfer from pre-trained models yields strong performance on many NLP tasks~\\citep{dai2015,howard2018universal,radford2018improving}.\n", "BERT, a Transformer network trained on large text corpora with an\n", "unsupervised loss, attained state-of-the-art performance on text classification\n", "and extractive question answering~\\citep{devlin2018bert}.\n", "\n", "In this paper we address the online setting, where tasks arrive in a stream.\n", "The goal is to build a system that performs well on all of them, but without training an entire new model for every new task.\n", "A high degree of sharing between tasks is particularly useful for applications such as cloud services,\n", "where models need to be trained to solve many tasks that arrive from customers in sequence.\n", "For this, we propose a transfer learning strategy that yields \\emph{compact} and \\emph{extensible} downstream models.\n", "Compact models are those that solve many tasks using a small number of additional parameters per task.\n", "Extensible models can be trained incrementally to solve new tasks, without forgetting previous ones.\n", "Our method yields a such models without sacrificing performance.\n", "\n", "\\begin{figure}[t]\n", "\\centering\n", "\\includegraphics[width=0.95\\linewidth]{figures/glue_results.pdf}\n", "\\caption{\n", "Trade-off between accuracy and number of trained task-specific parameters, for adapter tuning and fine-tuning.\n", "The \\emph{y}-axis is normalized by the performance of full fine-tuning, details in Section~\\ref{sec:experiments}.\n", "The curves show the $20$th, $50$th, and $80$th performance percentiles across nine tasks from the GLUE benchmark.\n", "Adapter-based tuning attains a similar performance to full fine-tuning with two orders of magnitude fewer trained parameters.}\n", "\\label{fig:glue_summary_results}\n", "\\end{figure}\n", "\n", "The two most common transfer learning techniques in NLP are feature-based transfer and fine-tuning.\n", "Instead, we present an alternative transfer method based on adapter modules~\\citep{rebuffi2017}.\n", "Features-based transfer involves pre-training real-valued embeddings vectors.\n", "These embeddings may be at the word~\\citep{mikolov2013}, sentence~\\citep{cer2019}, or paragraph level~\\citep{le2014}.\n", "The embeddings are then fed to custom downstream models.\n", "Fine-tuning involves copying the weights from a pre-trained network and tuning them on the downstream task.\n", "Recent work shows that fine-tuning often enjoys better performance than feature-based transfer~\\citep{howard2018universal}.\n", "\n", "Both feature-based transfer and fine-tuning require a new 
set of weights for each task.\n", "Fine-tuning is more parameter efficient if the lower layers of a network are shared between tasks.\n", "However, our proposed adapter tuning method is even more parameter efficient.\n", "Figure~\\ref{fig:glue_summary_results} demonstrates this trade-off.\n", "The \\emph{x}-axis shows the number of parameters trained per task;\n", "this corresponds to the marginal increase in the model size required to solve each additional task.\n", "Adapter-based tuning requires training two orders of magnitude fewer parameters to fine-tuning, while attaining similar performance.\n", "\n", "Adapters are new modules added between layers of a pre-trained network.\n", "Adapter-based tuning differs from feature-based transfer and fine-tuning in the following way.\n", "Consider a function (neural network) with parameters $\\bm w$: $\\phi_{\\bm w}(\\bm x)$.\n", "Feature-based transfer composes $\\phi_{\\bm w}$ with a new function, $\\chi_{\\bm v}$, to yield $\\chi_{\\bm v}(\\phi_{\\bm w}(\\bm x))$.\n", "Only the new, task-specific, parameters, $\\bm v$, are then trained.\n", "Fine-tuning involves adjusting the original parameters, $\\bm w$, for each new task, limiting compactness.\n", "For adapter tuning, a new function, $\\psi_{\\bm w, \\bm v}(\\bm x)$, is defined, where parameters $\\bm w$ are copied over from pre-training.\n", "The initial parameters $\\bm v_0$ are set such that the new function resembles the original: $\\psi_{\\bm w, \\bm v_0}(\\bm x) \\approx \\phi_{\\bm w}(\\bm x)$.\n", "During training, only $\\bm v$ are tuned.\n", "For deep networks, defining $\\psi_{\\bm w, \\bm v}$ typically involves adding new layers to the original network, $\\phi_{\\bm w}$.\n", "If one chooses $|\\bm v|\\ll|\\bm w|$, the resulting model requires $\\sim|\\bm w|$ parameters for many tasks.\n", "Since $\\bm w$ is fixed, the model can be extended to new tasks without affecting previous ones.\n", "\n", "Adapter-based tuning relates to \\emph{multi-task} and \\emph{continual} learning.\n", "Multi-task learning also results in compact models.\n", "However, multi-task learning requires simultaneous access to all tasks, which adapter-based tuning does not require.\n", "Continual learning systems aim to learn from an endless stream of tasks.\n", "This paradigm is challenging because networks forget previous tasks after re-training~\\citep{mccloskey1989catastrophic,french1999catastrophic}.\n", "Adapters differ in that the tasks do not interact and the shared parameters are frozen.\n", "This means that the model has perfect memory of previous tasks using a small number of task-specific parameters.\n", "\n", "We demonstrate on a large and diverse set of text classification tasks that adapters yield parameter-efficient tuning for NLP.\n", "The key innovation is to design an effective adapter module and its integration with the base model.\n", "We propose a simple yet effective, bottleneck architecture.\n", "On the GLUE benchmark, our strategy almost matches the performance of the fully fine-tuned BERT,\n", "but uses only 3\\\n", "We observe similar results on a further $17$ public text datasets, and SQuAD extractive question answering.\n", "In summary, adapter-based tuning yields a single, extensible, model that attains near state-of-the-art performance in text classification.\n", "\\section{Adapter tuning for NLP}\n", "\n", "\\begin{SCfigure*}\n", "\\begin{tabular}{cc}\n", " \\includegraphics[width=0.45\\linewidth]{figures/Adapter_insertion.pdf}&\n", " 
\\includegraphics[width=0.45\\linewidth]{figures/Adapter_arch.pdf}\n", "\\end{tabular}\n", " \\caption{\n", " Architecture of the adapter module and its integration with the Transformer.\n", " \\textbf{Left:} We add the adapter module twice to each Transformer layer:\n", " after the projection following multi-headed attention and after the two feed-forward layers.\n", " \\textbf{Right:} The adapter consists of a bottleneck which contains few parameters relative to the attention and feedforward layers in the original model.\n", " The adapter also contains a skip-connection.\n", " During adapter tuning, the green layers are trained on the downstream data, this includes the adapter, the layer normalization parameters,\n", " and the final classification layer (not shown in the figure).\n", " \\label{fig:adapters_transformer}}\n", "\\end{SCfigure*}\n", "\n", "We present a strategy for tuning a large text model on several downstream tasks.\n", "Our strategy has three key properties:\n", "(i) it attains good performance,\n", "(ii) it permits training on tasks sequentially, that is, it does not require simultaneous access to all datasets,\n", "and (iii) it adds only a small number of additional parameters per task.\n", "These properties are especially useful in the context of cloud services,\n", "where many models need to be trained on a series of downstream tasks, so a high degree of sharing is desirable.\n", "\n", "To achieve these properties, we propose a new bottleneck adapter module.\n", "Tuning with adapter modules involves adding a small number of new parameters to a model, which are trained on the downstream task~\\citep{rebuffi2017}.\n", "When performing vanilla fine-tuning of deep networks, a modification is made to the top layer of the network.\n", "This is required because the label spaces and losses for the upstream and downstream tasks differ.\n", "Adapter modules perform more general architectural modifications to re-purpose a pre-trained network for a downstream task.\n", "In particular, the adapter tuning strategy involves injecting new layers into the original network.\n", "The weights of the original network are untouched, whilst the new adapter layers are initialized at random.\n", "In standard fine-tuning, the new top-layer and the original weights are co-trained.\n", "In contrast, in adapter-tuning, the parameters of the original network are frozen and therefore may be shared by many tasks.\n", "\n", "Adapter modules have two main features: a small number of parameters, and a near-identity initialization.\n", "The adapter modules need to be small compared to the layers of the original network.\n", "This means that the total model size grows relatively slowly when more tasks are added.\n", "A near-identity initialization is required for stable training of the adapted model;\n", "we investigate this empirically in Section~\\ref{sec:discussion}.\n", "By initializing the adapters to a near-identity function, original network is unaffected when training starts.\n", "During training, the adapters may then be activated to change the distribution of activations throughout the network.\n", "The adapter modules may also be ignored if not required;\n", "in Section~\\ref{sec:discussion} we observe that some adapters have more influence on the network than others.\n", "We also observe that if the initialization deviates too far from the identity function, the model may fail to train.\n", "\n", "\\subsection{Instantiation for Transformer Networks\\label{sec:bottleneckadapter}}\n", "\n", 
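"% For concreteness, a minimal PyTorch-style sketch of the bottleneck adapter module described in this subsection;\n", "% the class name, the GELU nonlinearity, and the exact initializer are illustrative assumptions, not the authors' released implementation.\n", "% import torch.nn as nn\n", "% class BottleneckAdapter(nn.Module):\n", "%     def __init__(self, d, m, init_std=1e-2):\n", "%         super().__init__()\n", "%         self.down = nn.Linear(d, m)  # d*m weights + m biases\n", "%         self.up = nn.Linear(m, d)    # m*d weights + d biases, i.e. 2md + d + m parameters per adapter\n", "%         self.act = nn.GELU()\n", "%         for layer in (self.down, self.up):\n", "%             # near-zero weights keep the adapter close to an identity function at initialization\n", "%             nn.init.trunc_normal_(layer.weight, std=init_std, a=-2 * init_std, b=2 * init_std)\n", "%             nn.init.zeros_(layer.bias)\n", "%     def forward(self, h):\n", "%         return h + self.up(self.act(self.down(h)))  # internal skip-connection\n",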
"We instantiate adapter-based tuning for text Transformers.\n", "These models attain state-of-the-art performance in many NLP tasks,\n", "including translation, extractive QA, and text classification problems~\\citep{vaswani2017,radford2018improving,devlin2018bert}.\n", "We consider the standard Transformer architecture, as proposed in~\\citet{vaswani2017}.\n", "\n", "Adapter modules present many architectural choices.\n", "We provide a simple design that attains good performance.\n", "We experimented with a number of more complex designs, see Section~\\ref{sec:discussion},\n", "but we found the following strategy performed as well as any other that we tested, across many datasets.\n", "\n", "Figure~\\ref{fig:adapters_transformer} shows our adapter architecture, and its application it to the Transformer.\n", "Each layer of the Transformer contains two primary sub-layers: an attention layer and a feedforward layer.\n", "Both layers are followed immediately by a projection that maps the features size back to the size of layer's input.\n", "A skip-connection is applied across each of the sub-layers.\n", "The output of each sub-layer is fed into layer normalization.\n", "We insert two serial adapters after each of these sub-layers.\n", "The adapter is always applied directly to the output of the sub-layer, after the projection back to the input size,\n", "but before adding the skip connection back.\n", "The output of the adapter is then passed directly into the following layer normalization.\n", "\n", "To limit the number of parameters, we propose a bottleneck architecture.\n", "The adapters first project the original $d$-dimensional features into a smaller dimension, $m$, apply a nonlinearity, then project back to $d$ dimensions.\n", "The total number of parameters added per layer, including biases, is $2md+d+m$.\n", "By setting $m\\ll d$, we limit the number of parameters added per task;\n", "in practice, we use around $0.5-8\\\n", "The bottleneck dimension, $m$, provides a simple means to trade-off performance with parameter efficiency.\n", "The adapter module itself has a skip-connection internally.\n", "With the skip-connection, if the parameters of the projection layers are initialized to near-zero,\n", "the module is initialized to an approximate identity function.\n", "\n", "Alongside the layers in the adapter module, we also train new layer normalization parameters per task.\n", "This technique, similar to conditional batch normalization~\\citep{de2017modulating},\n", "FiLM~\\citep{perez2018}, and self-modulation~\\citep{chen2019}, also yields parameter-efficient adaptation of a network; with only $2d$ parameters per layer.\n", "However, training the layer normalization parameters alone is insufficient for good performance,\n", "see Section~\\ref{sec:param_efficiency}.\n" ], "context_after_exp": [ "\\section{Experiments\\label{sec:experiments}}\n", "\n", "We show that adapters achieve parameter efficient transfer for text tasks.\n", "On the GLUE benchmark~\\citep{wang2018glue},\n", "adapter tuning is within $0.4\\\n", "We confirm this result on a further $17$ public classification tasks and SQuAD question answering.\n", "Analysis shows that adapter-based tuning automatically focuses on the higher layers of the network.\n", "\n", "\\subsection{Experimental Settings}\n", "\n", "We use the public, pre-trained BERT Transformer network as our base model.\n", "To perform classification with BERT, we follow the approach in~\\citet{devlin2018bert}.\n", "The first token in each sequence is a 
special ``classification token''.\n", "We attach a linear layer to the embedding of this token to predict the class label.\n", "\n", "Our training procedure also follows~\\citet{devlin2018bert}.\n", "We optimize using Adam~\\citep{kingma2014adam},\n", "whose learning rate is increased linearly over the first $10\\\n", "All runs are trained on $4$ Google Cloud TPUs with a batch size of $32$.\n", "For each dataset and algorithm, we run a hyperparameter sweep and select the best model according to accuracy on the validation set.\n", "For the GLUE tasks, we report the test metrics provided by the submission website\\footnote{\\url{https://gluebenchmark.com/}}.\n", "For the other classification tasks we report test-set accuracy.\n", "\n", "We compare to fine-tuning, the current standard for transfer of large pre-trained models,\n", "and the strategy successfully used by BERT.\n", "For $N$ tasks, full fine-tuning requires $N{\\times}$ the number of parameters of the pre-trained model.\n", "Our goal is to attain performance equal to fine-tuning, but with fewer total parameters, ideally near to $1{\\times}$.\n", "\n", "\\subsection{GLUE benchmark\\label{sec:glue}}\n", "\n", "\\begin{table*}[t]\n", "\\centering\n", "\\begin{adjustbox}{max width=\\textwidth}\n", "\\begin{tabular}{l|ll|rrrrrrrrr|r}\n", "\\toprule\n", "{} & \\pbox{3cm}{Total num\\\\ params} & \\pbox{3cm}{Trained \\\\ params / task} & CoLA & SST & MRPC & STS-B & QQP & MNLI\\textsubscript{m} & MNLI\\textsubscript{mm} & QNLI & RTE & Total \\\\\n", "\\midrule\n", "BERT\\textsubscript{LARGE} & $9.0\\times$ & $100\\\n", "Adapters ($8$-$256$) & $1.3\\times$ & $3.6\\\n", "Adapters ($64$) & $1.2\\times$ & $2.1\\\n", "\\bottomrule\n", "\\end{tabular}\n", "\\end{adjustbox}\n", "\\caption{\n", "Results on GLUE test sets scored using the GLUE evaluation server.\n", "MRPC and QQP are evaluated using F1 score.\n", "STS-B is evaluated using Spearman's correlation coefficient.\n", "CoLA is evaluated using Matthew's Correlation.\n", "The other tasks are evaluated using accuracy.\n", "Adapter tuning achieves comparable overall score ($80.0$) to full fine-tuning ($80.4$) using $1.3\\times$ parameters in total, compared to $9\\times$.\n", "Fixing the adapter size to $64$ leads to a slightly decreased overall score of $79.6$ and slightly smaller model.\n", "\\label{tab:glue}}\n", "\\end{table*}\n", "\n", "We first evaluate on GLUE.\\footnote{\n", "We omit WNLI as in~\\citet{devlin2018bert} because the no current algorithm beats the baseline of predicting the majority class.}\n", "For these datasets, we transfer from the pre-trained BERT\\textsubscript{LARGE} model,\n", "which contains $24$ layers, and a total of $330$M parameters, see~\\citet{devlin2018bert} for details.\n", "We perform a small hyperparameter sweep for adapter tuning:\n", "We sweep learning rates in $\\{3 \\cdot 10^{-5}, 3 \\cdot 10^{-4}, 3 \\cdot 10^{-3}\\}$, and number of epochs in $\\{3, 20\\}$.\n", "We test both using a fixed adapter size (number of units in the bottleneck),\n", "and selecting the best size per task from $\\{8, 64, 256\\}$.\n", "The adapter size is the only adapter-specific hyperparameter that we tune.\n", "Finally, due to training instability, we re-run $5$ times with different random seeds and select the best model on the validation set.\n", "\n", "Table~\\ref{tab:glue} summarizes the results.\n", "Adapters achieve a mean GLUE score of $80.0$, compared to $80.4$ achieved by full fine-tuning.\n", "The optimal adapter size varies per dataset. 
For example, $256$ is chosen for MNLI, whereas for the smallest dataset, RTE, $8$ is chosen.\n", "Restricting always to size $64$, leads to a small decrease in average accuracy to $79.6$.\n", "To solve all of the datasets in Table~\\ref{tab:glue}, fine-tuning requires $9\\times$ the total number of BERT parameters.\\footnote{\n", "We treat MNLI\\textsubscript{m} and MNLI\\textsubscript{mm} as separate tasks with individually tuned hyperparameters.\n", "However, they could be combined into one model, leaving $8\\times$ overall.}\n", "In contrast, adapters require only $1.3\\times$ parameters.\n", "\n", "\\subsection{Additional Classification Tasks}\n", "\n", "\\renewcommand{\\arraystretch}{0.9}\n", "\\begin{table*}[t]\n", "\\centering\n", "\\begin{adjustbox}{max width=\\textwidth}\n", "\\begin{tabular}{l|rrrrr}\n", "\\toprule\n", "Dataset &\n", "\\pbox{5cm}{No BERT\\\\baseline} &\n", "\\pbox{5cm}{BERT\\textsubscript{BASE}\\\\Fine-tune} &\n", "\\pbox{5cm}{BERT\\textsubscript{BASE}\\\\Variable FT} &\n", "\\pbox{5cm}{BERT\\textsubscript{BASE}\\\\Adapters} \\\\\n", "\\midrule\n", "20 newsgroups & $ 91.1 $ & $ 92.8 \\pm 0.1 $ & $ 92.8 \\pm 0.1 $ & $91.7 \\pm 0.2$ \\\\\n", "Crowdflower airline & $ 84.5 $ & $ 83.6 \\pm 0.3 $ & $ 84.0 \\pm 0.1 $ & $ 84.5 \\pm 0.2 $ \\\\\n", "Crowdflower corporate messaging & $ 91.9 $ & $ 92.5 \\pm 0.5 $ & $ 92.4 \\pm 0.6 $ & $ 92.9 \\pm 0.3 $ \\\\\n", "Crowdflower disasters & $ 84.9 $ & $ 85.3 \\pm 0.4 $ & $ 85.3 \\pm 0.4 $ & $ 84.1 \\pm 0.2 $ \\\\\n", "Crowdflower economic news relevance & $ 81.1 $ & $ 82.1 \\pm 0.0 $ & $ 78.9 \\pm 2.8 $ & $ 82.5 \\pm 0.3 $ \\\\\n", "Crowdflower emotion & $ 36.3 $ & $ 38.4 \\pm 0.1 $ & $ 37.6 \\pm 0.2 $ & $ 38.7 \\pm 0.1 $ \\\\\n", "Crowdflower global warming & $ 82.7 $ & $ 84.2 \\pm 0.4\n", " $ & $ 81.9 \\pm 0.2 $ & $ 82.7 \\pm 0.3 $ \\\\\n", "Crowdflower political audience & $ 81.0 $ & $ 80.9 \\pm 0.3 $ & $ 80.7 \\pm 0.8\n", " $ & $ 79.0 \\pm 0.5 $ \\\\\n", "Crowdflower political bias & $ 76.8 $ & $ 75.2 \\pm 0.9 $ & $ 76.5 \\pm 0.4 $ & $ 75.9 \\pm 0.3 $ \\\\\n", "Crowdflower political message & $ 43.8 $ & $ 38.9 \\pm 0.6 $ & $ 44.9 \\pm 0.6 $ & $ 44.1 \\pm 0.2 $ \\\\\n", "Crowdflower primary emotions & $ 33.5 $ & $ 36.9 \\pm 1.6 $ & $ 38.2 \\pm 1.0 $ & $ 33.9 \\pm 1.4 $ \\\\\n", "Crowdflower progressive opinion & $ 70.6 $ & $ 71.6 \\pm 0.5 $ & $ 75.9 \\pm 1.3 $ & $ 71.7 \\pm 1.1 $ \\\\\n", "Crowdflower progressive stance & $ 54.3 $ & $ 63.8 \\pm 1.0 $ & $ 61.5 \\pm 1.3 $ & $ 60.6 \\pm 1.4 $ \\\\\n", "Crowdflower US economic performance & $ 75.6 $ & $ 75.3 \\pm 0.1 $ & $ 76.5 \\pm 0.4 $ & $ 77.3 \\pm 0.1 $ \\\\\n", "Customer complaint database & $ 54.5 $ & $ 55.9 \\pm 0.1 $ & $ 56.4 \\pm 0.1 $ & $55.4 \\pm 0.1$ \\\\\n", "News aggregator dataset & $ 95.2 $ & $ 96.3 \\pm 0.0 $ & $ 96.5 \\pm 0.0 $ & $ 96.2 \\pm 0.0 $ \\\\\n", "SMS spam collection & $ 98.5 $ & $ 99.3 \\pm 0.2 $ & $ 99.3 \\pm 0.2 $ & $ 95.1 \\pm 2.2 $ \\\\\n", "\\midrule\n", "Average & $72.7$ & $73.7$ & $74.0$ & $73.3$ \\\\\n", "\\midrule\n", "Total number of params & \\textemdash & $17\\times$ & $9.9\\times$ & $1.19\\times$ \\\\\n", "Trained params/task & \\textemdash & $100\\\n", "\\bottomrule\n", "\\end{tabular}\n", "\\end{adjustbox}\n", "\\caption{\n", "Test accuracy for additional classification tasks.\n", "In these experiments we transfer from the BERT\\textsubscript{BASE} model.\n", "For each task and algorithm, the model with the best validation set accuracy is chosen.\n", "We report the mean test accuracy and s.e.m. 
across runs with different random seeds.}\n", "\\label{tab:hub_results}\n", "\\vskip-3mm\n", "\\end{table*}\n", "\\renewcommand{\\arraystretch}{1.0}\n", "\n", "To further validate that adapters yield compact, performant models, we test on additional, publicly available, text classification tasks.\n", "This suite contains a diverse set of tasks:\n", "The number of training examples ranges from $900$ to $330$k,\n", "the number of classes ranges from $2$ to $157$,\n", "and the average text length ranges from $57$ to $1.9$k characters.\n", "Statistics and references for all of the datasets are in the appendix.\n", "\n", "For these datasets, we use a batch size of $32$.\n", "The datasets are diverse, so we sweep a wide range of learning rates:\n", "$\\{1 \\cdot 10^{-5}, 3 \\cdot 10^{-5}, 1 \\cdot 10^{-4}, 3 \\cdot 10^{-3}\\}$.\n", "Due to the large number of datasets, we select the number of training epochs from the set $\\{20, 50, 100\\}$ manually, from inspection of the validation set learning curves.\n", "We select the optimal values for both fine-tuning and adapters; the exact values are in the appendix.\n", "\n", "We test adapter sizes in $\\{2, 4, 8, 16, 32, 64\\}$.\n", "Since some of the datasets are small, fine-tuning the entire network may be sub-optimal.\n", "Therefore, we run an additional baseline: variable fine-tuning.\n", "For this, we fine-tune only the top $n$ layers, and freeze the remainder.\n", "We sweep $n\\in\\{1,2,3,5,7,9,11,12\\}$.\n", "In these experiments, we use the BERT\\textsubscript{BASE} model with $12$ layers,\n", "therefore, variable fine-tuning subsumes full fine-tuning when $n=12$.\n", "\n", "\\begin{figure*}[t]\n", "\\centering\n", "\\vskip-1mm\n", "\\begin{tabular}{cc}\n", "GLUE (BERT\\textsubscript{LARGE}) & Additional Tasks (BERT\\textsubscript{BASE}) \\\\\n", "\\includegraphics[width=0.45\\textwidth]{figures/glue_results_b.pdf}&\n", "\\includegraphics[width=0.45\\textwidth]{figures/extra_tasks_plot.pdf}\n", "\\end{tabular}\n", "\\vskip-2mm\n", "\\caption{\n", "Accuracy versus the number of trained parameters, aggregated across tasks.\n", "We compare adapters of different sizes (orange) with fine-tuning the top $n$ layers, for varying $n$ (blue).\n", "The lines and shaded areas indicate the $20$th, $50$th, and $80$th percentiles across tasks.\n", "For each task and algorithm, the best model is selected for each point along the curve.\n", "For GLUE, the validation set accuracy is reported.\n", "For the additional tasks, we report the test-set accuracies.\n", "To remove the intra-task variance in scores,\n", "we normalize the scores for each model and task by subtracting the performance of full fine-tuning on the corresponding task.\n", "}\n", "\\label{fig:tradeoff_alltasks}\n", "\\end{figure*}\n", "\n", "\\begin{figure*}[h!]\n", "\\centering\n", "\\vskip-1mm\n", "\\begin{tabular}{cc}\n", "MNLI\\textsubscript{m} (BERT\\textsubscript{BASE}) & CoLA (BERT\\textsubscript{BASE}) \\\\\n", "\\includegraphics[width=0.45\\linewidth]{figures/param_efficiency_mnli.pdf}&\n", "\\includegraphics[width=0.45\\linewidth]{figures/param_efficiency_cola.pdf}\n", "\\end{tabular}\n", "\\caption{\n", "Validation set accuracy versus number of trained parameters for three methods:\n", "(i) Adapter tuning with adapter sizes $2^n$ for $n=0 \\ldots 9$ (orange).\n", "(ii) Fine-tuning the top $k$ layers for $k=1\\ldots 12$ (blue).\n", "(iii) Tuning the layer normalization parameters only (green).\n", "Error bars indicate $\\pm 1$ s.e.m. 
across three random seeds.}\n", "\\label{fig:tradeoff_glue}\n", "\\vskip-3mm\n", "\\end{figure*}\n", "\n", "Unlike the GLUE tasks, there is no comprehensive set of state-of-the-art numbers for this suite of tasks.\n", "Therefore, to confirm that our BERT-based models are competitive, we collect our own benchmark performances.\n", "For this, we run a large-scale hyperparameter search over standard network topologies.\n", "Specifically, we run the single-task Neural AutoML algorithm, similar to~\\citet{zoph2017,wong2018transferautoml}.\n", "This algorithm searches over a space of feedforward and convolutional networks,\n", "stacked on pre-trained text embeddings modules publicly available via TensorFlow Hub\\footnote{\\url{https://www.tensorflow.org/hub}}.\n", "The embeddings coming from the TensorFlow Hub modules may be frozen or fine-tuned.\n", "The full search space is described in the appendix.\n", "For each task, we run AutoML for one week on CPUs, using $30$ machines.\n", "In this time the algorithm explores over $10$k models on average per task.\n", "We select the best final model for each task according to validation set accuracy.\n", "\n", "The results for the AutoML benchmark (``no BERT baseline''), fine-tuning, variable fine-tuning, and adapter-tuning are reported in Table~\\ref{tab:hub_results}.\n", "The AutoML baseline demonstrates that the BERT models are competitive.\n", "This baseline explores thousands of models, yet the BERT models perform better on average.\n", "We see similar pattern of results to GLUE.\n", "The performance of adapter-tuning is close to full fine-tuning ($0.4\\\n", "Fine-tuning requires $17\\times$ the number of parameters to BERT\\textsubscript{BASE} to solve all tasks.\n", "Variable fine-tuning performs slightly better than fine-tuning, whilst training fewer layers.\n", "The optimal setting of variable fine-tuning results in training $52\\\n", "Adapters, however, offer a much more compact model.\n", "They introduce $1.14\\\n", "\n", "\\subsection{Parameter/Performance trade-off\\label{sec:param_efficiency}}\n", "\n", "The adapter size controls the parameter efficiency, smaller adapters introduce fewer parameters, at a possible cost to performance.\n", "To explore this trade-off, we consider different adapter sizes, and compare to two baselines:\n", "(i) Fine-tuning of only the top $k$ layers of BERT\\textsubscript{BASE}.\n", "(ii) Tuning only the layer normalization parameters.\n", "The learning rate is tuned using the range presented in Section~\\ref{sec:glue}.\n", "\n", "Figure~\\ref{fig:tradeoff_alltasks} shows the parameter/performance trade-off aggregated over all classification tasks in each suite (GLUE and ``additional'').\n", "On GLUE, performance decreases dramatically when fewer layers are fine-tuned.\n", "Some of the additional tasks benefit from training fewer layers, so performance of fine-tuning decays much less.\n", "In both cases, adapters yield good performance across a range of sizes two orders of magnitude fewer than fine-tuning.\n", "\n", "Figure~\\ref{fig:tradeoff_glue} shows more details for two GLUE tasks: MNLI\\textsubscript{m} and CoLA.\n", "Tuning the top layers trains more task-specific parameters for all $k>2$.\n", "When fine-tuning using a comparable number of task-specific parameters, the performance decreases substantially compared to adapters.\n", "For instance, fine-tuning just the top layer yields approximately $9$M trainable parameters and $77.8 \\\n", "In contrast, adapter tuning with size $64$ yields approximately 
$2$M trainable parameters and $83.7\\\n", "For comparison, full fine-tuning attains $84.4 \\\n", "We observe a similar trend on CoLA.\n", "\n", "As a further comparison, we tune the parameters of layer normalization alone.\n", "These layers only contain point-wise additions and multiplications, so introduce very few trainable parameters: $40$k for BERT\\textsubscript{BASE}.\n", "However this strategy performs poorly: performance decreases by approximately $3.5\\\n", "\n", "To summarize, adapter tuning is highly parameter-efficient, and produces a compact model with a strong performance, comparable to full fine-tuning.\n", "Training adapters with sizes $0.5-5\\\n", "performance is within $1\\\n", "\n", "\\subsection{SQuAD Extractive Question Answering}\n", "\n", "\\begin{figure}[t]\n", "\\centering\n", "\\vskip-2mm\n", "\\includegraphics[width=0.9\\linewidth]{figures/squad_adapters_baseline.pdf}\n", "\\caption{\n", "Validation accuracy versus the number of trained parameters for SQuAD v1.1.\n", "Error bars indicate the s.e.m. across three seeds, using the best hyperparameters.\n", "}\n", "\\label{fig:squad}\n", "\\vskip-5mm\n", "\\end{figure}\n", "\n", "Finally, we confirm that adapters work on tasks other than classification by running on SQuAD v1.1~\\citep{rajpurkar2018}.\n", "Given a question and Wikipedia paragraph, this task requires selecting the answer span to the question from the paragraph.\n", "Figure~\\ref{fig:squad} displays the parameter/performance trade-off of fine-tuning and adapters on the SQuAD validation set.\n", "For fine-tuning, we sweep the number of trained layers, learning rate in $\\{3\\cdot 10^{-5}, 5\\cdot 10^{-5}, 1\\cdot 10^{-4}\\}$, and number of epochs in $\\{2,3,5\\}$.\n", "For adapters, we sweep the adapter size, learning rate in $\\{3\\cdot 10^{-5}, 1\\cdot 10^{-4}, 3\\cdot 10^{-4}, 1\\cdot 10^{-3}\\}$, and number of epochs in $\\{3,10,20\\}$.\n", "As for classification, adapters attain performance comparable to full fine-tuning, while training many fewer parameters.\n", "Adapters of size $64$ ($2\\\n", "SQuAD performs well even with very small adapters, those of size $2$ ($0.1\\\n", "\n", "\\subsection{Analysis and Discussion\\label{sec:discussion}}\n", "\n", "We perform an ablation to determine which adapters are influential.\n", "For this, we remove some trained adapters and re-evaluate the model (without re-training) on the validation set.\n", "Figure~\\ref{fig:ablation_and_init} shows the change in performance when removing adapters from all continuous layer spans.\n", "The experiment is performed on BERT\\textsubscript{BASE} with adapter size $64$ on MNLI and CoLA.\n", "\n", "First, we observe that removing any single layer's adapters has only a small impact on performance.\n", "The elements on the heatmaps' diagonals show the performances of removing adapters from single layers, where largest performance drop is $2\\\n", "In contrast, when all of the adapters are removed from the network,\n", "the performance drops substantially:\n", "to $37\\\n", "This indicates that although each adapter has a small influence on the overall network, the overall effect is large.\n", "\n", "Second, Figure~\\ref{fig:ablation_and_init} suggests that adapters on the lower layers have a smaller impact than the higher-layers.\n", "Removing the adapters from the layers $0-4$ on MNLI barely affects performance.\n", "This indicates that adapters perform well because they automatically prioritize higher layers.\n", "Indeed, focusing on the upper layers is a popular strategy 
in fine-tuning~\\citep{howard2018universal}.\n", "One intuition is that the lower layers extract lower-level features that are shared among tasks, while the\n", "higher layers build features that are unique to different tasks.\n", "This relates to our observation that for some tasks, fine-tuning only the top layers outperforms full fine-tuning, see Table~\\ref{tab:hub_results}.\n", "\n", "Next, we investigate the robustness of the adapter modules to the number of neurons and initialization scale.\n", "In our main experiments the weights in the adapter module were drawn from\n", "a zero-mean Gaussian with standard deviation $10^{-2}$, truncated to two standard deviations.\n", "To analyze the impact of the initialization scale on the performance, we test standard deviations in the interval $[10^{-7},1]$.\n", "Figure~\\ref{fig:ablation_and_init} summarizes the results.\n", "We observe that on both datasets, the performance of adapters is robust for standard deviations below $10^{-2}$.\n", "However, when the initialization is too large, performance degrades, more substantially on CoLA.\n", "\n", "To investigate robustness of adapters to the number of neurons, we re-examine the experimental data from Section~\\ref{sec:glue}.\n", "We find that the quality of the model across adapter sizes is stable,\n", "and a fixed adapter size across all the tasks could be used with small detriment to performance.\n", "For each adapter size we calculate the mean validation accuracy across the eight\n", "classification tasks by selecting the optimal learning rate and number of epochs\\footnote{\n", "We treat here MNLI\\textsubscript{m} and MNLI\\textsubscript{mm} as separate tasks.\n", "For consistency, for all datasets we use accuracy metric and exclude the regression STS-B task.}.\n", "For adapter sizes $8$, $64$, and $256$, the mean validation accuracies are $86.2\\\n", "This message is further corroborated by Figures~\\ref{fig:tradeoff_glue} and~\\ref{fig:squad},\n", "which show a stable performance across a few orders of magnitude.\n", "\n", "Finally, we tried a number of extensions to the adapter's architecture\n", "that did not yield a significant boost in performance.\n", "We document them here for completeness.\n", "We experimented with\n", "(i) adding a batch/layer normalization to the adapter,\n", "(ii) increasing the number of layers per adapter,\n", "(iii) different activation functions, such as tanh,\n", "(iv) inserting adapters only inside the attention layer,\n", "(v) adding adapters in parallel to the main layers, and possibly with a multiplicative interaction.\n", "In all cases we observed the resulting performance to be similar to the bottleneck proposed in Section~\\ref{sec:bottleneckadapter}.\n", "Therefore, due to its simplicity and strong performance, we recommend the original adapter architecture.\n", "\n", "\\begin{figure*}\n", "\\centering\n", "\\begin{tabular}[t]{ccc}\n", "MNLI\\textsubscript{m} & CoLA & \\\\\n", "\\includegraphics[height=3.7cm]{figures/mnli_ablation_heatmap.pdf}&\n", "\\includegraphics[height=3.7cm]{figures/cola_ablation_heatmap.pdf}&\n", "\\raisebox{-0.2cm}{\\includegraphics[height=3.9cm]{figures/init_study.pdf}}\n", "\\end{tabular}\n", "\\caption{\n", "\\textbf{Left, Center:}\n", "Ablation of trained adapters from continuous layer spans.\n", "The heatmap shows the relative decrease in validation accuracy to the fully trained adapted model.\n", "The \\emph{y} and \\emph{x} axes indicate the first and last layers ablated (inclusive), respectively.\n", "The diagonal 
cells, highlighted in green, indicate ablation of a single layer's adapters.\n", "The cell in the top-right indicates ablation of all adapters.\n", "Cells in the lower triangle are meaningless, and are set to $0\\\n", "\\textbf{Right:}\n", "Performance of BERT\\textsubscript{BASE} using adapters with different initial weight magnitudes.\n", "The \\emph{x}-axis is the standard deviation of the initialization distribution.\n", "}\n", "\\label{fig:ablation_and_init}\n", "\\vskip-4mm\n", "\\end{figure*}\n", "\\section{Related Work}\n", "\n", "\\paragraph{Pre-trained text representations}\n", "Pre-trained textual representations are widely used to improve performance on NLP tasks.\n", "These representations are trained on large corpora (usually unsupervised), and fed as features to downstream models.\n", "In deep networks, these features may also be fine-tuned on the downstream task.\n", "Brown clusters, trained on distributional information, are a classic example of pre-trained representations~\\citep{brown1992}.\n", "\\citet{turian2010} show that pre-trained embeddings of words outperform those trained from scratch.\n", "Since deep-learning became popular, word embeddings have been widely used, and many training strategies have arisen~\\citep{mikolov2013,pennington2014,bojanowski2017enriching}.\n", "Embeddings of longer texts, sentences and paragraphs, have also been developed~\\citep{le2014,kiros2015,conneau2017,cer2019}.\n", "\n", "To encode context in these representations, features are extracted from internal representations of sequence models,\n", "such as MT systems~\\citep{mccann2017}, and BiLSTM language models, as used in ELMo~\\citep{peters2018}.\n", "As with adapters, ELMo exploits the layers other than the top layer of a pre-trained network.\n", "However, this strategy only \\emph{reads} from the inner layers.\n", "In contrast, adapters \\emph{write} to the inner layers, re-configuring the processing of features through the entire network.\n", "\n", "\\paragraph{Fine-tuning}\n", "Fine-tuning an entire pre-trained model has become a popular alternative to features~\\citep{dai2015,howard2018universal,radford2018improving}\n", "In NLP, the upstream model is usually a neural language model~\\citep{bengio2003}.\n", "Recent state-of-the-art results on question answering~\\citep{rajpurkar2016} and text classification~\\citep{wang2018glue} have been attained by fine-tuning a Transformer network~\\citep{vaswani2017} with a Masked Language Model loss~\\citep{devlin2018bert}.\n", "Performance aside, an advantage of fine-tuning is that it does not require task-specific model design, unlike representation-based transfer.\n", "However, vanilla fine-tuning does require a new set of network weights for every new task.\n", "\n", "\\paragraph{Multi-task Learning}\n", "Multi-task learning (MTL) involves training on tasks simultaneously.\n", "Early work shows that sharing network parameters across tasks exploits task regularities, yielding improved performance~\\citep{caruana1997}.\n", "The authors share weights in lower layers of a network, and use specialized higher layers.\n", "Many NLP systems have exploited MTL.\n", "Some examples include: text processing systems (part of speech, chunking, named entity recognition, etc.)~\\citep{collobert2008}, multilingual models~\\citep{huang2013cross}, semantic parsing~\\citep{peng2017}, machine translation~\\citep{johnson2017}, and question answering~\\citep{choi2017}.\n", "MTL yields a single model to solve all problems.\n", "However, unlike our adapters, 
MTL requires simultaneous access to the tasks during training.\n", "\n", "\\paragraph{Continual Learning}\n", "As an alternative to simultaneous training, continual, or lifelong, learning aims to learn from a sequence of tasks~\\citep{thrun1998}.\n", "However, when re-trained, deep networks tend to forget how to perform previous tasks; a challenge termed catastrophic forgetting~\\citep{mccloskey1989catastrophic,french1999catastrophic}.\n", "Techniques have been proposed to mitigate forgetting~\\citep{kirkpatrick2017overcoming,zenke2017continual}, however, unlike for adapters, the memory is imperfect.\n", "Progressive Networks avoid forgetting by instantiating a new network ``column'' for each task~\\citep{rusu2016progressive}.\n", "However, the number of parameters grows linearly with the number of tasks;\n", "since adapters are very small, our models scale much more favorably.\n", "\n", "\\paragraph{Transfer Learning in Vision}\n", "Fine-tuning models pre-trained on ImageNet~\\citep{deng2009} is ubiquitous when building image recognition models~\\citep{yosinski2014,huh2016makes}.\n", "This technique attains state-of-the-art performance on many vision tasks, including classification~\\citep{kornblith2018better}, fine-grained classification~\\citep{hermans2017}, segmentation~\\citep{long2015}, and detection~\\citep{girshick2014}.\n", "In vision, convolutional adapter modules have been studied~\\citep{rebuffi2017,rebuffi2018,rosenfeld2018incremental}.\n", "These works perform incremental learning in multiple domains by adding small convolutional layers to a ResNet~\\citep{he2016} or VGG net~\\citep{simonyan2014very}.\n", "Adapter size is limited using $1\\times 1$ convolutions, whilst the original networks typically use $3\\times 3$.\n", "This yields $11\\\n", "Since the kernel size cannot be further reduced, other weight compression techniques must be used to attain further savings.\n", "Our bottleneck adapters can be much smaller, and still perform well.\n", "\n", "Concurrent work explores similar ideas for BERT~\\citep{stickland2019bert}.\n", "The authors introduce Projected Attention Layers (PALs), small layers with a similar role to our adapters.\n", "The main differences are i) \\citet{stickland2019bert} use a different architecture,\n", "and ii) they perform multitask training, jointly fine-tuning BERT on all GLUE tasks.\n", "\\citet{semnani2019} perform an empirical comparison of our bottleneck Adapters and PALs on SQuAD v2.0~\\citep{rajpurkar2018}.\n", "\n", "\\subsubsection*{Acknowledgments}\n", "We would like to thank Andrey Khorlin, Lucas Beyer,\n", "No\\'e Lutz, and Jeremiah Harmsen for useful comments and discussions.\n", "\n", "\\bibliography{nlp}\n", "\\bibliographystyle{icml2019}\n", "\n", "\n", "\\clearpage\n", "\n", "\\appendix\n", "\\onecolumn\n", "\n", "\\icmltitle{Supplementary Material for\\\\Parameter-Efficient Transfer Learning for NLP}\n", "\n", "\\section{Additional Text Classification Tasks}\n", "\\label{appendix:hub_stats}\n", "\n", "\n", "\\begin{table*}[ht]\n", "\\centering\n", "\\begin{adjustbox}{max width=\\textwidth}\n", "\\begin{tabular}{l|rrrrrr}\n", "\\toprule\n", "Dataset & Train examples & Validation examples & Test examples & Classes & Avg text length & Reference \\\\\n", "\\midrule\n", "20 newsgroups & $15076$ & $1885$ & $1885$ & $20$ & $1903$ & \\citep{Lang95} \\\\\n", "Crowdflower airline & $11712$ & $1464$ & $1464$ & $3$ & $104$ & crowdflower.com \\\\\n", "Crowdflower corporate messaging & $2494$ & $312$ & $312$ & $4$ & $121$ & crowdflower.com 
\\\\\n", "Crowdflower disasters & $8688$ & $1086$ & $1086$ & $2$ & $101$ & crowdflower.com \\\\\n", "Crowdflower economic news relevance & $6392$ & $799$ & $800$ & $2$ & $1400$ & crowdflower.com \\\\\n", "Crowdflower emotion & $32000$ & $4000$ & $4000$ & $13$ & $73$ & crowdflower.com \\\\\n", "Crowdflower global warming & $3380$ & $422$ & $423$ & $2$ & $112$ & crowdflower.com \\\\\n", "Crowdflower political audience & $4000$ & $500$ & $500$ & $2$ & $205$ & crowdflower.com \\\\\n", "Crowdflower political bias & $4000$ & $500$ & $500$ & $2$ & $205$ & crowdflower.com \\\\\n", "Crowdflower political message & $4000$ & $500$ & $500$ & $9$ & $205$ & crowdflower.com \\\\\n", "Crowdflower primary emotions & $2019$ & $252$ & $253$ & $18$ & $87$ & crowdflower.com \\\\\n", "Crowdflower progressive opinion & $927$ & $116$ & $116$ & $3$ & $102$ & crowdflower.com \\\\\n", "Crowdflower progressive stance & $927$ & $116$ & $116$ & $4$ & $102$ & crowdflower.com \\\\\n", "Crowdflower US economic performance & $3961$ & $495$ & $496$ & $2$ & $305$ & crowdflower.com \\\\\n", "Customer complaint database & $146667$ & $18333$ & $18334$ & $157$ & $1046$ & catalog.data.gov \\\\\n", "News aggregator dataset & $338349$ & $42294$ & $42294$ & $4$ & $57$ & \\citep{Lichman:2013}\\\\\n", "SMS spam collection & $4459$ & $557$ & $558$ & $2$ & $81$ & \\citep{Almeida:2011:CSS:2034691.2034742}\\\\\n", "\\bottomrule\n", "\\end{tabular}\n", "\\end{adjustbox}\n", "\\caption{Statistics and references for the additional text classification tasks.}\n", "\\label{tab:hub_stats}\n", "\\end{table*}\n", "\n", "\n", "\\begin{table*}[ht]\n", "\\centering\n", "\\begin{adjustbox}{max width=\\textwidth}\n", "\\begin{tabular}{l|rr}\n", "\\toprule\n", "Dataset & Epochs (Fine-tune) & Epochs (Adapters) \\\\\n", "\\midrule\n", "20 newsgroups & $50$ & $50$ \\\\\n", "Crowdflower airline & $50$ & $20$ \\\\\n", "Crowdflower corporate messaging & $100$ & $50$ \\\\\n", "Crowdflower disasters & $50$ & $50$ \\\\\n", "Crowdflower economic news relevance & $20$ & $20$ \\\\\n", "Crowdflower emotion & $20$ & $20$ \\\\\n", "Crowdflower global warming & $100$ & $50$ \\\\\n", "Crowdflower political audience & $50$ & $20$ \\\\\n", "Crowdflower political bias & $50$ & $50$ \\\\\n", "Crowdflower political message & $50$ & $50$ \\\\\n", "Crowdflower primary emotions & $100$ & $100$ \\\\\n", "Crowdflower progressive opinion & $100$ & $100$ \\\\\n", "Crowdflower progressive stance & $100$ & $100$ \\\\\n", "Crowdflower US economic performance & $100$ & $20$ \\\\\n", "Customer complaint database & $20$ & $20$ \\\\\n", "News aggregator dataset & $20$ & $20$ \\\\\n", "SMS spam collection & $50$ & $20$ \\\\\n", "\\bottomrule\n", "\\end{tabular}\n", "\\end{adjustbox}\n", "\\caption{Number of training epochs selected for the additional classification tasks.}\n", "\\label{tab:hub_epochs}\n", "\\end{table*}\n", "\n", "\n", "\\begin{table*}[ht]\n", "\\centering\n", "\\begin{adjustbox}{max width=\\textwidth}\n", "\\begin{tabular}{ll}\n", "\\toprule\n", "\\bf Parameter & \\bf Search Space \\\\\n", "\\midrule\n", "1) Input embedding modules & Refer to Table~\\ref{tab:hub_embeddings_text} \\\\\n", "2) Fine-tune input embedding module & \\{True, False\\} \\\\\n", "3) Lowercase text & \\{True, False\\} \\\\\n", "4) Remove non alphanumeric text & \\{True, False\\} \\\\\n", "5) Use convolution & \\{True, False\\} \\\\\n", "6) Convolution activation & \\{relu, relu6, leaky relu, swish, sigmoid, tanh\\} \\\\\n", "7) Convolution batch norm & \\{True, False\\} \\\\\n", "8) Convolution max 
ngram length & \\{2, 3\\} \\\\\n", "9) Convolution dropout rate & [0.0, 0.4] \\\\\n", "10) Convolution number of filters & [50, 200]\\\\\n", "11) Convolution embedding dropout rate & [0.0, 0.4] \\\\\n", "12) Number of hidden layers & \\{0, 1, 2, 3, 5\\} \\\\\n", "13) Hidden layers size & \\{64, 128, 256\\} \\\\\n", "14) Hidden layers activation & \\{relu, relu6, leaky relu, swish, sigmoid, tanh\\} \\\\\n", "15) Hidden layers normalization & \\{none, batch norm, layer norm\\} \\\\\n", "16) Hidden layers dropout rate & \\{0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5\\} \\\\\n", "17) Deep tower learning rate & \\{0.001, 0.005, 0.01, 0.05, 0.1, 0.5\\} \\\\\n", "18) Deep tower regularization weight & \\{0.0, 0.0001, 0.001, 0.01\\} \\\\\n", "19) Wide tower learning rate & \\{0.001, 0.005, 0.01, 0.05, 0.1, 0.5\\} \\\\\n", "20) Wide tower regularization weight & \\{0.0, 0.0001, 0.001, 0.01\\} \\\\\n", "21) Number of training samples & \\{1e5, 2e5, 5e5, 1e6, 2e6\\} \\\\\n", "\\bottomrule\n", "\\end{tabular}\n", "\\end{adjustbox}\n", "\\caption{The search space of baseline models for the additional text classification tasks.}\n", "\\label{tab:hub_ss}\n", "\\end{table*}\n", "\n", "\n", "\\begin{table*}[ht]\n", "\\centering\n", "\\begin{adjustbox}{max width=\\textwidth}\n", "\\begin{tabular}{llllll}\n", "\\bf ID & \\bf Dataset &\\bf Embed & \\bf Vocab. & \\bf Training & \\bf\n", "\n", "TensorFlow Hub Handles \\\\\n", "\\bf & \\bf size & \\bf dim. & \\bf size & \\bf algorithm &\n", "Prefix: \\texttt{https://tfhub.dev/google/}\\\\\n", "& (tokens) & & & \\\\\n", "\\hline \\\\\n", "English-small & 7B & 50 & 982k & Lang. model &\n", "\n", "\\texttt{nnlm-en-dim50-with-normalization/1} \\\\\n", "English-big & 200B & 128 & 999k & Lang. model &\n", "\n", "\\texttt{nnlm-en-dim128-with-normalization/1} \\\\\n", "English-wiki-small & 4B & 250 & 1M & Skipgram &\n", "\n", "\\texttt{Wiki-words-250-with-normalization/1} \\\\\n", "English-wiki-big & 4B & 500 & 1M & Skipgram &\n", "\n", "\\texttt{Wiki-words-500-with-normalization/1} \\\\\n", "Universal-sentence-encoder & - & 512 & - & \\citep{cer2018universal} &\n", "\n", "\\texttt{universal-sentence-encoder/2} \\\\\n", "\\end{tabular}\n", "\\end{adjustbox}\n", "\\caption{Options for text input embedding modules.\n", "These are pre-trained text embedding tables.\n", "\n", "\n", "We provide the handle for the modules that are publicly distributed via the TensorFlow Hub service (\\texttt{https://www.tensorflow.org/hub}).}\n", "\\label{tab:hub_embeddings_text}\n", "\\end{table*}\n", "\n", "\n", "\\begin{table*}[ht]\n", "\\centering\n", "\\begin{adjustbox}{max width=\\textwidth}\n", "\\begin{tabular}{l|lllllllllllllllllllll}\n", "\\toprule\n", "Dataset & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21\\\\\n", "\\midrule\n", "20 newsgroups & Universal-sentence-encoder & False & True & True & False & relu6 & False & 2 & 0.37 & 94 & 0.38 & 1 & 128 & leaky relu & batch norm & 0.5 & 0.5 & 0 & 0.05 & 0.0001 & 1000000 \\\\\n", "Crowdflower airline & English-big & False & False & False & True & leaky relu & False & 3 & 0.36 & 200 & 0.07 & 0 & 128 & tanh & layer norm & 0.4 & 0.1 & 0.001 & 0.05 & 0.001 & 200000 \\\\\n", "Crowdflower corporate messaging & English-big & False & False & True & True & tanh & True & 3 & 0.40 & 56 & 0.40 & 1 & 64 & tanh & batch norm & 0.5 & 0.5 & 0.001 & 0.01 & 0 & 200000 \\\\\n", "Crowdflower disasters & Universal-sentence-encoder & True & True & False & True & swish & True & 3 & 0.27 & 52 & 0.22 & 0 & 64 & relu & none 
& 0.2 & 0.005 & 0.0001 & 0.005 & 0.01 & 500000 \\\n", "Crowdflower economic news relevance & Universal-sentence-encoder & True & True & False & False & leaky relu & False & 2 & 0.27 & 63 & 0.04 & 3 & 128 & swish & layer norm & 0.2 & 0.01 & 0.01 & 0.001 & 0 & 100000 \\\n", "Crowdflower emotion & Universal-sentence-encoder & False & True & False & False & relu6 & False & 3 & 0.35 & 132 & 0.34 & 1 & 64 & tanh & none & 0.05 & 0.05 & 0 & 0.05 & 0 & 200000 \\\n", "Crowdflower global warming & Universal-sentence-encoder & False & True & True & False & swish & False & 3 & 0.39 & 200 & 0.36 & 1 & 128 & leaky relu & batch norm & 0.4 & 0.05 & 0 & 0.001 & 0.001 & 1000000 \\\n", "Crowdflower political audience & English-small & True & False & True & True & relu & False & 3 & 0.11 & 98 & 0.07 & 0 & 64 & relu & none & 0.5 & 0.05 & 0.001 & 0.001 & 0 & 100000 \\\n", "Crowdflower political bias & English-big & False & True & True & False & swish & False & 3 & 0.12 & 81 & 0.30 & 0 & 64 & relu6 & none & 0 & 0.01 & 0 & 0.005 & 0.01 & 200000 \\\n", "Crowdflower political message & Universal-sentence-encoder & False & False & True & False & swish & True & 2 & 0.36 & 57 & 0.35 & 0 & 64 & tanh & none & 0.5 & 0.01 & 0.001 & 0.005 & 0 & 200000 \\\n", "Crowdflower primary emotions & English-big & False & True & True & True & swish & False & 3 & 0.40 & 191 & 0.03 & 0 & 256 & relu6 & none & 0.5 & 0.1 & 0.001 & 0.05 & 0 & 200000 \\\n", "Crowdflower progressive opinion & English-big & True & False & True & True & relu6 & False & 3 & 0.40 & 199 & 0.28 & 0 & 128 & relu & batch norm & 0.3 & 0.1 & 0.01 & 0.005 & 0.001 & 200000 \\\n", "Crowdflower progressive stance & Universal-sentence-encoder & True & False & True & False & relu & True & 3 & 0.01 & 195 & 0.00 & 2 & 256 & tanh & layer norm & 0.4 & 0.005 & 0 & 0.005 & 0.0001 & 500000 \\\n", "Crowdflower US economic performance & English-big & True & False & True & True & tanh & True & 2 & 0.31 & 53 & 0.24 & 1 & 256 & leaky relu & batch norm & 0.3 & 0.05 & 0.0001 & 0.001 & 0.0001 & 100000 \\\n", "Customer complaint database & English-big & True & False & False & False & tanh & False & 2 & 0.03 & 69 & 0.10 & 1 & 256 & leaky relu & layer norm & 0.1 & 0.05 & 0.0001 & 0.05 & 0.001 & 1000000 \\\n", "News aggregator dataset & Universal-sentence-encoder & False & True & True & False & sigmoid & True & 2 & 0.00 & 156 & 0.29 & 3 & 256 & relu & batch norm & 0.05 & 0.05 & 0 & 0.5 & 0.0001 & 1000000 \\\n", "SMS spam collection & English-wiki-small & True & True & True & True & leaky relu & False & 3 & 0.20 & 54 & 0.00 & 1 & 128 & leaky relu & batch norm & 0 & 0.1 & 0 & 0.05 & 0.01 & 1000000 \\\n", "\bottomrule\n", "\end{tabular}\n", "\end{adjustbox}\n", "\caption{Search space parameters (see Table~\ref{tab:hub_ss}) for the AutoML baseline models that were selected.}\n", "\label{tab:hub_model_params}\n", "\end{table*}\n", "\n", "\n", "\section{Learning Rate Robustness}\n", "\n", "\n", "\begin{figure}[t]\n", "\centering\n", "\includegraphics[width=0.5\linewidth]{figures/squad_adapters_lr.pdf}\n", "\caption{\n", "Best-performing models at different learning rates.\n", "Error bars indicate the s.e.m. across three random seeds.\n", "}\n", "\label{fig:lr}\n", "\end{figure}\n", "\n", "We tested the robustness of adapters and fine-tuning to the choice of learning rate.\n", "We ran experiments with learning rates in the range $[2\cdot 10^{-5}, 10^{-3}]$, and selected the best hyperparameters for each method at each learning rate.\n", "Figure~\ref{fig:lr} shows the results.\n", "\n", "\end{document}\n" ], "del_percentage": 0.12222 } }