Reza8848 committed
Commit 837b615 · 1 Parent(s): 7d9021d

Track large files with Git LFS

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
Files changed (50)
  1. .gitattributes +4 -0
  2. Equation_Inference/equation.1049.json +3 -0
  3. Experiment_Design/1902.00751/1902.00751_source.tar.gz +3 -0
  4. Experiment_Design/1902.00751/data_text.json +1060 -0
  5. Experiment_Design/1902.00751/images/Adapter_arch.pdf +3 -0
  6. Experiment_Design/1902.00751/images/Adapter_insertion.pdf +3 -0
  7. Experiment_Design/1902.00751/images/cola_ablation_heatmap.pdf +3 -0
  8. Experiment_Design/1902.00751/images/extra_tasks_plot.pdf +3 -0
  9. Experiment_Design/1902.00751/images/glue_results.pdf +3 -0
  10. Experiment_Design/1902.00751/images/glue_results_b.pdf +3 -0
  11. Experiment_Design/1902.00751/images/init_study.pdf +3 -0
  12. Experiment_Design/1902.00751/images/mnli_ablation_heatmap.pdf +3 -0
  13. Experiment_Design/1902.00751/images/param_efficiency_cola.pdf +3 -0
  14. Experiment_Design/1902.00751/images/param_efficiency_mnli.pdf +3 -0
  15. Experiment_Design/1902.00751/images/squad_adapters_baseline.pdf +3 -0
  16. Experiment_Design/1902.00751/images/squad_adapters_lr.pdf +3 -0
  17. Experiment_Design/1902.00751/images/used/Adapter_arch_page_1.png +3 -0
  18. Experiment_Design/1902.00751/images/used/Adapter_insertion_page_1.png +3 -0
  19. Experiment_Design/1906.01502/1906.01502_source.tar.gz +3 -0
  20. Experiment_Design/1906.01502/data_text.json +691 -0
  21. Experiment_Design/1906.01502/images/NearestNeighborCrossLing_ru.png +3 -0
  22. Experiment_Design/1906.01502/images/ner_overlap.png +3 -0
  23. Experiment_Design/1906.01502/images/wals_nofixed_bars.png +3 -0
  24. Experiment_Design/1906.03158/1906.03158_source.tar.gz +3 -0
  25. Experiment_Design/1906.03158/data_text.json +0 -0
  26. Experiment_Design/1906.03158/images/cls_basic.pdf +3 -0
  27. Experiment_Design/1906.03158/images/cls_basic.svg +1 -0
  28. Experiment_Design/1906.03158/images/entity_mention_pooling.pdf +3 -0
  29. Experiment_Design/1906.03158/images/fewrel_limit_examples.pdf +3 -0
  30. Experiment_Design/1906.03158/images/fewrel_limit_examples_10x1.pdf +3 -0
  31. Experiment_Design/1906.03158/images/fewrel_limit_examples_5x1.pdf +3 -0
  32. Experiment_Design/1906.03158/images/fewrel_limit_relations.pdf +3 -0
  33. Experiment_Design/1906.03158/images/fewrel_limit_relations_10x1.pdf +3 -0
  34. Experiment_Design/1906.03158/images/fewrel_limit_relations_5x1.pdf +3 -0
  35. Experiment_Design/1906.03158/images/fewshot_training.pdf +3 -0
  36. Experiment_Design/1906.03158/images/markers_and_cls.pdf +3 -0
  37. Experiment_Design/1906.03158/images/markers_and_entity_mentions.pdf +3 -0
  38. Experiment_Design/1906.03158/images/markers_and_markers_pooling.pdf +3 -0
  39. Experiment_Design/1906.03158/images/mtb_train_progress.pdf +3 -0
  40. Experiment_Design/1906.03158/images/positional_embeddings.pdf +3 -0
  41. Experiment_Design/1906.03158/images/positional_embeddings_old.pdf +3 -0
  42. Experiment_Design/1906.03158/images/supervised_training.pdf +3 -0
  43. Experiment_Design/1906.03158/images/used/cls_basic_page_1.png +3 -0
  44. Experiment_Design/1906.03158/images/used/entity_mention_pooling_page_1.png +3 -0
  45. Experiment_Design/1906.03158/images/used/fewshot_training_page_1.png +3 -0
  46. Experiment_Design/1906.03158/images/used/markers_and_cls_page_1.png +3 -0
  47. Experiment_Design/1906.03158/images/used/markers_and_entity_mentions_page_1.png +3 -0
  48. Experiment_Design/1906.03158/images/used/markers_and_markers_pooling_page_1.png +3 -0
  49. Experiment_Design/1906.03158/images/used/positional_embeddings_page_1.png +3 -0
  50. Experiment_Design/1906.03158/images/used/supervised_training_page_1.png +3 -0
.gitattributes CHANGED
@@ -56,3 +56,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ *.pdf filter=lfs diff=lfs merge=lfs -text
+ Experiment_Design/* filter=lfs diff=lfs merge=lfs -text
+ Equation_Inference/equation.1049.json filter=lfs diff=lfs merge=lfs -text
+ Paper_Weakness/* filter=lfs diff=lfs merge=lfs -text
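The new rules follow the standard .gitattributes format: a pathspec followed by the attributes that route matching files through the Git LFS filter. As a quick illustration (a minimal sketch, not part of this commit; it assumes one unquoted pattern per line and ignores gitignore-style matching subtleties), the LFS-tracked patterns can be listed with a few lines of Python:

    # list_lfs_patterns.py -- hypothetical helper, minimal sketch
    from pathlib import Path

    def lfs_patterns(gitattributes_path: str = ".gitattributes"):
        """Return the pathspecs whose attributes include filter=lfs."""
        patterns = []
        for line in Path(gitattributes_path).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments such as "# Video files - compressed"
            pattern, *attrs = line.split()  # assumes no spaces inside the pattern
            if "filter=lfs" in attrs:
                patterns.append(pattern)
        return patterns

    if __name__ == "__main__":
        for p in lfs_patterns():
            print(p)  # with this commit: *.pdf, Experiment_Design/*, ... among others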
Equation_Inference/equation.1049.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5cd3a89c0cd35af0033d4e6e2d31d0746474c34c3234ea6bc2a7a58711373dbd
+ size 58394780
Experiment_Design/1902.00751/1902.00751_source.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d64c032ac7b6141bba5eb905c3012ffa8b955248dbe30ad41bf530cea1776746
+ size 554198
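Both files above are stored as Git LFS pointer files rather than as the payloads themselves: three lines giving the pointer spec version, the sha256 object id, and the payload size in bytes. A minimal sketch (hypothetical paths and helper, not part of this commit) of checking a locally downloaded payload against such a pointer:

    # verify_lfs_pointer.py -- minimal sketch; assumes the three-line pointer
    # format shown above: "version ...", "oid sha256:<hex>", "size <bytes>".
    import hashlib
    from pathlib import Path

    def parse_pointer(pointer_path: str) -> dict:
        fields = {}
        for line in Path(pointer_path).read_text().splitlines():
            key, _, value = line.partition(" ")
            fields[key] = value
        return fields  # keys: "version", "oid", "size"

    def verify(pointer_path: str, payload_path: str) -> bool:
        fields = parse_pointer(pointer_path)
        expected_oid = fields["oid"].split(":", 1)[1]  # drop the "sha256:" prefix
        expected_size = int(fields["size"])
        data = Path(payload_path).read_bytes()
        return (len(data) == expected_size
                and hashlib.sha256(data).hexdigest() == expected_oid)

    # Example with hypothetical local paths:
    # verify("pointers/equation.1049.json", "downloads/equation.1049.json")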
Experiment_Design/1902.00751/data_text.json ADDED
@@ -0,0 +1,1060 @@
1
+ {
2
+ "id": "1902.00751",
3
+ "annotator": "jiangshu",
4
+ "input": [
5
+ "\\documentclass{article}\n",
6
+ "\\usepackage{microtype}\n",
7
+ "\\usepackage{graphicx}\n",
8
+ "\\usepackage{subfigure}\n",
9
+ "\\usepackage{capt-of}\n",
10
+ "\\usepackage{booktabs} \n",
11
+ "\\usepackage{hyperref}\n",
12
+ "\\usepackage{amssymb}\n",
13
+ "\\usepackage{bm}\n",
14
+ "\\usepackage{enumitem}\n",
15
+ "\\usepackage{pbox}\n",
16
+ "\\usepackage{adjustbox}\n",
17
+ "\\usepackage[T1]{fontenc}\n",
18
+ "\\usepackage{todonotes}\n",
19
+ "\\usepackage{sidecap}\n",
20
+ "\\sidecaptionvpos{figure}{c}\n",
21
+ "\\newcommand{\\theHalgorithm}{\\arabic{algorithm}}\n",
22
+ "\\usepackage[accepted]{icml2019}\n",
23
+ "\\icmltitlerunning{Parameter-Efficient Transfer Learning for NLP}\n",
24
+ "\\begin{document}\n",
25
+ "\\twocolumn[\n",
26
+ "\\icmltitle{Parameter-Efficient Transfer Learning for NLP}\n",
27
+ "\\icmlsetsymbol{equal}{*}\n",
28
+ "\\begin{icmlauthorlist}\n",
29
+ "\\icmlauthor{Neil Houlsby}{goo}\n",
30
+ "\\icmlauthor{Andrei Giurgiu}{goo,equal}\n",
31
+ "\\icmlauthor{Stanis\\l{}aw Jastrz\\c{e}bski}{jag,equal}\n",
32
+ "\\icmlauthor{Bruna Morrone}{goo}\n",
33
+ "\\icmlauthor{Quentin de Laroussilhe}{goo}\n",
34
+ "\\icmlauthor{Andrea Gesmundo}{goo}\n",
35
+ "\\icmlauthor{Mona Attariyan}{goo}\n",
36
+ "\\icmlauthor{Sylvain Gelly}{goo}\n",
37
+ "\\end{icmlauthorlist}\n",
38
+ "\\icmlaffiliation{goo}{Google Research}\n",
39
+ "\\icmlaffiliation{jag}{Jagiellonian University}\n",
40
+ "\\icmlcorrespondingauthor{Neil Houlsby}{[email protected]}\n",
41
+ "\\icmlkeywords{NLP, Transfer Learning}\n",
42
+ "\\vskip 0.3in\n",
43
+ "]\n",
44
+ "\\printAffiliationsAndNotice{\\icmlEqualContribution} \n",
45
+ "\\begin{abstract}\n",
46
+ "\\end{abstract}\n",
47
+ "\\section{Introduction}\n",
48
+ "Transfer from pre-trained models yields strong performance on many NLP tasks~\\citep{dai2015,howard2018universal,radford2018improving}.\n",
49
+ "BERT, a Transformer network trained on large text corpora with an\n",
50
+ "unsupervised loss, attained state-of-the-art performance on text classification\n",
51
+ "and extractive question answering~\\citep{devlin2018bert}.\n",
52
+ "In this paper we address the online setting, where tasks arrive in a stream.\n",
53
+ "The goal is to build a system that performs well on all of them, but without training an entire new model for every new task.\n",
54
+ "A high degree of sharing between tasks is particularly useful for applications such as cloud services,\n",
55
+ "where models need to be trained to solve many tasks that arrive from customers in sequence.\n",
56
+ "For this, we propose a transfer learning strategy that yields \\emph{compact} and \\emph{extensible} downstream models.\n",
57
+ "Compact models are those that solve many tasks using a small number of additional parameters per task.\n",
58
+ "Extensible models can be trained incrementally to solve new tasks, without forgetting previous ones.\n",
59
+ "Our method yields a such models without sacrificing performance.\n",
60
+ "\\begin{figure}[t]\n",
61
+ "\\centering\n",
62
+ "\\caption{\n",
63
+ "\\label{fig:glue_summary_results}\n",
64
+ "\\end{figure}\n",
65
+ "The two most common transfer learning techniques in NLP are feature-based transfer and fine-tuning.\n",
66
+ "Instead, we present an alternative transfer method based on adapter modules~\\citep{rebuffi2017}.\n",
67
+ "Features-based transfer involves pre-training real-valued embeddings vectors.\n",
68
+ "These embeddings may be at the word~\\citep{mikolov2013}, sentence~\\citep{cer2019}, or paragraph level~\\citep{le2014}.\n",
69
+ "The embeddings are then fed to custom downstream models.\n",
70
+ "Fine-tuning involves copying the weights from a pre-trained network and tuning them on the downstream task.\n",
71
+ "Recent work shows that fine-tuning often enjoys better performance than feature-based transfer~\\citep{howard2018universal}.\n",
72
+ "Both feature-based transfer and fine-tuning require a new set of weights for each task.\n",
73
+ "Fine-tuning is more parameter efficient if the lower layers of a network are shared between tasks.\n",
74
+ "However, our proposed adapter tuning method is even more parameter efficient.\n",
75
+ "The \\emph{x}-axis shows the number of parameters trained per task;\n",
76
+ "this corresponds to the marginal increase in the model size required to solve each additional task.\n",
77
+ "Adapter-based tuning requires training two orders of magnitude fewer parameters to fine-tuning, while attaining similar performance.\n",
78
+ "Adapters are new modules added between layers of a pre-trained network.\n",
79
+ "Adapter-based tuning differs from feature-based transfer and fine-tuning in the following way.\n",
80
+ "Consider a function (neural network) with parameters $\\bm w$: $\\phi_{\\bm w}(\\bm x)$.\n",
81
+ "Feature-based transfer composes $\\phi_{\\bm w}$ with a new function, $\\chi_{\\bm v}$, to yield $\\chi_{\\bm v}(\\phi_{\\bm w}(\\bm x))$.\n",
82
+ "Only the new, task-specific, parameters, $\\bm v$, are then trained.\n",
83
+ "Fine-tuning involves adjusting the original parameters, $\\bm w$, for each new task, limiting compactness.\n",
84
+ "For adapter tuning, a new function, $\\psi_{\\bm w, \\bm v}(\\bm x)$, is defined, where parameters $\\bm w$ are copied over from pre-training.\n",
85
+ "The initial parameters $\\bm v_0$ are set such that the new function resembles the original: $\\psi_{\\bm w, \\bm v_0}(\\bm x) \\approx \\phi_{\\bm w}(\\bm x)$.\n",
86
+ "During training, only $\\bm v$ are tuned.\n",
87
+ "For deep networks, defining $\\psi_{\\bm w, \\bm v}$ typically involves adding new layers to the original network, $\\phi_{\\bm w}$.\n",
88
+ "If one chooses $|\\bm v|\\ll|\\bm w|$, the resulting model requires $\\sim|\\bm w|$ parameters for many tasks.\n",
89
+ "Since $\\bm w$ is fixed, the model can be extended to new tasks without affecting previous ones.\n",
90
+ "Adapter-based tuning relates to \\emph{multi-task} and \\emph{continual} learning.\n",
91
+ "Multi-task learning also results in compact models.\n",
92
+ "However, multi-task learning requires simultaneous access to all tasks, which adapter-based tuning does not require.\n",
93
+ "Continual learning systems aim to learn from an endless stream of tasks.\n",
94
+ "This paradigm is challenging because networks forget previous tasks after re-training~\\citep{mccloskey1989catastrophic,french1999catastrophic}.\n",
95
+ "Adapters differ in that the tasks do not interact and the shared parameters are frozen.\n",
96
+ "This means that the model has perfect memory of previous tasks using a small number of task-specific parameters.\n",
97
+ "The key innovation is to design an effective adapter module and its integration with the base model.\n",
98
+ "We propose a simple yet effective, bottleneck architecture.\n",
99
+ "but uses only 3\\\n",
100
+ "In summary, adapter-based tuning yields a single, extensible, model that attains near state-of-the-art performance in text classification.\n",
101
+ "\\section{Adapter tuning for NLP}\n",
102
+ "\\begin{SCfigure*}\n",
103
+ "\\begin{tabular}{cc}\n",
104
+ " \\includegraphics[width=0.45\\linewidth]{figures/Adapter_insertion.pdf}&\n",
105
+ " \\includegraphics[width=0.45\\linewidth]{figures/Adapter_arch.pdf}\n",
106
+ "\\end{tabular}\n",
107
+ " \\caption{\n",
108
+ " Architecture of the adapter module and its integration with the Transformer.\n",
109
+ " after the projection following multi-headed attention and after the two feed-forward layers.\n",
110
+ " \\textbf{Right:} The adapter consists of a bottleneck which contains few parameters relative to the attention and feedforward layers in the original model.\n",
111
+ " The adapter also contains a skip-connection.\n",
112
+ " and the final classification layer (not shown in the figure).\n",
113
+ " \\label{fig:adapters_transformer}}\n",
114
+ "\\end{SCfigure*}\n",
115
+ "We present a strategy for tuning a large text model on several downstream tasks.\n",
116
+ "Our strategy has three key properties:\n",
117
+ "(i) it attains good performance,\n",
118
+ "(ii) it permits training on tasks sequentially, that is, it does not require simultaneous access to all datasets,\n",
119
+ "and (iii) it adds only a small number of additional parameters per task.\n",
120
+ "These properties are especially useful in the context of cloud services,\n",
121
+ "where many models need to be trained on a series of downstream tasks, so a high degree of sharing is desirable.\n",
122
+ "To achieve these properties, we propose a new bottleneck adapter module.\n",
123
+ "Tuning with adapter modules involves adding a small number of new parameters to a model, which are trained on the downstream task~\\citep{rebuffi2017}.\n",
124
+ "When performing vanilla fine-tuning of deep networks, a modification is made to the top layer of the network.\n",
125
+ "This is required because the label spaces and losses for the upstream and downstream tasks differ.\n",
126
+ "Adapter modules perform more general architectural modifications to re-purpose a pre-trained network for a downstream task.\n",
127
+ "In particular, the adapter tuning strategy involves injecting new layers into the original network.\n",
128
+ "The weights of the original network are untouched, whilst the new adapter layers are initialized at random.\n",
129
+ "In standard fine-tuning, the new top-layer and the original weights are co-trained.\n",
130
+ "In contrast, in adapter-tuning, the parameters of the original network are frozen and therefore may be shared by many tasks.\n",
131
+ "Adapter modules have two main features: a small number of parameters, and a near-identity initialization.\n",
132
+ "The adapter modules need to be small compared to the layers of the original network.\n",
133
+ "This means that the total model size grows relatively slowly when more tasks are added.\n",
134
+ "we investigate this empirically in Section~\\ref{sec:discussion}.\n",
135
+ "During training, the adapters may then be activated to change the distribution of activations throughout the network.\n",
136
+ "The adapter modules may also be ignored if not required;\n",
137
+ "\\subsection{Instantiation for Transformer Networks\\label{sec:bottleneckadapter}}\n",
138
+ "We instantiate adapter-based tuning for text Transformers.\n",
139
+ "These models attain state-of-the-art performance in many NLP tasks,\n",
140
+ "including translation, extractive QA, and text classification problems~\\citep{vaswani2017,radford2018improving,devlin2018bert}.\n",
141
+ "We consider the standard Transformer architecture, as proposed in~\\citet{vaswani2017}.\n",
142
+ "Adapter modules present many architectural choices.\n",
143
+ "We provide a simple design that attains good performance.\n",
144
+ "We experimented with a number of more complex designs, see Section~\\ref{sec:discussion},\n",
145
+ "Figure~\\ref{fig:adapters_transformer} shows our adapter architecture, and its application it to the Transformer.\n",
146
+ "Each layer of the Transformer contains two primary sub-layers: an attention layer and a feedforward layer.\n",
147
+ "Both layers are followed immediately by a projection that maps the features size back to the size of layer's input.\n",
148
+ "A skip-connection is applied across each of the sub-layers.\n",
149
+ "The output of each sub-layer is fed into layer normalization.\n",
150
+ "We insert two serial adapters after each of these sub-layers.\n",
151
+ "The adapter is always applied directly to the output of the sub-layer, after the projection back to the input size,\n",
152
+ "but before adding the skip connection back.\n",
153
+ "The output of the adapter is then passed directly into the following layer normalization.\n",
154
+ "To limit the number of parameters, we propose a bottleneck architecture.\n",
155
+ "The adapters first project the original $d$-dimensional features into a smaller dimension, $m$, apply a nonlinearity, then project back to $d$ dimensions.\n",
156
+ "The total number of parameters added per layer, including biases, is $2md+d+m$.\n",
157
+ "in practice, we use around $0.5-8\\\n",
158
+ "The adapter module itself has a skip-connection internally.\n",
159
+ "the module is initialized to an approximate identity function.\n",
160
+ "This technique, similar to conditional batch normalization~\\citep{de2017modulating},\n",
161
+ "FiLM~\\citep{perez2018}, and self-modulation~\\citep{chen2019}, also yields parameter-efficient adaptation of a network; with only $2d$ parameters per layer.\n",
162
+ "see Section~\\ref{sec:param_efficiency}.\n"
163
+ ],
164
+ "output": {
165
+ "What experiments do you suggest doing?": [
166
+ "1. Adapter performance on GLUE: The authors should evaluate the adapter performance on common and widely used benchmarks such as GLUE. They should compare the full finetuned base model as the baseline. They can test both using a fixed adapter size (number of units in the bottleneck), and selecting the best size per task from a set of adapter sizes, e.g., {8, 64, 256}. They can re-run multiple times with different random seeds and select the best model on the validation set.",
167
+ "2. Adapter performance on more tasks: The authors can evaluate the proposed adapter on more tasks that are publicly available. Besides the full finetuned base model, they can also compare with strong baselines such as the models searched by single-task Neural AutoML algorithm.",
168
+ "3. Parameter/Performance trade-off: The authors should consider different adapter sizes and compare to two baselines: (i) Fine-tuning of only the top k layers of the base model. (ii) Tuning only the layer normalization parameters.",
169
+ "4. Evaluating adapters on more types of tasks: The authors should evaluate adapters on more types of tasks. For example, if previous tasks are text classification tasks, the authors should also evaluate adapters on another type of tasks such as question answering.",
170
+ "5. Adapter influence ablation experiments: The authors should remove some trained adapters and re-evaluate the model (without re-training) on tasks. They should report the performance of removing each single layer\u2019s adapters and the performance of removing adapters from different continuous layer spans.",
171
+ "6. Effect of initialization scale: The authors should report the performance of the model using adapters with different initial weight magnitudes. For example, test standard deviations in a certain interval such as [10^-7, 1]",
172
+ "7. Robustness of adapters to the number of neurons: The authors should report the mean validation accuracy across the previous tasks when using different adapter sizes."
173
+ ],
174
+ "Why do you suggest these experiments?": [
175
+ "1. To prove the effectiveness of the proposed adapter module.",
176
+ "2. To further validate that adapter yields compact, performant, models.",
177
+ "3. The adapter size controls the parameter efficiency, smaller adapters introduce fewer parameters, at a possible cost to performance. This experiment is to explore this trade-off. Additionally, the comparison with the baselines can also show the effectiveness of adapters across a range of sizes fewer than fine-tuning.",
178
+ "4. To confirm that adapters work on different types of tasks.",
179
+ "5. To determine which adapters are influential.",
180
+ "6. To analyze the impact of the initialization scale on the performance.",
181
+ "7. To investigate robustness of adapters to the number of neurons."
182
+ ]
183
+ },
184
+ "paper_info": {
185
+ "title": "Parameter-Efficient Transfer Learning for NLP",
186
+ "authors": [
187
+ "Neil Houlsby",
188
+ "Andrei Giurgiu",
189
+ "Stanislaw Jastrzebski",
190
+ "Bruna Morrone",
191
+ "Quentin de Laroussilhe",
192
+ "Andrea Gesmundo",
193
+ "Mona Attariyan",
194
+ "Sylvain Gelly"
195
+ ],
196
+ "abstract": "Fine-tuning large pre-trained models is an effective transfer mechanism in\nNLP. However, in the presence of many downstream tasks, fine-tuning is\nparameter inefficient: an entire new model is required for every task. As an\nalternative, we propose transfer with adapter modules. Adapter modules yield a\ncompact and extensible model; they add only a few trainable parameters per\ntask, and new tasks can be added without revisiting previous ones. The\nparameters of the original network remain fixed, yielding a high degree of\nparameter sharing. To demonstrate adapter's effectiveness, we transfer the\nrecently proposed BERT Transformer model to 26 diverse text classification\ntasks, including the GLUE benchmark. Adapters attain near state-of-the-art\nperformance, whilst adding only a few parameters per task. On GLUE, we attain\nwithin 0.4% of the performance of full fine-tuning, adding only 3.6% parameters\nper task. By contrast, fine-tuning trains 100% of the parameters per task.",
197
+ "comments": null
198
+ },
199
+ "raw_data": {
200
+ "context_before_exp": [
201
+ "\\documentclass{article}\n",
202
+ "\n",
203
+ "\n",
204
+ "\\usepackage{microtype}\n",
205
+ "\\usepackage{graphicx}\n",
206
+ "\\usepackage{subfigure}\n",
207
+ "\\usepackage{capt-of}\n",
208
+ "\\usepackage{booktabs} \n",
209
+ "\n",
210
+ "\n",
211
+ "\n",
212
+ "\n",
213
+ "\n",
214
+ "\\usepackage{hyperref}\n",
215
+ "\n",
216
+ "\n",
217
+ "\\usepackage{amssymb}\n",
218
+ "\\usepackage{bm}\n",
219
+ "\\usepackage{enumitem}\n",
220
+ "\\usepackage{pbox}\n",
221
+ "\\usepackage{adjustbox}\n",
222
+ "\\usepackage[T1]{fontenc}\n",
223
+ "\\usepackage{todonotes}\n",
224
+ "\\usepackage{sidecap}\n",
225
+ "\\sidecaptionvpos{figure}{c}\n",
226
+ "\n",
227
+ "\n",
228
+ "\\newcommand{\\theHalgorithm}{\\arabic{algorithm}}\n",
229
+ "\n",
230
+ "\n",
231
+ "\n",
232
+ "\n",
233
+ "\n",
234
+ "\n",
235
+ "\\usepackage[accepted]{icml2019}\n",
236
+ "\n",
237
+ "\n",
238
+ "\n",
239
+ "\n",
240
+ "\\icmltitlerunning{Parameter-Efficient Transfer Learning for NLP}\n",
241
+ "\n",
242
+ "\\begin{document}\n",
243
+ "\n",
244
+ "\\twocolumn[\n",
245
+ "\\icmltitle{Parameter-Efficient Transfer Learning for NLP}\n",
246
+ "\n",
247
+ "\n",
248
+ "\n",
249
+ "\n",
250
+ "\n",
251
+ "\n",
252
+ "\n",
253
+ "\n",
254
+ "\n",
255
+ "\n",
256
+ "\n",
257
+ "\n",
258
+ "\n",
259
+ "\n",
260
+ "\\icmlsetsymbol{equal}{*}\n",
261
+ "\n",
262
+ "\\begin{icmlauthorlist}\n",
263
+ "\\icmlauthor{Neil Houlsby}{goo}\n",
264
+ "\\icmlauthor{Andrei Giurgiu}{goo,equal}\n",
265
+ "\\icmlauthor{Stanis\\l{}aw Jastrz\\c{e}bski}{jag,equal}\n",
266
+ "\\icmlauthor{Bruna Morrone}{goo}\n",
267
+ "\\icmlauthor{Quentin de Laroussilhe}{goo}\n",
268
+ "\\icmlauthor{Andrea Gesmundo}{goo}\n",
269
+ "\\icmlauthor{Mona Attariyan}{goo}\n",
270
+ "\\icmlauthor{Sylvain Gelly}{goo}\n",
271
+ "\\end{icmlauthorlist}\n",
272
+ "\n",
273
+ "\\icmlaffiliation{goo}{Google Research}\n",
274
+ "\\icmlaffiliation{jag}{Jagiellonian University}\n",
275
+ "\n",
276
+ "\\icmlcorrespondingauthor{Neil Houlsby}{[email protected]}\n",
277
+ "\n",
278
+ "\n",
279
+ "\n",
280
+ "\n",
281
+ "\\icmlkeywords{NLP, Transfer Learning}\n",
282
+ "\n",
283
+ "\\vskip 0.3in\n",
284
+ "]\n",
285
+ "\n",
286
+ "\n",
287
+ "\n",
288
+ "\n",
289
+ "\n",
290
+ "\n",
291
+ "\n",
292
+ "\n",
293
+ "\n",
294
+ "\n",
295
+ "\\printAffiliationsAndNotice{\\icmlEqualContribution} \n",
296
+ "\n",
297
+ "\\begin{abstract}\n",
298
+ "Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate adapter's effectiveness, we transfer the recently proposed BERT Transformer model to $26$ diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within $0.4\\\n",
299
+ "\\end{abstract}\n",
300
+ "\\section{Introduction}\n",
301
+ "\n",
302
+ "Transfer from pre-trained models yields strong performance on many NLP tasks~\\citep{dai2015,howard2018universal,radford2018improving}.\n",
303
+ "BERT, a Transformer network trained on large text corpora with an\n",
304
+ "unsupervised loss, attained state-of-the-art performance on text classification\n",
305
+ "and extractive question answering~\\citep{devlin2018bert}.\n",
306
+ "\n",
307
+ "In this paper we address the online setting, where tasks arrive in a stream.\n",
308
+ "The goal is to build a system that performs well on all of them, but without training an entire new model for every new task.\n",
309
+ "A high degree of sharing between tasks is particularly useful for applications such as cloud services,\n",
310
+ "where models need to be trained to solve many tasks that arrive from customers in sequence.\n",
311
+ "For this, we propose a transfer learning strategy that yields \\emph{compact} and \\emph{extensible} downstream models.\n",
312
+ "Compact models are those that solve many tasks using a small number of additional parameters per task.\n",
313
+ "Extensible models can be trained incrementally to solve new tasks, without forgetting previous ones.\n",
314
+ "Our method yields a such models without sacrificing performance.\n",
315
+ "\n",
316
+ "\\begin{figure}[t]\n",
317
+ "\\centering\n",
318
+ "\\includegraphics[width=0.95\\linewidth]{figures/glue_results.pdf}\n",
319
+ "\\caption{\n",
320
+ "Trade-off between accuracy and number of trained task-specific parameters, for adapter tuning and fine-tuning.\n",
321
+ "The \\emph{y}-axis is normalized by the performance of full fine-tuning, details in Section~\\ref{sec:experiments}.\n",
322
+ "The curves show the $20$th, $50$th, and $80$th performance percentiles across nine tasks from the GLUE benchmark.\n",
323
+ "Adapter-based tuning attains a similar performance to full fine-tuning with two orders of magnitude fewer trained parameters.}\n",
324
+ "\\label{fig:glue_summary_results}\n",
325
+ "\\end{figure}\n",
326
+ "\n",
327
+ "The two most common transfer learning techniques in NLP are feature-based transfer and fine-tuning.\n",
328
+ "Instead, we present an alternative transfer method based on adapter modules~\\citep{rebuffi2017}.\n",
329
+ "Features-based transfer involves pre-training real-valued embeddings vectors.\n",
330
+ "These embeddings may be at the word~\\citep{mikolov2013}, sentence~\\citep{cer2019}, or paragraph level~\\citep{le2014}.\n",
331
+ "The embeddings are then fed to custom downstream models.\n",
332
+ "Fine-tuning involves copying the weights from a pre-trained network and tuning them on the downstream task.\n",
333
+ "Recent work shows that fine-tuning often enjoys better performance than feature-based transfer~\\citep{howard2018universal}.\n",
334
+ "\n",
335
+ "Both feature-based transfer and fine-tuning require a new set of weights for each task.\n",
336
+ "Fine-tuning is more parameter efficient if the lower layers of a network are shared between tasks.\n",
337
+ "However, our proposed adapter tuning method is even more parameter efficient.\n",
338
+ "Figure~\\ref{fig:glue_summary_results} demonstrates this trade-off.\n",
339
+ "The \\emph{x}-axis shows the number of parameters trained per task;\n",
340
+ "this corresponds to the marginal increase in the model size required to solve each additional task.\n",
341
+ "Adapter-based tuning requires training two orders of magnitude fewer parameters to fine-tuning, while attaining similar performance.\n",
342
+ "\n",
343
+ "Adapters are new modules added between layers of a pre-trained network.\n",
344
+ "Adapter-based tuning differs from feature-based transfer and fine-tuning in the following way.\n",
345
+ "Consider a function (neural network) with parameters $\\bm w$: $\\phi_{\\bm w}(\\bm x)$.\n",
346
+ "Feature-based transfer composes $\\phi_{\\bm w}$ with a new function, $\\chi_{\\bm v}$, to yield $\\chi_{\\bm v}(\\phi_{\\bm w}(\\bm x))$.\n",
347
+ "Only the new, task-specific, parameters, $\\bm v$, are then trained.\n",
348
+ "Fine-tuning involves adjusting the original parameters, $\\bm w$, for each new task, limiting compactness.\n",
349
+ "For adapter tuning, a new function, $\\psi_{\\bm w, \\bm v}(\\bm x)$, is defined, where parameters $\\bm w$ are copied over from pre-training.\n",
350
+ "The initial parameters $\\bm v_0$ are set such that the new function resembles the original: $\\psi_{\\bm w, \\bm v_0}(\\bm x) \\approx \\phi_{\\bm w}(\\bm x)$.\n",
351
+ "During training, only $\\bm v$ are tuned.\n",
352
+ "For deep networks, defining $\\psi_{\\bm w, \\bm v}$ typically involves adding new layers to the original network, $\\phi_{\\bm w}$.\n",
353
+ "If one chooses $|\\bm v|\\ll|\\bm w|$, the resulting model requires $\\sim|\\bm w|$ parameters for many tasks.\n",
354
+ "Since $\\bm w$ is fixed, the model can be extended to new tasks without affecting previous ones.\n",
355
+ "\n",
356
+ "Adapter-based tuning relates to \\emph{multi-task} and \\emph{continual} learning.\n",
357
+ "Multi-task learning also results in compact models.\n",
358
+ "However, multi-task learning requires simultaneous access to all tasks, which adapter-based tuning does not require.\n",
359
+ "Continual learning systems aim to learn from an endless stream of tasks.\n",
360
+ "This paradigm is challenging because networks forget previous tasks after re-training~\\citep{mccloskey1989catastrophic,french1999catastrophic}.\n",
361
+ "Adapters differ in that the tasks do not interact and the shared parameters are frozen.\n",
362
+ "This means that the model has perfect memory of previous tasks using a small number of task-specific parameters.\n",
363
+ "\n",
364
+ "We demonstrate on a large and diverse set of text classification tasks that adapters yield parameter-efficient tuning for NLP.\n",
365
+ "The key innovation is to design an effective adapter module and its integration with the base model.\n",
366
+ "We propose a simple yet effective, bottleneck architecture.\n",
367
+ "On the GLUE benchmark, our strategy almost matches the performance of the fully fine-tuned BERT,\n",
368
+ "but uses only 3\\\n",
369
+ "We observe similar results on a further $17$ public text datasets, and SQuAD extractive question answering.\n",
370
+ "In summary, adapter-based tuning yields a single, extensible, model that attains near state-of-the-art performance in text classification.\n",
371
+ "\\section{Adapter tuning for NLP}\n",
372
+ "\n",
373
+ "\\begin{SCfigure*}\n",
374
+ "\\begin{tabular}{cc}\n",
375
+ " \\includegraphics[width=0.45\\linewidth]{figures/Adapter_insertion.pdf}&\n",
376
+ " \\includegraphics[width=0.45\\linewidth]{figures/Adapter_arch.pdf}\n",
377
+ "\\end{tabular}\n",
378
+ " \\caption{\n",
379
+ " Architecture of the adapter module and its integration with the Transformer.\n",
380
+ " \\textbf{Left:} We add the adapter module twice to each Transformer layer:\n",
381
+ " after the projection following multi-headed attention and after the two feed-forward layers.\n",
382
+ " \\textbf{Right:} The adapter consists of a bottleneck which contains few parameters relative to the attention and feedforward layers in the original model.\n",
383
+ " The adapter also contains a skip-connection.\n",
384
+ " During adapter tuning, the green layers are trained on the downstream data, this includes the adapter, the layer normalization parameters,\n",
385
+ " and the final classification layer (not shown in the figure).\n",
386
+ " \\label{fig:adapters_transformer}}\n",
387
+ "\\end{SCfigure*}\n",
388
+ "\n",
389
+ "We present a strategy for tuning a large text model on several downstream tasks.\n",
390
+ "Our strategy has three key properties:\n",
391
+ "(i) it attains good performance,\n",
392
+ "(ii) it permits training on tasks sequentially, that is, it does not require simultaneous access to all datasets,\n",
393
+ "and (iii) it adds only a small number of additional parameters per task.\n",
394
+ "These properties are especially useful in the context of cloud services,\n",
395
+ "where many models need to be trained on a series of downstream tasks, so a high degree of sharing is desirable.\n",
396
+ "\n",
397
+ "To achieve these properties, we propose a new bottleneck adapter module.\n",
398
+ "Tuning with adapter modules involves adding a small number of new parameters to a model, which are trained on the downstream task~\\citep{rebuffi2017}.\n",
399
+ "When performing vanilla fine-tuning of deep networks, a modification is made to the top layer of the network.\n",
400
+ "This is required because the label spaces and losses for the upstream and downstream tasks differ.\n",
401
+ "Adapter modules perform more general architectural modifications to re-purpose a pre-trained network for a downstream task.\n",
402
+ "In particular, the adapter tuning strategy involves injecting new layers into the original network.\n",
403
+ "The weights of the original network are untouched, whilst the new adapter layers are initialized at random.\n",
404
+ "In standard fine-tuning, the new top-layer and the original weights are co-trained.\n",
405
+ "In contrast, in adapter-tuning, the parameters of the original network are frozen and therefore may be shared by many tasks.\n",
406
+ "\n",
407
+ "Adapter modules have two main features: a small number of parameters, and a near-identity initialization.\n",
408
+ "The adapter modules need to be small compared to the layers of the original network.\n",
409
+ "This means that the total model size grows relatively slowly when more tasks are added.\n",
410
+ "A near-identity initialization is required for stable training of the adapted model;\n",
411
+ "we investigate this empirically in Section~\\ref{sec:discussion}.\n",
412
+ "By initializing the adapters to a near-identity function, original network is unaffected when training starts.\n",
413
+ "During training, the adapters may then be activated to change the distribution of activations throughout the network.\n",
414
+ "The adapter modules may also be ignored if not required;\n",
415
+ "in Section~\\ref{sec:discussion} we observe that some adapters have more influence on the network than others.\n",
416
+ "We also observe that if the initialization deviates too far from the identity function, the model may fail to train.\n",
417
+ "\n",
418
+ "\\subsection{Instantiation for Transformer Networks\\label{sec:bottleneckadapter}}\n",
419
+ "\n",
420
+ "We instantiate adapter-based tuning for text Transformers.\n",
421
+ "These models attain state-of-the-art performance in many NLP tasks,\n",
422
+ "including translation, extractive QA, and text classification problems~\\citep{vaswani2017,radford2018improving,devlin2018bert}.\n",
423
+ "We consider the standard Transformer architecture, as proposed in~\\citet{vaswani2017}.\n",
424
+ "\n",
425
+ "Adapter modules present many architectural choices.\n",
426
+ "We provide a simple design that attains good performance.\n",
427
+ "We experimented with a number of more complex designs, see Section~\\ref{sec:discussion},\n",
428
+ "but we found the following strategy performed as well as any other that we tested, across many datasets.\n",
429
+ "\n",
430
+ "Figure~\\ref{fig:adapters_transformer} shows our adapter architecture, and its application it to the Transformer.\n",
431
+ "Each layer of the Transformer contains two primary sub-layers: an attention layer and a feedforward layer.\n",
432
+ "Both layers are followed immediately by a projection that maps the features size back to the size of layer's input.\n",
433
+ "A skip-connection is applied across each of the sub-layers.\n",
434
+ "The output of each sub-layer is fed into layer normalization.\n",
435
+ "We insert two serial adapters after each of these sub-layers.\n",
436
+ "The adapter is always applied directly to the output of the sub-layer, after the projection back to the input size,\n",
437
+ "but before adding the skip connection back.\n",
438
+ "The output of the adapter is then passed directly into the following layer normalization.\n",
439
+ "\n",
440
+ "To limit the number of parameters, we propose a bottleneck architecture.\n",
441
+ "The adapters first project the original $d$-dimensional features into a smaller dimension, $m$, apply a nonlinearity, then project back to $d$ dimensions.\n",
442
+ "The total number of parameters added per layer, including biases, is $2md+d+m$.\n",
443
+ "By setting $m\\ll d$, we limit the number of parameters added per task;\n",
444
+ "in practice, we use around $0.5-8\\\n",
445
+ "The bottleneck dimension, $m$, provides a simple means to trade-off performance with parameter efficiency.\n",
446
+ "The adapter module itself has a skip-connection internally.\n",
447
+ "With the skip-connection, if the parameters of the projection layers are initialized to near-zero,\n",
448
+ "the module is initialized to an approximate identity function.\n",
449
+ "\n",
450
+ "Alongside the layers in the adapter module, we also train new layer normalization parameters per task.\n",
451
+ "This technique, similar to conditional batch normalization~\\citep{de2017modulating},\n",
452
+ "FiLM~\\citep{perez2018}, and self-modulation~\\citep{chen2019}, also yields parameter-efficient adaptation of a network; with only $2d$ parameters per layer.\n",
453
+ "However, training the layer normalization parameters alone is insufficient for good performance,\n",
454
+ "see Section~\\ref{sec:param_efficiency}.\n"
455
+ ],
456
+ "context_after_exp": [
457
+ "\\section{Experiments\\label{sec:experiments}}\n",
458
+ "\n",
459
+ "We show that adapters achieve parameter efficient transfer for text tasks.\n",
460
+ "On the GLUE benchmark~\\citep{wang2018glue},\n",
461
+ "adapter tuning is within $0.4\\\n",
462
+ "We confirm this result on a further $17$ public classification tasks and SQuAD question answering.\n",
463
+ "Analysis shows that adapter-based tuning automatically focuses on the higher layers of the network.\n",
464
+ "\n",
465
+ "\\subsection{Experimental Settings}\n",
466
+ "\n",
467
+ "We use the public, pre-trained BERT Transformer network as our base model.\n",
468
+ "To perform classification with BERT, we follow the approach in~\\citet{devlin2018bert}.\n",
469
+ "The first token in each sequence is a special ``classification token''.\n",
470
+ "We attach a linear layer to the embedding of this token to predict the class label.\n",
471
+ "\n",
472
+ "Our training procedure also follows~\\citet{devlin2018bert}.\n",
473
+ "We optimize using Adam~\\citep{kingma2014adam},\n",
474
+ "whose learning rate is increased linearly over the first $10\\\n",
475
+ "All runs are trained on $4$ Google Cloud TPUs with a batch size of $32$.\n",
476
+ "For each dataset and algorithm, we run a hyperparameter sweep and select the best model according to accuracy on the validation set.\n",
477
+ "For the GLUE tasks, we report the test metrics provided by the submission website\\footnote{\\url{https://gluebenchmark.com/}}.\n",
478
+ "For the other classification tasks we report test-set accuracy.\n",
479
+ "\n",
480
+ "We compare to fine-tuning, the current standard for transfer of large pre-trained models,\n",
481
+ "and the strategy successfully used by BERT.\n",
482
+ "For $N$ tasks, full fine-tuning requires $N{\\times}$ the number of parameters of the pre-trained model.\n",
483
+ "Our goal is to attain performance equal to fine-tuning, but with fewer total parameters, ideally near to $1{\\times}$.\n",
484
+ "\n",
485
+ "\\subsection{GLUE benchmark\\label{sec:glue}}\n",
486
+ "\n",
487
+ "\\begin{table*}[t]\n",
488
+ "\\centering\n",
489
+ "\\begin{adjustbox}{max width=\\textwidth}\n",
490
+ "\\begin{tabular}{l|ll|rrrrrrrrr|r}\n",
491
+ "\\toprule\n",
492
+ "{} & \\pbox{3cm}{Total num\\\\ params} & \\pbox{3cm}{Trained \\\\ params / task} & CoLA & SST & MRPC & STS-B & QQP & MNLI\\textsubscript{m} & MNLI\\textsubscript{mm} & QNLI & RTE & Total \\\\\n",
493
+ "\\midrule\n",
494
+ "BERT\\textsubscript{LARGE} & $9.0\\times$ & $100\\\n",
495
+ "Adapters ($8$-$256$) & $1.3\\times$ & $3.6\\\n",
496
+ "Adapters ($64$) & $1.2\\times$ & $2.1\\\n",
497
+ "\\bottomrule\n",
498
+ "\\end{tabular}\n",
499
+ "\\end{adjustbox}\n",
500
+ "\\caption{\n",
501
+ "Results on GLUE test sets scored using the GLUE evaluation server.\n",
502
+ "MRPC and QQP are evaluated using F1 score.\n",
503
+ "STS-B is evaluated using Spearman's correlation coefficient.\n",
504
+ "CoLA is evaluated using Matthew's Correlation.\n",
505
+ "The other tasks are evaluated using accuracy.\n",
506
+ "Adapter tuning achieves comparable overall score ($80.0$) to full fine-tuning ($80.4$) using $1.3\\times$ parameters in total, compared to $9\\times$.\n",
507
+ "Fixing the adapter size to $64$ leads to a slightly decreased overall score of $79.6$ and slightly smaller model.\n",
508
+ "\\label{tab:glue}}\n",
509
+ "\\end{table*}\n",
510
+ "\n",
511
+ "We first evaluate on GLUE.\\footnote{\n",
512
+ "We omit WNLI as in~\\citet{devlin2018bert} because the no current algorithm beats the baseline of predicting the majority class.}\n",
513
+ "For these datasets, we transfer from the pre-trained BERT\\textsubscript{LARGE} model,\n",
514
+ "which contains $24$ layers, and a total of $330$M parameters, see~\\citet{devlin2018bert} for details.\n",
515
+ "We perform a small hyperparameter sweep for adapter tuning:\n",
516
+ "We sweep learning rates in $\\{3 \\cdot 10^{-5}, 3 \\cdot 10^{-4}, 3 \\cdot 10^{-3}\\}$, and number of epochs in $\\{3, 20\\}$.\n",
517
+ "We test both using a fixed adapter size (number of units in the bottleneck),\n",
518
+ "and selecting the best size per task from $\\{8, 64, 256\\}$.\n",
519
+ "The adapter size is the only adapter-specific hyperparameter that we tune.\n",
520
+ "Finally, due to training instability, we re-run $5$ times with different random seeds and select the best model on the validation set.\n",
521
+ "\n",
522
+ "Table~\\ref{tab:glue} summarizes the results.\n",
523
+ "Adapters achieve a mean GLUE score of $80.0$, compared to $80.4$ achieved by full fine-tuning.\n",
524
+ "The optimal adapter size varies per dataset. For example, $256$ is chosen for MNLI, whereas for the smallest dataset, RTE, $8$ is chosen.\n",
525
+ "Restricting always to size $64$, leads to a small decrease in average accuracy to $79.6$.\n",
526
+ "To solve all of the datasets in Table~\\ref{tab:glue}, fine-tuning requires $9\\times$ the total number of BERT parameters.\\footnote{\n",
527
+ "We treat MNLI\\textsubscript{m} and MNLI\\textsubscript{mm} as separate tasks with individually tuned hyperparameters.\n",
528
+ "However, they could be combined into one model, leaving $8\\times$ overall.}\n",
529
+ "In contrast, adapters require only $1.3\\times$ parameters.\n",
530
+ "\n",
531
+ "\\subsection{Additional Classification Tasks}\n",
532
+ "\n",
533
+ "\\renewcommand{\\arraystretch}{0.9}\n",
534
+ "\\begin{table*}[t]\n",
535
+ "\\centering\n",
536
+ "\\begin{adjustbox}{max width=\\textwidth}\n",
537
+ "\\begin{tabular}{l|rrrrr}\n",
538
+ "\\toprule\n",
539
+ "Dataset &\n",
540
+ "\\pbox{5cm}{No BERT\\\\baseline} &\n",
541
+ "\\pbox{5cm}{BERT\\textsubscript{BASE}\\\\Fine-tune} &\n",
542
+ "\\pbox{5cm}{BERT\\textsubscript{BASE}\\\\Variable FT} &\n",
543
+ "\\pbox{5cm}{BERT\\textsubscript{BASE}\\\\Adapters} \\\\\n",
544
+ "\\midrule\n",
545
+ "20 newsgroups & $ 91.1 $ & $ 92.8 \\pm 0.1 $ & $ 92.8 \\pm 0.1 $ & $91.7 \\pm 0.2$ \\\\\n",
546
+ "Crowdflower airline & $ 84.5 $ & $ 83.6 \\pm 0.3 $ & $ 84.0 \\pm 0.1 $ & $ 84.5 \\pm 0.2 $ \\\\\n",
547
+ "Crowdflower corporate messaging & $ 91.9 $ & $ 92.5 \\pm 0.5 $ & $ 92.4 \\pm 0.6 $ & $ 92.9 \\pm 0.3 $ \\\\\n",
548
+ "Crowdflower disasters & $ 84.9 $ & $ 85.3 \\pm 0.4 $ & $ 85.3 \\pm 0.4 $ & $ 84.1 \\pm 0.2 $ \\\\\n",
549
+ "Crowdflower economic news relevance & $ 81.1 $ & $ 82.1 \\pm 0.0 $ & $ 78.9 \\pm 2.8 $ & $ 82.5 \\pm 0.3 $ \\\\\n",
550
+ "Crowdflower emotion & $ 36.3 $ & $ 38.4 \\pm 0.1 $ & $ 37.6 \\pm 0.2 $ & $ 38.7 \\pm 0.1 $ \\\\\n",
551
+ "Crowdflower global warming & $ 82.7 $ & $ 84.2 \\pm 0.4\n",
552
+ " $ & $ 81.9 \\pm 0.2 $ & $ 82.7 \\pm 0.3 $ \\\\\n",
553
+ "Crowdflower political audience & $ 81.0 $ & $ 80.9 \\pm 0.3 $ & $ 80.7 \\pm 0.8\n",
554
+ " $ & $ 79.0 \\pm 0.5 $ \\\\\n",
555
+ "Crowdflower political bias & $ 76.8 $ & $ 75.2 \\pm 0.9 $ & $ 76.5 \\pm 0.4 $ & $ 75.9 \\pm 0.3 $ \\\\\n",
556
+ "Crowdflower political message & $ 43.8 $ & $ 38.9 \\pm 0.6 $ & $ 44.9 \\pm 0.6 $ & $ 44.1 \\pm 0.2 $ \\\\\n",
557
+ "Crowdflower primary emotions & $ 33.5 $ & $ 36.9 \\pm 1.6 $ & $ 38.2 \\pm 1.0 $ & $ 33.9 \\pm 1.4 $ \\\\\n",
558
+ "Crowdflower progressive opinion & $ 70.6 $ & $ 71.6 \\pm 0.5 $ & $ 75.9 \\pm 1.3 $ & $ 71.7 \\pm 1.1 $ \\\\\n",
559
+ "Crowdflower progressive stance & $ 54.3 $ & $ 63.8 \\pm 1.0 $ & $ 61.5 \\pm 1.3 $ & $ 60.6 \\pm 1.4 $ \\\\\n",
560
+ "Crowdflower US economic performance & $ 75.6 $ & $ 75.3 \\pm 0.1 $ & $ 76.5 \\pm 0.4 $ & $ 77.3 \\pm 0.1 $ \\\\\n",
561
+ "Customer complaint database & $ 54.5 $ & $ 55.9 \\pm 0.1 $ & $ 56.4 \\pm 0.1 $ & $55.4 \\pm 0.1$ \\\\\n",
562
+ "News aggregator dataset & $ 95.2 $ & $ 96.3 \\pm 0.0 $ & $ 96.5 \\pm 0.0 $ & $ 96.2 \\pm 0.0 $ \\\\\n",
563
+ "SMS spam collection & $ 98.5 $ & $ 99.3 \\pm 0.2 $ & $ 99.3 \\pm 0.2 $ & $ 95.1 \\pm 2.2 $ \\\\\n",
564
+ "\\midrule\n",
565
+ "Average & $72.7$ & $73.7$ & $74.0$ & $73.3$ \\\\\n",
566
+ "\\midrule\n",
567
+ "Total number of params & \\textemdash & $17\\times$ & $9.9\\times$ & $1.19\\times$ \\\\\n",
568
+ "Trained params/task & \\textemdash & $100\\\n",
569
+ "\\bottomrule\n",
570
+ "\\end{tabular}\n",
571
+ "\\end{adjustbox}\n",
572
+ "\\caption{\n",
573
+ "Test accuracy for additional classification tasks.\n",
574
+ "In these experiments we transfer from the BERT\\textsubscript{BASE} model.\n",
575
+ "For each task and algorithm, the model with the best validation set accuracy is chosen.\n",
576
+ "We report the mean test accuracy and s.e.m. across runs with different random seeds.}\n",
577
+ "\\label{tab:hub_results}\n",
578
+ "\\vskip-3mm\n",
579
+ "\\end{table*}\n",
580
+ "\\renewcommand{\\arraystretch}{1.0}\n",
581
+ "\n",
582
+ "To further validate that adapters yields compact, performant, models, we test on additional, publicly available, text classification tasks.\n",
583
+ "This suite contains a diverse set of tasks:\n",
584
+ "The number of training examples ranges from $900$ to $330$k,\n",
585
+ "the number of classes ranges from $2$ to $157$,\n",
586
+ "and the average text length ranging from $57$ to $1.9$k characters.\n",
587
+ "Statistics and references for all of the datasets are in the appendix.\n",
588
+ "\n",
589
+ "For these datasets, we use a batch size of $32$.\n",
590
+ "The datasets are diverse, so we sweep a wide range of learning rates:\n",
591
+ "$\\{1 \\cdot 10^{-5}, 3 \\cdot 10^{-5}, 1 \\cdot 10^{-4}, 3 \\cdot 10^{-3}\\}$.\n",
592
+ "Due to the large number of datasets, we select the number of training epochs from the set $\\{20, 50, 100\\}$ manually, from inspection of the validation set learning curves.\n",
593
+ "We select the optimal values for both fine-tuning and adapters; the exact values are in the appendix.\n",
594
+ "\n",
595
+ "We test adapters sizes in $\\{2, 4, 8, 16, 32, 64\\}$.\n",
596
+ "Since some of the datasets are small, fine-tuning the entire network may be sub-optimal.\n",
597
+ "Therefore, we run an additional baseline: variable fine-tuning.\n",
598
+ "For this, we fine-tune only the top $n$ layers, and freeze the remainder.\n",
599
+ "We sweep $n\\in\\{1,2,3,5,7,9,11,12\\}$.\n",
600
+ "In these experiments, we use the BERT\\textsubscript{BASE} model with $12$ layers,\n",
601
+ "therefore, variable fine-tuning subsumes full fine-tuning when $n=12$.\n",
602
+ "\n",
603
+ "\\begin{figure*}[t]\n",
604
+ "\\centering\n",
605
+ "\\vskip-1mm\n",
606
+ "\\begin{tabular}{cc}\n",
607
+ "GLUE (BERT\\textsubscript{LARGE}) & Additional Tasks (BERT\\textsubscript{BASE}) \\\\\n",
608
+ "\\includegraphics[width=0.45\\textwidth]{figures/glue_results_b.pdf}&\n",
609
+ "\\includegraphics[width=0.45\\textwidth]{figures/extra_tasks_plot.pdf}\n",
610
+ "\\end{tabular}\n",
611
+ "\\vskip-2mm\n",
612
+ "\\caption{\n",
613
+ "Accuracy versus the number of trained parameters, aggregated across tasks.\n",
614
+ "We compare adapters of different sizes (orange) with fine-tuning the top $n$ layers, for varying $n$ (blue).\n",
615
+ "The lines and shaded areas indicate the $20$th, $50$th, and $80$th percentiles across tasks.\n",
616
+ "For each task and algorithm, the best model is selected for each point along the curve.\n",
617
+ "For GLUE, the validation set accuracy is reported.\n",
618
+ "For the additional tasks, we report the test-set accuracies.\n",
619
+ "To remove the intra-task variance in scores,\n",
620
+ "we normalize the scores for each model and task by subtracting the performance of full fine-tuning on the corresponding task.\n",
621
+ "}\n",
622
+ "\\label{fig:tradeoff_alltasks}\n",
623
+ "\\end{figure*}\n",
624
+ "\n",
625
+ "\\begin{figure*}[h!]\n",
626
+ "\\centering\n",
627
+ "\\vskip-1mm\n",
628
+ "\\begin{tabular}{cc}\n",
629
+ "MNLI\\textsubscript{m}(BERT\\textsubscript{BASE}) & CoLA (BERT\\textsubscript{BASE}) \\\\\n",
630
+ "\\includegraphics[width=0.45\\linewidth]{figures/param_efficiency_mnli.pdf}&\n",
631
+ "\\includegraphics[width=0.45\\linewidth]{figures/param_efficiency_cola.pdf}\n",
632
+ "\\end{tabular}\n",
633
+ "\\caption{\n",
634
+ "Validation set accuracy versus number of trained parameters for three methods:\n",
635
+ "(i) Adapter tuning with an adapter sizes $2^n$ for $n=0 \\ldots 9$ (orange).\n",
636
+ "(ii) Fine-tuning the top $k$ layers for $k=1\\ldots 12$ (blue).\n",
637
+ "(iii) Tuning the layer normalization parameters only (green).\n",
638
+ "Error bars indicate $\\pm 1$ s.e.m. across three random seeds.}\n",
639
+ "\\label{fig:tradeoff_glue}\n",
640
+ "\\vskip-3mm\n",
641
+ "\\end{figure*}\n",
642
+ "\n",
643
+ "Unlike the GLUE tasks, there is no comprehensive set of state-of-the-art numbers for this suite of tasks.\n",
644
+ "Therefore, to confirm that our BERT-based models are competitive, we collect our own benchmark performances.\n",
645
+ "For this, we run a large-scale hyperparameter search over standard network topologies.\n",
646
+ "Specifically, we run the single-task Neural AutoML algorithm, similar to~\\citet{zoph2017,wong2018transferautoml}.\n",
647
+ "This algorithm searches over a space of feedforward and convolutional networks,\n",
648
+ "stacked on pre-trained text embeddings modules publicly available via TensorFlow Hub\\footnote{\\url{https://www.tensorflow.org/hub}}.\n",
649
+ "The embeddings coming from the TensorFlow Hub modules may be frozen or fine-tuned.\n",
650
+ "The full search space is described in the appendix.\n",
651
+ "For each task, we run AutoML for one week on CPUs, using $30$ machines.\n",
652
+ "In this time the algorithm explores over $10$k models on average per task.\n",
653
+ "We select the best final model for each task according to validation set accuracy.\n",
654
+ "\n",
655
+ "The results for the AutoML benchmark (``no BERT baseline''), fine-tuning, variable fine-tuning, and adapter-tuning are reported in Table~\\ref{tab:hub_results}.\n",
656
+ "The AutoML baseline demonstrates that the BERT models are competitive.\n",
657
+ "This baseline explores thousands of models, yet the BERT models perform better on average.\n",
658
+ "We see similar pattern of results to GLUE.\n",
659
+ "The performance of adapter-tuning is close to full fine-tuning ($0.4\\\n",
660
+ "Fine-tuning requires $17\\times$ the number of parameters to BERT\\textsubscript{BASE} to solve all tasks.\n",
661
+ "Variable fine-tuning performs slightly better than fine-tuning, whilst training fewer layers.\n",
662
+ "The optimal setting of variable fine-tuning results in training $52\\\n",
663
+ "Adapters, however, offer a much more compact model.\n",
664
+ "They introduce $1.14\\\n",
665
+ "\n",
666
+ "\\subsection{Parameter/Performance trade-off\\label{sec:param_efficiency}}\n",
667
+ "\n",
668
+ "The adapter size controls the parameter efficiency, smaller adapters introduce fewer parameters, at a possible cost to performance.\n",
669
+ "To explore this trade-off, we consider different adapter sizes, and compare to two baselines:\n",
670
+ "(i) Fine-tuning of only the top $k$ layers of BERT\\textsubscript{BASE}.\n",
671
+ "(ii) Tuning only the layer normalization parameters.\n",
672
+ "The learning rate is tuned using the range presented in Section~\\ref{sec:glue}.\n",
673
+ "\n",
674
+ "Figure~\\ref{fig:tradeoff_alltasks} shows the parameter/performance trade-off aggregated over all classification tasks in each suite (GLUE and ``additional'').\n",
675
+ "On GLUE, performance decreases dramatically when fewer layers are fine-tuned.\n",
676
+ "Some of the additional tasks benefit from training fewer layers, so performance of fine-tuning decays much less.\n",
677
+ "In both cases, adapters yield good performance across a range of sizes two orders of magnitude fewer than fine-tuning.\n",
678
+ "\n",
679
+ "Figure~\\ref{fig:tradeoff_glue} shows more details for two GLUE tasks: MNLI\\textsubscript{m} and CoLA.\n",
680
+ "Tuning the top layers trains more task-specific parameters for all $k>2$.\n",
681
+ "When fine-tuning using a comparable number of task-specific parameters, the performance decreases substantially compared to adapters.\n",
682
+ "For instance, fine-tuning just the top layer yields approximately $9$M trainable parameters and $77.8 \\\n",
683
+ "In contrast, adapter tuning with size $64$ yields approximately $2$M trainable parameters and $83.7\\\n",
684
+ "For comparison, full fine-tuning attains $84.4 \\\n",
685
+ "We observe a similar trend on CoLA.\n",
686
+ "\n",
687
+ "As a further comparison, we tune the parameters of layer normalization alone.\n",
688
+ "These layers only contain point-wise additions and multiplications, so introduce very few trainable parameters: $40$k for BERT\\textsubscript{BASE}.\n",
689
+ "However this strategy performs poorly: performance decreases by approximately $3.5\\\n",
690
+ "\n",
691
+ "To summarize, adapter tuning is highly parameter-efficient, and produces a compact model with a strong performance, comparable to full fine-tuning.\n",
692
+ "Training adapters with sizes $0.5-5\\\n",
693
+ "performance is within $1\\\n",
694
+ "\n",
695
+ "\\subsection{SQuAD Extractive Question Answering}\n",
696
+ "\n",
697
+ "\\begin{figure}[t]\n",
698
+ "\\centering\n",
699
+ "\\vskip-2mm\n",
700
+ "\\includegraphics[width=0.9\\linewidth]{figures/squad_adapters_baseline.pdf}\n",
701
+ "\\caption{\n",
702
+ "Validation accuracy versus the number of trained parameters for SQuAD v1.1.\n",
703
+ "Error bars indicate the s.e.m. across three seeds, using the best hyperparameters.\n",
704
+ "}\n",
705
+ "\\label{fig:squad}\n",
706
+ "\\vskip-5mm\n",
707
+ "\\end{figure}\n",
708
+ "\n",
709
+ "Finally, we confirm that adapters work on tasks other than classification by running on SQuAD v1.1~\\citep{rajpurkar2018}.\n",
710
+ "Given a question and Wikipedia paragraph, this task requires selecting the answer span to the question from the paragraph.\n",
711
+ "Figure~\\ref{fig:squad} displays the parameter/performance trade-off of fine-tuning and adapters on the SQuAD validation set.\n",
712
+ "For fine-tuning, we sweep the number of trained layers, learning rate in $\\{3\\cdot 10^{-5}, 5\\cdot 10^{-5}, 1\\cdot 10^{-4}\\}$, and number of epochs in $\\{2,3,5\\}$.\n",
713
+ "For adapters, we sweep the adapter size, learning rate in $\\{3\\cdot 10^{-5}, 1\\cdot 10^{-4}, 3\\cdot 10^{-4}, 1\\cdot 10^{-3}\\}$, and number of epochs in $\\{3,10,20\\}$.\n",
714
+ "As for classification, adapters attain performance comparable to full fine-tuning, while training many fewer parameters.\n",
715
+ "Adapters of size $64$ ($2\\\n",
716
+ "SQuAD performs well even with very small adapters, those of size $2$ ($0.1\\\n",
717
+ "\n",
718
+ "\\subsection{Analysis and Discussion\\label{sec:discussion}}\n",
719
+ "\n",
720
+ "We perform an ablation to determine which adapters are influential.\n",
721
+ "For this, we remove some trained adapters and re-evaluate the model (without re-training) on the validation set.\n",
722
+ "Figure~\\ref{fig:ablation_and_init} shows the change in performance when removing adapters from all continuous layer spans.\n",
723
+ "The experiment is performed on BERT\\textsubscript{BASE} with adapter size $64$ on MNLI and CoLA.\n",
724
+ "\n",
725
+ "First, we observe that removing any single layer's adapters has only a small impact on performance.\n",
726
+ "The elements on the heatmaps' diagonals show the performances of removing adapters from single layers, where largest performance drop is $2\\\n",
727
+ "In contrast, when all of the adapters are removed from the network,\n",
728
+ "the performance drops substantially:\n",
729
+ "to $37\\\n",
730
+ "This indicates that although each adapter has a small influence on the overall network, the overall effect is large.\n",
731
+ "\n",
732
+ "Second, Figure~\\ref{fig:ablation_and_init} suggests that adapters on the lower layers have a smaller impact than the higher-layers.\n",
733
+ "Removing the adapters from the layers $0-4$ on MNLI barely affects performance.\n",
734
+ "This indicates that adapters perform well because they automatically prioritize higher layers.\n",
735
+ "Indeed, focusing on the upper layers is a popular strategy in fine-tuning~\\citep{howard2018universal}.\n",
736
+ "One intuition is that the lower layers extract lower-level features that are shared among tasks, while the\n",
737
+ "higher layers build features that are unique to different tasks.\n",
738
+ "This relates to our observation that for some tasks, fine-tuning only the top layers outperforms full fine-tuning, see Table~\\ref{tab:hub_results}.\n",
739
+ "\n",
740
+ "Next, we investigate the robustness of the adapter modules to the number of neurons and initialization scale.\n",
741
+ "In our main experiments the weights in the adapter module were drawn from\n",
742
+ "a zero-mean Gaussian with standard deviation $10^{-2}$, truncated to two standard deviations.\n",
743
+ "To analyze the impact of the initialization scale on the performance, we test standard deviations in the interval $[10^{-7},1]$.\n",
744
+ "Figure~\\ref{fig:ablation_and_init} summarizes the results.\n",
745
+ "We observe that on both datasets, the performance of adapters is robust for standard deviations below $10^{-2}$.\n",
746
+ "However, when the initialization is too large, performance degrades, more substantially on CoLA.\n",
747
+ "\n",
748
+ "To investigate robustness of adapters to the number of neurons, we re-examine the experimental data from Section~\\ref{sec:glue}.\n",
749
+ "We find that the quality of the model across adapter sizes is stable,\n",
750
+ "and a fixed adapter size across all the tasks could be used with small detriment to performance.\n",
751
+ "For each adapter size we calculate the mean validation accuracy across the eight\n",
752
+ "classification tasks by selecting the optimal learning rate and number of epochs\\footnote{\n",
753
+ "We treat here MNLI\\textsubscript{m} and MNLI\\textsubscript{mm} as separate tasks.\n",
754
+ "For consistency, for all datasets we use accuracy metric and exclude the regression STS-B task.}.\n",
755
+ "For adapter sizes $8$, $64$, and $256$, the mean validation accuracies are $86.2\\\n",
756
+ "This message is further corroborated by Figures~\\ref{fig:tradeoff_glue} and~\\ref{fig:squad},\n",
757
+ "which show a stable performance across a few orders of magnitude.\n",
758
+ "\n",
759
+ "Finally, we tried a number of extensions to the adapter's architecture\n",
760
+ "that did not yield a significant boost in performance.\n",
761
+ "We document them here for completeness.\n",
762
+ "We experimented with\n",
763
+ "(i) adding a batch/layer normalization to the adapter,\n",
764
+ "(ii) increasing the number of layers per adapter,\n",
765
+ "(iii) different activation functions, such as tanh,\n",
766
+ "(iv) inserting adapters only inside the attention layer,\n",
767
+ "(v) adding adapters in parallel to the main layers, and possibly with a multiplicative interaction.\n",
768
+ "In all cases we observed the resulting performance to be similar to the bottleneck proposed in Section~\\ref{sec:bottleneckadapter}.\n",
769
+ "Therefore, due to its simplicity and strong performance, we recommend the original adapter architecture.\n",
770
+ "\n",
771
+ "\\begin{figure*}\n",
772
+ "\\centering\n",
773
+ "\\begin{tabular}[t]{ccc}\n",
774
+ "MNLI\\textsubscript{m} & CoLA & \\\\\n",
775
+ "\\includegraphics[height=3.7cm]{figures/mnli_ablation_heatmap.pdf}&\n",
776
+ "\\includegraphics[height=3.7cm]{figures/cola_ablation_heatmap.pdf}&\n",
777
+ "\\raisebox{-0.2cm}{\\includegraphics[height=3.9cm]{figures/init_study.pdf}}\n",
778
+ "\\end{tabular}\n",
779
+ "\\caption{\n",
780
+ "\\textbf{Left, Center:}\n",
781
+ "Ablation of trained adapters from continuous layer spans.\n",
782
+ "The heatmap shows the relative decrease in validation accuracy to the fully trained adapted model.\n",
783
+ "The \\emph{y} and \\emph{x} axes indicate the first and last layers ablated (inclusive), respectively.\n",
784
+ "The diagonal cells, highlighted in green, indicate ablation of a single layer's adapters.\n",
785
+ "The cell in the top-right indicates ablation of all adapters.\n",
786
+ "Cells in the lower triangle are meaningless, and are set to $0\\\n",
787
+ "\\textbf{Right:}\n",
788
+ "Performance of BERT\\textsubscript{BASE} using adapters with different initial weight magnitudes.\n",
789
+ "The \\emph{x}-axis is the standard deviation of the initialization distribution.\n",
790
+ "}\n",
791
+ "\\label{fig:ablation_and_init}\n",
792
+ "\\vskip-4mm\n",
793
+ "\\end{figure*}\n",
794
+ "\\section{Related Work}\n",
795
+ "\n",
796
+ "\\paragraph{Pre-trained text representations}\n",
797
+ "Pre-trained textual representations are widely used to improve performance on NLP tasks.\n",
798
+ "These representations are trained on large corpora (usually unsupervised), and fed as features to downstream models.\n",
799
+ "In deep networks, these features may also be fine-tuned on the downstream task.\n",
800
+ "Brown clusters, trained on distributional information, are a classic example of pre-trained representations~\\citep{brown1992}.\n",
801
+ "\\citet{turian2010} show that pre-trained embeddings of words outperform those trained from scratch.\n",
802
+ "Since deep-learning became popular, word embeddings have been widely used, and many training strategies have arisen~\\citep{mikolov2013,pennington2014,bojanowski2017enriching}.\n",
803
+ "Embeddings of longer texts, sentences and paragraphs, have also been developed~\\citep{le2014,kiros2015,conneau2017,cer2019}.\n",
804
+ "\n",
805
+ "To encode context in these representations, features are extracted from internal representations of sequence models,\n",
806
+ "such as MT systems~\\citep{mccann2017}, and BiLSTM language models, as used in ELMo~\\citep{peters2018}.\n",
807
+ "As with adapters, ELMo exploits the layers other than the top layer of a pre-trained network.\n",
808
+ "However, this strategy only \\emph{reads} from the inner layers.\n",
809
+ "In contrast, adapters \\emph{write} to the inner layers, re-configuring the processing of features through the entire network.\n",
810
+ "\n",
811
+ "\\paragraph{Fine-tuning}\n",
812
+ "Fine-tuning an entire pre-trained model has become a popular alternative to features~\\citep{dai2015,howard2018universal,radford2018improving}\n",
813
+ "In NLP, the upstream model is usually a neural language model~\\citep{bengio2003}.\n",
814
+ "Recent state-of-the-art results on question answering~\\citep{rajpurkar2016} and text classification~\\citep{wang2018glue} have been attained by fine-tuning a Transformer network~\\citep{vaswani2017} with a Masked Language Model loss~\\citep{devlin2018bert}.\n",
815
+ "Performance aside, an advantage of fine-tuning is that it does not require task-specific model design, unlike representation-based transfer.\n",
816
+ "However, vanilla fine-tuning does require a new set of network weights for every new task.\n",
817
+ "\n",
818
+ "\\paragraph{Multi-task Learning}\n",
819
+ "Multi-task learning (MTL) involves training on tasks simultaneously.\n",
820
+ "Early work shows that sharing network parameters across tasks exploits task regularities, yielding improved performance~\\citep{caruana1997}.\n",
821
+ "The authors share weights in lower layers of a network, and use specialized higher layers.\n",
822
+ "Many NLP systems have exploited MTL.\n",
823
+ "Some examples include: text processing systems (part of speech, chunking, named entity recognition, etc.)~\\citep{collobert2008}, multilingual models~\\citep{huang2013cross}, semantic parsing~\\citep{peng2017}, machine translation~\\citep{johnson2017}, and question answering~\\citep{choi2017}.\n",
824
+ "MTL yields a single model to solve all problems.\n",
825
+ "However, unlike our adapters, MTL requires simultaneous access to the tasks during training.\n",
826
+ "\n",
827
+ "\\paragraph{Continual Learning}\n",
828
+ "As an alternative to simultaneous training, continual, or lifelong, learning aims to learn from a sequence of tasks~\\citep{thrun1998}.\n",
829
+ "However, when re-trained, deep networks tend to forget how to perform previous tasks; a challenge termed catastrophic forgetting~\\citep{mccloskey1989catastrophic,french1999catastrophic}.\n",
830
+ "Techniques have been proposed to mitigate forgetting~\\citep{kirkpatrick2017overcoming,zenke2017continual}, however, unlike for adapters, the memory is imperfect.\n",
831
+ "Progressive Networks avoid forgetting by instantiating a new network ``column'' for each task~\\citep{rusu2016progressive}.\n",
832
+ "However, the number of parameters grows linearly with the number of tasks,\n",
833
+ "since adapters are very small, our models scale much more favorably.\n",
834
+ "\n",
835
+ "\\paragraph{Transfer Learning in Vision}\n",
836
+ "Fine-tuning models pre-trained on ImageNet~\\citep{deng2009} is ubiquitous when building image recognition models~\\citep{yosinski2014,huh2016makes}.\n",
837
+ "This technique attains state-of-the-art performance on many vision tasks, including classification~\\citep{kornblith2018better}, fine-grained classifcation~\\citep{hermans2017}, segmentation~\\citep{long2015}, and detection~\\citep{girshick2014}.\n",
838
+ "In vision, convolutional adapter modules have been studied~\\citep{rebuffi2017,rebuffi2018,rosenfeld2018incremental}.\n",
839
+ "These works perform incremental learning in multiple domains by adding small convolutional layers to a ResNet~\\citep{he2016} or VGG net~\\citep{simonyan2014very}.\n",
840
+ "Adapter size is limited using $1\\times 1$ convolutions, whilst the original networks typically use $3\\times 3$.\n",
841
+ "This yields $11\\\n",
842
+ "Since the kernel size cannot be further reduced other weight compression techniques must be used to attain further savings.\n",
843
+ "Our bottleneck adapters can be much smaller, and still perform well.\n",
844
+ "\n",
845
+ "Concurrent work explores similar ideas for BERT~\\citep{stickland2019bert}.\n",
846
+ "The authors introduce Projected Attention Layers (PALs), small layers with a similar role to our adapters.\n",
847
+ "The main differences are i) \\citet{stickland2019bert} use a different architecture,\n",
848
+ "and ii) they perform multitask training, jointly fine-tuning BERT on all GLUE tasks.\n",
849
+ "\\citet{semnani2019} perform an emprical comparison of our bottleneck Adpaters and PALs on SQuAD v2.0~\\citep{rajpurkar2018}.\n",
850
+ "\n",
851
+ "\\subsubsection*{Acknowledgments}\n",
852
+ "We would like to thank Andrey Khorlin, Lucas Beyer,\n",
853
+ "No\\'e Lutz, and Jeremiah Harmsen for useful comments and discussions.\n",
854
+ "\n",
855
+ "\\bibliography{nlp}\n",
856
+ "\\bibliographystyle{icml2019}\n",
857
+ "\n",
858
+ "\n",
859
+ "\\clearpage\n",
860
+ "\n",
861
+ "\\appendix\n",
862
+ "\\onecolumn\n",
863
+ "\n",
864
+ "\\icmltitle{Supplementary Material for\\\\Parameter-Efficient Transfer Learning for NLP}\n",
865
+ "\n",
866
+ "\\section{Additional Text Classification Tasks}\n",
867
+ "\\label{appendix:hub_stats}\n",
868
+ "\n",
869
+ "\n",
870
+ "\\begin{table*}[ht]\n",
871
+ "\\centering\n",
872
+ "\\begin{adjustbox}{max width=\\textwidth}\n",
873
+ "\\begin{tabular}{l|rrrrrr}\n",
874
+ "\\toprule\n",
875
+ "Dataset & Train examples & Validation examples & Test examples & Classes & Avg text length & Reference \\\\\n",
876
+ "\\midrule\n",
877
+ "20 newsgroups & $15076$ & $1885$ & $1885$ & $20$ & $1903$ & \\citep{Lang95} \\\\\n",
878
+ "Crowdflower airline & $11712$ & $1464$ & $1464$ & $3$ & $104$ & crowdflower.com \\\\\n",
879
+ "Crowdflower corporate messaging & $2494$ & $312$ & $312$ & $4$ & $121$ & crowdflower.com \\\\\n",
880
+ "Crowdflower disasters & $8688$ & $1086$ & $1086$ & $2$ & $101$ & crowdflower.com \\\\\n",
881
+ "Crowdflower economic news relevance & $6392$ & $799$ & $800$ & $2$ & $1400$ & crowdflower.com \\\\\n",
882
+ "Crowdflower emotion & $32000$ & $4000$ & $4000$ & $13$ & $73$ & crowdflower.com \\\\\n",
883
+ "Crowdflower global warming & $3380$ & $422$ & $423$ & $2$ & $112$ & crowdflower.com \\\\\n",
884
+ "Crowdflower political audience & $4000$ & $500$ & $500$ & $2$ & $205$ & crowdflower.com \\\\\n",
885
+ "Crowdflower political bias & $4000$ & $500$ & $500$ & $2$ & $205$ & crowdflower.com \\\\\n",
886
+ "Crowdflower political message & $4000$ & $500$ & $500$ & $9$ & $205$ & crowdflower.com \\\\\n",
887
+ "Crowdflower primary emotions & $2019$ & $252$ & $253$ & $18$ & $87$ & crowdflower.com \\\\\n",
888
+ "Crowdflower progressive opinion & $927$ & $116$ & $116$ & $3$ & $102$ & crowdflower.com \\\\\n",
889
+ "Crowdflower progressive stance & $927$ & $116$ & $116$ & $4$ & $102$ & crowdflower.com \\\\\n",
890
+ "Crowdflower US economic performance & $3961$ & $495$ & $496$ & $2$ & $305$ & crowdflower.com \\\\\n",
891
+ "Customer complaint database & $146667$ & $18333$ & $18334$ & $157$ & $1046$ & catalog.data.gov \\\\\n",
892
+ "News aggregator dataset & $338349$ & $42294$ & $42294$ & $4$ & $57$ & \\citep{Lichman:2013}\\\\\n",
893
+ "SMS spam collection & $4459$ & $557$ & $558$ & $2$ & $81$ & \\citep{Almeida:2011:CSS:2034691.2034742}\\\\\n",
894
+ "\\bottomrule\n",
895
+ "\\end{tabular}\n",
896
+ "\\end{adjustbox}\n",
897
+ "\\caption{Statistics and references for the additional text classification tasks.}\n",
898
+ "\\label{tab:hub_stats}\n",
899
+ "\\end{table*}\n",
900
+ "\n",
901
+ "\n",
902
+ "\\begin{table*}[ht]\n",
903
+ "\\centering\n",
904
+ "\\begin{adjustbox}{max width=\\textwidth}\n",
905
+ "\\begin{tabular}{l|rr}\n",
906
+ "\\toprule\n",
907
+ "Dataset & Epochs (Fine-tune) & Epochs (Adapters) \\\\\n",
908
+ "\\midrule\n",
909
+ "20 newsgroups & $50$ & $50$ \\\\\n",
910
+ "Crowdflower airline & $50$ & $20$ \\\\\n",
911
+ "Crowdflower corporate messaging & $100$ & $50$ \\\\\n",
912
+ "Crowdflower disasters & $50$ & $50$ \\\\\n",
913
+ "Crowdflower economic news relevance & $20$ & $20$ \\\\\n",
914
+ "Crowdflower emotion & $20$ & $20$ \\\\\n",
915
+ "Crowdflower global warming & $100$ & $50$ \\\\\n",
916
+ "Crowdflower political audience & $50$ & $20$ \\\\\n",
917
+ "Crowdflower political bias & $50$ & $50$ \\\\\n",
918
+ "Crowdflower political message & $50$ & $50$ \\\\\n",
919
+ "Crowdflower primary emotions & $100$ & $100$ \\\\\n",
920
+ "Crowdflower progressive opinion & $100$ & $100$ \\\\\n",
921
+ "Crowdflower progressive stance & $100$ & $100$ \\\\\n",
922
+ "Crowdflower US economic performance & $100$ & $20$ \\\\\n",
923
+ "Customer complaint database & $20$ & $20$ \\\\\n",
924
+ "News aggregator dataset & $20$ & $20$ \\\\\n",
925
+ "SMS spam collection & $50$ & $20$ \\\\\n",
926
+ "\\bottomrule\n",
927
+ "\\end{tabular}\n",
928
+ "\\end{adjustbox}\n",
929
+ "\\caption{Number of training epochs selected for the additional classification tasks.}\n",
930
+ "\\label{tab:hub_epochs}\n",
931
+ "\\end{table*}\n",
932
+ "\n",
933
+ "\n",
934
+ "\\begin{table*}[ht]\n",
935
+ "\\centering\n",
936
+ "\\begin{adjustbox}{max width=\\textwidth}\n",
937
+ "\\begin{tabular}{ll}\n",
938
+ "\\toprule\n",
939
+ "\\bf Parameter & \\bf Search Space \\\\\n",
940
+ "\\midrule\n",
941
+ "1) Input embedding modules & Refer to Table~\\ref{tab:hub_embeddings_text} \\\\\n",
942
+ "2) Fine-tune input embedding module & \\{True, False\\} \\\\\n",
943
+ "3) Lowercase text & \\{True, False\\} \\\\\n",
944
+ "4) Remove non alphanumeric text & \\{True, False\\} \\\\\n",
945
+ "5) Use convolution & \\{True, False\\} \\\\\n",
946
+ "6) Convolution activation & \\{relu, relu6, leaky relu, swish, sigmoid, tanh\\} \\\\\n",
947
+ "7) Convolution batch norm & \\{True, False\\} \\\\\n",
948
+ "8) Convolution max ngram length & \\{2, 3\\} \\\\\n",
949
+ "9) Convolution dropout rate & [0.0, 0.4] \\\\\n",
950
+ "10) Convolution number of filters & [50, 200]\\\\\n",
951
+ "11) Convolution embedding dropout rate & [0.0, 0.4] \\\\\n",
952
+ "12) Number of hidden layers & \\{0, 1, 2, 3, 5\\} \\\\\n",
953
+ "13) Hidden layers size & \\{64, 128, 256\\} \\\\\n",
954
+ "14) Hidden layers activation & \\{relu, relu6, leaky relu, swish, sigmoid, tanh\\} \\\\\n",
955
+ "15) Hidden layers normalization & \\{none, batch norm, layer norm\\} \\\\\n",
956
+ "16) Hidden layers dropout rate & \\{0.0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5\\} \\\\\n",
957
+ "17) Deep tower learning rate & \\{0.001, 0.005, 0.01, 0.05, 0.1, 0.5\\} \\\\\n",
958
+ "18) Deep tower regularization weight & \\{0.0, 0.0001, 0.001, 0.01\\} \\\\\n",
959
+ "19) Wide tower learning rate & \\{0.001, 0.005, 0.01, 0.05, 0.1, 0.5\\} \\\\\n",
960
+ "20) Wide tower regularization weight & \\{0.0, 0.0001, 0.001, 0.01\\} \\\\\n",
961
+ "21) Number of training samples & \\{1e5, 2e5, 5e5, 1e6, 2e6\\} \\\\\n",
962
+ "\\bottomrule\n",
963
+ "\\end{tabular}\n",
964
+ "\\end{adjustbox}\n",
965
+ "\\caption{The search space of baseline models for the additional text classification tasks.}\n",
966
+ "\\label{tab:hub_ss}\n",
967
+ "\\end{table*}\n",
968
+ "\n",
969
+ "\n",
970
+ "\\begin{table*}[ht]\n",
971
+ "\\centering\n",
972
+ "\\begin{adjustbox}{max width=\\textwidth}\n",
973
+ "\\begin{tabular}{llllll}\n",
974
+ "\\bf ID & \\bf Dataset &\\bf Embed & \\bf Vocab. & \\bf Training & \\bf\n",
975
+ "\n",
976
+ "TensorFlow Hub Handles \\\\\n",
977
+ "\\bf & \\bf size & \\bf dim. & \\bf size & \\bf algorithm &\n",
978
+ "Prefix: \\texttt{https://tfhub.dev/google/}\\\\\n",
979
+ "& (tokens) & & & \\\\\n",
980
+ "\\hline \\\\\n",
981
+ "English-small & 7B & 50 & 982k & Lang. model &\n",
982
+ "\n",
983
+ "\\texttt{nnlm-en-dim50-with-normalization/1} \\\\\n",
984
+ "English-big & 200B & 128 & 999k & Lang. model &\n",
985
+ "\n",
986
+ "\\texttt{nnlm-en-dim128-with-normalization/1} \\\\\n",
987
+ "English-wiki-small & 4B & 250 & 1M & Skipgram &\n",
988
+ "\n",
989
+ "\\texttt{Wiki-words-250-with-normalization/1} \\\\\n",
990
+ "English-wiki-big & 4B & 500 & 1M & Skipgram &\n",
991
+ "\n",
992
+ "\\texttt{Wiki-words-500-with-normalization/1} \\\\\n",
993
+ "Universal-sentence-encoder & - & 512 & - & \\citep{cer2018universal} &\n",
994
+ "\n",
995
+ "\\texttt{universal-sentence-encoder/2} \\\\\n",
996
+ "\\end{tabular}\n",
997
+ "\\end{adjustbox}\n",
998
+ "\\caption{Options for text input embedding modules.\n",
999
+ "These are pre-trained text embedding tables.\n",
1000
+ "\n",
1001
+ "\n",
1002
+ "We provide the handle for the modules that are publicly distributed via the TensorFlow Hub service (\\texttt{https://www.tensorflow.org/hub}).}\n",
1003
+ "\\label{tab:hub_embeddings_text}\n",
1004
+ "\\end{table*}\n",
1005
+ "\n",
1006
+ "\n",
1007
+ "\\begin{table*}[ht]\n",
1008
+ "\\centering\n",
1009
+ "\\begin{adjustbox}{max width=\\textwidth}\n",
1010
+ "\\begin{tabular}{l|lllllllllllllllllllll}\n",
1011
+ "\\toprule\n",
1012
+ "Dataset & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21\\\\\n",
1013
+ "\\midrule\n",
1014
+ "20 newsgroups & Universal-sentence-encoder & False & True & True & False & relu6 & False & 2 & 0.37 & 94 & 0.38 & 1 & 128 & leaky relu & batch norm & 0.5 & 0.5 & 0 & 0.05 & 0.0001 & 1000000 \\\\\n",
1015
+ "Crowdflower airline & English-big & False & False & False & True & leaky relu & False & 3 & 0.36 & 200 & 0.07 & 0 & 128 & tanh & layer norm & 0.4 & 0.1 & 0.001 & 0.05 & 0.001 & 200000 \\\\\n",
1016
+ "Crowdflower corporate messaging & English-big & False & False & True & True & tanh & True & 3 & 0.40 & 56 & 0.40 & 1 & 64 & tanh & batch norm & 0.5 & 0.5 & 0.001 & 0.01 & 0 & 200000 \\\\\n",
1017
+ "Crowdflower disasters & Universal-sentence-encoder & True & True & False & True & swish & True & 3 & 0.27 & 52 & 0.22 & 0 & 64 & relu & none & 0.2 & 0.005 & 0.0001 & 0.005 & 0.01 & 500000 \\\\\n",
1018
+ "Crowdflower economic news relevance & Universal-sentence-encoder & True & True & False & False & leaky relu & False & 2 & 0.27 & 63 & 0.04 & 3 & 128 & swish & layer norm & 0.2 & 0.01 & 0.01 & 0.001 & 0 & 100000 \\\\\n",
1019
+ "Crowdflower emotion & Universal-sentence-encoder & False & True & False & False & relu6 & False & 3 & 0.35 & 132 & 0.34 & 1 & 64 & tanh & none & 0.05 & 0.05 & 0 & 0.05 & 0 & 200000 \\\\\n",
1020
+ "Crowdflower global warming & Universal-sentence-encoder & False & True & True & False & swish & False & 3 & 0.39 & 200 & 0.36 & 1 & 128 & leaky relu & batch norm & 0.4 & 0.05 & 0 & 0.001 & 0.001 & 1000000 \\\\\n",
1021
+ "Crowdflower political audience & English-small & True & False & True & True & relu & False & 3 & 0.11 & 98 & 0.07 & 0 & 64 & relu & none & 0.5 & 0.05 & 0.001 & 0.001 & 0 & 100000 \\\\\n",
1022
+ "Crowdflower political bias & English-big & False & True & True & False & swish & False & 3 & 0.12 & 81 & 0.30 & 0 & 64 & relu6 & none & 0 & 0.01 & 0 & 0.005 & 0.01 & 200000 \\\\\n",
1023
+ "Crowdflower political message & Universal-sentence-encoder & False & False & True & False & swish & True & 2 & 0.36 & 57 & 0.35 & 0 & 64 & tanh & none & 0.5 & 0.01 & 0.001 & 0.005 & 0 & 200000 \\\\\n",
1024
+ "Crowdflower primary emotions & English-big & False & True & True & True & swish & False & 3 & 0.40 & 191 & 0.03 & 0 & 256 & relu6 & none & 0.5 & 0.1 & 0.001 & 0.05 & 0 & 200000 \\\\\n",
1025
+ "Crowdflower progressive opinion & English-big & True & False & True & True & relu6 & False & 3 & 0.40 & 199 & 0.28 & 0 & 128 & relu & batch norm & 0.3 & 0.1 & 0.01 & 0.005 & 0.001 & 200000 \\\\\n",
1026
+ "Crowdflower progressive stance & Universal-sentence-encoder & True & False & True & False & relu & True & 3 & 0.01 & 195 & 0.00 & 2 & 256 & tanh & layer norm & 0.4 & 0.005 & 0 & 0.005 & 0.0001 & 500000 \\\\\n",
1027
+ "Crowdflower us economic performance & English-big & True & False & True & True & tanh & True & 2 & 0.31 & 53 & 0.24 & 1 & 256 & leaky relu & batch norm & 0.3 & 0.05 & 0.0001 & 0.001 & 0.0001 & 100000 \\\\\n",
1028
+ "Customer complaint database & English-big & True & False & False & False & tanh & False & 2 & 0.03 & 69 & 0.10 & 1 & 256 & leaky relu & layer norm & 0.1 & 0.05 & 0.0001 & 0.05 & 0.001 & 1000000 \\\\\n",
1029
+ "News aggregator dataset & Universal-sentence-encoder & False & True & True & False & sigmoid & True & 2 & 0.00 & 156 & 0.29 & 3 & 256 & relu & batch norm & 0.05 & 0.05 & 0 & 0.5 & 0.0001 & 1000000 \\\\\n",
1030
+ "Sms spam collection & English-wiki-small & True & True & True & True & leaky relu & False & 3 & 0.20 & 54 & 0.00 & 1 & 128 & leaky relu & batch norm & 0 & 0.1 & 0 & 0.05 & 0.01 & 1000000 \\\\\n",
1031
+ "\\bottomrule\n",
1032
+ "\\end{tabular}\n",
1033
+ "\\end{adjustbox}\n",
1034
+ "\\caption{Search space parameters (see Table~\\ref{tab:hub_ss}) for the AutoML baseline models that were selected.}\n",
1035
+ "\\label{tab:hub_model_params}\n",
1036
+ "\\end{table*}\n",
1037
+ "\n",
1038
+ "\n",
1039
+ "\\section{Learning Rate Robustness}\n",
1040
+ "\n",
1041
+ "\n",
1042
+ "\\begin{figure}[t]\n",
1043
+ "\\centering\n",
1044
+ "\\includegraphics[width=0.5\\linewidth]{figures/squad_adapters_lr.pdf}\n",
1045
+ "\\caption{\n",
1046
+ "Best performing models at different learning rates.\n",
1047
+ "Error vars indicate the s.e.m. across three random seeds.\n",
1048
+ "}\n",
1049
+ "\\label{fig:lr}\n",
1050
+ "\\end{figure}\n",
1051
+ "\n",
1052
+ "We test the robustness of adapters and fine-tuning to the learning rate.\n",
1053
+ "We ran experiments with learning rates in the range $[2\\cdot 10^{-5},10^{-3}]$, and selected the best hyperparameters for each method at each learning rate.\n",
1054
+ "Figure~\\ref{fig:lr} shows the results.\n",
1055
+ "\n",
1056
+ "\\end{document}\n"
1057
+ ],
1058
+ "del_percentage": 0.12222
1059
+ }
1060
+ }
Experiment_Design/1902.00751/images/Adapter_arch.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b1f02f8cee1bd40e5d53fcf27a9d19fa72f39b9e108fe10172abb0ee7fd646e6
3
+ size 23032
Experiment_Design/1902.00751/images/Adapter_insertion.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c79f4ce44a123eb7a3932263ed103a846cc3a62a7d94629552bfd407f9db7777
3
+ size 24639
Experiment_Design/1902.00751/images/cola_ablation_heatmap.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0290e28440c095f5947afe8f7adf857474b10817ba913f4f02fb91145ecf9e45
3
+ size 48193
Experiment_Design/1902.00751/images/extra_tasks_plot.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ceb495fa355a6e620f7dce2bef4aa6806a35358429f4c140e5e66d570549be4e
3
+ size 46117
Experiment_Design/1902.00751/images/glue_results.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cbfbcbbe3b43a40b1d01cfc176acef21dc2de48528a7a35844b45c0955067c85
3
+ size 46589
Experiment_Design/1902.00751/images/glue_results_b.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:048cf318358131935831642558a8ceb505f0d1d3d4653e741def0445874e39ee
3
+ size 46581
Experiment_Design/1902.00751/images/init_study.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:074c3dc3d69a22029e9100fa7a71f8419a0b6cee9db83aa20aab088b50c5d826
3
+ size 70409
Experiment_Design/1902.00751/images/mnli_ablation_heatmap.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c75a9c5616787c6498b33596de0c175acc8079fd58029c239a506d9fa188be65
3
+ size 48030
Experiment_Design/1902.00751/images/param_efficiency_cola.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:83b46edda93f15677639ab40f080b69e9954112cdfe05a4851d124db1fe6622b
3
+ size 47316
Experiment_Design/1902.00751/images/param_efficiency_mnli.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8e1caca2b81875ddba3a75d4055636c445c7622b90a429482c8452452388c90a
3
+ size 47255
Experiment_Design/1902.00751/images/squad_adapters_baseline.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e37a2f13431f9231e26b73de28fc8f835da2150e4a76508158a6f841e6de1b3e
3
+ size 46318
Experiment_Design/1902.00751/images/squad_adapters_lr.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb0480a9029fc7966a4a4e80eac9df5a5139882d613e835a24c9843182aa83c1
3
+ size 45283
Experiment_Design/1902.00751/images/used/Adapter_arch_page_1.png ADDED

Git LFS Details

  • SHA256: cd3280611bde8a027c7b199f20710229309f6708fa8c74117be2cb1476a71a74
  • Pointer size: 130 Bytes
  • Size of remote file: 21.1 kB
Experiment_Design/1902.00751/images/used/Adapter_insertion_page_1.png ADDED

Git LFS Details

  • SHA256: fe864ba42e68866e01b36ff3fda4bdc3e768079dd8f267bc3df85fee16085ef6
  • Pointer size: 130 Bytes
  • Size of remote file: 20.1 kB
Experiment_Design/1906.01502/1906.01502_source.tar.gz ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3ab8ae4ed7130568d58302b6ebe976734ba13d43b67130073e6486148ccd5f84
3
+ size 184505
Experiment_Design/1906.01502/data_text.json ADDED
@@ -0,0 +1,691 @@
1
+ {
2
+ "id": "1906.01502",
3
+ "annotator": "moosa",
4
+ "input": [
5
+ "\\documentclass[11pt,a4paper]{article}\n",
6
+ "\\usepackage[hyperref]{acl2019}\n",
7
+ "\\usepackage{times}\n",
8
+ "\\usepackage{latexsym}\n",
9
+ "\\usepackage[pdftex]{graphicx} \n",
10
+ "\\usepackage{amsmath}\n",
11
+ "\\usepackage{bm}\n",
12
+ "\\usepackage{enumitem}\n",
13
+ "\\setlist{itemsep=0px,parsep=0px,topsep=0.5\\baselineskip,itemindent=-2mm}\n",
14
+ "\\usepackage{url}\n",
15
+ "\\usepackage[farskip=0pt]{subfig}\n",
16
+ "\\usepackage{lscape}\n",
17
+ "\\usepackage[pass]{geometry}\n",
18
+ "\\usepackage{float}\n",
19
+ "\\setlength{\\abovedisplayskip}{1pt}\n",
20
+ "\\setlength{\\belowdisplayskip}{1pt}\n",
21
+ "\\newcommand{\\commentfn}[1]{#1}\n",
22
+ "\\newcommand{\\tp}[1]{\\commentfn{\\textcolor{red} {[\\textsc{Telmo}: #1]}}}\n",
23
+ "\\newcommand{\\dg}[1]{\\commentfn{\\textcolor{red}{[\\textsc{Dan}: #1]}}}\n",
24
+ "\\newcommand{\\es}[1]{\\commentfn{\\textcolor{red}{[\\textsc{Eva}: #1]}}}\n",
25
+ "\\newcommand{\\bert}{\\textsc{Bert}}\n",
26
+ "\\newcommand{\\mbert}{\\mbox{\\textsc{M}-\\bert{}}}\n",
27
+ "\\newcommand{\\enbert}{\\mbox{\\textsc{En}-\\bert{}}}\n",
28
+ "\\newcommand{\\pos}{\\textsc{pos}}\n",
29
+ "\\newcommand{\\ner}{\\textsc{ner}}\n",
30
+ "\\newcommand{\\nlp}{\\textsc{nlp}}\n",
31
+ "\\setlength{\\textfloatsep}{10pt plus 1.0pt minus 2.0pt}\n",
32
+ "\\setlength{\\floatsep}{6pt plus 1.0pt minus 1.0pt}\n",
33
+ "\\setlength{\\intextsep}{6pt plus 1.0pt minus 1.0pt}\n",
34
+ "\\aclfinalcopy \n",
35
+ "\\def\\aclpaperid{2122} \n",
36
+ "\\title{How multilingual is Multilingual BERT?}\n",
37
+ "\\author{Telmo Pires\\thanks{\\ \\ Google AI Resident.} \\qquad Eva Schlinger \\qquad Dan Garrette \\\\\n",
38
+ " Google Research \\\\ \n",
39
+ " \\texttt{\\{telmop,eschling,dhgarrette\\}@google.com}}\n",
40
+ "\\date{}\n",
41
+ "\\begin{document}\n",
42
+ "\\maketitle\n",
43
+ "\\begin{abstract}\n",
44
+ "In this paper, we show that Multilingual BERT (\\mbert{}), released by \\citet{devlin2018bert} as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language.\n",
45
+ "From these results, we can conclude that \\mbert{} does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.\n",
46
+ "\\end{abstract}\n",
47
+ "\\section{Introduction}\n",
48
+ "Deep, contextualized language models provide powerful, general-purpose linguistic representations that have enabled significant advances among a wide range of natural language processing tasks \\cite{peter2018elmo,devlin2018bert}. These models can be pre-trained on large corpora of readily available unannotated text, and then fine-tuned for specific tasks on smaller amounts of supervised data, relying on the induced language model structure to facilitate generalization beyond the annotations.\n",
49
+ "Previous work on model probing has shown that these representations are able to encode, among other things, syntactic and named entity information, but they have heretofore focused on what models trained on English capture about English \\cite{D18-1179,tenney2018what, tenney2019acl}.\n",
50
+ "Our results show that \\mbert{} is able to perform cross-lingual generalization surprisingly well.\n",
51
+ "\\section{Models and Data}\n",
52
+ "Like the original English BERT model (henceforth, \\enbert{}), \\mbert{} is a 12 layer transformer\n",
53
+ "It does not use any marker denoting the input language, and does not have any explicit mechanism to encourage translation-equivalent pairs to have similar representations.\n",
54
+ "For \\ner{} and \\pos{}, we use the same sequence tagging architecture as \\citet{devlin2018bert}. We tokenize the input sentence, feed it to \\textsc{Bert}, get the last layer's activations, and pass them through a final layer to make the tag predictions.\n"
55
+ ],
56
+ "output": {
57
+ "What experiments do you suggest doing?": [
58
+ "1. Expand the definition of overlap. The authors should calculate overlap based on all the words shared between two languages, instead of just shared vocabulary on just the entities.",
59
+ "2. Report performance gains for using some popular language similarity criterion, e.g., WALS.",
60
+ "3. Effect of tokens per word. The authors should perform experiments on more scripts, specifically looking at the effect of words being split into multiple tokens.",
61
+ "4. Control for vocabulary overlap among languages. Choose languages that have large vocabulary overlap and different word order feature. Train on one set of languages and then perform zero shot evaluation on the rest.",
62
+ "5. Ablate the effect of common word pieces by using a non-overlapping tokenizer for different languages."
63
+ ],
64
+ "Why do you suggest these experiments?": [
65
+ "1. To check whether non-entity overlap between two languages also contribute to better performance on recognizing the entities. The model may use information from non-entity words to recognize an entity. Additionally, successfully recognizing that a word is not an entity also contributes the performance on the NER task.",
66
+ "2. To understand which features the language model can exploit for cross-lingual transfer. This will give us insights into what typological similarity the multilingual language model can pick up during pretraining.",
67
+ "3. To understand the effect of POS label frequency. The idea is that two languages with similar token to word ratio will result in better cross-lingual transfer. The reason is that continuation tokens should be classified properly and the change in the training corpus of the frequency of continuation tokens will result in different performance.",
68
+ "4. To properly control for the effect of vocabulary overlap. Since large overlap in vocabulary can lead to performance gain, the reported results does not reflect the true impact of word order.",
69
+ "5. To understand the effect of structure of sentences in different languages for cross-lingual understanding of multilingual language models. Since there will be no overlap between different languages the model must learn cross-lingual representations based on syntactic and semantic properties of the languages."
70
+ ]
71
+ },
72
+ "paper_info": {
73
+ "title": "How multilingual is Multilingual BERT?",
74
+ "authors": [
75
+ "Telmo Pires",
76
+ "Eva Schlinger",
77
+ "Dan Garrette"
78
+ ],
79
+ "abstract": "In this paper, we show that Multilingual BERT (M-BERT), released by Devlin et\nal. (2018) as a single language model pre-trained from monolingual corpora in\n104 languages, is surprisingly good at zero-shot cross-lingual model transfer,\nin which task-specific annotations in one language are used to fine-tune the\nmodel for evaluation in another language. To understand why, we present a large\nnumber of probing experiments, showing that transfer is possible even to\nlanguages in different scripts, that transfer works best between typologically\nsimilar languages, that monolingual corpora can train models for\ncode-switching, and that the model can find translation pairs. From these\nresults, we can conclude that M-BERT does create multilingual representations,\nbut that these representations exhibit systematic deficiencies affecting\ncertain language pairs.",
80
+ "comments": null
81
+ },
82
+ "raw_data": {
83
+ "context_before_exp": [
84
+ "\n",
85
+ "\n",
86
+ "\n",
87
+ "\n",
88
+ "\n",
89
+ "\n",
90
+ "\n",
91
+ "\n",
92
+ "\n",
93
+ "\n",
94
+ "\n",
95
+ "\n",
96
+ "\n",
97
+ "\\documentclass[11pt,a4paper]{article}\n",
98
+ "\\usepackage[hyperref]{acl2019}\n",
99
+ "\\usepackage{times}\n",
100
+ "\\usepackage{latexsym}\n",
101
+ "\\usepackage[pdftex]{graphicx} \n",
102
+ "\\usepackage{amsmath}\n",
103
+ "\\usepackage{bm}\n",
104
+ "\\usepackage{enumitem}\n",
105
+ "\\setlist{itemsep=0px,parsep=0px,topsep=0.5\\baselineskip,itemindent=-2mm}\n",
106
+ "\\usepackage{url}\n",
107
+ "\\usepackage[farskip=0pt]{subfig}\n",
108
+ "\\usepackage{lscape}\n",
109
+ "\\usepackage[pass]{geometry}\n",
110
+ "\\usepackage{float}\n",
111
+ "\n",
112
+ "\\setlength{\\abovedisplayskip}{1pt}\n",
113
+ "\\setlength{\\belowdisplayskip}{1pt}\n",
114
+ "\n",
115
+ "\\newcommand{\\commentfn}[1]{#1}\n",
116
+ "\\newcommand{\\tp}[1]{\\commentfn{\\textcolor{red} {[\\textsc{Telmo}: #1]}}}\n",
117
+ "\\newcommand{\\dg}[1]{\\commentfn{\\textcolor{red}{[\\textsc{Dan}: #1]}}}\n",
118
+ "\\newcommand{\\es}[1]{\\commentfn{\\textcolor{red}{[\\textsc{Eva}: #1]}}}\n",
119
+ "\n",
120
+ "\n",
121
+ "\\newcommand{\\bert}{\\textsc{Bert}}\n",
122
+ "\\newcommand{\\mbert}{\\mbox{\\textsc{M}-\\bert{}}}\n",
123
+ "\\newcommand{\\enbert}{\\mbox{\\textsc{En}-\\bert{}}}\n",
124
+ "\\newcommand{\\pos}{\\textsc{pos}}\n",
125
+ "\\newcommand{\\ner}{\\textsc{ner}}\n",
126
+ "\\newcommand{\\nlp}{\\textsc{nlp}}\n",
127
+ "\n",
128
+ "\n",
129
+ "\n",
130
+ "\n",
131
+ "\n",
132
+ "\n",
133
+ "\\setlength{\\textfloatsep}{10pt plus 1.0pt minus 2.0pt}\n",
134
+ "\\setlength{\\floatsep}{6pt plus 1.0pt minus 1.0pt}\n",
135
+ "\\setlength{\\intextsep}{6pt plus 1.0pt minus 1.0pt}\n",
136
+ "\n",
137
+ "\\aclfinalcopy \n",
138
+ "\\def\\aclpaperid{2122} \n",
139
+ "\n",
140
+ "\n",
141
+ "\n",
142
+ "\n",
143
+ "\n",
144
+ "\n",
145
+ "\n",
146
+ "\\title{How multilingual is Multilingual BERT?}\n",
147
+ "\n",
148
+ "\\author{Telmo Pires\\thanks{\\ \\ Google AI Resident.} \\qquad Eva Schlinger \\qquad Dan Garrette \\\\\n",
149
+ " Google Research \\\\ \n",
150
+ "\n",
151
+ " \\texttt{\\{telmop,eschling,dhgarrette\\}@google.com}}\n",
152
+ "\n",
153
+ "\\date{}\n",
154
+ "\n",
155
+ "\\begin{document}\n",
156
+ "\\maketitle\n",
157
+ "\n",
158
+ "\\begin{abstract}\n",
159
+ "In this paper, we show that Multilingual BERT (\\mbert{}), released by \\citet{devlin2018bert} as a single language model pre-trained from monolingual corpora in 104 languages, is surprisingly good at zero-shot cross-lingual model transfer, in which task-specific annotations in one language are used to fine-tune the model for evaluation in another language.\n",
160
+ "To understand why, we present a large number of probing experiments, showing that transfer is possible even to languages in different scripts, that transfer works best between typologically similar languages, that monolingual corpora can train models for code-switching, and that the model can find translation pairs.\n",
161
+ "From these results, we can conclude that \\mbert{} does create multilingual representations, but that these representations exhibit systematic deficiencies affecting certain language pairs.\n",
162
+ "\\end{abstract}\n",
163
+ "\n",
164
+ "\n",
165
+ "\\section{Introduction}\n",
166
+ "\n",
167
+ "\n",
168
+ "\n",
169
+ "\n",
170
+ "Deep, contextualized language models provide powerful, general-purpose linguistic representations that have enabled significant advances among a wide range of natural language processing tasks \\cite{peter2018elmo,devlin2018bert}. These models can be pre-trained on large corpora of readily available unannotated text, and then fine-tuned for specific tasks on smaller amounts of supervised data, relying on the induced language model structure to facilitate generalization beyond the annotations.\n",
171
+ "Previous work on model probing has shown that these representations are able to encode, among other things, syntactic and named entity information, but they have heretofore focused on what models trained on English capture about English \\cite{D18-1179,tenney2018what, tenney2019acl}.\n",
172
+ "\n",
173
+ "\n",
174
+ "In this paper, we empirically investigate the degree to which these representations generalize \\emph{across} languages. We explore this question using Multilingual BERT (henceforth, \\mbert{}), released by \\citet{devlin2018bert} as a single language model pre-trained on the concatenation of monolingual Wikipedia corpora from 104 languages.\\footnote{https://github.com/google-research/bert} \\mbert{} is particularly well suited to this probing study because it enables a very straightforward approach to zero-shot cross-lingual model transfer: we fine-tune the model using task-specific supervised training data from one language, and evaluate that task in a different language, thus allowing us to observe the ways in which the model generalizes information across languages.\n",
175
+ "\n",
176
+ "Our results show that \\mbert{} is able to perform cross-lingual generalization surprisingly well.\n",
177
+ "\n",
178
+ "More importantly, we present the results of a number of probing experiments designed to test various hypotheses about how the model is able to perform this transfer. Our experiments show that while high lexical overlap between languages improves transfer, \\mbert{} is also able to transfer between languages written in different scripts---thus having \\emph{zero} lexical overlap---indicating that it captures multilingual representations. We further show that transfer works best for typologically similar languages, suggesting that while \\mbert{}'s multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order.\n",
179
+ "\n",
180
+ "\n",
181
+ "\n",
182
+ "\n",
183
+ "\n",
184
+ "\n",
185
+ "\n",
186
+ "\\section{Models and Data}\n",
187
+ "\n",
188
+ "\n",
189
+ "\n",
190
+ "Like the original English BERT model (henceforth, \\enbert{}), \\mbert{} is a 12 layer transformer\n",
191
+ "\n",
192
+ "\\citep{devlin2018bert}, but instead of being trained only on monolingual English data with an English-derived vocabulary, it is trained on the Wikipedia pages of 104 languages with a shared word piece vocabulary.\n",
193
+ "\n",
194
+ "It does not use any marker denoting the input language, and does not have any explicit mechanism to encourage translation-equivalent pairs to have similar representations.\n",
195
+ "\n",
196
+ "\n",
197
+ "\n",
198
+ "For \\ner{} and \\pos{}, we use the same sequence tagging architecture as \\citet{devlin2018bert}. We tokenize the input sentence, feed it to \\textsc{Bert}, get the last layer's activations, and pass them through a final layer to make the tag predictions.\n",
199
+ "\n",
200
+ "\n",
201
+ "\n",
202
+ "\n",
203
+ "\n",
204
+ "The whole model is then fine-tuned to minimize the cross entropy loss for the task. When tokenization splits words into multiple pieces, we take the prediction for the first piece as the prediction for the word.\n",
205
+ "\n",
206
+ "\n",
207
+ "\n",
208
+ "\n"
209
+ ],
210
+ "context_after_exp": [
211
+ "\\subsection{Named entity recognition experiments}\n",
212
+ "\n",
213
+ "We perform \\ner{} experiments on two datasets: the publicly available CoNLL-2002 and -2003 sets, containing Dutch, Spanish, English, and German \\cite{sang2002conll,sang2003conll};\n",
214
+ "\n",
215
+ "\n",
216
+ "and an in-house dataset with 16 languages,\\footnote{Arabic, Bengali, Czech, German, English, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Turkish, and Chinese.} using the same CoNLL categories.\n",
217
+ "Table \\ref{tab:conll} shows \\mbert{} zero-shot performance on all language pairs in the CoNLL data.\n",
218
+ "\n",
219
+ "\\begin{table}[t!]\n",
220
+ "\\centering\n",
221
+ "\\small\n",
222
+ "\\begin{tabular}{lllll}\n",
223
+ " Fine-tuning \\textbackslash{} Eval & \\textsc{en} & \\textsc{de} & \\textsc{nl} & \\textsc{es} \\\\\n",
224
+ " \\hline\n",
225
+ " \\textsc{en} & \\textbf{90.70} & 69.74 & 77.36 & 73.59 \\\\\n",
226
+ " \\textsc{de} & 73.83 & \\textbf{82.00} & 76.25 & 70.03 \\\\\n",
227
+ " \\textsc{nl} & 65.46 & 65.68 & \\textbf{89.86} & 72.10 \\\\\n",
228
+ " \\textsc{es} & 65.38 & 59.40 & 64.39 & \\textbf{87.18} \\\\\n",
229
+ "\\end{tabular}\n",
230
+ "\\caption{\\textsc{Ner} F1 results on the CoNLL data.}\n",
231
+ "\\label{tab:conll}\n",
232
+ "\\end{table}\n",
233
+ "\n",
234
+ "\n",
235
+ "\n",
236
+ "\\subsection{Part of speech tagging experiments}\n",
237
+ "\n",
238
+ "We perform \\pos{} experiments using Universal Dependencies (UD) \\cite{nivre2016universaldependencies} data for 41 languages.\\footnote{Arabic, Bulgarian, Catalan, Czech, Danish, German, Greek, English, Spanish, Estonian, Basque, Persian, Finnish, French, Galician, Hebrew, Hindi, Croatian, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Marathi, Dutch, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, Tamil, Telugu, Turkish, Urdu, and Chinese.} We use the evaluation sets from \\citet{zeman2017conll}.\n",
239
+ "Table \\ref{tab:pos_european} shows \\mbert{} zero-shot results for four European languages. We see that \\mbert{} generalizes well across languages, achieving over $80\\\n",
240
+ "\n",
241
+ "\n",
242
+ "\n",
243
+ "\n",
244
+ "\n",
245
+ "\n",
246
+ "\\begin{table}[t!]\n",
247
+ "\\small\n",
248
+ "\\centering\n",
249
+ "\\begin{tabular}{llllll}\n",
250
+ " Fine-tuning \\textbackslash{} Eval & \\textsc{en} & \\textsc{de} & \\textsc{es} & \\textsc{it} \\\\\n",
251
+ " \\hline\n",
252
+ " \\textsc{en} & \\textbf{96.82} & 89.40 & 85.91 & 91.60 \\\\\n",
253
+ " \\textsc{de} & 83.99 & \\textbf{93.99} & 86.32 & 88.39 \\\\\n",
254
+ " \\textsc{es} & 81.64 & 88.87 & \\textbf{96.71} & 93.71 \\\\\n",
255
+ " \\textsc{it} & 86.79 & 87.82 & 91.28 & \\textbf{98.11} \\\\\n",
256
+ "\\end{tabular}\n",
257
+ "\n",
258
+ "\\caption{\\textsc{Pos} accuracy on a subset of UD languages.}\n",
259
+ "\\label{tab:pos_european}\n",
260
+ "\\end{table}\n",
261
+ "\n",
262
+ "\n",
263
+ "\\section{Vocabulary Memorization \\label{sec:vocab_memorization}}\n",
264
+ "Because \\mbert{} uses a single, multilingual vocabulary, one form of cross-lingual transfer occurs when word pieces present during fine-tuning also appear in the evaluation languages. In this section, we present experiments probing \\mbert{}'s dependence on this superficial form of generalization: How much does transferability depend on lexical overlap? And is transfer possible to languages written in different scripts (\\emph{no} overlap)?\n",
265
+ "\n",
266
+ "\\subsection{Effect of vocabulary overlap}\n",
267
+ "\n",
268
+ "If \\mbert{}'s ability to generalize were mostly due to vocabulary memorization, we would expect zero-shot performance on \\ner{} to be highly dependent on word piece overlap, since entities are often similar across languages.\n",
269
+ "To measure this effect, we compute $E_\\textit{train}$ and $E_\\textit{eval}$, the sets of word pieces used in entities in the training and evaluation datasets, respectively, and define overlap as the fraction of common word pieces used in the entities:\n",
270
+ "\n",
271
+ "$\\textit{overlap} = |E_\\textit{train} \\cap E_\\textit{eval}|~/~|E_\\textit{train} \\cup E_\\textit{eval}|$.\n",
272
+ "\n",
273
+ "\\begin{figure}\n",
274
+ " \\centering\n",
275
+ " \\includegraphics[width=7cm]{ner_overlap}\n",
276
+ " \\caption{Zero-shot \\ner{} F1 score versus entity word piece overlap among 16 languages.\n",
277
+ " While performance using \\enbert{} depends directly on word piece overlap, \\mbert{}'s performance is largely independent of overlap, indicating that it learns multilingual representations deeper than simple vocabulary memorization.}\n",
278
+ " \\label{fig:ner_overlap}\n",
279
+ "\\end{figure}\n",
280
+ "\n",
281
+ "Figure \\ref{fig:ner_overlap} plots \\ner{} F1 score versus entity overlap for zero-shot transfer between every language pair in an in-house dataset of 16 languages, for both \\mbert{} and \\enbert{}.\\footnote{Results on CoNLL data follow the same trends, but those trends are more apparent with 16 languages than with 4.}\n",
282
+ "\n",
283
+ "\n",
284
+ "We can see that performance using \\enbert{} depends directly on word piece overlap: the ability to transfer deteriorates as word piece overlap diminishes, and F1 scores are near zero for languages written in different scripts.\n",
285
+ "\\mbert{}'s performance, on the other hand, is flat for a wide range of overlaps, and even for language pairs with almost no lexical overlap, scores vary between $40\\\n",
286
+ "\n",
287
+ "To further verify that \\enbert{}'s inability to generalize is due to its lack of a multilingual representation and not an inability of its English-specific word piece vocabulary to represent data in other languages, we evaluate on \\emph{non}-cross-lingual \\ner{} and see that it performs comparably to a previous state of the art model (see Table \\ref{tab:conll_en}).\n",
288
+ "\n",
289
+ "\n",
290
+ "\n",
291
+ "\\begin{table}[t!]\n",
292
+ "\\centering\n",
293
+ "\\small\n",
294
+ "\\begin{tabular}{lllll}\n",
295
+ " Model & \\textsc{en} & \\textsc{de} & \\textsc{nl} & \\textsc{es} \\\\\n",
296
+ " \\hline\n",
297
+ " \\citet{lample2016ner} & 90.94 & 78.76 & 81.74 & 85.75 \\\\\n",
298
+ " \\enbert{} & 91.07 & 73.32 & 84.23 & 81.84 \\\\\n",
299
+ "\\end{tabular}\n",
300
+ "\\caption{\\ner{} F1 results fine-tuning and evaluating on the \\emph{same} language (not zero-shot transfer).}\n",
301
+ "\\label{tab:conll_en}\n",
302
+ "\\end{table}\n",
303
+ "\n",
304
+ "\n",
305
+ "\n",
306
+ "\n",
307
+ "\n",
308
+ "\n",
309
+ "\n",
310
+ "\n",
311
+ "\n",
312
+ "\n",
313
+ "\\subsection{Generalization across scripts}\n",
314
+ "\n",
315
+ "\\mbert{}'s ability to transfer between languages that are written in different scripts, and thus have effectively \\emph{zero} lexical overlap, is surprising given that it was trained on separate monolingual corpora and not with a multilingual objective. To probe deeper into how the model is able to perform this generalization, Table \\ref{tab:pos_different_script} shows a sample of \\pos{} results for transfer across scripts.\n",
316
+ "\n",
317
+ "\n",
318
+ "Among the most surprising results, an \\mbert{} model that has been fine-tuned using only \\pos{}-labeled Urdu (written in Arabic script), achieves 91\\\n",
319
+ "\n",
320
+ "However, cross-script transfer is less accurate for other pairs, such as English and Japanese, indicating that \\mbert{}'s multilingual representation is not able to generalize equally well in all cases. A possible explanation for this, as we will see in section \\ref{sec:typological}, is typological similarity. English and Japanese have a different order of subject, verb and object, while English and Bulgarian have the same, and \\mbert{} may be having trouble generalizing across different orderings.\n",
321
+ "\n",
322
+ "\n",
323
+ "\n",
324
+ "\n",
325
+ "\n",
326
+ "\n",
327
+ "\n",
328
+ "\n",
329
+ "\n",
330
+ "\n",
331
+ "\n",
332
+ "\n",
333
+ "\n",
334
+ "\n",
335
+ "\n",
336
+ "\n",
337
+ "\n",
338
+ "\n",
339
+ "\n",
340
+ "\n",
341
+ "\n",
342
+ "\n",
343
+ "\n",
344
+ "\n",
345
+ "\n",
346
+ "\n",
347
+ "\n",
348
+ "\n",
349
+ "\n",
350
+ "\n",
351
+ "\\begin{table}[t]\n",
352
+ "\\small\n",
353
+ "\\centering\n",
354
+ "\\renewcommand{\\arraystretch}{1} \n",
355
+ "\\subfloat{\n",
356
+ "\\begin{tabular}{lll}\n",
357
+ " & \\textsc{hi} & \\textsc{ur} \\\\\n",
358
+ " \\hline\n",
359
+ " \\textsc{hi} & \\textbf{97.1} & 85.9 \\\\\n",
360
+ " \\textsc{ur} & 91.1 & \\textbf{93.8} \\\\\n",
361
+ " \\\\\n",
362
+ "\\end{tabular}}\n",
363
+ "\\hskip2.5em\n",
364
+ "\n",
365
+ "\\subfloat{\n",
366
+ "\\begin{tabular}{llll}\n",
367
+ " & \\textsc{en} & \\textsc{bg} & \\textsc{ja} \\\\\n",
368
+ " \\hline\n",
369
+ " \\textsc{en} & \\textbf{96.8} & 87.1 & 49.4 \\\\\n",
370
+ " \\textsc{bg} & 82.2 & \\textbf{98.9} & 51.6 \\\\\n",
371
+ " \\textsc{ja} & 57.4 & 67.2 & \\textbf{96.5} \\\\\n",
372
+ "\\end{tabular}}\n",
373
+ "\\qquad\n",
374
+ "\\caption{\\pos{} accuracy on the UD test set for languages with different scripts. Row=fine-tuning, column=eval.\\label{tab:pos_different_script}}\n",
375
+ "\\end{table}\n",
376
+ "\n",
377
+ "\n",
378
+ "\n",
379
+ "\n",
380
+ "\n",
381
+ "\\section{Encoding Linguistic Structure \\label{sec:pos}}\n",
382
+ "\n",
383
+ "In the previous section, we showed that \\mbert{}'s ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation. In this section, we present probing experiments that investigate the nature of that representation: How does typological similarity affect \\mbert{}'s ability to generalize? Can \\mbert{} generalize from monolingual inputs to code-switching text? Can the model generalize to transliterated text without transliterated language model pretraining?\n",
384
+ "\n",
385
+ "\n",
386
+ "\n",
387
+ "\n",
388
+ "\n",
389
+ "\n",
390
+ "\n",
391
+ "\\subsection{Effect of language similarity}\n",
392
+ "Following \\citet{naseem2012selectivesharing}, we compare languages on a subset of the WALS features \\cite{dryer2013wals} relevant to grammatical ordering.\\footnote{81A (Order of Subject, Object and Verb), 85A (Order of Adposition and Noun), 86A (Order of Genitive and Noun), 87A (Order of Adjective and Noun), 88A (Order of Demonstrative and Noun), and 89A (Order of Numeral and Noun).}\n",
393
+ "Figure \\ref{fig:wals} plots \\pos{} zero-shot accuracy against the number of common WALS features. As expected, performance improves with similarity, showing that it is easier for \\mbert{} to map linguistic structures when they are more similar, although it still does a decent job for low similarity languages when compared to \\enbert{}.\n",
394
+ "\n",
395
+ "\n",
396
+ "\n",
397
+ "\\begin{figure}\n",
398
+ " \\centering\n",
399
+ " \\includegraphics[width=7cm]{wals_nofixed_bars}\n",
400
+ " \\caption{Zero-shot \\pos{} accuracy versus number of common WALS features. Due to their scarcity, we exclude pairs with no common features.}\n",
401
+ " \\label{fig:wals}\n",
402
+ "\\end{figure}\n",
403
+ "\n",
404
+ "\n",
405
+ "\n",
406
+ "\n",
407
+ "\n",
408
+ "\n",
409
+ "\n",
410
+ "\n",
411
+ "\n",
412
+ "\n",
413
+ "\n",
414
+ "\n",
415
+ "\n",
416
+ "\n",
417
+ "\\subsection{Generalizing across typological features\n",
418
+ "\\label{sec:typological}}\n",
419
+ "\n",
420
+ "\n",
421
+ "\n",
422
+ "\n",
423
+ "\n",
424
+ "\n",
425
+ "Table \\ref{tab:wals} shows macro-averaged \\pos{} accuracies for transfer between languages grouped according to two typological features: subject/object/verb order, and adjective/noun order\\footnote{\\textbf{SVO languages}: Bulgarian, Catalan, Czech, Danish, English, Spanish, Estonian, Finnish, French, Galician, Hebrew, Croatian, Indonesian, Italian, Latvian, Norwegian (Bokmaal and Nynorsk), Polish, Portuguese (European and Brazilian), Romanian, Russian, Slovak, Slovenian, Swedish, and Chinese. \\textbf{SOV Languages}: Basque, Farsi, Hindi, Japanese, Korean, Marathi, Tamil, Telugu, Turkish, and Urdu.} \\cite{dryer2013wals}. The results reported include only zero-shot transfer, i.e. they do not include cases training and testing on the same language.\n",
426
+ "\n",
427
+ "\n",
428
+ "We can see that performance is best when transferring between languages that share word order features, suggesting that while \\mbert{}'s multilingual representation is able to map learned structures onto new vocabularies, it does not seem to learn systematic transformations of those structures to accommodate a target language with different word order.\n",
429
+ "\n",
430
+ "\n",
431
+ "\n",
432
+ "\n",
433
+ "\\begin{table}[t]\n",
434
+ "\\small\n",
435
+ "\\centering\n",
436
+ "\\subfloat[Subj./verb/obj. order.\\label{tab:pos_svo}]{\n",
437
+ "\\begin{tabular}{lll}\n",
438
+ " & SVO & SOV \\\\\n",
439
+ " \\hline\n",
440
+ " SVO & \\textbf{81.55} & 66.52 \\\\\n",
441
+ " SOV & 63.98 & \\textbf{64.22} \\\\\n",
442
+ "\\end{tabular}}\n",
443
+ "\\qquad\n",
444
+ "\\subfloat[Adjective/noun order.\\label{tab:pos_an}]{\n",
445
+ "\\begin{tabular}{lll}\n",
446
+ " & AN & NA \\\\\n",
447
+ " \\hline\n",
448
+ " AN & \\textbf{73.29} & 70.94 \\\\\n",
449
+ " NA & 75.10 & \\textbf{79.64} \\\\\n",
450
+ "\\end{tabular}}\n",
451
+ "\\qquad\n",
452
+ "\\caption{Macro-average \\pos{} accuracies when transferring between SVO/SOV languages or AN/NA languages. Row = fine-tuning, column = evaluation.}\n",
453
+ "\\label{tab:wals}\n",
454
+ "\\end{table}\n",
455
+ "\n",
456
+ "\\subsection{Code switching and transliteration}\n",
457
+ "Code-switching (CS)---the mixing of multiple languages within a single utterance---and transliteration---writing that is not in the language's standard script---present unique test cases for \\mbert{}, which is pre-trained on monolingual, standard-script corpora.\n",
458
+ "\n",
459
+ "Generalizing to code-switching is similar to other cross-lingual transfer scenarios, but would benefit to an even larger degree from a shared multilingual representation. Likewise, generalizing to transliterated text is similar to other cross-script transfer experiments, but has the additional caveat that \\mbert{} was not pre-trained on text that looks like the target.\n",
460
+ "\n",
461
+ "\n",
462
+ "\n",
463
+ "We test \\mbert{} on the CS Hindi/English UD corpus from \\citet{bhat2018udcs}, which provides texts in two formats: \\emph{transliterated}, where Hindi words are written in Latin script, and \\emph{corrected}, where annotators have converted them back to Devanagari script.\n",
464
+ "Table \\ref{tab:pos_cs} shows the results for models fine-tuned using a combination of monolingual Hindi and English, and using the CS training set (both fine-tuning on the script-corrected version of the corpus as well as the transliterated version).\n",
465
+ "\n",
466
+ "\n",
467
+ "\n",
468
+ "\n",
469
+ "\n",
470
+ "\n",
471
+ "\n",
472
+ "\n",
473
+ "\n",
474
+ "\n",
475
+ "\n",
476
+ "\n",
477
+ "\n",
478
+ "\n",
479
+ "\n",
480
+ "\n",
481
+ "\n",
482
+ "\n",
483
+ "\\begin{table}[t]\n",
484
+ "\\small\n",
485
+ "\\centering\n",
486
+ "\\begin{tabular}{lrr}\n",
487
+ " & Corrected & Transliterated \\\\\n",
488
+ " \\hline\n",
489
+ " \\multicolumn{3}{l}{Train on monolingual \\textsc{hi}+\\textsc{en}} \\\\\n",
490
+ " \\quad{} \\mbert{} & 86.59 & 50.41 \\\\\n",
491
+ " \\quad{} \\citet{ball2018codeswitching} & --- & 77.40\\vspace{0.5mm}\\\\ \n",
492
+ " \\multicolumn{3}{l}{Train on code-switched \\textsc{hi}/\\textsc{en}} \\\\\n",
493
+ " \\quad{} \\mbert{} & 90.56 & 85.64 \\\\\n",
494
+ " \\quad{} \\citet{bhat2018udcs} & --- & 90.53 \\\\\n",
495
+ "\\end{tabular}\n",
496
+ "\\caption{\\mbert{}'s \\pos{} accuracy on the code-switched Hindi/English dataset from \\citet{bhat2018udcs}, on script-corrected and original (transliterated) tokens, and comparisons to existing work on code-switch \\pos{}.}\n",
497
+ "\\label{tab:pos_cs}\n",
498
+ "\\end{table}\n",
499
+ "\n",
500
+ "For script-corrected inputs, i.e., when Hindi is written in Devanagari, \\mbert{}'s performance when trained only on monolingual corpora is comparable to performance when training on code-switched data, and it is likely that some of the remaining difference is due to domain mismatch. This provides further evidence that \\mbert{} uses a representation that is able to incorporate information from multiple languages. \n",
501
+ "\n",
502
+ "However, \\mbert{} is not able to effectively transfer to a transliterated target, suggesting that it is the language model pre-training on a particular language that allows transfer to that language. \\mbert{} is outperformed by previous work in both the monolingual-only and code-switched supervision scenarios. Neither \\citet{ball2018codeswitching} nor \\citet{bhat2018udcs} use contextualized word embeddings, but both incorporate explicit transliteration signals into their approaches.\n",
503
+ "\n",
504
+ "\n",
505
+ "\n",
506
+ "\n",
507
+ "\n",
508
+ "\n",
509
+ "\\section{Multilingual characterization of the feature space \\label{sec:vector_translation}}\n",
510
+ "In this section, we study the structure of \\mbert{}'s feature space. If it is multilingual, then the transformation mapping between the same sentence in $2$ languages should not depend on the sentence itself, just on the language pair.\n",
511
+ "\n",
512
+ "\\subsection{Experimental Setup}\n",
513
+ "We sample $5000$ pairs of sentences from WMT16 \\cite{wmt2016sharedtask} and feed each sentence (separately) to \\mbert{} with no fine-tuning. We then extract the hidden feature activations at each layer for each of the sentences, and average the representations for the input tokens except \\textsc{[cls]} and \\textsc{[sep]}, to get a vector for each sentence, at each layer $l$, $v_\\textsc{lang}^{(l)}$.\n",
514
+ "For each pair of sentences, e.g. $(v_{\\textsc{en}_i}^{(l)}, v_{\\textsc{de}_i}^{(l)})$, we compute the vector pointing from one to the other and average it over all pairs: $\\bar{v}_{\\textsc{en}\\rightarrow \\textsc{de}}^{(l)} = \\frac{1}{M}\\sum_i \\left(v_{\\textsc{de}_i}^{(l)} - v_{\\textsc{en}_i}^{(l)}\\right)$, where $M$ is the number of pairs.\n",
515
+ "Finally, we translate each sentence, $v_{\\textsc{en}_i}^{(l)}$, by $\\bar{v}_{\\textsc{en}\\rightarrow \\textsc{de}}^{(l)}$, find the closest German sentence vector\\footnote{In terms of $\\ell_2$ distance.}, and measure the fraction of times the nearest neighbour is the correct pair, which we call the ``nearest neighbor accuracy''.\n",
516
+ "\n",
517
+ "\n",
518
+ "\n",
519
+ "\n",
520
+ "\\subsection{Results}\n",
521
+ "In Figure \\ref{fig:vector_translation}, we plot the nearest neighbor accuracy for \\textsc{en}-\\textsc{de} (solid line). It achieves over $50\\\n",
522
+ "\n",
523
+ "\\begin{figure}\n",
524
+ " \\centering\n",
525
+ " \\includegraphics[width=7cm]{NearestNeighborCrossLing_ru}\n",
526
+ " \\caption{Accuracy of nearest neighbor translation for \\textsc{en}-\\textsc{de}, \\textsc{en}-\\textsc{ru}, and \\textsc{hi}-\\textsc{ur}.}\n",
527
+ " \\label{fig:vector_translation}\n",
528
+ "\\end{figure}\n",
529
+ "\n",
530
+ "As to the reason why the accuracy goes down in the last few layers, one possible explanation is that since the model was pre-trained for language modeling, it might need more language-specific information to correctly predict the missing word.\n",
531
+ "\n",
532
+ "\\section{Conclusion}\n",
533
+ "\n",
534
+ "\n",
535
+ "\n",
536
+ "\n",
537
+ "\n",
538
+ "In this work, we showed that \\mbert{}'s robust, often surprising, ability to generalize cross-lingually is underpinned by a multilingual representation, without being explicitly trained for it. The model handles transfer across scripts and to code-switching fairly well, but effective transfer to typologically divergent and transliterated targets will likely require the model to incorporate an explicit multilingual training objective, such as that used by \\citet{lample2019crosslingualpretraining} or \\citet{artetxe2018massively}.\n",
539
+ "\n",
540
+ "\n",
541
+ "\n",
542
+ "\n",
543
+ "As to why \\mbert{} generalizes across languages, we hypothesize that having word pieces used in all languages (numbers, URLs, etc) which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space.\n",
544
+ "\n",
545
+ "It is our hope that these kinds of probing experiments will help steer researchers toward the most promising lines of inquiry by encouraging them to focus on the places where current contextualized word representation approaches fall short.\n",
546
+ "\n",
547
+ "\n",
548
+ "\n",
549
+ "\n",
550
+ "\\section{Acknowledgements}\n",
551
+ "We would like to thank Mark Omernick, Livio Baldini Soares,\n",
552
+ "\n",
553
+ "\n",
554
+ "Emily Pitler, Jason Riesa, and Slav Petrov for the valuable discussions and feedback.\n",
555
+ "\n",
556
+ "\n",
557
+ "\\bibliography{acl2019}\n",
558
+ "\\bibliographystyle{acl_natbib}\n",
559
+ "\n",
560
+ "\n",
561
+ "\\appendix\n",
562
+ "\n",
563
+ "\\section{Model Parameters}\n",
564
+ "\n",
565
+ "All models were fine-tuned with a batch size of $32$, and a maximum sequence length of $128$ for $3$ epochs. We used a learning rate of $3\\mathrm{e}{-5}$ with learning rate warmup during the first $10\\\n",
566
+ "We used the \\texttt{BERT-Base, Multilingual Cased} checkpoint from \\url{https://github.com/google-research/bert}.\n",
567
+ "\n",
568
+ "\\section{CoNLL Results for \\enbert{}}\n",
569
+ "\n",
570
+ "\\begin{table}[H]\n",
571
+ "\\centering\n",
572
+ "\\small\n",
573
+ "\\begin{tabular}{lllll}\n",
574
+ " Fine-tuning \\textbackslash Eval & \\textsc{en} & \\textsc{de} & \\textsc{nl} & \\textsc{es} \\\\\n",
575
+ " \\hline\n",
576
+ " \\textsc{en} & \\textbf{91.07} & 24.38 & 40.62 & 49.99 \\\\\n",
577
+ " \\textsc{de} & 55.36 & \\textbf{73.32} & 54.84 & 50.80 \\\\\n",
578
+ " \\textsc{nl} & 59.36 & 27.57 & \\textbf{84.23} & 53.15 \\\\\n",
579
+ " \\textsc{es} & 55.09 & 26.13 & 48.75 & \\textbf{81.84} \\\\\n",
580
+ "\\end{tabular}\n",
581
+ "\\caption{\\ner{} results on the CoNLL test sets for \\enbert{}. The row is the fine-tuning language, the column the evaluation language. There is a big gap between this model's zero-shot performance and \\mbert{}'s, showing that the pre-training is helping in cross-lingual transfer.}\n",
582
+ "\\end{table}\n",
583
+ "\n",
584
+ "\\section{Some \\pos{} Results for \\enbert{}}\n",
585
+ "\n",
586
+ "\\begin{table}[H]\n",
587
+ "\\small\n",
588
+ "\\centering\n",
589
+ "\\begin{tabular}{llllll}\n",
590
+ " Fine-tuning \\textbackslash Eval & \\textsc{en} & \\textsc{de} & \\textsc{es} & \\textsc{it} \\\\\n",
591
+ " \\hline\n",
592
+ " \\textsc{en} & \\textbf{96.94} & 38.31 & 50.38 & 46.07 \\\\\n",
593
+ " \\textsc{de} & 28.62 & \\textbf{92.63} & 30.23 & 25.59 \\\\\n",
594
+ " \\textsc{es} & 28.78 & 46.15 & \\textbf{94.36} & 71.50 \\\\\n",
595
+ " \\textsc{it} & 52.48 & 48.08 & 76.51 & \\textbf{96.41} \\\\\n",
596
+ "\\end{tabular}\n",
597
+ "\\caption{\\pos{} accuracy on the UD test sets for a subset of European languages using \\enbert{}. The row specifies a fine-tuning language, the column the evaluation language. There is a big gap between this model's zero-shot performance and \\mbert{}'s, showing the pre-training is helping learn a useful cross-lingual representation for grammar.}\n",
598
+ "\\end{table}\n",
599
+ "\n",
600
+ "\n",
601
+ "\n",
602
+ "\n",
603
+ "\n",
604
+ "\n",
605
+ "\n",
606
+ "\n",
607
+ "\n",
608
+ "\n",
609
+ "\n",
610
+ "\n",
611
+ "\n",
612
+ "\n",
613
+ "\n",
614
+ "\n",
615
+ "\n",
616
+ "\n",
617
+ "\n",
618
+ "\n",
619
+ "\n",
620
+ "\n",
621
+ "\n",
622
+ "\n",
623
+ "\n",
624
+ "\n",
625
+ "\n",
626
+ "\n",
627
+ "\n",
628
+ "\n",
629
+ "\n",
630
+ "\n",
631
+ "\n",
632
+ "\n",
633
+ "\n",
634
+ "\n",
635
+ "\n",
636
+ "\n",
637
+ "\n",
638
+ "\n",
639
+ "\n",
640
+ "\n",
641
+ "\n",
642
+ "\n",
643
+ "\n",
644
+ "\n",
645
+ "\n",
646
+ "\n",
647
+ "\n",
648
+ "\n",
649
+ "\n",
650
+ "\n",
651
+ "\n",
652
+ "\n",
653
+ "\n",
654
+ "\n",
655
+ "\n",
656
+ "\n",
657
+ "\n",
658
+ "\n",
659
+ "\n",
660
+ "\n",
661
+ "\n",
662
+ "\n",
663
+ "\n",
664
+ "\n",
665
+ "\n",
666
+ "\n",
667
+ "\n",
668
+ "\n",
669
+ "\n",
670
+ "\n",
671
+ "\n",
672
+ "\n",
673
+ "\n",
674
+ "\n",
675
+ "\n",
676
+ "\n",
677
+ "\n",
678
+ "\n",
679
+ "\n",
680
+ "\n",
681
+ "\n",
682
+ "\n",
683
+ "\n",
684
+ "\n",
685
+ "\n",
686
+ "\n",
687
+ "\\end{document}\n"
688
+ ],
689
+ "del_percentage": 0.09091
690
+ }
691
+ }
Experiment_Design/1906.01502/images/NearestNeighborCrossLing_ru.png ADDED

Git LFS Details

  • SHA256: 2a0fc18387c77e59bfe398960c3075c21622f9d0a4532daff640ea8d2e5f5460
  • Pointer size: 130 Bytes
  • Size of remote file: 46.7 kB
Experiment_Design/1906.01502/images/ner_overlap.png ADDED

Git LFS Details

  • SHA256: 0c309cbf16262656bcbf665db6e9a1fc7b61e93418e78187bb8a26c8ca7c89f1
  • Pointer size: 130 Bytes
  • Size of remote file: 59.1 kB
Experiment_Design/1906.01502/images/wals_nofixed_bars.png ADDED

Git LFS Details

  • SHA256: 3d81eff5b0030562a9c17642cf4a72b270168de55f69d22c30465114cb127c35
  • Pointer size: 130 Bytes
  • Size of remote file: 43.8 kB
Experiment_Design/1906.03158/1906.03158_source.tar.gz ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0d4ebd2a2167fe024163dd78a116abd832bff691f801792b0ad537d3815481c
3
+ size 470099
Experiment_Design/1906.03158/data_text.json ADDED
The diff for this file is too large to render. See raw diff
 
Experiment_Design/1906.03158/images/cls_basic.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7f4a63df561a2f8acb16dfbf7756ca34a1cf02fb681aa8aeac412a566591bf05
3
+ size 34609
Experiment_Design/1906.03158/images/cls_basic.svg ADDED
Experiment_Design/1906.03158/images/entity_mention_pooling.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dd19a23c310bc83c76f807579cfac09eb322fb4ba2a3084f6b40fd8a4a9179bf
3
+ size 34836
Experiment_Design/1906.03158/images/fewrel_limit_examples.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1d1c7f893037220efd50d9350138dbeb89ba69f039c976b5c99879784ddd1908
3
+ size 27408
Experiment_Design/1906.03158/images/fewrel_limit_examples_10x1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0780b5efa970c9fcceca9b574f9bfd19884d2b8170989b56ebd5715709e4425
3
+ size 25252
Experiment_Design/1906.03158/images/fewrel_limit_examples_5x1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:52a419bd3dda746fae181b3fee14eba0e971664a653dbe1f95d8e667a981607c
3
+ size 25389
Experiment_Design/1906.03158/images/fewrel_limit_relations.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:87fd982d7ea6d5184d58949377409879dc6826b2fc5d0b6b345b385e66cd6380
3
+ size 26743
Experiment_Design/1906.03158/images/fewrel_limit_relations_10x1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0a01621a3bdd94f81092fb8cbb2ae3c4e4207568dcd55b0d99e06130730e24b6
3
+ size 24580
Experiment_Design/1906.03158/images/fewrel_limit_relations_5x1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1fe365a4be2b847ab84748f219eccddcde62c8e1faa26e661c7eaeb646fddbe1
3
+ size 24822
Experiment_Design/1906.03158/images/fewshot_training.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0821ee3fef76e438a2cd8eacf9c5a2792288f6b4bbcf52773601600edde13e29
3
+ size 25414
Experiment_Design/1906.03158/images/markers_and_cls.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4a45a308910298c0c8cac42cfa877bd282df0addce80f01ed993fcc0a1295a31
3
+ size 35571
Experiment_Design/1906.03158/images/markers_and_entity_mentions.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f8105e43911c5907774f0664fbacef96e23ed9f9cf95ae40f466b632c0c11f6
3
+ size 35844
Experiment_Design/1906.03158/images/markers_and_markers_pooling.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f06da8ae5724963cbedd70f8008868b9388fb9571897b099bb97cd5eee74bac
3
+ size 35765
Experiment_Design/1906.03158/images/mtb_train_progress.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d144bd170b559fb04221ee1313da2057aadc6454db87c35c952d56c44ea98db6
3
+ size 12857
Experiment_Design/1906.03158/images/positional_embeddings.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef6fdefc5d006e49dd83a471aad8958e5d7796c62a957dd941c3be96a253cf19
3
+ size 52511
Experiment_Design/1906.03158/images/positional_embeddings_old.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:402b127fe3d95833592526b84f81fb76b060193a857ce12386aba62470f7e8f1
3
+ size 52374
Experiment_Design/1906.03158/images/supervised_training.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:651ff124ed7602c8ac63f1baeec47a41c97b4a15ef3814306fe30cd2d26a7fb6
3
+ size 27346
Experiment_Design/1906.03158/images/used/cls_basic_page_1.png ADDED

Git LFS Details

  • SHA256: 9f120ab6fd15a516fba74c5ee92e1eff7af48cceae82efdfb09994e6ac834a73
  • Pointer size: 129 Bytes
  • Size of remote file: 9.89 kB
Experiment_Design/1906.03158/images/used/entity_mention_pooling_page_1.png ADDED

Git LFS Details

  • SHA256: ce1c4f1a220d478a5a3c0c84622ad9258a75979ae8fcfe2835ca73ea5ba64686
  • Pointer size: 130 Bytes
  • Size of remote file: 10.5 kB
Experiment_Design/1906.03158/images/used/fewshot_training_page_1.png ADDED

Git LFS Details

  • SHA256: 5c814347d5d59d3aec8cd648d4b0257846c9b2bb2813198a80f05111e1be46b3
  • Pointer size: 130 Bytes
  • Size of remote file: 16.9 kB
Experiment_Design/1906.03158/images/used/markers_and_cls_page_1.png ADDED

Git LFS Details

  • SHA256: 9ab7776ea1fad6c194b8ac9ef8a2f57fd1e5f027836bf63bfb027fdefb5168a0
  • Pointer size: 130 Bytes
  • Size of remote file: 10.5 kB
Experiment_Design/1906.03158/images/used/markers_and_entity_mentions_page_1.png ADDED

Git LFS Details

  • SHA256: f63452765bca188c526ef6b089cba35b2acbce6acd63e19af4b6e3d9cffdb68b
  • Pointer size: 130 Bytes
  • Size of remote file: 11.6 kB
Experiment_Design/1906.03158/images/used/markers_and_markers_pooling_page_1.png ADDED

Git LFS Details

  • SHA256: be39aefb0127fffe808014047179cf298c4f7b2471e9d78b3ce19a31a1b59425
  • Pointer size: 130 Bytes
  • Size of remote file: 11 kB
Experiment_Design/1906.03158/images/used/positional_embeddings_page_1.png ADDED

Git LFS Details

  • SHA256: c31e981427f9022f17dce39054bc85b097b9acdee3bcb9448dd04447cac77a6f
  • Pointer size: 130 Bytes
  • Size of remote file: 15.1 kB
Experiment_Design/1906.03158/images/used/supervised_training_page_1.png ADDED

Git LFS Details

  • SHA256: aca4790d2565ac1c7573c879f33b5fa2fdc863f7a25a4cd7cbcc75eae15ea429
  • Pointer size: 130 Bytes
  • Size of remote file: 17.5 kB