{ "id": "2110.15943", "annotator": "yuxuan", "input": [ "\\pdfoutput=1\n", "\\documentclass[11pt]{article}\n", "\\usepackage{acl}\n", "\\usepackage{times}\n", "\\usepackage{latexsym}\n", "\\usepackage{url}\n", "\\usepackage{hyperref}\n", "\\usepackage{enumitem}\n", "\\usepackage{booktabs}\n", "\\usepackage{multirow}\n", "\\usepackage{array}\n", "\\usepackage{graphicx}\n", "\\usepackage{amsmath}\n", "\\usepackage{amsfonts}\n", "\\usepackage{subcaption}\n", "\\usepackage{makecell}\n", "\\usepackage{color,soul}\n", "\\usepackage{algorithm}\n", "\\usepackage[noend]{algpseudocode}\n", "\\usepackage{xcolor,colortbl}\n", "\\usepackage{xspace}\n", "\\definecolor{green}{rgb}{0.1,0.1,0.1}\n", "\\definecolor{gitgreen}{HTML}{006400}\n", "\\definecolor{chocolate}{HTML}{D2691E}\n", "\\definecolor{maroon}{HTML}{A00000}\n", "\\definecolor{indigo}{HTML}{4B0082}\n", "\\definecolor{green}{HTML}{008000}\n", "\\definecolor{red}{HTML}{e41a1c}\n", "\\usepackage{amssymb}\n", "\\usepackage{pifont}\n", "\\newcommand{\\cmark}{{\\protect\\color{maroon} \\ding{51}}}\n", "\\newcommand{\\xmark}{\\ding{55}}\n", "\\newcommand{\\red}[1]{{\\protect\\color{red} #1}}\n", "\\usepackage[T1]{fontenc}\n", "\\usepackage[utf8]{inputenc}\n", "\\usepackage{microtype}\n", "\\newcommand{\\ours}{MetaICL}\n", "\\newcommand{\\ourslong}{\\textbf{Meta}-training for \\textbf{I}n-\\textbf{C}ontext \\textbf{L}earning}\n", "\\newcommand{\\main}{HR$\\rightarrow$LR}\n", "\\newcommand\\myrotatebox[2]{\\multirow{#1}{*}{\\rotatebox[origin=c]{90}{#2}}}\n", "\\newcommand{\\code}{\n", " \\href{https://github.com/facebookresearch/MetaICL}{\\nolinkurl{github.com/facebookresearch/MetaICL}}\n", "}\n", "\\newcommand{\\hanna}[1]{\\textcolor{magenta}{[#1 ({\\bf Hanna})]}}\n", "\\newcommand{\\luke}[1]{\\textcolor{purple}{[#1 ({\\bf Luke})]}}\n", "\\newcommand{\\mike}[1]{\\textcolor{brown}{[#1 ({\\bf Mike})]}}\n", "\\newcommand{\\sewon}[1]{\\textcolor{green}{[#1 ({\\bf Sewon})]}}\n", 
"\\newcommand{\\updated}[1]{\\textcolor{green}{#1}}\n", "\\title{\\ours: Learning to Learn In Context}\n", "\\newcommand{\\affilsup}[1]{\\rlap{\\textsuperscript{\\normalfont#1}}}\n", "\\newcommand*\\samethanks[1][\\value{footnote}]{\\footnotemark[#1]}\n", "\\author{\n", " Sewon Min\\affilsup{1,2} \\quad \n", " ~~~Mike Lewis\\affilsup{2} \\quad \n", " ~~Luke Zettlemoyer\\affilsup{1,2} \\quad\n", " ~~~Hannaneh Hajishirzi\\affilsup{1,3} \n", " \\\\\n", " $^1$University of Washington \\qquad\n", " $^2$Meta AI \\qquad\n", " $^3$Allen Institute for AI \\\\\n", " \\texttt{\\{sewon,lsz,hannaneh\\}@cs.washington.edu} \\qquad \n", " \\texttt{mikelewis@fb.com} \\\\\n", "}\n", "\\begin{document}\n", "\\maketitle\n", "\\begin{abstract}\n", "We introduce \\ours\\ (\\ourslong), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks.\n", "This meta-training enables the model to more effectively learn a new task in context at test time, by simply conditioning on a few training examples with no parameter updates or task-specific templates.\n", "We also show that \\ours\\ approaches (and sometimes beats) the performance of models fully finetuned on the target task, and outperforms much bigger models with nearly 8x parameters.\n", "\\end{abstract}\n", "Large language models (LMs) have recently been shown to be able to do {\\em in-context learning}~\\citep{brown2020language}, where they learn a new task simply by conditioning on a few training examples and predicting which tokens best complete a test input. \n", "This type of learning is attractive because the model learns a new task through inference alone, without any parameter updates.\n", "However, performance significantly lags behind supervised finetuning, results are often high variance~\\citep{zhao2021calibrate,perez2021true}, and it can be difficult to engineer the templates that convert existing tasks to this format. 
\n", "Simply finetuning the model in this data setup directly leads to better in-context learning---the model learns to recover the semantics of the task from the given examples, as must be done for in-context learning of a new task at test time.\n", "This approach is related to recent work that uses multi-task learning for better zero-shot performance at test time~\\citep{ khashabi2020unifiedqa,zhong2021adapting,mishra2022cross,wei2022finetuned,sanh2022multitask}.\n", "This leads to 52 unique target tasks in total, which is the largest among all recent related work to the best of our knowledge.\n", "\\ours\\ often gets close to (and sometimes beats) the performance of models trained with supervised finetuning on the target datasets, and perform as well as models with 8x parameters.\n", "Code and data are publicly released at \\code.\n", "\\begin{table*}[t]\n", " \\centering \\footnotesize\n", " \\begin{tabular}{lll }\n", " \\toprule\n", " & Meta-training & Inference \\\\\n", " \\midrule\n", " \\cmidrule{1-3}\n", " \\multirow{2}{*}{Data given} &\n", " &\n", " & Test input $x$ \\\\\n", " \\cmidrule{1-3}\n", " \\multirow{4}{*}{Objective} & For each iteration, & \\multirow{4}{*}{$\\mathrm{argmax}_{c \\in \\mathcal{C}}P(c|x_1,y_1,\\cdots,x_k,y_k,x)$} \\\\\n", " & ~~~1. Sample task $i \\in [1, C]$ \\\\\n", " & ~~~2. Sample $k+1$ examples from $\\mathcal{T}_i$: $(x_1,y_1),\\cdots,(x_{k+1},y_{k+1})$ \\\\\n", " & ~~~3. 
Maximize $P(y_{k+1}|x_1,y_1,\\cdots,x_k,y_k,x_{k+1})$ \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.1em}\n", " \\caption{\n", " Overview of \\ours. \\ours\\ uses the same in-context learning setup at both meta-training and inference. At meta-training time, $k+1$ examples for a task are sampled, where the last example acts as the test example and the remaining $k$ examples act as the training examples. Inference is the same as typical in-context learning, where $k$ labeled examples are used to make a prediction for a test input.\n", " }\\label{tab:overview}\n", "\\end{table*}\n", "\\vspace{.2em}\n", "\\paragraph{In-context learning}\n", "\\citet{brown2020language} propose to use a language model (LM) conditioned on a concatenation of training examples for few-shot learning with no parameter updates.\n", "It has been further improved by later work~\\citep{zhao2021calibrate, holtzman2021surface,min2022noisy}, showing promising results on a variety of tasks.\n", "However, in-context learning with an LM achieves poor performance when the target task is very different from language modeling in nature or the LM is not large enough. Moreover, it can have high variance and poor worst-case accuracy~\\citep{perez2021true,lu2021fantastically}.\n", "Our paper is based on the core idea of in-context learning by conditioning on training examples. We show that, by explicitly training on an in-context learning objective, \\ours\\ achieves substantial improvements even with smaller LMs. 
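The meta-training objective in Table~\ref{tab:overview} can be sketched directly: at each iteration, sample one meta-training task, sample $k+1$ of its examples, and compute the loss on the last example conditioned on the first $k$. In this sketch, `loss_fn` is a hypothetical placeholder for the cross-entropy term $-\log P(y_{k+1}|x_1,y_1,\cdots,x_k,y_k,x_{k+1})$.

```python
import random


def metaicl_training_step(meta_training_tasks, k, loss_fn, rng=random):
    """One MetaICL meta-training iteration:
    1. sample a task i, 2. sample k+1 examples from T_i,
    3. use the first k as demonstrations and supervise the (k+1)-th output.
    `loss_fn` is a placeholder for -log P(y_{k+1} | x_1, y_1, ..., x_{k+1})."""
    task = rng.choice(meta_training_tasks)          # step 1
    examples = rng.sample(task, k + 1)              # step 2
    demonstrations, (x_last, y_last) = examples[:k], examples[-1]
    return loss_fn(demonstrations, x_last, y_last)  # step 3
```

In practice the demonstrations and the final example are concatenated into one sequence for the LM; the sketch keeps them separate only to make the supervision target explicit.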
\n", "\\vspace{.2em}\n", "\\paragraph{Meta-training via multi-task learning}\n", "Our work is broadly inspired by a large body of work in meta-learning~\\citep{vilalta2002perspective,finn2017model} and multi-task learning~\\citep{evgeniou2004regularized,ruder2017overview}.\n", "Prior work has shown that multi-task learning on a large collection of tasks leads to better performance on a new task, either when tested zero-shot~\\citep{khashabi2020unifiedqa,zhong2021adapting,mishra2022cross,wei2022finetuned} or when further finetuned~\\citep{aghajanyan2021muppet,ye2021crossfit}.\n", "In particular, the former is closely related to our work, as it eliminates the need for parameter updates on a target task.\n", "However, these zero-shot models are either limited to tasks sharing the same format as training tasks (e.g., a question answering format)~\\citep{khashabi2020unifiedqa,zhong2021adapting}, or rely heavily on task-specific templates~\\citep{mishra2022cross,wei2022finetuned,sanh2022multitask}\n", "which are difficult to engineer due to high variance in performance from very small changes~\\citep{mishra2021reframing}.\n", "\\citet{chen2022meta}, concurrently to our work, propose meta-training for in-context learning. 
Our approach differs in a number of ways; notably,\n", "we remove the requirements of task-specific templates and task reformatting.\n", "We introduce \\ours: \\ourslong.\n", "Table~\\ref{tab:overview} provides an overview of the approach.\n", "\\subsection{Meta-training}\\label{subsec:meta-training}\n", "The model is meta-trained on a collection of tasks which we call meta-training tasks.\n", "\\subsection{Inference}\\label{subsec:inference}\n", "For a new target task, the model is given $k$ training examples $(x_1,y_1),\\cdots,(x_k,y_k)$ as well as a test input $x$.\n", "It is also given a set of candidates $\\mathcal{C}$ which is either a set of labels (in classification) or answer options (in question answering).\n", "As in meta-training, the model takes a concatenation of $x_1, y_1, \\cdots, x_k, y_k, x$ as the input, and computes the conditional probability of each label $c_i \\in \\mathcal{C}$.\n", "The label with the maximum conditional probability is returned as a prediction.\n", "\\subsection{Channel \\ours}\\label{subsec:channel-mic}\n", "We introduce a noisy channel variant of \\ours\\ called Channel \\ours, following \\citet{min2022noisy}.\n", "In the noisy channel model, $P(y|x)$ is reparameterized to\n", "$\\frac{P(x|y)P(y)}{P(x)} \\propto P(x|y)P(y)$.\n", "We follow \\citet{min2022noisy} in using $P(y)=\\frac{1}{|\\mathcal{C}|}$ and modeling $P(x|y)$, which allows us to use the channel approach by simply flipping $x_i$ and $y_i$.\n", "Specifically, at meta-training time, the model is given a concatenation of $y_1, x_1, \\cdots, y_k, x_k, y_{k+1}$ and is trained to generate $x_{k+1}$. At inference, the model computes $\\mathrm{argmax}_{c \\in \\mathcal{C}} P(x|y_1,x_1,\\cdots,y_k,x_k,c)$. 
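The direct and channel variants differ only in how the demonstration sequence is ordered and which side is scored. A small sketch of both orderings, where `logprob_fn(context, target)` is a hypothetical stand-in for the LM's conditional log-probability:

```python
def direct_sequence(demos, x, c):
    # Direct: score candidate c given x_1, y_1, ..., x_k, y_k, x.
    ctx = [t for (xi, yi) in demos for t in (xi, yi)] + [x]
    return ctx, c


def channel_sequence(demos, x, c):
    # Noisy channel: flip each pair and score the input x
    # given y_1, x_1, ..., y_k, x_k, c.
    ctx = [t for (xi, yi) in demos for t in (yi, xi)] + [c]
    return ctx, x


def channel_predict(logprob_fn, demos, x, candidates):
    """argmax_c P(x | y_1, x_1, ..., y_k, x_k, c), assuming the uniform
    prior P(y) = 1/|C| so the prior term drops out of the argmax."""
    def score(c):
        ctx, tgt = channel_sequence(demos, x, c)
        return logprob_fn(ctx, tgt)
    return max(candidates, key=score)
```

Because $P(y)$ is uniform over $\mathcal{C}$, comparing $P(x|y)$ across candidates is equivalent to comparing $P(x|y)P(y)$, which is why the sketch scores the flipped sequence alone.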
\n", "\\begin{table}[t]\n", " \\centering \\footnotesize\n", " \\begin{tabular}{l @{\\hspace{-.6em}} r @{\\hspace{0.4em}} r l @{\\hspace{-.8em}} r}\n", " \\toprule\n", " \\multicolumn{3}{c}{Meta-train} & \\multicolumn{2}{c}{Target} \\\\\n", " \\cmidrule(lr){1-3} \\cmidrule(lr){4-5}\n", " \\midrule\n", " HR & 61 & 819,200 & LR & 26 \\\\\n", " \\cmidrule(lr){1-5}\n", " Classification & 43 & 384,022 & \\multirow{2}{*}{Classification} & \\multirow{2}{*}{20} \\\\\n", " Non-Classification & 37 & 368,768 & & \\\\\n", " \\cmidrule(lr){1-5}\n", " QA & 37 & 486,143 & \\multirow{2}{*}{QA} & \\multirow{2}{*}{22} \\\\\n", " Non-QA & 33 & 521,342 & & \\\\\n", " \\cmidrule(lr){1-5}\n", " Non-NLI & 55 & 463,579 & NLI & 8 \\\\\n", " \\cmidrule(lr){1-5}\n", " Non-Paraphrase & 59 & 496,106 & Paraphrase & 4 \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.1em}\n", " \\caption{Statistics of seven different settings.\n", " `HR' and `LR' indicate high resource and low resource, respectively.\n", " Datasets and the task ontology are taken from \\textsc{CrossFit}~\\citep{ye2021crossfit} and \\textsc{UnifiedQA}~\\citep{khashabi2020unifiedqa}.\n", " Full datasets for each split are provided in Appendix~\\ref{app:dataset}.\n", " }\\label{tab:data-summary}\n", "\\end{table}\n", "\\begin{table}[t]\n", " \\centering \\footnotesize\n", " \\begin{tabular}{l \n", " ccc}\n", " \\toprule\n", " \\multirow{2}{*}{Method} &\n", " Meta &\n", " \\multicolumn{2}{c}{Target} \\\\\n", " \\cmidrule(lr){2-2} \\cmidrule(lr){3-4}\n", " & train & train & \\# samples \\\\\n", " \\midrule\n", " \\multicolumn{4}{l}{\\textbf{\\em LMs}} \\\\\n", " 0-shot & \\xmark & \\xmark & 0 \\\\\n", " PMI 0-shot & \\xmark & \\xmark & 0 \\\\\n", " Channel 0-shot & \\xmark & \\xmark & 0 \\\\\n", " PMI In-context & \\xmark & \\xmark & $k$ \\\\\n", " Channel In-context & \\xmark & \\xmark & $k$ \\\\\n", " \\midrule\n", " \\multicolumn{4}{l}{\\textbf{\\em Meta-trained}} \\\\\n", " Multi-task 0-shot & \\cmark & \\xmark & 0 \\\\\n", 
" Channel Multi-task 0-shot & \\cmark & \\xmark & 0 \\\\\n", " Channel \\ours\\ (Ours) & \\cmark & \\xmark & $k$ \\\\\n", " \\midrule\n", " \\multicolumn{4}{l}{\\textbf{\\em Fine-tune}} \\\\\n", " Fine-tune & \\xmark & \\cmark & $k$ \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.3em}\n", " \\caption{Summary of the baselines and \\ours.\n", " }\\label{tab:methods}\n", "\\end{table}\n", "\\subsection{Datasets}\\label{subsec:dataset}\n", "Each target task is either classification or multi-choice, where a set of candidate options ($\\mathcal{C}$ in Table~\\ref{tab:overview}) is given.\n", "\\vspace{.25em}\n", "\\vspace{.25em}\n", "\\vspace{.25em}\n", "\\vspace{.25em}\n", "Full details of the settings and datasets with citations are provided in Appendix~\\ref{app:dataset}.\n", "\\subsection{Baselines}\\label{subsec:baselines}\n", "\\vspace{.2em}\n", "\\noindent \\textbf{0-shot}: We use a pretrained LM as it is and run zero-shot inference, following \\citet{brown2020language}.\n", "\\vspace{.2em}\n", "\\vspace{.2em}\n", "\\noindent \\textbf{PMI 0-shot, PMI In-context}: We use the PMI method from \\citet{holtzman2021surface,zhao2021calibrate} for 0-shot and In-context learning.\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Channel 0-shot, Channel In-context}: We use the noisy channel model from \\citet{min2022noisy} for 0-shot and In-context learning.\n", "\\vspace{.2em}\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Channel Multi-task 0-shot}: We have a channel variant of Multi-task 0-shot.\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Fine-tune}: We fine-tune the LM on an individual target task. 
This is not directly comparable to other methods as parameter updates are required for every target task.\n", "\\vspace{.2em}\n", "\\newcommand{\\prom}{{\\protect\\color{green} [P]}}\n", "\\newcommand{\\hyp}{{\\protect\\color{maroon} [H]}}\n", "\\begin{table}[t]\n", " \\centering \\footnotesize\n", " \\setlength{\\tabcolsep}{0.3em}\n", " \\begin{tabular}{ll}\n", " \\toprule\n", " \\multicolumn{2}{l}{\\prom: Time Warner is the world\u2019s largest media and Internet} \\\\ \\multicolumn{2}{l}{company.} \\\\\n", " \\multicolumn{2}{l}{\\hyp: Time Warner is the world\u2019s largest company.} \\\\\n", " \\multicolumn{2}{l}{Labels: \\texttt{entailment}, \\texttt{not\\_entailment}} \\\\\n", " \\midrule\n", " \\multicolumn{2}{l}{\\textbf{\\em \\citet{holtzman2021surface}}} \\\\\n", " Input & \\prom\\ question: \\hyp\\ true or false? answer: \\\\\n", " Output & $\\{$true, false$\\}$\n", " \\\\\n", " \\cmidrule{1-2}\n", " \\multicolumn{2}{l}{\\textbf{\\em \\citet{wei2022finetuned}}} \\\\\n", " Input & \\prom\\ Based on the paragraph above, can we \\\\\n", " & conclude that \\hyp? 
\\\\\n", " Output & $\\{$yes, no$\\}$\n", " \\\\\n", " \\cmidrule{1-2}\n", " \\multicolumn{2}{l}{\\textbf{\\em Ours}} \\\\\n", " Input & \\prom\\ \\hyp\\ \\\\\n", " Output & $\\{$entailment, not\\_entailment$\\}$ \\\\\n", " \\bottomrule\n", " \\end{tabular}\n", " \\caption{Example input-output pairs for an NLI task.\n", " We show human-authored templates taken from prior work as references.\n", " }\\label{tab:templates}\n", "\\end{table}\n", "\\newcolumntype{C}[1]{>{\\PreserveBackslash\\centering}p{#1}}\n", "\\begin{table*}[t]\n", " \\centering \\footnotesize\n", " \\begin{tabular}{\n", " l @{\\hspace{2em}}\n", " ccccccc\n", " }\n", " \\toprule\n", " Method &\n", " \\main &\n", " \\makecell[c]{Class \\\\ $\\rightarrow$Class} & \\makecell[c]{non-Class \\\\ $\\rightarrow$Class} &\n", " \\makecell[c]{QA \\\\ $\\rightarrow$QA} &\n", " \\makecell[c]{non-QA \\\\ $\\rightarrow$QA} &\n", " \\makecell[c]{non-NLI \\\\ $\\rightarrow$NLI} &\n", " \\makecell[c]{non-Para \\\\ $\\rightarrow$Para} \\\\\n", " \\midrule\n", " \\multicolumn{8}{c}{\\em All target tasks} \\\\\n", " 0-shot & 34.8 & 34.2 & 34.2 & 40.2 & 40.2 & 25.5 & 34.2 \\\\\n", " PMI 0-shot & 35.1 & 33.8 & 33.8 & 40.2 & 40.2 & 27.9 & 39.2 \\\\\n", " Channel 0-shot & 36.5 & 37.3 & 37.3 & 38.7 & 38.7 & 33.9 & 39.5 \\\\\n", " In-context & 38.2/35.3 & 37.4/33.9 & 37.4/33.9 & 40.1/38.7 & 40.1/38.7 & 34.0/28.3 & 33.7/33.1 \\\\\n", " PMI In-context & 39.2/33.7 & 38.8/30.0 & 38.8/30.0 & 40.3/38.8 & 40.3/38.8 & 33.0/28.0 & 38.6/33.4 \\\\\n", " Channel In-context & 43.1/38.5 & 46.3/40.3 & 46.3/40.3 & 40.8/38.1 & 40.8/38.1 & 39.9/34.8 & 45.4/40.9 \\\\\n", " \\cmidrule{1-8}\n", " Multi-task 0-shot & 35.6 & 37.3 & 36.8 & 45.7 & 36.0 & 40.7 & 30.6 \\\\\n", " Channel Multi-task 0-shot & 38.8 & 40.9 & 42.2 & 42.1 & 36.4 & 36.8 & 35.1 \\\\\n", " \\ours & 43.3/41.7 & 43.4/39.9 & 38.1/31.8 & \\textbf{46.0}/44.8 & 38.5/36.8 & 49.0/44.8 & 33.1/33.1 \\\\\n", " Channel \\ours & \\textbf{49.1}/46.8 & \\textbf{50.7}/48.0 & \\textbf{50.6}/48.1 
& 44.9/43.5 & \\textbf{41.9}/40.5 & \\textbf{54.6}/51.9 & \\textbf{52.2}/50.3 \\\\\n", " \\cmidrule{1-8}\n", " Fine-tune & 46.4/40.0 & 50.7/44.0 & 50.7/44.0 & 41.8/39.1 & 41.8/39.1 & 44.3/32.8 & 54.7/48.9 \\\\\n", " Fine-tune w/ meta-train& 52.0/47.9 & 53.5/48.5 & 51.2/44.9 & 46.7/44.5 & 41.8/39.5 & 57.0/44.6 & 53.7/46.9 \\\\\n", " \\toprule\n", " 0-shot & 32.6 & 32.6 & 32.6 & 45.9 & 45.9 & 33.4 & 38.3 \\\\\n", " PMI 0-shot & 28.9 & 28.9 & 28.9 & 44.4 & 44.4 & 33.4 & 32.9 \\\\\n", " Channel 0-shot & 29.1 & 29.1 & 29.1 & 41.6 & 41.6 & 33.1 & 32.6 \\\\\n", " In-context & 30.6/27.5 & 30.6/27.5 & 30.6/27.5 & 45.6/44.7 & 45.6/44.7 & 52.0/41.3 & 35.8/34.1 \\\\\n", " PMI In-context & 34.9/27.7 & 34.9/27.7 & 34.9/27.7 & 45.4/44.7 & 45.4/44.7 & 47.8/35.2 & 38.5/33.3 \\\\\n", " Channel In-context & 39.6/33.6 & 39.6/33.6 & 39.6/33.6 & 44.7/40.6 & 44.7/40.6 & 40.4/35.7 & 44.1/36.8 \\\\\n", " \\cmidrule{1-8}\n", " Multi-task 0-shot & 35.4 & 28.0 & 28.6 & \\textbf{71.2} & 40.3 & 33.5 & 35.0 \\\\\n", " Channel Multi-task 0-shot & 36.3 & 31.1 & 34.3 & 54.4 & 39.4 & 50.8 & 34.1 \\\\\n", " \\ours & 35.3/32.7 & 32.3/29.3 & 28.1/25.1 & 69.9/68.1 & \\textbf{48.3}/47.2 & \\textbf{80.1}/77.2 & 34.0/34.0 \\\\\n", " Channel \\ours & \\textbf{47.7}/44.7 & \\textbf{41.9}/37.8 & \\textbf{48.0}/45.2 & 57.9/56.6 & 47.2/45.0 & 62.0/57.3 & \\textbf{51.0}/49.9 \\\\\n", " \\cmidrule{1-8}\n", " Fine-tune & 44.9/37.6 & 44.9/37.6 & 44.9/37.6 & 43.6/39.1 & 43.6/39.1 & 56.3/33.4 & 56.6/51.6 \\\\\n", " Fine-tune w/ meta-train& 53.3/43.2 & 53.2/43.7 & 46.1/36.9 & 67.9/66.2 & 44.5/42.8 & 71.8/58.2 & 65.6/61.4 \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.3em}\n", " \\caption{Main results, using GPT-2 Large.\n", " \\textbf{Bold} indicates the best average result except results from fine-tuned models that are not comparable.\n", " `Class' indicates `Classification'.\n", " }\\label{tab:main-result}\n", "\\end{table*}\n", "\\subsection{Evaluation}\\label{subsec:evaluation}\n", "We use 
Macro-F1\\footnote{More suitable than accuracy for imbalanced classification.} and Accuracy as evaluation metrics for classification tasks and non-classification tasks, respectively. \n", "We first compute the average and the worst-case performance over seeds for every target task, and then report the macro-average of them over all target tasks.\n" ], "output": { "What experiments do you suggest doing?": [ "1. Diverse meta-training task experimentation: The author should conduct experiments with a diverse set of meta-training tasks to assess the impact on model performance, especially in unseen and dissimilar target tasks", "2. Comparative baseline analysis: The paper should conduct comprehensive comparative experiments on MetaICL and Channel MetaICL across a range of baselines and settings", "3. The author should further explore the effects of varying the number of training examples k to evaluate how it affects MetaICL\u2019s performance.", "4. The author should test the effects of varying the number of training tasks used in the meta-training phase", "5. The author should conduct experiments to analyze the impact of task diversity in the meta-training phase by subsampling the dataset", "6. Necessity of instructions with MetaICL: The authors should experiment to determine if instructions are still necessary when using MetaICL." ], "Why do you suggest these experiments?": [ "1. These experiments are crucial to understand how diversity in meta-training tasks affects the adaptability and effectiveness of the model when faced with new, especially dissimilar, tasks", "2. Conducting thorough comparative analyses with various baselines is essential to establish the efficacy of MetaICL and Channel MetaICL. This approach will help in quantifying the improvements brought by these models over conventional methods", "3. Understanding the impact of different context sizes on learning performance is critical for optimising the model's in-context learning capabilities. 
This experiment will reveal how MetaICL adjusts to varying amounts of information and how it can be tuned to maximise performance even with minimal data", "4. This experiment aims to evaluate how the number of tasks in meta-training affects the model's ability to generalize across unseen tasks. Finding the optimal number of tasks can significantly enhance the model\u2019s performance", "5. This experiment is crucial for understanding how diversity within the meta-training tasks influences the robustness and flexibility of the model when encountering a wide array of target tasks. Increased diversity may lead to better generalization capabilities, which is essential for applying MetaICL to real-world scenarios", "6. This could explore whether MetaICL can achieve comparable or superior performance without relying on manually engineered instructions; at the same time, it can tell us which component improves model performance: learning from instructions or learning via MetaICL" ] }, "paper_info": { "title": "MetaICL: Learning to Learn In Context", "authors": [ "Sewon Min", "Mike Lewis", "Luke Zettlemoyer", "Hannaneh Hajishirzi" ], "abstract": "We introduce MetaICL (Meta-training for In-Context Learning), a new\nmeta-training framework for few-shot learning where a pretrained language model\nis tuned to do in-context learning on a large set of training tasks. This\nmeta-training enables the model to more effectively learn a new task in context\nat test time, by simply conditioning on a few training examples with no\nparameter updates or task-specific templates. We experiment on a large, diverse\ncollection of tasks consisting of 142 NLP datasets including classification,\nquestion answering, natural language inference, paraphrase detection and more,\nacross seven different meta-training/target splits. MetaICL outperforms a range\nof baselines including in-context learning without meta-training and multi-task\nlearning followed by zero-shot transfer. 
We find that the gains are\nparticularly significant for target tasks that have domain shifts from the\nmeta-training tasks, and that using a diverse set of the meta-training tasks is\nkey to improvements. We also show that MetaICL approaches (and sometimes beats)\nthe performance of models fully finetuned on the target task, and outperforms\nmuch bigger models with nearly 8x parameters. Finally, we show that MetaICL is\ncomplementary to human-written instructions, and the best performance can be\nachieved by combining both approaches.", "comments": "19 pages, 2 figures. Published as a conference paper at NAACL 2022\n (long). Code available at https://github.com/facebookresearch/MetaICL" }, "raw_data": { "context_before_exp": [ "\n", "\\pdfoutput=1\n", "\n", "\n", "\\documentclass[11pt]{article}\n", "\n", "\n", "\\usepackage{acl}\n", "\n", "\n", "\n", "\\usepackage{times}\n", "\\usepackage{latexsym}\n", "\\usepackage{url}\n", "\\usepackage{hyperref}\n", "\\usepackage{enumitem}\n", "\\usepackage{booktabs}\n", "\\usepackage{multirow}\n", "\\usepackage{array}\n", "\\usepackage{graphicx}\n", "\\usepackage{amsmath}\n", "\\usepackage{amsfonts}\n", "\\usepackage{subcaption}\n", "\\usepackage{makecell}\n", "\\usepackage{color,soul}\n", "\\usepackage{algorithm}\n", "\\usepackage[noend]{algpseudocode}\n", "\\usepackage{xcolor,colortbl}\n", "\\usepackage{xspace}\n", "\n", "\\definecolor{green}{rgb}{0.1,0.1,0.1}\n", "\\definecolor{gitgreen}{HTML}{006400}\n", "\\definecolor{chocolate}{HTML}{D2691E}\n", "\\definecolor{maroon}{HTML}{A00000}\n", "\\definecolor{indigo}{HTML}{4B0082}\n", "\\definecolor{green}{HTML}{008000}\n", "\\definecolor{red}{HTML}{e41a1c}\n", "\n", "\\usepackage{amssymb}\n", "\\usepackage{pifont}\n", "\\newcommand{\\cmark}{{\\protect\\color{maroon} \\ding{51}}}\n", "\\newcommand{\\xmark}{\\ding{55}}\n", "\\newcommand{\\red}[1]{{\\protect\\color{red} #1}}\n", "\n", "\n", "\\usepackage[T1]{fontenc}\n", "\n", "\n", "\n", "\n", "\n", 
"\\usepackage[utf8]{inputenc}\n", "\n", "\n", "\n", "\n", "\\usepackage{microtype}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\newcommand{\\ours}{MetaICL}\n", "\\newcommand{\\ourslong}{\\textbf{Meta}-training for \\textbf{I}n-\\textbf{C}ontext \\textbf{L}earning}\n", "\\newcommand{\\main}{HR$\\rightarrow$LR}\n", "\n", "\\newcommand\\myrotatebox[2]{\\multirow{#1}{*}{\\rotatebox[origin=c]{90}{#2}}}\n", "\n", "\\newcommand{\\code}{\n", " \\href{https://github.com/facebookresearch/MetaICL}{\\nolinkurl{github.com/facebookresearch/MetaICL}}\n", "}\n", "\n", "\\newcommand{\\hanna}[1]{\\textcolor{magenta}{[#1 ({\\bf Hanna})]}}\n", "\\newcommand{\\luke}[1]{\\textcolor{purple}{[#1 ({\\bf Luke})]}}\n", "\\newcommand{\\mike}[1]{\\textcolor{brown}{[#1 ({\\bf Mike})]}}\n", "\\newcommand{\\sewon}[1]{\\textcolor{green}{[#1 ({\\bf Sewon})]}}\n", "\\newcommand{\\updated}[1]{\\textcolor{green}{#1}}\n", "\n", "\\title{\\ours: Learning to Learn In Context}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\newcommand{\\affilsup}[1]{\\rlap{\\textsuperscript{\\normalfont#1}}}\n", "\\newcommand*\\samethanks[1][\\value{footnote}]{\\footnotemark[#1]}\n", "\\author{\n", " Sewon Min\\affilsup{1,2} \\quad \n", " ~~~Mike Lewis\\affilsup{2} \\quad \n", " ~~Luke Zettlemoyer\\affilsup{1,2} \\quad\n", " ~~~Hannaneh Hajishirzi\\affilsup{1,3} \n", " \\\\\n", " $^1$University of Washington \\qquad\n", " $^2$Meta AI \\qquad\n", " $^3$Allen Institute for AI \\\\\n", " \\texttt{\\{sewon,lsz,hannaneh\\}@cs.washington.edu} \\qquad \n", " \\texttt{mikelewis@fb.com} \\\\\n", "}\n", "\n", "\\begin{document}\n", "\\maketitle\n", "\\begin{abstract}\n", "\n", "We introduce \\ours\\ (\\ourslong), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks.\n", "This meta-training enables the model to more effectively learn a new task in 
context at test time, by simply conditioning on a few training examples with no parameter updates or task-specific templates.\n", "\n", "We experiment on a large, diverse collection of tasks consisting of 142 NLP datasets including classification, question answering, natural language inference, paraphrase detection and more, across seven different meta-training/target splits.\n", "\n", "\\ours\\ outperforms a range of baselines including in-context learning without meta-training and multi-task learning followed by zero-shot transfer.\n", "We find that the gains are particularly significant for target tasks that have domain shifts from the meta-training tasks, and that using a diverse set of the meta-training tasks is key to improvements.\n", "We also show that \\ours\\ approaches (and sometimes beats) the performance of models fully finetuned on the target task, and outperforms much bigger models with nearly 8x parameters.\n", "Finally, we show that \\ours\\ is complementary to human-written instructions, and the best performance can be achieved by combining both approaches.\n", "\\end{abstract}\n", "\n", "\n", "Large language models (LMs) have recently been shown to be able to do {\\em in-context learning}~\\citep{brown2020language}, where they learn a new task simply by conditioning on a few training examples and predicting which tokens best complete a test input. \n", "This type of learning is attractive because the model learns a new task through inference alone, without any parameter updates.\n", "However, performance significantly lags behind supervised finetuning, results are often high variance~\\citep{zhao2021calibrate,perez2021true}, and it can be difficult to engineer the templates that convert existing tasks to this format. \n", "\n", "\n", "\n", "In this paper, we address these challenges by introducing \\ours: \\ourslong. 
\\ours\\ tunes a pretrained language model on a large set of tasks to learn how to in-context learn, and is evaluated on strictly new unseen tasks.\n", "Each meta-training example matches the test setup---it includes $k+1$ training examples from one task that will be presented together as a single sequence to the language model, and the output of the final example is used to calculate the cross-entropy training loss.\n", "\n", "Simply finetuning the model in this data setup directly leads to better in-context learning---the model learns to recover the semantics of the task from the given examples, as must be done for in-context learning of a new task at test time.\n", "This approach is related to recent work that uses multi-task learning for better zero-shot performance at test time~\\citep{ khashabi2020unifiedqa,zhong2021adapting,mishra2022cross,wei2022finetuned,sanh2022multitask}.\n", "However, \\ours\\ is distinct as it allows learning new tasks from $k$ examples alone, without relying on a task reformatting (e.g., reducing everything to question answering) or task-specific templates (e.g., converting different tasks to a language modeling problem).\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "We experiment on a large, diverse collection of tasks taken from \\citet{ye2021crossfit} and \\citet{khashabi2020unifiedqa}, including 142 text classification, question answering, natural language inference and paraphrase detection datasets.\n", "We report seven different settings, all with no overlap between meta-training and target tasks. 
\n", "This leads to 52 unique target tasks in total, which is the largest among all recent related work to the best of our knowledge.\n", "\n", "Experimental results show that \\ours\\ consistently outperforms baselines including (1) a variety of LM in-context learning baselines without meta-training~\\citep{brown2020language,zhao2021calibrate,holtzman2021surface,min2022noisy}, and (2) multi-task learning followed by zero-shot transfer~\\citep{zhong2021adapting,wei2022finetuned,sanh2022multitask}.\n", "Gains over multi-task zero-shot transfer are particularly significant when meta-training tasks and target tasks are dissimilar, e.g. there are large differences in task formats, domains, or required skills. This demonstrates that \\ours\\ enables the model to recover the semantics of the task in context during inference even when the target does not share similarities with meta-training tasks.\n", "\n", "\n", "\\ours\\ often gets close to (and sometimes beats) the performance of models trained with supervised finetuning on the target datasets, and perform as well as models with 8x parameters.\n", "We also perform extensive ablations to identify key ingredients for success of \\ours\\ such as the number and diversity of meta-training tasks. 
Finally, we demonstrate \\ours\\ without any templates is better than recent work using human-written natural instructions, while the best performance is achieved by combining both approaches.\n", "Code and data are publicly released at \\code.\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\begin{table*}[t]\n", " \\centering \\footnotesize\n", " \\begin{tabular}{lll }\n", " \\toprule\n", " & Meta-training & Inference \\\\\n", " \\midrule\n", " Task & $C$ {\\em meta-training} tasks & An unseen {\\em target} task \\\\\n", " \\cmidrule{1-3}\n", " \\multirow{2}{*}{Data given} &\n", " \n", " \\multirow{2}{*}{Training examples $\\mathcal{T}_i=\\{(x^i_j, y^i_j)\\}_{j=1}^{N_i},~\\forall i \\in [1, C]~~~(N_i \\gg k)$}\n", " \n", " & Training examples $(x_1,y_1),\\cdots,(x_k,y_k)$, \\\\\n", " &\n", " \n", " & Test input $x$ \\\\\n", " \\cmidrule{1-3}\n", " \n", " \n", " \\multirow{4}{*}{Objective} & For each iteration, & \\multirow{4}{*}{$\\mathrm{argmax}_{c \\in \\mathcal{C}}P(c|x_1,y_1,\\cdots,x_k,y_k,x)$} \\\\\n", " & ~~~1. Sample task $i \\in [1, C]$ \\\\\n", " & ~~~2. Sample $k+1$ examples from $\\mathcal{T}_i$: $(x_1,y_1),\\cdots,(x_{k+1},y_{k+1})$ \\\\\n", " & ~~~3. Maximize $P(y_{k+1}|x_1,y_1,\\cdots,x_k,y_k,x_{k+1})$ \\\\\n", " \n", " \n", " \n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.1em}\n", " \\caption{\n", " Overview of \\ours\\ (Section~\\ref{sec:method}). \\ours\\ uses the same in-context learning setup at both meta-training and inference. At meta-training time, $k+1$ examples for a task is sampled, where the last example acts as the test example and the rest $k$ examples act as the training examples. 
Inference is the same as typical in-context learning where $k$ labeled examples are used to make a prediction for a test input.\n", " }\\label{tab:overview}\n", "\\end{table*}\n", "\n", "\\vspace{.2em}\n", "\\paragraph{In-context learning}\n", "\n", "\\citet{brown2020language} propose to use a language model (LM) conditioned on a concatenation of training examples for few-shot learning with no parameter updates.\n", "It has been further improved by later work~\\citep{zhao2021calibrate, holtzman2021surface,min2022noisy}, showing promising results on a variety of tasks.\n", "However, in-context learning with an LM achieves poor performance when the target task is very different from language modeling in nature or the LM is not large enough. Moreover, it can have high variance and poor worst-case accuracy~\\citep{perez2021true,lu2021fantastically}.\n", "\n", "Our paper is based on the core idea of in-context learning by conditioning on training examples. We show that, by explicitly training on an in-context learning objective, \\ours\\ achieves substantial improvements even with smaller LMs. 
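To make the inference-only setup concrete, the following is a minimal Python sketch of in-context learning with candidate scoring. It is an illustration under stated assumptions, not an actual system: `lm_score` is a toy stand-in for a real LM's conditional log-probability, and the prompt format is a hypothetical choice.

```python
def build_prompt(train_examples, test_input):
    """Concatenate k (input, output) training pairs followed by the test input."""
    lines = [f"{x} {y}" for x, y in train_examples]
    lines.append(test_input)
    return "\n".join(lines)

def lm_score(prompt, candidate):
    # Toy stand-in for log P(candidate | prompt): favors label words that
    # appear more often in the prompt. A real system would query an LM here.
    return prompt.count(candidate)

def in_context_predict(train_examples, test_input, candidates):
    """Learning happens purely in context: no parameter updates, just
    conditioning on the concatenated examples and taking the argmax."""
    prompt = build_prompt(train_examples, test_input)
    return max(candidates, key=lambda c: lm_score(prompt, c))

train = [("great acting!", "positive"),
         ("dull and predictable.", "negative"),
         ("loved every minute.", "positive")]
prediction = in_context_predict(train, "a true delight.", ["positive", "negative"])
```

The point of the sketch is the interface: the only "training" signal the model sees is the concatenation of the $k$ labeled examples inside the prompt.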
\n", "\n", "\\vspace{.2em}\n", "\\paragraph{Meta-training via multi-task learning}\n", "Our work is broadly inspired by a large body of work in meta-learning~\\citep{vilalta2002perspective,finn2017model} and multi-task learning~\\citep{evgeniou2004regularized,ruder2017overview}.\n", "Prior work has shown that multi-task learning on a large collection of tasks leads to better performance on a new task, either when tested zero-shot~\\citep{khashabi2020unifiedqa,zhong2021adapting,mishra2022cross,wei2022finetuned} or when further finetuned~\\citep{aghajanyan2021muppet,ye2021crossfit}.\n", "In particular, the former is closely related to our work, as it eliminates the need for parameter updates on a target task.\n", "However, these zero-shot models are either limited to tasks sharing the same format as training tasks (e.g., a question answering format)~\\citep{khashabi2020unifiedqa,zhong2021adapting}, or rely heavily on task-specific templates~\\citep{mishra2022cross,wei2022finetuned,sanh2022multitask},\n", "which are difficult to engineer due to high variance in performance from very small changes~\\citep{mishra2021reframing}.\n", "\n", "In this paper, we propose a meta-training method for better in-context learning that improves few-shot performance. We show that it effectively learns the semantics of a new task with no manual effort, significantly outperforming zero-shot transfer methods.\\footnote{We show that \\ours\\ without instructions is still better than zero-shot transfer with instructions, but by using instructions, performance of \\ours\\ further improves (Section~\\ref{subsec:ablations}).}\n", "Furthermore, while \\citet{wei2022finetuned} show that meta-training helps only when the model has 68B or more parameters, our experiments demonstrate improvements with a much smaller model (770M).\n", "\n", "\\citet{chen2022meta}, concurrently with our work, propose meta-training for in-context learning.
Our approach differs in a number of ways: we remove the requirement of\n", "human-written templates or instructions, and include more diverse tasks, stronger baselines, and extensive experiments at a much larger scale with many meta-training/target splits.\n", "\n", "\n", "\n", "\n", "\n", "\n", "We introduce \\ours: \\ourslong.\n", "Table~\\ref{tab:overview} provides an overview of the approach.\n", "The key idea is to use a multi-task learning scheme over a large collection of meta-training tasks, in order for the model to learn how to condition on a small set of training examples, recover the {\\em semantics} of a task, and predict the output based on it. Following previous literature~\\citep{brown2020language}, the training examples are concatenated and provided as a single input to the model, which is feasible for $k$-shot learning (e.g., $k=16$).\n", "At test time, the model is evaluated on an unseen target task that comes with $k$ training examples, and inference directly follows the same data format as in meta-training.\n", "\n", "\\subsection{Meta-training}\\label{subsec:meta-training}\n", "\n", "The model is meta-trained on a collection of tasks, which we call meta-training tasks.\n", "For every iteration, one meta-training task is sampled, and $k+1$ training examples $(x_1,y_1), \\cdots, (x_{k+1},y_{k+1})$ are sampled from the training examples of the chosen task.\n", "We then supervise the model by feeding the concatenation of $x_1, y_1, \\cdots, x_k, y_k, x_{k+1}$ to the model as an input and training it to generate $y_{k+1}$ using a negative log-likelihood objective.\n", "This simulates in-context learning at inference, where the first $k$ examples serve as training examples and the $(k+1)$-th example is regarded as the test example.\n", "\n", "\\subsection{Inference}\\label{subsec:inference}\n", "\n", "For a new target task, the model is given $k$ training examples $(x_1, y_1), \\cdots, (x_k, y_k)$ as well as a test input $x$.\n", "It
is also given a set of candidates $\\mathcal{C}$, which is either a set of labels (in classification) or answer options (in question answering).\n", "As in meta-training, the model takes a concatenation of $x_1, y_1, \\cdots, x_k, y_k, x$ as the input, and computes the conditional probability of each label $c_i \\in \\mathcal{C}$.\n", "The label with the maximum conditional probability is returned as the prediction.\n", "\n", "\\subsection{Channel \\ours}\\label{subsec:channel-mic}\n", "We introduce a noisy channel variant of \\ours\\ called Channel \\ours, following \\citet{min2022noisy}.\n", "In the noisy channel model, $P(y|x)$ is reparameterized to\n", "$\\frac{P(x|y)P(y)}{P(x)} \\propto P(x|y)P(y)$.\n", "We follow \\citet{min2022noisy} in using $P(y)=\\frac{1}{|\\mathcal{C}|}$ and modeling $P(x|y)$, which allows us to use the channel approach by simply flipping $x_i$ and $y_i$.\n", "Specifically, at meta-training time, the model is given a concatenation of $y_1, x_1, \\cdots, y_k, x_k, y_{k+1}$ and is trained to generate $x_{k+1}$. At inference, the model computes $\\mathrm{argmax}_{c \\in \\mathcal{C}} P(x|y_1,x_1,\\cdots,y_k,x_k,c)$.
\n", "\\begin{table}[t]\n", " \\centering \\footnotesize\n", " \n", " \\begin{tabular}{l @{\\hspace{-.6em}} r @{\\hspace{0.4em}} r l @{\\hspace{-.8em}} r}\n", " \\toprule\n", " \\multicolumn{3}{c}{Meta-train} & \\multicolumn{2}{c}{Target} \\\\\n", " \\cmidrule(lr){1-3} \\cmidrule(lr){4-5}\n", " Setting & \\# tasks & \\# examples & Setting & \\# tasks\\\\\n", " \\midrule\n", " HR & 61 & 819,200 & LR & 26 \\\\\n", " \\cmidrule(lr){1-5}\n", " Classification & 43 & 384,022 & \\multirow{2}{*}{Classification} & \\multirow{2}{*}{20} \\\\\n", " Non-Classification & 37 & 368,768 & & \\\\\n", " \\cmidrule(lr){1-5}\n", " QA & 37 & 486,143 & \\multirow{2}{*}{QA} & \\multirow{2}{*}{22} \\\\\n", " Non-QA & 33 & 521,342 & & \\\\\n", " \\cmidrule(lr){1-5}\n", " Non-NLI & 55 & 463,579 & NLI & 8 \\\\\n", " \\cmidrule(lr){1-5}\n", " Non-Paraphrase & 59 & 496,106 & Paraphrase & 4 \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.1em}\n", " \\caption{Statistics of seven different settings.\n", " Each row indicates meta-training/target tasks for each setting.\n", " `\\# tasks' in meta-training is equivalent to $C$ in Table~\\ref{tab:overview}.\n", " For all settings, there is no overlap in tasks between meta-training and target.\n", " `HR' and `LR' indicate high resource and low resource, respectively.\n", " Datasets and the task ontology are taken from \\textsc{CrossFit}~\\citep{ye2021crossfit} and \\textsc{UnifiedQA}~\\citep{khashabi2020unifiedqa}.\n", " Full datasets for each split are provided in Appendix~\\ref{app:dataset}.\n", " \n", " }\\label{tab:data-summary}\n", "\\end{table}\n", "\n", "\\begin{table}[t]\n", " \\centering \\footnotesize\n", " \n", " \\begin{tabular}{l \n", " ccc}\n", " \\toprule\n", " \\multirow{2}{*}{Method} &\n", " Meta &\n", " \\multicolumn{2}{c}{Target} \\\\\n", " \\cmidrule(lr){2-2} \\cmidrule(lr){3-4}\n", " & train & train & \\# samples \\\\\n", " \\midrule\n", " \\multicolumn{4}{l}{\\textbf{\\em LMs}} \\\\\n", " 0-shot & \\xmark & \\xmark & 0 
\\\\\n", " PMI 0-shot & \\xmark & \\xmark & 0 \\\\\n", " Channel 0-shot & \\xmark & \\xmark & 0 \\\\\n", " In-context & \\xmark & \\xmark & $k$ \\\\\n", " PMI In-context & \\xmark & \\xmark & $k$ \\\\\n", " Channel In-context & \\xmark & \\xmark & $k$ \\\\\n", " \\midrule\n", " \\multicolumn{4}{l}{\\textbf{\\em Meta-trained}} \\\\\n", " Multi-task 0-shot & \\cmark & \\xmark & 0 \\\\\n", " Channel Multi-task 0-shot & \\cmark & \\xmark & 0 \\\\\n", " \\ours\\ (Ours) & \\cmark & \\xmark & $k$ \\\\\n", " Channel \\ours\\ (Ours) & \\cmark & \\xmark & $k$ \\\\\n", " \\midrule\n", " \\multicolumn{4}{l}{\\textbf{\\em Fine-tune}} \\\\\n", " Fine-tune & \\xmark & \\cmark & $k$ \\\\\n", " Fine-tune w/ meta-train & \\cmark & \\cmark & $k$ \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.3em}\n", " \\caption{Summary of the baselines and \\ours.\n", " `train' indicates whether the model is trained with parameter updates, and `\\# samples' indicates the number of training examples used on a target task.\n", " Our baselines include a range of recently introduced methods~\\citep{holtzman2021surface,zhao2021calibrate,min2022noisy,wei2022finetuned} as described in Section~\\ref{subsec:baselines}.\n", " }\\label{tab:methods}\n", "\\end{table}\n", "\n", "\\subsection{Datasets}\\label{subsec:dataset}\n", "We use a large collection of tasks taken from \\textsc{CrossFit}~\\citep{ye2021crossfit} and \\textsc{UnifiedQA}~\\citep{khashabi2020unifiedqa}.\n", "We have 142 unique tasks in total, covering a variety of problems including text classification, question answering (QA), natural language inference (NLI) and paraphrase detection. All tasks are in English.\n", "\n", "We experiment with seven distinct settings as shown in Table~\\ref{tab:data-summary}, where there is no overlap between the meta-training and target tasks. 
The number of unique target tasks in total is 52, which is significantly larger than in other relevant work~\\citep{khashabi2020unifiedqa,zhong2021adapting,mishra2022cross,wei2022finetuned,sanh2022multitask}.\n", "Each target task is either classification or multiple-choice, where a set of candidate options ($\\mathcal{C}$ in Table~\\ref{tab:overview}) is given.\n", "\n", "\\vspace{.25em}\n", "\\noindent \\textbf{\\main} (High resource to low resource): We experiment with a setting where datasets with 10,000 or more training examples are used as meta-training tasks and the rest are used as target tasks. We think using high resource datasets for meta-training and low resource datasets as targets is a realistic and practical setting for few-shot learning.\n", "\n", "\\vspace{.25em}\n", "\\noindent \\textbf{X$\\rightarrow$X (X=\\{Classification, QA\\})}: We experiment with two settings in which meta-training and target tasks share the task format, although with no overlap in tasks.\n", "\n", "\\vspace{.25em}\n", "\\noindent \\textbf{Non-X$\\rightarrow$X (X=\\{Classification, QA, NLI, Paraphrase\\})}: Lastly, we experiment with four settings where meta-training tasks do not overlap with target tasks in task format and required capabilities.
These settings require the most challenging generalization capabilities.\n", "\n", "\\vspace{.25em}\n", "Each setting has a subset of target tasks with no domain overlap with any meta-training tasks (e.g., finance, poem, climate or medical).\n", "We report results both on all target tasks and on only the target tasks with no domain overlap.\n", "Full details of the settings and datasets with citations are provided in Appendix~\\ref{app:dataset}.\n", "\n", "\n", "\\subsection{Baselines}\\label{subsec:baselines}\n", "We compare \\ours\\ and Channel \\ours\\ with a range of baselines, as summarized in Table~\\ref{tab:methods}.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{0-shot}: We use a pretrained LM as it is and run zero-shot inference, following \\citet{brown2020language}.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{In-context}: We use the pretrained LM as it is and use in-context learning by conditioning on a concatenation of $k$ training examples, following \\citet{brown2020language}.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{PMI 0-shot, PMI In-context}: We use the PMI method from \\citet{holtzman2021surface,zhao2021calibrate} for 0-shot and In-context learning.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Channel 0-shot, Channel In-context}: We use the noisy channel model from \\citet{min2022noisy} for 0-shot and In-context learning.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Multi-task 0-shot}: We train the LM on the same meta-training tasks without the in-context learning objective, i.e., maximizing $P(y|x)$ without conditioning on $k$ other training examples, and then use zero-shot transfer on a target task. This is equivalent to \\ours\\ with $k=0$.
This is a typical multi-task learning approach from previous work~\\citep{khashabi2020unifiedqa,zhong2021adapting,wei2022finetuned}.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Channel Multi-task 0-shot}: This is a channel variant of Multi-task 0-shot.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Fine-tune}: We fine-tune the LM on an individual target task. This is not directly comparable to other methods as parameter updates are required for every target task.\n", "\n", "\\vspace{.2em}\n", "\\noindent \\textbf{Fine-tune w/ meta-train}: We train the LM on meta-training tasks first and then further fine-tune it on a target task. This is not directly comparable to other methods for the same reason as above.\n", "\n", "\n", "\n", "\n", "\\newcommand{\\prom}{{\\protect\\color{green} [P]}}\n", "\\newcommand{\\hyp}{{\\protect\\color{maroon} [H]}}\n", "\n", "\\begin{table}[t]\n", " \\centering \\footnotesize\n", " \\setlength{\\tabcolsep}{0.3em}\n", " \\begin{tabular}{ll}\n", " \\toprule\n", " \\multicolumn{2}{l}{\\prom: Time Warner is the world's largest media and Internet} \\\\ \\multicolumn{2}{l}{company.} \\\\\n", " \\multicolumn{2}{l}{\\hyp: Time Warner is the world's largest company.} \\\\\n", " \\multicolumn{2}{l}{Labels: \\texttt{entailment}, \\texttt{not\\_entailment}} \\\\\n", " \\midrule\n", " \\multicolumn{2}{l}{\\textbf{\\em \\citet{holtzman2021surface}}} \\\\\n", " Input & \\prom\\ question: \\hyp\\ true or false? answer: \\\\\n", " Output & $\\{$true, false$\\}$\n", " \\\\\n", " \\cmidrule{1-2}\n", " \\multicolumn{2}{l}{\\textbf{\\em \\citet{wei2022finetuned}}} \\\\\n", " Input & \\prom\\ Based on the paragraph above, can we \\\\\n", " & conclude that \\hyp?
\\\\\n", " Output & $\\{$yes, no$\\}$\n", " \\\\\n", " \\cmidrule{1-2}\n", " \\multicolumn{2}{l}{\\textbf{\\em Ours}} \\\\\n", " Input & \\prom\\ \\hyp\\ \\\\\n", " Output & $\\{$entailment, not\\_entailment$\\}$ \\\\\n", " \\bottomrule\n", " \\end{tabular}\n", " \\caption{Example input-output pairs for an NLI task.\n", " We show human-authored templates taken from prior work as references.\n", " }\\label{tab:templates}\n", "\\end{table}\n", "\n", "\\newcolumntype{C}[1]{>{\\PreserveBackslash\\centering}p{#1}}\n", "\n", "\n", "\\begin{table*}[t]\n", " \\centering \\footnotesize\n", " \n", " \\begin{tabular}{\n", " l @{\\hspace{2em}}\n", " ccccccc\n", " }\n", " \\toprule\n", " Method &\n", " \\main &\n", " \\makecell[c]{Class \\\\ $\\rightarrow$Class} & \\makecell[c]{non-Class \\\\ $\\rightarrow$Class} &\n", " \\makecell[c]{QA \\\\ $\\rightarrow$QA} &\n", " \\makecell[c]{non-QA \\\\ $\\rightarrow$QA} &\n", " \\makecell[c]{non-NLI \\\\ $\\rightarrow$NLI} &\n", " \\makecell[c]{non-Para \\\\ $\\rightarrow$Para} \\\\\n", " \\midrule\n", " \\multicolumn{8}{c}{\\em All target tasks} \\\\\n", " 0-shot & 34.8 & 34.2 & 34.2 & 40.2 & 40.2 & 25.5 & 34.2 \\\\\n", " PMI 0-shot & 35.1 & 33.8 & 33.8 & 40.2 & 40.2 & 27.9 & 39.2 \\\\\n", " Channel 0-shot & 36.5 & 37.3 & 37.3 & 38.7 & 38.7 & 33.9 & 39.5 \\\\\n", " In-context & 38.2/35.3 & 37.4/33.9 & 37.4/33.9 & 40.1/38.7 & 40.1/38.7 & 34.0/28.3 & 33.7/33.1 \\\\\n", " PMI In-context & 39.2/33.7 & 38.8/30.0 & 38.8/30.0 & 40.3/38.8 & 40.3/38.8 & 33.0/28.0 & 38.6/33.4 \\\\\n", " Channel In-context & 43.1/38.5 & 46.3/40.3 & 46.3/40.3 & 40.8/38.1 & 40.8/38.1 & 39.9/34.8 & 45.4/40.9 \\\\\n", " \\cmidrule{1-8}\n", " Multi-task 0-shot & 35.6 & 37.3 & 36.8 & 45.7 & 36.0 & 40.7 & 30.6 \\\\\n", " Channel Multi-task 0-shot & 38.8 & 40.9 & 42.2 & 42.1 & 36.4 & 36.8 & 35.1 \\\\\n", " \\ours & 43.3/41.7 & 43.4/39.9 & 38.1/31.8 & \\textbf{46.0}/44.8 & 38.5/36.8 & 49.0/44.8 & 33.1/33.1 \\\\\n", " Channel \\ours & \\textbf{49.1}/46.8 & 
\\textbf{50.7}/48.0 & \\textbf{50.6}/48.1 & 44.9/43.5 & \\textbf{41.9}/40.5 & \\textbf{54.6}/51.9 & \\textbf{52.2}/50.3 \\\\\n", " \\cmidrule{1-8}\n", " Fine-tune & 46.4/40.0 & 50.7/44.0 & 50.7/44.0 & 41.8/39.1 & 41.8/39.1 & 44.3/32.8 & 54.7/48.9 \\\\\n", " Fine-tune w/ meta-train& 52.0/47.9 & 53.5/48.5 & 51.2/44.9 & 46.7/44.5 & 41.8/39.5 & 57.0/44.6 & 53.7/46.9 \\\\\n", " \\toprule\n", " \\multicolumn{8}{c}{\\em Target tasks in unseen domains} \\\\\n", " 0-shot & 32.6 & 32.6 & 32.6 & 45.9 & 45.9 & 33.4 & 38.3 \\\\\n", " PMI 0-shot & 28.9 & 28.9 & 28.9 & 44.4 & 44.4 & 33.4 & 32.9 \\\\\n", " Channel 0-shot & 29.1 & 29.1 & 29.1 & 41.6 & 41.6 & 33.1 & 32.6 \\\\\n", " In-context & 30.6/27.5 & 30.6/27.5 & 30.6/27.5 & 45.6/44.7 & 45.6/44.7 & 52.0/41.3 & 35.8/34.1 \\\\\n", " PMI In-context & 34.9/27.7 & 34.9/27.7 & 34.9/27.7 & 45.4/44.7 & 45.4/44.7 & 47.8/35.2 & 38.5/33.3 \\\\\n", " Channel In-context & 39.6/33.6 & 39.6/33.6 & 39.6/33.6 & 44.7/40.6 & 44.7/40.6 & 40.4/35.7 & 44.1/36.8 \\\\\n", " \\cmidrule{1-8}\n", " Multi-task 0-shot & 35.4 & 28.0 & 28.6 & \\textbf{71.2} & 40.3 & 33.5 & 35.0 \\\\\n", " Channel Multi-task 0-shot & 36.3 & 31.1 & 34.3 & 54.4 & 39.4 & 50.8 & 34.1 \\\\\n", " \\ours & 35.3/32.7 & 32.3/29.3 & 28.1/25.1 & 69.9/68.1 & \\textbf{48.3}/47.2 & \\textbf{80.1}/77.2 & 34.0/34.0 \\\\\n", " Channel \\ours & \\textbf{47.7}/44.7 & \\textbf{41.9}/37.8 & \\textbf{48.0}/45.2 & 57.9/56.6 & 47.2/45.0 & 62.0/57.3 & \\textbf{51.0}/49.9 \\\\\n", " \\cmidrule{1-8}\n", " Fine-tune & 44.9/37.6 & 44.9/37.6 & 44.9/37.6 & 43.6/39.1 & 43.6/39.1 & 56.3/33.4 & 56.6/51.6 \\\\\n", " Fine-tune w/ meta-train& 53.3/43.2 & 53.2/43.7 & 46.1/36.9 & 67.9/66.2 & 44.5/42.8 & 71.8/58.2 & 65.6/61.4 \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.3em}\n", " \\caption{Main results, using GPT-2 Large.\n", " Two numbers indicate the average and the worst-case performance over different seeds used for $k$ target training examples.\n", " \\textbf{Bold} indicates the best average result 
except results from fine-tuned models that are not comparable.\n", " `Class' indicates `Classification'.\n", " \n", " }\\label{tab:main-result}\n", "\\end{table*}\n", "\n", "\\subsection{Evaluation}\\label{subsec:evaluation}\n", "We use Macro-F1\\footnote{More suitable than accuracy for imbalanced classification.} and Accuracy as evaluation metrics for classification tasks and non-classification tasks, respectively. \n", "\n", "For a target task, we use $k=16$ training examples, sampled uniformly at random.\n", "We relax the assumption of perfect balance between labels on $k$ training examples, following \\citet{min2022noisy}.\n", "Because in-context learning is known to have high variance~\\citep{zhao2021calibrate,perez2021true,lu2021fantastically}, we use 5 different sets of $k$ training examples.\n", "We first compute the average and the worst-case performance over seeds for every target task, and then report the macro-average of them over all target tasks.\n", "\n" ], "context_after_exp": [ "\\subsection{Experiment Details}\\label{subsec:impl-details}\n", "As a base LM, we use GPT-2 Large ~\\citep{radford2019language} which consists of 770M parameters.\\footnote{Appendix~\\ref{app:abl_lm_size} reports performance for other LM sizes.}\n", "For baselines without meta-training (raw LMs), we also compare with GPT-J~\\citep{wang2021gpt}, which is the largest public causal LM at the time of writing, consisting of 6B parameters.\n", "\n", "\\begin{table*}[t]\n", " \\centering \\footnotesize\n", " \n", " \\begin{tabular}{\n", " l @{\\hspace{1.5em}}\n", " ccccccc\n", " }\n", " \\toprule\n", " Method &\n", " \\main\\ &\n", " \\makecell[c]{Class \\\\ $\\rightarrow$Class} & \\makecell[c]{non-Class \\\\ $\\rightarrow$Class} &\n", " \\makecell[c]{QA \\\\ $\\rightarrow$QA} &\n", " \\makecell[c]{non-QA \\\\ $\\rightarrow$QA} &\n", " \\makecell[c]{non-NLI \\\\ $\\rightarrow$NLI} &\n", " \\makecell[c]{non-Para \\\\ $\\rightarrow$Para} \\\\\n", " \\midrule\n", " 
\\multicolumn{8}{c}{\\em All target tasks} \\\\\n", " Channel In-context & 43.1/38.5 & 46.3/40.3 & 46.3/40.3 & 40.8/38.1 & 40.8/38.1 & 39.9/34.8 & 45.4/40.9 \\\\\n", " \\ours & 43.3/41.7 & 43.4/39.9 & 38.1/31.8 & 46.0/44.8 & 38.5/36.8 & 49.0/44.8 & 33.1/33.1 \\\\\n", " Channel \\ours & \\textbf{49.1}/46.8 & 50.7/48.0 & 50.6/48.1 & 44.9/43.5 & 42.1/40.8 & \\textbf{54.6}/51.9 & \\textbf{52.2}/50.3 \\\\\n", "\t \\cmidrule{1-8}\n", "\t GPT-J Channel In-context & 48.6/44.4 & \\textbf{51.5}/47.0 & \\textbf{51.5}/47.0 & \\textbf{47.0}/45.2 & \\textbf{47.0}/45.2 & 47.2/41.7 & 51.0/47.5 \\\\\n", " \\toprule\n", " \\multicolumn{8}{c}{\\em Target tasks in unseen domains} \\\\\n", " Channel In-context & 39.6/33.6 & 39.6/33.6 & 39.6/33.6 & 44.7/40.6 & 44.7/40.6 & 40.4/35.7 & 44.1/36.8 \\\\\n", " \\ours & 35.3/32.7 & 32.3/29.3 & 28.1/25.1 & \\textbf{69.9}/68.1 & 48.3/47.2 & \\textbf{80.1}/77.2 & 34.0/34.0 \\\\\n", " Channel \\ours & \\textbf{47.7}/44.7 & 41.9/37.8 & \\textbf{48.0}/45.2 & 57.9/56.6 & 47.2/45.0 & 62.0/57.3 & 51.0/49.9 \\\\\n", "\t \\cmidrule{1-8}\n", "\t GPT-J Channel In-context & 42.8/38.4 & \\textbf{42.8}/38.4 & 42.8/38.4 & 55.7/54.4 & \\textbf{55.7}/54.4 & 51.1/40.4 & \\textbf{52.0}/46.5 \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.3em}\n", " \\caption{\n", " Comparison between raw LM in-context learning (based on GPT-2 Large and GPT-J) and \\ours\\ (based on GPT-2 Large). 
GPT-2 Large is used unless otherwise specified.\n", " Two numbers indicate the average and the worst-case performance over different seeds used for $k$ target training examples.\n", " For raw LM baselines, Channel In-context is reported because it is the best raw LM baseline overall across the settings; full results based on GPT-J are provided in Appendix~\\ref{app:gpt-j-result}.\n", " }\\label{tab:comparison-gptj}\n", "\\end{table*}\n", "\n", "\n", "\\vspace{-.25em}\n", "\\paragraph{Elimination of templates}\n", "Prior work uses human-authored templates to transform the input-output pair into a natural language sentence~\\citep{zhong2021adapting,mishra2022cross,wei2022finetuned,chen2022meta}.\n", "They require expensive manual effort (as 136 different templates are required for 136 tasks in this paper) and cause unstable model performance, since performance varies greatly across different ways of writing~\\citep{mishra2021reframing}.\n", "\n", "We eliminate templates, using the given input (or a concatenation of inputs if there are multiple) and label words provided in the original datasets.\\footnote{In our preliminary experiments, we explored templates taken from prior work, but found that they do not consistently improve few-shot performance, even when they do improve zero-shot performance.}\n", "\n", "A comparison of input-output schemes from prior work and our approach is shown in Table~\\ref{tab:templates}.\n", "\n", "\\paragraph{Training details}\n", "All implementation is done in PyTorch~\\citep{paszke2019pytorch} and Transformers~\\citep{wolf-etal-2020-transformers}. For meta-training, we use up to 16,384 training examples per task. We use a batch size of $8$, a learning rate of $1 \\times 10^{-5}$ and a sequence length of $1024$. For multi-task 0-shot baselines (the baselines with no in-context learning), we use a sequence length of $256$.
We train the model for $30,000$ steps.\\footnote{We also explored training longer, but it did not improve performance.} To save memory during meta-training, we use an 8-bit approximation~\\citep{dettmers20228} of an Adam optimizer~\\citep{kingma2015adam} and mixed precision~\\citep{micikevicius2017mixed}. Training was done for 4.5 hours with eight 32GB GPUs. This is drastically more efficient than recent prior work, e.g., 270 hours on a 512GB TPU in \\citet{sanh2022multitask}.\n", "\n", "\\vspace{.2em}\n", "More details about preprocessing and training can be found in Appendix~\\ref{app:impl-details}.\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\subsection{Main Results}\\label{subsec:main-results}\n", "\n", "Table~\\ref{tab:main-result} reports the full results using GPT-2 Large, where we compute the average and the worst-case performance of every target task and report the macro-average over them.\n", "The top and the bottom respectively evaluate on all target tasks and on target tasks in unseen domains only.\n", "\n", "\\vspace{-.2em}\n", "\\paragraph{Our baselines are strong}\n", "We first discuss the results of our baselines.\n", "Among raw LMs without meta-training (the first six rows of Table~\\ref{tab:main-result}), we observe that channel in-context baselines are the most competitive, consistent with findings from \\citet{min2022noisy}.\n", "We then find that Multi-task 0-shot baselines do not outperform the best raw LM baseline in most settings, despite being supervised on a large set of meta-training tasks.\n", "This somewhat contradicts findings from \\citet{wei2022finetuned,sanh2022multitask}.\n", "This is likely for two reasons.\n", "First, our models are much smaller than theirs (770M vs.
11B--137B); in fact, \\citet{wei2022finetuned} reports that Multi-task 0-shot starts to be better than raw LMs only when the model size is 68B or larger.\n", "Second, we compare with much stronger channel baselines, which they did not: Multi-task 0-shot outperforms non-channel LM baselines but not channel LM baselines.\n", "\n", "\\vspace{-.2em}\n", "\\paragraph{\\ours\\ outperforms baselines}\n", "\\ours\\ and Channel \\ours\\ consistently outperform a range of strong baselines.\n", "\n", "In particular, Channel \\ours\\ achieves the best performance in 6 out of 7 settings.\n", "Gains are particularly significant in the \\main, non-NLI$\\rightarrow$NLI and non-Para$\\rightarrow$Para settings (6--15\\% absolute).\n", "This is noteworthy because \\main\\ targets the common low-resource case where new tasks have very few labeled examples, and the other two represent large data distribution shifts where the test tasks are relatively different from the meta-training tasks. This demonstrates that \\ours\\ can infer the semantics of new tasks in context even when there are no closely related training tasks.\n", "\n", "\n", "While \\ours\\ significantly outperforms baselines in most settings, one exception is the QA$\\rightarrow$QA setting, where it only marginally outperforms Multi-task 0-shot.\n", "This is likely because the meta-training and target tasks are relatively similar, allowing the Multi-task 0-shot baseline to achieve very strong performance.\n", "Nonetheless, performance of Multi-task 0-shot in QA significantly drops when the model is trained on non-QA tasks, while performance of \\ours\\ drops substantially less.\n", "\n", "\\begin{figure}[!t]\n", "\\resizebox{\\columnwidth}{!}{\n", " \\includegraphics[trim={0.2cm 0.2cm 0.2cm 0.2cm},clip]{figs/metaicl_ablate_k.pdf}\n", "}\n", "\\caption{Ablation on the number of training examples ($k$) in the \\main\\ setting.
$k=0$ is equivalent to the zero-shot methods.}\\label{fig:ablate_k}\n", "\\end{figure}\n", "\n", "\n", "\\vspace{-.2em}\n", "\\paragraph{Gains are larger on unseen domains}\n", "Gains over Multi-task 0-shot are more significant on target tasks in unseen domains.\n", "In particular, Multi-task 0-shot is generally less competitive compared to raw LM baselines, likely because unseen domains require more challenging generalization.\n", "\\ours\\ suffers less from this problem and is consistently better or comparable to raw LM baselines across all settings.\n", "\n", "\\vspace{-.2em}\n", "\\paragraph{Comparison to fine-tuning}\n", "\\ours\\ matches or sometimes even outperforms fine-tuned models without meta-training.\n", "\n", "This is a promising signal, given that no prior work has shown that models with no parameter updates on the target can match or outperform supervised models.\n", "Nonetheless, fine-tuning with meta-training exceeds both \\ours\\ and fine-tuning without meta-training, because meta-training helps in supervised learning as it does in in-context learning.\n", "\n", "\n", "This indicates that there is still room for improvement in methods that allow learning without parameter updates.\n", "\n", "\\vspace{-.2em}\n", "\\paragraph{Comparison to GPT-J}\n", "In Table~\\ref{tab:comparison-gptj}, we compare GPT-2 Large-based models with raw LM baselines based on GPT-J, which consists of 6B parameters. \\ours, despite being 8x smaller, outperforms or matches GPT-J baselines. \n", "\n", "\n", "\n", "\\subsection{Ablations}\\label{subsec:ablations}\n", "\n", "\\paragraph{Varying number of training examples}\n", "We vary the number of training examples ($k$) over 0, 4, 8, 16 and 32. In-context learning with $k=0$ is equivalent to the zero-shot method. Results are shown in Figure~\\ref{fig:ablate_k}.
Increasing $k$ generally helps across all models, and Channel \\ours\\ outperforms raw in-context learning for all values of $k$.\n", "We additionally find that the performance tends to saturate when $k$ is close to $16$, likely because the sequence length limit of the language model makes it hard to encode many training examples.\n", "\n", "\\begin{figure}[!t]\n", "\\resizebox{\\columnwidth}{!}{\n", " \\includegraphics[trim={0.2cm 0.2cm 0.2cm 0.2cm},clip]{figs/metaicl_num_ablation_1.pdf}\n", "}\n", "\\resizebox{\\columnwidth}{!}{\n", " \\includegraphics[trim={0.2cm 0.2cm 0.2cm 0.2cm},clip]{figs/metaicl_num_ablation_2.pdf}\n", "}\n", "\\caption{Ablation on the number of meta-training tasks ($\\{7,15,30,61\\}$).\n", "Average performance (top) and box charts (bottom) over different meta-training sets using 10 different random seeds (except for $61$).\n", "}\\label{fig:ablate_size}\n", "\\end{figure}\n", "\n", "\n", "\\vspace{-.2em}\n", "\\paragraph{Number of meta-training tasks}\n", "To see the impact of the number of meta-training tasks,\n", "we subsample $\\{7,15,30\\}$ meta-training tasks out of 61 in the \\main\\ setting. For each, we use ten different random seeds to additionally see the impact of the choice of meta-training tasks.\n", "\n", "Figure~\\ref{fig:ablate_size} reports the results.\n", "On average, performance generally increases as the number of tasks increases, which is consistent with results in \\citet{mishra2022cross,wei2022finetuned}.\n", "Across different numbers of meta-training tasks, Channel \\ours\\ consistently outperforms other models.\n", "Nonetheless, there is nonnegligible variance across different choices of meta-training tasks (the bottom of Figure~\\ref{fig:ablate_size}), indicating that the choice of meta-training tasks has a substantial impact on performance.\n", "\n", "\\vspace{-.2em}\n", "\\paragraph{Diversity in meta-training tasks}\n", "We hypothesize that the diversity in meta-training tasks may impact the performance of \\ours.
\n", "To verify this hypothesis, we create two settings by subsampling 13 out of 61 meta-training datasets in the \\main\\ setting.\n", "One setting is diverse in task format and required capabilities: QA, NLI, relation extraction, sentiment analysis, topic classification, hate speech detection and more.\n", "The other setting is less diverse, including tasks related to sentiment analysis, topic classification and hate speech detection only.\n", "A full list of datasets is reported in Appendix~\\ref{app:dataset}. Using these two settings, we compare multi-task zero-shot transfer baselines and \\ours.\n", "\n", "Results are reported in Table~\\ref{tab:ablate_diversity}.\n", "We find that \\ours\\ with a diverse set outperforms \\ours\\ with a non-diverse set by a substantial margin.\n", "This shows that diversity among meta-training tasks is one of the key factors in the success of \\ours.\n", "\n", "\n", "\n", "\n", "In Appendix~\\ref{app:which-tasks-helpful}, we include ablations that provide more insight into the choice of meta-training tasks, such as (1) high-quality data with diverse domains tends to help (e.g., the GLUE family~\\citep{wang2018glue}) and (2) adversarially collected data tends to be unhelpful.
However, more systematic studies are needed on how to choose the best meta-training tasks and how they relate to particular target tasks; we leave this for future work.\n", "\n", "\begin{table}[!t]\n", " \centering \footnotesize\n", " \n", " \begin{tabular}{\n", " l @{\hspace{2em}} cc\n", " }\n", " \toprule\n", " Method &\n", " Diverse & No Diverse \\\n", " \midrule\n", " \n", " 0-shot & \multicolumn{2}{c}{34.9} \\\n", " PMI 0-shot & \multicolumn{2}{c}{34.8} \\\n", " Channel 0-shot & \multicolumn{2}{c}{36.8} \\\n", " In-context & \multicolumn{2}{c}{38.2/35.4} \\\n", " PMI In-context & \multicolumn{2}{c}{38.9/33.3} \\\n", " Channel In-context & \multicolumn{2}{c}{42.9/38.5} \\\n", " \cmidrule{1-3}\n", " Multi-task 0-shot & 35.2 & 29.9 \\\n", " Channel Multi-task 0-shot & 41.6 & 38.3 \\\n", " \ours & 45.6/43.4 & 38.8/35.4 \\\n", " Channel \ours & \textbf{47.2}/44.7 & 45.3/42.6 \\\n", " \bottomrule\n", " \end{tabular}\vspace{-.3em}\n", " \caption{Ablation on the diversity of meta-training tasks in the \main\ setting. For both settings, the number of meta-training tasks is 13, and the number of target tasks is 26 as in the original \main\ setting.\n", " A full list of meta-training tasks is shown in Appendix~\ref{app:dataset}.\n", " }\label{tab:ablate_diversity}\n", "\end{table}\n", "\n", "\vspace{-.2em}\n", "\paragraph{Are instructions necessary?}\n", "Most recent work has used human-written natural instructions for zero- or few-shot learning~\citep{mishra2022cross, wei2022finetuned,sanh2022multitask}.\n", "While we argue for not using instructions to avoid manual engineering and high variance, we also ask: {\em are instructions still useful with \ours?} On one hand, learning to condition on $k$ examples may remove the necessity of instructions.
On the other hand, instructions may still be complementary and provide the model with extra useful information.\n", "\n", "We aim to answer this question by using 32 meta-training tasks and 12 target tasks from the \main\ setting for which human-written instructions are available in \citet{sanh2022multitask}.\footnote{\url{github.com/bigscience-workshop/promptsource}}\n", "We have two variants: (a) using one instruction per meta-training task, and (b) using all available instructions, 267 in total (8.3 per meta-training task), which \citet{sanh2022multitask} found to be better than (a).\n", "We then compare \ours\ and a range of baselines with and without instructions.\n", "\n", "\begin{table}[t]\n", " \centering \footnotesize\n", " \n", " \begin{tabular}{\n", " l @{\hspace{0.6em}} c @{\hspace{1em}} c @{\hspace{1em}} c\n", " }\n", " \toprule\n", " Method &\n", " w/o Instruct & \multicolumn{2}{c}{w/ Instruct} \\\n", " \midrule\n", " \# instruct/task & 0 & 1 & 8.3 \\\n", " \midrule\n", " 0-shot & 33.3 & \multicolumn{2}{c}{34.2} \\\n", " PMI 0-shot & 34.6 & \multicolumn{2}{c}{27.8} \\\n", " Channel 0-shot & 32.5 & \multicolumn{2}{c}{30.6} \\\n", " In-context & 34.5/31.5 & \multicolumn{2}{c}{45.2/42.3} \\\n", " PMI In-context & 37.7/32.7 & \multicolumn{2}{c}{41.9/37.6} \\\n", " Channel In-context & 39.0/35.4 & \multicolumn{2}{c}{39.6/35.3} \\\n", " \cmidrule{1-4}\n", " MT 0-shot & 35.7 & 32.6 & 37.1 \\\n", " Channel MT 0-shot & 36.7 & 30.6 & 36.0 \\\n", " \ours & 40.4/37.7 & 42.6/41.0 & 43.2/41.0 \\\n", " Channel \ours & 42.2/40.0 & 45.3/43.9 & \textbf{46.9}/44.2 \\\n", " \bottomrule\n", " \end{tabular}\vspace{-.3em}\n", " \caption{Ablation on the impact of {\em natural instructions}.\n", " `w/ Instruct' uses instructions from \citet{sanh2022multitask}, either one per meta-training task or all available ones;\n", " `w/o Instruct' does not use instructions, as in all of our other
experiments.\n", " `\# instruct/task' indicates the number of instructions per meta-training task on average.\n", " `MT 0-shot' indicates the `Multi-task 0-shot' baselines.\n", " Both settings have the same meta-training and target tasks, \n", " 32 and 12, respectively.\n", " \n", " A full list of tasks is shown in Appendix~\ref{app:dataset}.\n", " }\label{tab:ablate_inst}\n", "\end{table}\n", "\n", "Results are reported in Table~\ref{tab:ablate_inst}.\n", "As in \citet{wei2022finetuned} and \citet{sanh2022multitask}, Multi-task 0-shot outperforms the raw-LM 0-shot baseline.\n", "However, \ours\ with no instructions is better than Multi-task 0-shot with instructions.\n", "Moreover, \ours\ achieves further improvements when instructions are jointly used, significantly outperforming all baselines.\n", "In fact, \n", "when increasing the number of instructions per task from 0 to 1 to 8.3,\n", "the performance of \ours\ improves much more than that of Multi-task 0-shot.\n", "To summarize, (1) learning to in-context learn (\ours) outperforms learning to learn from instructions; (2) \ours\ and using instructions are largely complementary; and (3) \ours\ actually benefits more from using instructions than Multi-task 0-shot does.\n", "\n", "Importantly, Channel \ours\ trained on the available tasks and instructions still achieves lower performance than Channel \ours\ without templates/instructions ($46.9$ from Table~\ref{tab:ablate_inst} vs. $49.1$ from Table~\ref{tab:main-result}). This is likely because the model with instructions was trained with fewer meta-training tasks, which was unavoidable since instructions are only available for 32 out of 61 meta-training tasks.
This supports our earlier choice of not using human-written templates/instructions, since writing templates and instructions for every task requires extensive effort.\n", "\n", "It is worth noting that it is nonetheless difficult to make direct comparisons with \citet{wei2022finetuned} and \citet{sanh2022multitask}, because there are many moving components: the size of the LMs, the type of LM (e.g., causal LM vs. masked LM), splits between meta-training and target tasks, and more.\n", "\n", "In this paper, we introduced \ours, a new few-shot learning method where an LM is meta-trained to learn to in-context learn, i.e., to condition on training examples to recover the task and make predictions.\n", "\n", "\n", "We experiment with a large, diverse collection of tasks, consisting of 142 unique tasks in total and 52 unique target tasks, using seven different settings.\n", "\ours\ outperforms a range of strong baselines, including in-context learning without meta-training and multi-task learning followed by zero-shot transfer, and outperforms or matches models that are 8x bigger.\n", "We identify ingredients for the success of \ours, such as the number and diversity of meta-training tasks.\n", "We also demonstrate that, while \ours\ is better than recent work using natural instructions, the two are complementary, and the best performance is achieved by integrating \ours\ with instructions.\n", "\n", "\n", "\n", "\n", "\paragraph{Limitations \& Future work}\n", "Our work is limited in multiple dimensions. First, in-context learning approaches in general require a much longer context at both meta-training and inference, since the concatenation of the training examples is fed to the model, and are thus less efficient than baselines that do not use in-context learning.\n", "Second, our experiments use a causal language model of modest size (GPT-2 Large, 774M parameters).
Future work may investigate extending our approach to masked language models and to larger models.\n", "Third, our experiments focus on classification and multi-choice tasks where a set of candidate options is given. Future work may study applying our approach to a wider range of tasks, including free-form generation.\n", "\n", "Other avenues for future work include further improving \ours\ to outperform supervised models with meta-training, identifying which meta-training tasks are helpful for which target tasks, and better combining human-written instructions with \ours. \n", "\n", "\n", "\n", "\n", "\n", "\section*{Acknowledgements}\n", "We thank Ari Holtzman and Victoria Lin for comments and discussions, and Tim Dettmers for help with experiments.\n", "This research was supported by NSF IIS-2044660, ONR N00014-18-1-2826, \n", "an Allen Distinguished Investigator Award, and a Sloan Fellowship. \n", "\n", "\n", "\bibliography{acl,datasets}\n", "\bibliographystyle{acl_natbib}\n", "\n", "\n", "\clearpage\n", "\appendix\n", "\section{Dataset List}\n", "\label{app:dataset}\n", "\n", "Table~\ref{tab:full-datasets} and Table~\ref{tab:full-datasets-citations} report the list of datasets used in the settings detailed in Section~\ref{subsec:dataset}.\n", "The first 10 rows are for settings described in Section~\ref{subsec:dataset}; the next two rows are for settings used for ablations on the diversity of meta-training tasks (Table~\ref{tab:ablate_diversity} of Section~\ref{subsec:ablations}); the last two rows are for settings used for ablations on using natural instructions (Table~\ref{tab:ablate_inst} of Section~\ref{subsec:ablations}).\n", "\n", "\textbf{Bold} datasets are target datasets with no overlap in domain with meta-training tasks.\n", "All datasets are taken from \textsc{CrossFit}~\citep{ye2021crossfit} (except we exclude datasets that are unavailable from their repository\footnote{\n",
"\href{https://github.com/INK-USC/CrossFit}{\nolinkurl{github.com/INK-USC/CrossFit}}} or whose scope is notably different from other tasks, e.g., solving math problems or breaking down compositional questions) and \textsc{UnifiedQA}~\citep{khashabi2020unifiedqa}.\n", "\n", "\paragraph{How meta-training/target splits are determined}\n", "The \main\ setting is created based on the training data size, as described in Section~\ref{subsec:dataset}. Settings involving Classification, NLI and Paraphrase are taken from \textsc{CrossFit}. Settings involving QA are created by combining QA datasets from \textsc{CrossFit} and datasets from \textsc{UnifiedQA}.\n", "\n", "\vspace{.2em}\n", "Statistics are reported in Table~\ref{tab:data-summary} and Table~\ref{tab:data-statistics}.\n", "The number of tasks is the largest among recent related work: we have 142 unique tasks, while \citet{khashabi2020unifiedqa}, \citet{zhong2021adapting}, \citet{mishra2022cross}, \citet{wei2022finetuned} and \citet{sanh2022multitask} use 32, 62, 61, 42 and 62 tasks, respectively.\n", "\n", "\n", "\n", "References for all datasets are provided in Table~\ref{tab:full-datasets-citations}.\n", "Data and splits are available at \code.\n", "\n", "\n", "\section{Implementation Details}\n", "\label{app:impl-details}\n", "\paragraph{Preprocessing details}\n", "For all models with meta-training and the raw GPT-J, we separate the input and the output with one newline (\texttt{$\backslash$n}), and separate examples with three newlines. For the raw GPT-2, we use spaces instead of newlines.
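As a concrete sketch, the separator convention above can be implemented as follows. This is an illustrative snippet, not the released MetaICL code; the function name and the toy sentiment examples are ours.

```python
def format_prompt(train_examples, test_input, use_newlines=True):
    """Concatenate k training examples and a test input into one prompt.

    With use_newlines=True (the GPT-J style), each example's input and
    output are separated by one newline and examples are separated by
    three newlines; with use_newlines=False (the raw GPT-2 style),
    spaces are used instead.
    """
    sep_io = "\n" if use_newlines else " "      # between input and output
    sep_ex = "\n\n\n" if use_newlines else " "  # between examples
    parts = [f"{x}{sep_io}{y}" for x, y in train_examples]
    parts.append(test_input)  # the LM completes the output for this input
    return sep_ex.join(parts)


# Toy usage with made-up sentiment examples (k = 2).
prompt = format_prompt(
    [("A truly great movie.", "positive"), ("Utterly boring.", "negative")],
    "An instant classic.",
)
```

The model then scores or generates the output continuation for the final, unanswered test input.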
This choice was made in order to report the best baseline performance we were able to achieve: when raw LMs are used, GPT-2 is significantly better with spaces than with newlines, and GPT-J is significantly better with newlines than with spaces.\footnote{For example, in the \main\ setting, the raw GPT-2 is about $4$\% better with spaces than with newlines.}\n", "We note that \ours\ is less sensitive to these formatting differences, with less than a 2\% difference in performance between the two formats.\n", "\n", "When the concatenation of $k$ examples is too long, we truncate each example to have at most $256$ tokens, and truncate the earlier tokens of the concatenation so that the LM sees the most recent tokens. Additionally, for extractive question answering datasets used as meta-training tasks, the input passage is truncated with a guarantee that the ground-truth answer is included in the input passage. We do not do this truncation for target datasets.\n", "\n", "\begin{table}[t]\n", " \centering \footnotesize\n", " \setlength{\tabcolsep}{0.4em}\n", " \begin{tabular}{lrrrr}\n", " \toprule\n", " \multirow{2}{*}{Setting} & \n", " \multicolumn{2}{c}{Input} & \multicolumn{2}{c}{Output} \\\n", " \cmidrule(lr){2-3} \cmidrule(lr){4-5}\n", " & Mean & Median & Mean & Median \\\n", " \midrule\n", " \multicolumn{5}{c}{\em Meta-training tasks} \\\n", " HR & 81.7 & 73 & 2.8 & 2 \\\n", " Classification & 45.8 & 41 & 1.1 & 1 \\\n", " Non-Classification & 77.7 & 69 & 4.2 & 3 \\\n", " QA & 142.6 & 137 & 2.7 & 2 \\\n", " Non-QA & 68.7 & 56 & 2.3 & 2 \\\n", " Non-NLI & 44.3 & 39 & 1.1 & 1 \\\n", " Non-Paraphrase & 45.0 & 39 & 1.1 & 1 \\\n", " \midrule\n", " \multicolumn{5}{c}{\em Target tasks} \\\n", " LR & 29.7 & 25 & 1.9 & 1 \\\n", " Classification & 44.9 & 38 & 1.0 & 1 \\\n", " QA & 74.4 & 69 & 4.6 & 4 \\\n", " NLI & 45.4 & 41 & 1.0 & 1 \\\n", " Paraphrase & 42.2 & 41 & 1.0 & 1 \\\n", " \bottomrule\n", " \end{tabular}\n", " \caption{Length statistics of tasks used in different settings, before any truncation.\n", " We
compute the mean and the median of each task, and report the macro-average over all tasks for each setting.\n", " }\\label{tab:data-statistics}\n", "\\end{table}\n", "\n", "\\begin{table*}[!ht]\n", " \\centering \\footnotesize\n", " \\setlength{\\tabcolsep}{0.6em}\n", " \\begin{tabular}{\n", " l @{\\hspace{3em}} ccccc\n", " }\n", " \\toprule\n", " Method & \\main & \n", " \\makecell[c]{\\{Class,non-Class\\} \\\\ $\\rightarrow$Class} & \\makecell[c]{\\{QA,non-QA\\} \\\\ $\\rightarrow$QA} &\n", " \\makecell[c]{non-NLI \\\\ $\\rightarrow$NLI} &\n", " \\makecell[c]{non-Para \\\\ $\\rightarrow$Para} \\\\\n", " \\midrule\n", " \\multicolumn{6}{c}{\\em All tasks} \\\\\n", " 0-shot & 31.5 & 31.5 & 45.6 & 25.7 & 30.0 \\\\\n", " PMI 0-shot & 36.9 & 30.2 & 44.3 & 30.2 & 37.6 \\\\\n", " Channel 0-shot & 39.7 & 41.5 & 42.1 & 36.2 & 45.0 \\\\\n", " In-context & 43.8/39.1 & 43.6/34.3 & \\textbf{50.8}/48.3 & 35.0/27.6 & 41.3/33.2 \\\\\n", " PMI In-context & 43.0/37.4 & 44.8/36.6 & 48.8/46.9 & 31.5/26.0 & 38.4/33.6 \\\\\n", " Channel In-context & \\textbf{48.6}/44.4 & \\textbf{51.5}/47.0 & 47.0/45.2 & \\textbf{47.2}/41.7 & \\textbf{51.0}/47.5 \\\\\n", " \\midrule\n", " \\multicolumn{6}{c}{\\em Target tasks in unseen domains} \\\\\n", " 0-shot & 31.2 & 31.2 & 47.5 & 33.5 & 34.1 \\\\\n", " PMI 0-shot & 25.2 & 25.2 & 43.8 & 36.1 & 34.4 \\\\\n", " Channel 0-shot & 37.2 & 37.2 & 46.9 & \\textbf{53.4} & \\textbf{54.7} \\\\\n", " In-context & 33.1/25.4 & 33.1/25.4 & \\textbf{57.4}/53.1 & 46.7/36.1 & 34.1/34.1 \\\\\n", " PMI In-context & 35.4/28.2 & 35.4/28.2 & 54.5/50.9 & 33.9/33.9 & 32.5/32.4 \\\\\n", " Channel In-context & \\textbf{42.8}/38.4 & \\textbf{42.8}/38.4 & 55.7/54.4 & 51.1/40.4 & 52.0/46.5 \\\\\n", " \\bottomrule\n", " \\end{tabular}\\vspace{-.3em}\n", " \\caption{Performance of raw LM baselines using \\textbf{GPT-J} (6B).\n", " Two numbers indicate the average and the worst-case accuracy over different seeds used for $k$ target training examples.\n", " `Class' indicate 
`Classification'.\n", " }\label{tab:full-result}\n", "\end{table*}\n", "\n", "\n", "\n", "\paragraph{Comparison with baselines in training and inference cost} Although trained for the same number of global steps (30,000), Multi-task 0-shot baselines take 3 hours to train (in contrast to 4.5 hours for \ours), likely because the sequence length is 4x shorter. At inference, Multi-task 0-shot baselines are roughly 4x more efficient, again because the sequence length is 4x shorter.\footnote{Let $L$ be the sequence length; the memory requirements of attention layers and feed-forward layers are $O(L^2)$ and $O(L)$, respectively. In practice, feed-forward layers are responsible for most memory usage when the transformer is large, so empirical memory usage tends to be linear in $L$.} We did not control for the training time and the inference time in comparisons, since both models are efficient enough.\n", "\n", "\begin{table*}[!ht]\n", " \centering \footnotesize\n", " \setlength{\tabcolsep}{.6em}\n", " \begin{tabular}{l@{\hspace{2em}} cccc c cccc}\n", " \toprule\n", " & \multicolumn{4}{c}{\em All tasks} && \multicolumn{4}{c}{\em Target tasks in unseen domains} \\\n", " \cmidrule{2-5} \cmidrule{7-10}\n", " & S & M & L & XL && S & M & L & XL \\\n", " \midrule\n", " Channel In-context & 41.5/37.4 & 42.2/37.7 & 43.1/38.5 & 43.5/39.9 && 40.9/35.9 & 38.8/34.7 & 39.6/33.6 & 40.0/37.2\\\n", " MT 0-shot & 35.4 & 36.4 & 35.6 & - && 34.9 & 32.2 & 35.4 & -\\\n", " Channel MT 0-shot & 40.4 & 37.9 & 38.8 & - && 33.8 & 35.9 & 36.3 & -\\\n", " \ours & 39.7/36.2 & 40.3/36.4 & 43.3/41.7 & - && 36.9/32.6 & 38.1/35.0 & 35.3/32.7 & - \\\n", " Channel \ours & 46.2/43.1 & 44.3/41.5 & 49.1/46.8 & - && 46.9/42.6 & 43.1/39.8 & 47.7/44.7 & - \\\n", " 
\bottomrule\n", " \end{tabular}\n", " \caption{Ablation on the size of the LM in the \main\ setting.\n", " We use the small, medium, large, and XL variants of GPT-2.\n", " \n", " We were unable to meta-train the XL variant due to memory limits.\n", " }\label{tab:ablate_size}\n", "\end{table*}\n", "\n", "\n", "\n", "\paragraph{Ablations in using instructions}\n", "When we choose one instruction per meta-training task, we choose it by (1) first excluding instructions whose names contain \texttt{no\_option}, (2) then taking an instruction whose name contains \texttt{multiple\_choice}, \texttt{most\_correct} or \texttt{most\_suitable} if there is one, and (3) otherwise randomly sampling one. We choose one instruction per target task at test time using the same process. This is different from \citet{sanh2022multitask}, where the median of the performance over all instructions is reported. We think our choice better reflects the real use-case scenario of choosing the one instruction that looks the most reasonable to a human.\n", " \n", "\section{Additional Results \& Analyses}\n", "\label{app:results}\n", "\n", "\subsection{GPT-J results}\label{app:gpt-j-result}\n", "Table~\ref{tab:full-result} reports the full results of raw LM baselines based on GPT-J, consisting of 6B parameters. See Section~\ref{subsec:main-results} for discussion.\n", "\n", "\n", "\subsection{Varying LM sizes}\label{app:abl_lm_size}\n", "We vary the size of the GPT-2 models---small, medium, large, and XL---with 124M, 355M, 774M, and 1.5B parameters, respectively. Results are reported in Table~\ref{tab:ablate_size}.
We find that (1) increasing the model size generally helps, (2) for all model sizes, Channel \ours\ significantly outperforms baselines, and (3) \ours\ enables a much smaller model to outperform a bigger model, e.g., Channel MetaICL based on GPT-2 Small outperforms the GPT-2 XL baseline that is 12x bigger (46.2 vs. 43.5).\n", "\n", "\begin{table*}[!th]\n", " \centering \footnotesize\n", " \n", " \begin{tabular}{l @{\hspace{0.5em}} p{0.85\textwidth}}\n", " \toprule\n", " \em Single task \\\n", " Helpful: & tweet\_eval-offensive, glue-sst2, glue-mnli, wino\_grande, kilt\_hotpotqa \\\n", " Unhelpful: & race-middle, cosmos\_qa, dbpedia\_14, gigaword, wikisql \\\n", " \midrule\n", " \em Task pair \\\n", " Helpful: &\n", " (yelp\_review\_full, glue-mnli), (yelp\_review\_full, wino\_grande),\n", " (hateexplain, glue-sst2),\n", " (hateexplain, glue-mnli),\n", " (hateexplain, glue-qqp)\n", " \\\n", " Unhelpful: &\n", " (paws, dbpedia\_14),\n", " (paws, art),\n", " (paws, cosmos\_qa),\n", " (cosmos\_qa, dbpedia\_14),\n", " (quail, art)\n", " \\\n", " \midrule\n", " \em Task triple \\\n", " Helpful: &\n", " (yelp\_review\_full, glue-qqp, glue-mnli),\n", " (yelp\_review\_full, glue-sst2, glue-mnli),\n", " (yelp\_review\_full, hateexplain, glue-mnli),\n", " (yelp\_review\_full, hateexplain, qqp),\n", " (yelp\_review\_full, hate\_speech\_offensive, glue-mnli)\n", " \\\n", " Unhelpful: &\n", " (paws, dbpedia\_14, art),\n", " (paws, dbpedia\_14, cosmos\_qa),\n", " (paws, cosmos\_qa, art),\n", " (dbpedia\_14, cosmos\_qa, art),\n", " (quail, paws, dbpedia\_14)\n", " \\\n", " \bottomrule\n", " \end{tabular}\vspace{-.3em}\n", " \caption{Analysis of which meta-training tasks give good performance in Channel \ours.\n", " We report the five most helpful and the five most unhelpful tasks (or task sets), respectively.}\label{tab:analysis_task}\n", "\end{table*}\n", "\n", "\subsection{Which meta-training tasks are more 
helpful?}\\label{app:which-tasks-helpful}\n", "\n", "Based on large variance across different choices of meta-training (Figure~\\ref{fig:ablate_size} of Section~\\ref{subsec:ablations}), we think certain tasks are more helpful for meta-training than other tasks. In this context, we create $50$ sets of seven meta-training tasks using $50$ different random seeds. We then measure the correlation between tasks/task pairs/task triples and average performance of Channel \\ours\\ when the task is included in the meta-training tasks.\n", "\n", "Table~\\ref{tab:analysis_task} reports the result.\n", "We first find that high quality datasets with diverse domain like GLUE family~\\citep{wang2018glue} are often helpful.\n", "We also find that datasets that are collected adversarially (e.g. \\texttt{paws}, \\texttt{art}) or are notably dissimilar from all other tasks (e.g. \\texttt{wikisql} that requires semantic parsing) are often unhelpful.\n", "Nonetheless, we were not able to find good explanations for other cases, e.g., many sentiment analysis datasets being particularly helpful even though only 3 out of 26 target datasets are sentiment analysis, and \\texttt{dbpedia\\_14}/\\texttt{cosmos\\_qa}/\\texttt{race-middle} being unhelpful.\n", "Moreover, we think which tasks are helpful largely depends on the choice of target tasks, and we should not make early conclusions that certain tasks are helpful/unhelpful in all cases.\n", "We think future work should investigate these impacts in a more systematic way.\n", "\n", "\n", "\n", "\n", "\n", "\\begin{table}[t]\n", " \\centering \\footnotesize\n", " \n", " \\begin{tabular}{\n", " l @{\\hspace{.7em}} lc @{\\hspace{1em}} c}\n", " \\toprule\n", " \\multirow{2}{*}{Method} & \\multirow{2}{*}{Train labels} & \\multicolumn{2}{c}{Test labels} \\\\\n", " \\cmidrule{3-4}\n", " && Original & Replaced \\\\\n", " \\midrule\n", " \\multicolumn{4}{c}{\\em All target tasks} \\\\\n", " \\multicolumn{2}{l}{Random} & 36.0 & 36.0 \\\\\n", " 
\multicolumn{2}{l}{0-shot} & 34.2 & 23.8/16.8 \\\n", " \multicolumn{2}{l}{Channel 0-shot} & 37.3 & 31.4/22.9 \\\n", " \multicolumn{2}{l}{In-context} & 37.4/33.9 & 30.5/25.0 \\\n", " \multicolumn{2}{l}{Channel In-context} & 46.3/40.3 & 37.7/31.3 \\\n", " \cmidrule{1-4}\n", " MT 0-shot & Original & 37.3 & 25.5/16.4 \\\n", " Channel MT 0-shot & Original & 40.9 & 28.6/19.9 \\\n", " \ours & Original & 43.4/39.9 & 30.1/24.0 \\\n", " Channel \ours & Original & \textbf{50.7}/48.0 & 36.5/28.9 \\\n", " \cmidrule{1-4}\n", " MT 0-shot & Replaced & 24.4 & 23.1/15.5 \\\n", " Channel MT 0-shot & Replaced & 36.7 & 34.1/28.4 \\\n", " \ours & Replaced & 40.7/36.0 & \textbf{43.5}/35.2 \\\n", " Channel \ours & Replaced & 47.1/42.7 & 39.5/33.7 \\\n", " \bottomrule\n", " \end{tabular}\vspace{-.3em}\n", " \caption{Ablation where label words are replaced with {\em random English words} in the class$\rightarrow$class setting.\n", " {\em Original} and {\em Replaced} indicate original label words and label words replaced with random English words, respectively.\n", " When tested on {\em Replaced}, five random seeds are used to sample English words.\n", " }\label{tab:ablate_labels}\n", "\end{table}\n", "\n", "\subsection{Does \ours\ generalize when semantic hints from label words are removed?}\n", "Our experiments use label words taken from the original dataset, which often contain {\em semantic hints}---hints on what each label is supposed to mean (\texttt{entailment} and \texttt{not\_entailment} for the NLI task, and \texttt{positive} and \texttt{negative} for the sentiment analysis task).\n", "If the model is truly learning the task in-context, it should generalize when label words are replaced with random English words, e.g., \texttt{entailment} and \texttt{not\_entailment} are replaced with \texttt{apple} and \texttt{orange}, thus not giving any hints about the task.\n", "In this context, we run experiments where each label word 
is replaced with a random word sampled from 61,569 common English words.\footnote{\n", "\href{https://pypi.org/project/english-words}{\nolinkurl{pypi.org/project/english-words}}.}\n", "\n", "We use five seeds for sampling random words, and report the average and the worst-case performance. \n", "\n", "Results in Table~\ref{tab:ablate_labels} show that raw LMs (the first block of the table) and models trained on the original data (the second block) achieve performance near random guessing. This indicates that having semantic hints from label words is a necessary condition for all models to perform the task. \n", "\n", "Next, we meta-train the MT 0-shot baseline and \ours\ where, for each iteration of meta-training, we similarly map label words to random words. The mapping from the label set to sampled English words is independent for each iteration, so that the model never sees the same mapping during meta-training and hence does not overfit to a specific mapping.\n", "Results are reported in the third block of Table~\ref{tab:ablate_labels}.\n", "MT 0-shot baselines are still not better than random guessing, which is expected as they have no way to grasp the meaning of each label.\n", "On the other hand, \ours\ benefits from training on the replaced data, improving performance on {\em Replaced} test labels from 30.1\% to 43.5\% on average.\n", "\n", "Still, overall performance is relatively poor.\n", "We think future work should investigate models that can in-context learn {\em any} task.\n", "\n", "\begin{table*}[!t]\n", " \centering \scriptsize\n", " \n", " \begin{tabular}{p{\textwidth}}\n", " \toprule\n", " Setting: \main\ Meta-train \\\n", " piqa, hate\_speech\_offensive, google\_wellformed\_query, social\_i\_qa, circa, quoref, glue-sst2, scitail, emo, cosmos\_qa, freebase\_qa, ag\_news, art, paws,\n", " kilt\_ay2, glue-qnli, quail, ade\_corpus\_v2-classification, sciq, hatexplain, emotion, glue-qqp, kilt\_fever, kilt\_nq, dbpedia\_14, kilt\_zsre, hellaswag, 
squad-with\\_context,\n", " hotpot\\_qa, glue-mnli, ropes, squad-no\\_context, kilt\\_hotpotqa, discovery, superglue-record, race-middle, race-high, lama-trex, swag, gigaword, amazon\\_polarity,\n", " biomrc, tab\\_fact, tweet\\_eval-emoji, tweet\\_eval-offensive, tweet\\_eval-sentiment, tweet\\_qa, imdb, lama-conceptnet, liar, anli, wiki\\_qa, kilt\\_trex, wikisql, wino\\_grande,\n", " wiqa, search\\_qa, xsum, yahoo\\_answers\\_topics, yelp\\_polarity, yelp\\_review\\_full \\\\\n", " \\midrule\n", " Setting: \\main\\ Target \\\\\n", " quarel, \\textbf{financial\\_phrasebank}, openbookqa, codah, qasc, glue-mrpc, dream, sick, commonsense\\_qa, \\textbf{medical\\_questions\\_pairs}, quartz-with\\_knowledge,\n", " \\textbf{poem\\_sentiment}, quartz-no\\_knowledge, glue-wnli, \\textbf{climate\\_fever}, ethos-national\\_origin, ethos-race, ethos-religion, ai2\\_arc, hate\\_speech18,\n", " glue-rte, superglue-cb, superglue-copa, tweet\\_eval-hate, tweet\\_eval-stance\\_atheism, tweet\\_eval-stance\\_feminist \\\\\n", " \\midrule\n", " Setting: Classification Meta-train \\\\\n", " Meta-Train: superglue-rte, tweet\\_eval-sentiment, discovery, glue-rte, superglue-wsc, glue-mrpc, tweet\\_eval-stance\\_hillary, tweet\\_eval-offensive,\n", " emotion, hatexplain, glue-cola, sick, paws, ethos-sexual\\_orientation, glue-qqp, tweet\\_eval-emotion, sms\\_spam, health\\_fact, glue-mnli, imdb, ethos-disability,\n", " glue-wnli, scitail, trec-finegrained, yahoo\\_answers\\_topics, liar, glue-sst2, tweet\\_eval-stance\\_abortion, circa, tweet\\_eval-stance\\_climate, glue-qnli, tweet\\_eval-emoji,\n", " ethos-directed\\_vs\\_generalized, ade\\_corpus\\_v2-classification, hate\\_speech\\_offensive, superglue-wic, google\\_wellformed\\_query, tweet\\_eval-irony,\n", " ethos-gender, onestop\\_english, trec, rotten\\_tomatoes, kilt\\_fever \\\\\n", " \\midrule\n", " Setting: Non-Classification Meta-train \\\\\n", " ade\\_corpus\\_v2-dosage, art, biomrc, blimp-anaphor\\_number\\_agreement, 
blimp-ellipsis\\_n\\_bar\\_2, blimp-sentential\\_negation\\_npi\\_licensor\\_present,\n", " blimp-sentential\\_negation\\_npi\\_scope, commonsense\\_qa, crows\\_pairs, dream, freebase\\_qa, gigaword, hellaswag, hotpot\\_qa, kilt\\_ay2, kilt\\_hotpotqa, kilt\\_trex,\n", " kilt\\_zsre, lama-conceptnet, lama-google\\_re, lama-squad, numer\\_sense, openbookqa, piqa, proto\\_qa, qa\\_srl, quarel, quartz-no\\_knowledge, race-high, ropes, sciq,\n", " social\\_i\\_qa, spider, superglue-multirc, wikisql, xsum, yelp\\_review\\_full\n", " \\\\\n", " \\midrule\n", " Setting: Classification Target \\\\\n", " tweet\\_eval-stance\\_feminist, ethos-national\\_origin, tweet\\_eval-hate, ag\\_news, amazon\\_polarity, hate\\_speech18, \\textbf{poem\\_sentiment}, \\textbf{climate\\_fever},\n", " \\textbf{medical\\_questions\\_pairs}, tweet\\_eval-stance\\_atheism, superglue-cb, dbpedia\\_14, wiki\\_qa, emo, yelp\\_polarity, ethos-religion, \\textbf{financial\\_phrasebank},\n", " tab\\_fact, anli, ethos-race \\\\\n", " \\midrule\n", " Setting: QA Meta-train \\\\\n", " biomrc, boolq, freebase\\_qa, hotpot\\_qa, kilt\\_hotpotqa, kilt\\_nq, kilt\\_trex, kilt\\_zsre, lama-conceptnet, lama-google\\_re, lama-squad, lama-trex, mc\\_taco, numer\\_sense, quoref, ropes, search\\_qa, squad-no\\_context, squad-with\\_context, superglue-multirc, superglue-record, tweet\\_qa, web\\_questions, unifiedqa:squad2, unifiedqa:natural\\_questions\\_with\\_dpr\\_para, unifiedqa:race\\_string, unifiedqa:squad1\\_1, unifiedqa:drop, unifiedqa:newsqa, unifiedqa:narrativeqa, unifiedqa:winogrande\\_xl, unifiedqa:social\\_iqa, unifiedqa:quoref, unifiedqa:physical\\_iqa, unifiedqa:ropes, unifiedqa:commonsenseqa, unifiedqa:boolq \\\\\n", " \\midrule\n", " Setting: Non-QA Meta-train \\\\\n", " hate\\_speech\\_offensive, google\\_wellformed\\_query, circa, glue-sst2, scitail, emo, ag\\_news, art, paws, kilt\\_ay2, glue-qnli, ade\\_corpus\\_v2-classification, hatexplain, emotion, glue-qqp, kilt\\_fever, dbpedia\\_14, 
glue-mnli, discovery, gigaword, amazon\\_polarity, tab\\_fact, tweet\\_eval-emoji, tweet\\_eval-offensive, tweet\\_eval-sentiment, imdb, liar, anli, wikisql, xsum, yahoo\\_answers\\_topics, yelp\\_polarity, yelp\\_review\\_full\\\\ \n", " \\midrule\n", " Setting: QA Target \\\\\n", " ai2\\_arc, codah, cosmos\\_qa, dream, hellaswag, openbookqa, qasc, quail, quarel, quartz-no\\_knowledge, quartz-with\\_knowledge, sciq, superglue-copa, swag, wino\\_grande, wiqa, unifiedqa:qasc, unifiedqa:qasc\\_with\\_ir, unifiedqa:openbookqa, unifiedqa:openbookqa\\_with\\_ir, \\textbf{unifiedqa:mctest}, unifiedqa:ai2\\_science\\_middle\\\\\n", " \\midrule\n", " Setting: Non-NLI Meta-train \\\\\n", " ade\\_corpus\\_v2-classification, ag\\_news, amazon\\_polarity, circa, climate\\_fever, dbpedia\\_14, discovery, emo, emotion, ethos-directed\\_vs\\_generalized,\n", " ethos-disability, ethos-gender, ethos-national\\_origin, ethos-race, ethos-religion, ethos-sexual\\_orientation, financial\\_phrasebank, glue-cola, glue-mrpc,\n", " glue-qqp, glue-sst2, google\\_wellformed\\_query, hate\\_speech18, hate\\_speech\\_offensive, hatexplain, health\\_fact, imdb, kilt\\_fever, liar, \\\\\n", " medical\\_questions\\_pairs, onestop\\_english, paws, poem\\_sentiment, rotten\\_tomatoes, sick, sms\\_spam, superglue-wic, superglue-wsc, tab\\_fact,\n", " trec, trec-finegrained, tweet\\_eval-emoji, tweet\\_eval-emotion, tweet\\_eval-hate, tweet\\_eval-irony, tweet\\_eval-offensive, tweet\\_eval-sentiment,\n", " tweet\\_eval-stance\\_abortion, tweet\\_eval-stance\\_atheism, tweet\\_eval-stance\\_climate, tweet\\_eval-stance\\_feminist, tweet\\_eval-stance\\_hillary, wiki\\_qa, yahoo\\_answers\\_topics, yelp\\_polarity\n", " \\\\\n", " Setting: NLI Target \\\\\n", " anli, glue-mnli, glue-qnli, glue-rte, glue-wnli, \\textbf{scitail}, sick, superglue-cb\n", " \\\\\n", " \\midrule\n", " Setting: Non-Paraphrase Meta-train \\\\\n", " ade\\_corpus\\_v2-classification, ag\\_news, amazon\\_polarity, anli, circa, 
climate\\_fever, dbpedia\\_14, discovery, emo, emotion, ethos-directed\\_vs\\_generalized,\n", " ethos-disability, ethos-gender, ethos-national\\_origin, ethos-race, ethos-religion, ethos-sexual\\_orientation, financial\\_phrasebank, glue-cola, glue-mnli, glue-qnli, \n", " glue-rte, glue-sst2, glue-wnli, google\\_wellformed\\_query, hate\\_speech18, hate\\_speech\\_offensive, hatexplain, health\\_fact, imdb, kilt\\_fever, liar, onestop\\_english, \n", " poem\\_sentiment, rotten\\_tomatoes, scitail, sick, sms\\_spam, superglue-cb, superglue-rte, superglue-wic, superglue-wsc, tab\\_fact, trec, trec-finegrained, tweet\\_eval-emoji, \n", " tweet\\_eval-emotion, tweet\\_eval-hate, tweet\\_eval-irony, tweet\\_eval-offensive, tweet\\_eval-sentiment, tweet\\_eval-stance\\_abortion, tweet\\_eval-stance\\_atheism, \n", " tweet\\_eval-stance\\_climate, tweet\\_eval-stance\\_feminist, tweet\\_eval-stance\\_hillary, wiki\\_qa, yahoo\\_answers\\_topics, yelp\\_polarity\n", " \\\\\n", " \\midrule\n", " Setting: Non-Paraphrase Target \\\\\n", " glue-mrpc, glue-qqp, \\textbf{medical\\_questions\\_pairs}, paws\n", " \\\\\n", " \\midrule\n", " Setting: \\main\\ Diverse Meta-train \\\\\n", " glue-mnli, glue-qqp, glue-sst2, hate\\_speech\\_offensive, kilt\\_hotpotqa, kilt\\_zsre, lama-trex, race-high, scitail, tweet\\_eval-offensive, wino\\_grande, yahoo\\_answers\\_topics, yelp\\_review\\_full \\\\\n", " \\midrule\n", " Setting: \\main\\ No Diverse Meta-train \\\\\n", " ag\\_news, amazon\\_polarity, dbpedia\\_14, emo, emotion, glue-sst2, imdb, tweet\\_eval-emoji, tweet\\_eval-offensive, tweet\\_eval-sentiment, yahoo\\_answers\\_topics, yelp\\_polarity, yelp\\_review\\_full \\\\\n", " \\midrule\n", " Setting: \\main\\ Instructions Meta-train \\\\\n", " ag\\_news, amazon\\_polarity, anli, art, circa, cosmos\\_qa, dbpedia\\_14, discovery, emo, emotion, freebase\\_qa, gigaword, google\\_wellformed\\_query, hellaswag, imdb, liar, paws, piqa, quail, quoref, ropes, sciq, scitail, 
social\\_i\\_qa, swag, tab\\_fact, wiki\\_qa, wiqa, xsum, yahoo\\_answers\\_topics, yelp\\_polarity, yelp\\_review\\_full \\\\\n", " \\midrule\n", " Setting: \\main\\ Instructions Target \\\\\n", " ai2\\_arc, \\textbf{climate\\_fever}, codah, commonsense\\_qa, dream, \\textbf{financial\\_phrasebank}, \\textbf{medical\\_questions\\_pairs}, openbookqa, \\textbf{poem\\_sentiment}, qasc, quarel, sick \\\\\n", " \\bottomrule\n", " \\end{tabular}\n", " \\caption{Full datasets for all settings.\n", " The first 10 rows are the main settings described in Section~\\ref{subsec:dataset}; the last four rows are settings used for ablations in Section~\\ref{subsec:ablations}.\n", " Splits and dataset names are consistent with those in \\citet{ye2021crossfit} and \\citet{khashabi2020unifiedqa}.\n", " \\textbf{Bold} indicates a test dataset whose domain does not overlap with any meta-training task. A prefix \\texttt{unifiedqa:} indicates that the dataset is taken from \\textsc{UnifiedQA}; otherwise, it is from \\textsc{CrossFit}.\n", " References for all datasets are provided in Table~\\ref{tab:full-datasets-citations}.\n", " }\\label{tab:full-datasets}\n", "\\end{table*}\n", "\n", "\n", "\\begin{table*}[t]\n", " \\centering \\footnotesize\n", " \n", " \\begin{tabular}{p{\\textwidth}}\n", " \\toprule\n", " ade\\_corpus\\_v2-classification~\\citep{GURULINGAPPA2012885}, ade\\_corpus\\_v2-dosage~\\citep{GURULINGAPPA2012885}, ag\\_news~\\href{http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html}{Gulli (link)}, ai2\\_arc~\\citep{Clark2018ThinkYH}, amazon\\_polarity~\\citep{McAuley2013HiddenFA}, anli~\\citep{nie-etal-2020-adversarial}, art~\\citep{bhagavatula2020abductive}, biomrc~\\citep{pappas-etal-2020-biomrc}, blimp-anaphor\\_number\\_agreement~\\citep{warstadt2019blimp}, blimp-ellipsis\\_n\\_bar\\_2~\\citep{warstadt2019blimp}, blimp-sentential\\_negation\\_npi\\_licensor\\_present~\\citep{warstadt2019blimp}, blimp-sentential\\_negation\\_npi\\_scope~\\citep{warstadt2019blimp}, 
boolq~\\citep{clark-etal-2019-boolq}, circa~\\citep{louis-etal-2020-id}, climate\\_fever~\\citep{Diggelmann2020CLIMATEFEVERAD}, codah~\\citep{chen-etal-2019-codah}, commonsense\\_qa~\\citep{talmor-etal-2019-commonsenseqa}, cosmos\\_qa~\\citep{huang-etal-2019-cosmos}, crows\\_pairs~\\citep{nangia-etal-2020-crows}, dbpedia\\_14~\\citep{Lehmann2015DBpediaA}, discovery~\\citep{sileo-etal-2019-mining}, dream~\\citep{sun-etal-2019-dream}, emo~\\citep{chatterjee-etal-2019-semeval}, emotion~\\citep{saravia-etal-2018-carer}, ethos-directed\\_vs\\_generalized~\\citep{Mollas2020ETHOSAO}, ethos-disability~\\citep{Mollas2020ETHOSAO}, ethos-gender~\\citep{Mollas2020ETHOSAO}, ethos-national\\_origin~\\citep{Mollas2020ETHOSAO}, ethos-race~\\citep{Mollas2020ETHOSAO}, ethos-religion~\\citep{Mollas2020ETHOSAO}, ethos-sexual\\_orientation~\\citep{Mollas2020ETHOSAO}, financial\\_phrasebank~\\citep{financial-phrasebank}, freebase\\_qa~\\citep{jiang-etal-2019-freebaseqa}, gigaword~\\citep{napoles-etal-2012-annotated}, glue-cola~\\citep{warstadt-etal-2019-neural}, glue-mnli~\\citep{williams-etal-2018-broad}, glue-mrpc~\\citep{dolan-brockett-2005-automatically}, glue-qnli~\\citep{rajpurkar-etal-2016-squad}, glue-qqp~(\\url{data.quora.com/First-Quora-Dataset-Release-Question-Pairs}), glue-rte~\\begin{tabular}[c]{@{}l@{}}\\citep{dagan2005pascal, bar2006second}\\citep{giampiccolo2007third, bentivogli2009fifth}\\end{tabular}, glue-sst2~\\citep{socher-etal-2013-recursive}, glue-wnli~\\citep{levesque2012winograd}, google\\_wellformed\\_query~\\citep{faruqui-das-2018-identifying}, hate\\_speech18~\\citep{gibert2018hate}, hate\\_speech\\_offensive~\\citep{hateoffensive}, hatexplain~\\citep{mathew2020hatexplain}, health\\_fact~\\citep{kotonya-toni-2020-explainable-automated}, hellaswag~\\citep{zellers-etal-2019-hellaswag}, hotpot\\_qa~\\citep{yang-etal-2018-hotpotqa}, imdb~\\citep{maas-etal-2011-learning}, kilt\\_ay2~\\citep{hoffart-etal-2011-robust}, kilt\\_fever~\\citep{thorne-etal-2018-fever}, 
kilt\\_hotpotqa~\\citep{yang-etal-2018-hotpotqa}, kilt\\_nq~\\citep{kwiatkowski-etal-2019-natural}, kilt\\_trex~\\citep{elsahar-etal-2018-rex}, kilt\\_zsre~\\citep{levy-etal-2017-zero}, lama-conceptnet~\\citep{petroni-etal-2019-language,petroni2020how}, lama-google\\_re~\\citep{petroni-etal-2019-language,petroni2020how}, lama-squad~\\citep{petroni-etal-2019-language,petroni2020how}, lama-trex~\\citep{petroni-etal-2019-language,petroni2020how}, liar~\\citep{wang-2017-liar}, mc\\_taco~\\citep{zhou-etal-2019-going}, medical\\_questions\\_pairs~\\citep{medical-qqp}, numer\\_sense~\\citep{lin-etal-2020-birds}, onestop\\_english~\\citep{vajjala-lucic-2018-onestopenglish}, openbookqa~\\citep{mihaylov-etal-2018-suit}, paws~\\citep{zhang-etal-2019-paws}, piqa~\\citep{bisk2019piqa}, poem\\_sentiment~\\citep{sheng-uthus-2020-investigating}, proto\\_qa~\\citep{boratko-etal-2020-protoqa}, qa\\_srl~\\citep{he-etal-2015-question}, qasc~\\citep{Khot_Clark_Guerquin_Jansen_Sabharwal_2020}, quail~\\citep{Rogers_Kovaleva_Downey_Rumshisky_2020}, quarel~\\citep{Tafjord_Clark_Gardner_Yih_Sabharwal_2019}, quartz-no\\_knowledge~\\citep{tafjord-etal-2019-quartz}, quartz-with\\_knowledge~\\citep{tafjord-etal-2019-quartz}, quoref~\\citep{dasigi-etal-2019-quoref}, race-high~\\citep{lai-etal-2017-race}, race-middle~\\citep{lai-etal-2017-race}, ropes~\\citep{lin-etal-2019-reasoning}, rotten\\_tomatoes~\\citep{pang-lee-2005-seeing}, sciq~\\citep{welbl-etal-2017-crowdsourcing}, scitail~\\citep{scitail}, search\\_qa~\\citep{Dunn2017SearchQAAN}, sick~\\citep{marelli-etal-2014-sick}, sms\\_spam~\\citep{sms_spam}, social\\_i\\_qa~\\citep{sap-etal-2019-social}, spider~\\citep{yu-etal-2018-spider}, squad-no\\_context~\\citep{rajpurkar-etal-2016-squad}, squad-with\\_context~\\citep{rajpurkar-etal-2016-squad}, superglue-cb~\\citep{Marneffe_Simons_Tonhauser_2019}, superglue-copa~\\citep{gordon-etal-2012-semeval}, superglue-multirc~\\citep{khashabi-etal-2018-looking}, 
superglue-record~\\citep{Zhang2018ReCoRDBT}, superglue-rte~\\begin{tabular}[c]{@{}l@{}}\\citep{dagan2005pascal, bar2006second}\\citep{giampiccolo2007third, bentivogli2009fifth}\\end{tabular}, superglue-wic~\\citep{pilehvar-camacho-collados-2019-wic}, superglue-wsc~\\citep{levesque2012winograd}, swag~\\citep{zellers-etal-2018-swag}, tab\\_fact~\\citep{Chen2020TabFact}, trec~\\citep{li-roth-2002-learning,hovy-etal-2001-toward}, trec-finegrained~\\citep{li-roth-2002-learning,hovy-etal-2001-toward}, tweet\\_eval-emoji~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-emotion~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-hate~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-irony~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-offensive~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-sentiment~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-stance\\_abortion~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-stance\\_atheism~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-stance\\_climate~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-stance\\_feminist~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_eval-stance\\_hillary~\\citep{barbieri-etal-2020-tweeteval}, tweet\\_qa~\\citep{xiong-etal-2019-tweetqa}, unifiedqa:ai2\\_science\\_middle~(\\url{data.allenai.org/ai2-science-questions}), unifiedqa:boolq~\\citep{clark-etal-2019-boolq}, unifiedqa:commonsenseqa~\\citep{talmor-etal-2019-commonsenseqa}, unifiedqa:drop~\\citep{dua-etal-2019-drop}, unifiedqa:mctest~\\citep{richardson-etal-2013-mctest}, unifiedqa:narrativeqa~\\citep{kocisky-etal-2018-narrativeqa}, unifiedqa:natural\\_questions~\\citep{kwiatkowski-etal-2019-natural}, unifiedqa:newsqa~\\citep{trischler-etal-2017-newsqa}, unifiedqa:openbookqa~\\citep{mihaylov-etal-2018-suit}, unifiedqa:physical\\_iqa~\\citep{bisk2019piqa}, unifiedqa:qasc~\\citep{khot2019qasc}, unifiedqa:quoref~\\citep{dasigi-etal-2019-quoref}, unifiedqa:race\\_string~\\citep{lai-etal-2017-race}, 
unifiedqa:ropes~\\citep{lin-etal-2019-reasoning}, unifiedqa:social\\_iqa~\\citep{sap2019socialiqa}, unifiedqa:squad1\\_1~\\citep{rajpurkar-etal-2016-squad}, unifiedqa:squad2~\\citep{rajpurkar-etal-2018-know}, unifiedqa:winogrande\\_xl~\\citep{sakaguchi2019winogrande}, web\\_questions~\\citep{berant-etal-2013-semantic}, wiki\\_qa~\\citep{yang-etal-2015-wikiqa}, wikisql~\\citep{zhongSeq2SQL2017}, wino\\_grande~\\citep{Sakaguchi_Le_Bras_Bhagavatula_Choi_2020}, wiqa~\\citep{tandon-etal-2019-wiqa}, xsum~\\citep{narayan-etal-2018-dont}, yahoo\\_answers\\_topics~\\href{https://webscope.sandbox.yahoo.com/catalog.php?datatype=l}{(link)}, yelp\\_polarity~\\citep{zhang2015character}, yelp\\_review\\_full~\\citep{zhang2015character} \\\\\n", " \\bottomrule\n", " \\end{tabular}\n", " \\caption{\n", " References for the 142 datasets used in the paper.\n", " A prefix \\texttt{unifiedqa:} indicates that the dataset is taken from \\textsc{UnifiedQA}; otherwise, it is from \\textsc{CrossFit}.\n", " }\\label{tab:full-datasets-citations}\n", "\\end{table*}\n", "\n", "\\section{Potential Risks}\n", "\\ours\\ is built on a large language model pretrained on a web corpus, which may contain harmful and biased content despite the original authors' best efforts to curate the text.\n", "There are also potential privacy and security risks---for instance, \\citet{carlini2021extracting} showed that it is possible to design attacks that extract a substantial amount of training data.\n", "We thus emphasize that \\ours\\ should be considered a research prototype rather than a system deployable to real users, and that continued effort is needed to reduce the model's potential risks.\n", "\n", "\\end{document}\n