{ "id": "2212.10511", "annotator": "jianxie", "input": [ "\\pdfoutput=1\n", "\\documentclass[11pt]{article}\n", "\\usepackage[]{acl}\n", "\\usepackage{times}\n", "\\usepackage{latexsym}\n", "\\usepackage{graphicx}\n", "\\usepackage[T1]{fontenc}\n", "\\usepackage[utf8]{inputenc}\n", "\\usepackage{microtype}\n", "\\usepackage{times}\n", "\\usepackage{latexsym}\n", "\\usepackage{tcolorbox}\n", "\\usepackage{multirow}\n", "\\usepackage{tikz}\n", "\\usepackage{listings}\n", "\\usepackage{capt-of}\n", "\\usepackage{graphicx} \n", "\\usepackage{pgfplots}\n", "\\usepackage{overpic}\n", "\\pgfplotsset{compat=1.12}\n", "\\usepackage{amsmath}\n", "\\usepackage{multicol}\n", "\\usepackage{booktabs}\n", "\\usepackage{colortbl,array,xcolor}\n", "\\usepackage{enumitem}\n", "\\usepackage{xspace}\n", "\\usepackage{caption}\n", "\\usepackage{subcaption}\n", "\\usepackage{graphicx}\n", "\\usepackage{amsfonts}\n", "\\usepackage{booktabs}\n", "\\usepackage{xcolor}\n", "\\usepackage{soul}\n", "\\usepackage{graphicx}\n", "\\usepackage{xspace}\n", "\\usepackage{multirow} \n", "\\usepackage{amsmath}\n", "\\newcommand{\\nar}{NAR\\xspace}\n", "\\newcommand{\\ar}{AR\\xspace}\n", "\\newcommand{\\rib}{BERRI\\xspace}\n", "\\newcommand{\\model}{TART\\xspace}\n", "\\newcommand{\\entityquestions}{\\textsc{EntityQuestions}}\n", "\\newcommand{\\ours}{\\textsc{PopQA}}\n", "\\DeclareMathOperator*{\\argmin}{argmin} \n", "\\DeclareMathOperator*{\\argmax}{argmax} \n", "\\DeclareMathOperator*{\\argandmax}{(arg)max} \n", "\\DeclareMathOperator*{\\topk}{topk} \n", "\\usetikzlibrary{intersections} \n", "\\DeclareRobustCommand{\\hlg}[1]{{\\sethlcolor{lightgray}\\hl{#1}}}\n", "\\DeclareRobustCommand{\\hlp}[1]{{\\sethlcolor{magenta}\\hl{#1}}}\n", "\\newcommand{\\Sref}[1]{Section~\\ref{#1}}\n", "\\newcommand{\\Tref}[1]{Table~\\ref{#1}}\n", "\\newcommand{\\Fref}[1]{Figure~\\ref{#1}}\n", "\\newcommand{\\Eref}[1]{Eq.~\\ref{#1}}\n", "\\newcommand{\\subscript}[2]{$#1 _ #2$}\n", "\\newcommand{\\hlc}[2][yellow]{{\n", " \\colorlet{foo}{#1}\n", " \\sethlcolor{foo}\\hl{#2}}\n", "}\n", "\\definecolor{Gray}{gray}{0.85}\n", "\\definecolor{LightCyan}{rgb}{0.88,1,1}\n", "\\newcolumntype{a}{>{\\columncolor{Gray}}c}\n", "\\newcolumntype{b}{>{\\columncolor{LightCyan}}c}\n", "\\usetikzlibrary{shapes.geometric}\n", "\\newcommand{\\smax}{\\text{$\\text{S}_{\\text{max}}$}\\xspace}\n", "\\newcommand{\\smin}{\\text{$\\text{S}_{\\text{1}}$}\\xspace}\n", "\\newcommand{\\changed}[1]{{\\color{red}#1}}\n", "\\newcommand{\\draftonly}[1]{#1}\n", "\\newcommand{\\draftcomment}[3]{\\draftonly{{\\textcolor{#3}{[\\textbf{#1--\\textsc{#2}}]}}}}\n", "\\newcommand{\\mr}[1]{\\draftcomment{\\small #1}{MR}{magenta}}\n", "\\newcommand{\\akari}[1]{\\draftcomment{\\small #1}{akari}{purple}}\n", "\\newcommand{\\victor}[1]{\\draftcomment{\\small #1}{victor}{red}}\n", "\\newcommand{\\daniel}[1]{\\draftcomment{\\small #1}{daniel}{cyan}}\n", "\\newcommand{\\timo}[1]{\\draftcomment{\\small #1}{timo}{purple}}\n", "\\newcommand{\\alex}[1]{\\draftcomment{\\small #1}{alex}{blue}}\n", "\\newcommand{\\raj}[1]{\\draftcomment{\\small #1}{raj}{red}}\n", "\\usepackage{pifont}\n", "\\newcommand{\\xmark}{\\ding{55}}\n", "\\lstset{\n", " language=python,\n", " basicstyle=\\small,\n", " breaklines=true\n", " }\n", "\\usepackage{algorithm}\n", "\\usepackage{algpseudocode}\n", "\\title{\\vspace*{-0.5in}\n", "{{\\small \\hfill ACL 2023}\\\\\n", "\\vspace*{.25in}}\n", "\\renewcommand\\ttdefault{cmtt}\n", "\\author{\\parbox{0.9\\linewidth}{\n", "\\centering{Alex Mallen$^{*\\diamondsuit}$ ~~~ Akari 
Asai$^{*\\diamondsuit}$ ~~~ Victor Zhong$^{\\diamondsuit}$ ~~~ Rajarshi Das$^{\\diamondsuit}$ \\\\\n", "Daniel Khashabi$^\\spadesuit$ ~~~ Hannaneh Hajishirzi$^{\\diamondsuit\\heartsuit}$ \n", "} \\\\\n", "{\\rm $^\\diamondsuit$University of Washington~~$^\\spadesuit$Johns Hopkins University\\\\$^\\heartsuit$Allen Institute for AI} \\\\\n", "\\texttt{\\{atmallen,akari,vzhong,rajarshi,hannaneh\\}@cs.washington.edu} \\\\\n", "\\texttt{danielk@jhu.edu} \\\\\n", "}\n", "}\n", "\\begin{document}\n", "\\maketitle\n", "\\begin{abstract}\n", "Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world knowledge in their parameters. \n", "This paper aims to understand LMs' strengths and limitations in memorizing factual knowledge, by conducting large-scale knowledge probing experiments on two open-domain entity-centric QA datasets: \\ours, our new dataset with 14k questions about long-tail entities, and EntityQuestions, a widely used open-domain QA dataset.\n", "\\end{abstract}\n", "\\section{Introduction}\n", "Large language models (LMs; ~\\citealt{brown2020language,raffel2020exploring}) have been shown to be competitive on diverse NLP tasks, including knowledge-intensive tasks that require fine-grained memorization of factual knowledge~\\cite{chowdhery2022palm,yu2022generate}. \n", "Meanwhile, LMs have also been shown to have limited memorization for less frequent entities~\\cite{llm_longtail}, \n", "are prone to hallucinations~\\cite{shuster2021retrieval}, and suffer from temporal degradation~\\cite{kasai2022realtime,jang2022temporalwiki}.\n", "Understanding when we should \\emph{not} trust LMs' outputs is also crucial to safely deploying them in real-world applications~\\cite{kadavath2022language}.\n", "\\begin{figure}[t!]\n", "\\includegraphics[width=7.6cm, trim=0cm 0.6cm 0cm 0.2cm]{figures/teaser.pdf}\\caption{\n", "} \\label{fig:teaser}\n", "\\end{figure}\n", "In particular, we aim to address the following research questions: \n", "\\vspace{-0.2cm}\n", "\\begin{enumerate}[label=(\\subscript{RQ}{{\\arabic*}}),leftmargin=35pt]\n", "\\itemsep-0.4em \n", " \\item \\label{Q1} \n", " (Section~\\ref{sec:analysis_non_parametric})\n", "\\end{enumerate}\n", "\\vspace{-0.2cm}\n", "{\n", "We construct a new dataset, \\ours, consisting of 14k questions to cover factual information in the long tail that might have been missed in popular QA datasets~\\cite{kwiatkowski-etal-2019-natural}. \n", "{We also use EntityQuestions~\\cite{sciavolino2021simple}, an open-domain QA dataset with a long-tail distribution. \n", "}\n", "On both datasets, LMs' memorization {\\ref{Q1}} is often limited to the popular factual knowledge and even GPT-3 \\texttt{davinci-003} fails to answer the majority of the long-tail questions. \n", "{This also suggests that we can predict if LMs memorize certain knowledge based on the information presented in the input question only. 
}\n", "{\\ref{Q3}}.\n", "This method further improves performance on \\ours\\ by up to {10\\\n", "{\n", "indicating the potential for future research in more efficient and powerful retrieval-augmented LMs.\n", "}\n", "\\section{Related Work}\n", "\\paragraph{Parametric and non-parametric knowledge.}\n", "{\\citet{petroni2019language} demonstrate that large pre-trained LMs such as BERT~\\cite{devlin2018bert} memorize the significant amount of world knowledge in their parameters ({\\it parametric knowledge}), and \\citet{roberts2020much} show that fine-tuned T5 without any reference documents (closed-book QA) can achieve competitive performance on open-domain QA. \n", "More recent and powerful LMs~\\cite{brown2020language,chowdhery2022palm} further improve performance on diverse knowledge-intensive tasks, leveraging their strong parametric memories~\\cite{llm_longtail,yu2022generate}.}\n", "However, relying solely on their parameters to encode a wealth of world knowledge requires a prohibitively large number of parameters and the knowledge can become obsolete quickly~\\cite{kasai2022realtime,jang2022temporalwiki}. \n", "}\n", "\\paragraph{Understanding memorization. }\n", "Several prior work establishes a positive relationship between string frequency in pre-training corpora and memorization~\\cite{Carlini2022QuantifyingMA,Razeghi2022ImpactOP}.\n", "Concurrent to our work, ~\\citet{llm_longtail} show that the co-occurrence of the question and answer entities in pretraining corpora has a positive correlation with models' QA accuracy on popular open-domain QA benchmarks such as Natural Questions~\\cite{kwiatkowski-etal-2019-natural}.\n", "This work, instead, attempts to predict memorization using the variables available in the input question only and uses popularity to obtain a proxy for how frequently an entity is likely to be discussed on the web.\n", "}\n", "Prior work investigates the effectiveness of deciding when to use non-parametric memories at the token level in $k$NN LM~\\cite{he-etal-2021-efficient,drozdov-etal-2022-cant}. \n", "\\if0{\n", "\\paragraph{LM Predictions Calibration.}\n", "Predicting the reliability of models' outputs is an outstanding issue in QA~\\cite{rajpurkar2018know,asai2020challenges}, LM memorization~\\cite{10.1162/tacl_a_00324}, and the wider NLP and machine learning community. \n", "Despite its impressive performance on wider tasks, large LMs often hallucinate while their outputs often look plausible, making it difficult to understand when their outputs are unreliable~\\cite{liu2021token}. \n", "Recently, \\citet{kadavath2022language} show that large LMs can predict when they do not know the answers to some extent, especially after fine-tuning on in-domain data. \n", "}\\fi\n", "\\begin{figure}[t!]\n", " \\centering\n", "\\includegraphics[width=0.95\\linewidth,keepaspectratio]{figures/figure2_final.pdf}\n", " \\caption{\\ours~is created by sampling knowledge triples from Wikidata and converting them to natural language questions, followed by popularity calculation. \n", " }\n", " \\label{fig:background}\n", "\\end{figure}\n", "\\section{Evaluation Setup}\n", "\\label{sec:dataset}\n", "{\n", "We evaluate LMs' ability to memorize factual knowledge through closed-book QA tasks with few-shot samples. 
\n", "We evaluate LMs on our new dataset, \\ours~(Figure~\\ref{fig:background}), and EntityQuestions, both of which have long-tail distributions (Figure~\\ref{fig:pop_distr}).\n", "}\n", "\\begin{figure}[ht!]\n", " \\centering\n", " \\includegraphics[width=0.9\\linewidth]{figures/popularity_distributions.pdf}\n", " \\caption{Distribution of subject entity popularity for EntityQuestions, \\ours,\\ and for NQ-open for reference. Details on NQ entities can be found in Appendix~\\ref{app_sec:dataset}.\n", " }\n", " \\label{fig:pop_distr}\n", "\\end{figure}\n", "\\subsection{Focus and Task}\n", "\\noindent {\\bf Focus: factual knowledge. }\n", "Among diverse types of world knowledge, this work focuses on factual knowledge~\\cite{adams2015bloom} of entities---knowledge about specific details of the target entities. \n", "We define factual knowledge as a triplet of (\\hlc[pink]{subject}, \\hlc[cyan!20]{relationship}, \\hlc[green!20]{object}) as in Figure~\\ref{fig:background} left.\n", "\\if0{\n", "We consider a model to have memorized factual knowledge when it generates the object entity given a subject entity and a relationship type in the form of a natural language question, as shown in Figure~\\ref{fig:background}.\\akari{this sentence is somewhat duplicated with the metrics paragraph. } \n", "}\\fi\n", "\\vspace{.1cm}\n", "\\noindent {\\bf Task format: open-domain QA. } \n", "We formulate the task as open-domain QA~\\cite{roberts2020much}: given a question, a model predicts an answer without any pre-given ground-truth paragraph.\\footnote{Some work conducts knowledge probing of encoder-only models by filling out \\texttt{[MASK]} tokens~\\cite{petroni2019language}. \n", "We use decoder-only models and thus do not use this \\texttt{fill-in-the-blank} scheme.} \n", "{As in \\citet{llm_longtail}, we study few-shot settings and prompt LMs without any parameter updates, instead of fine-tuning them on QA datasets such as in \\citet{roberts2020much}. }\n", "\\vspace{.1cm}\n", "\\noindent {\\bf Metrics: accuracy.}\n", "We mark a prediction as correct if any substring of the prediction is an exact match of any of the gold answers.\n", "\\subsection{Dimensions of Analysis}\n", "We hypothesize that factual knowledge that is less frequently discussed on the web may not be well-memorized by LMs.\n", "Previous research often uses the term frequency of object entities in pretraining corpora to understand memorization~\\cite{fevry2020entities,llm_longtail, Razeghi2022ImpactOP}. \n", "{Instead, we investigate whether it's possible to predict memorization based on the input information only, and then apply the findings for modeling improvements, unlike prior analyses.}\n", "\\vspace{.1cm}\n", "\\noindent {\\bf Subject entity popularity.}\n", "We use the popularity of the entities measured by Wikipedia monthly page views as a proxy for how frequently the entities are likely to be discussed on the web, instead of using the occurrence of entities or strings in the pretraining corpus~\\cite{Carlini2022QuantifyingMA,llm_longtail,Razeghi2022ImpactOP}. \n", "{Calculating frequencies over large pretraining corpora requires massive computations to link entities over billions of tokens, or can result in noisy estimations.\\footnote{Moreover, several recent models like GPT-3 do not release their pretraining corpora, and it is an open question whether the frequencies in pretraining corpora reflect the frequencies in their private corpora. 
} }\n", "{Our initial studies show that this is much cheaper\\footnote{We can get page views by calling Wikipedia API. } and aligns well with our intuition.}\n", "\\vspace{.1cm}\n", "\\subsection{Benchmarks}\n", "\\vspace{.1cm}\n", "\\noindent {\\bf \\ours.}\n", "In our preliminary studies, we found that existing common open-domain QA datasets such as Natural Questions (NQ; \\citealt{kwiatkowski-etal-2019-natural}) are often dominated by subject entities with high popularity, and it is often hard to identify relationship types due to diverse question surface forms.\n", "To enable a fine-grained analysis of memorization based on the aforementioned analysis dimensions, we construct \\ours, a new large-scale entity-centric open-domain QA dataset about entities with a wide variety of popularity, as shown in Figure~\\ref{fig:pop_distr}.\n", "To construct \\ours, we randomly sample knowledge triples of 16 diverse relationship types from Wikidata and convert them into natural language questions, using a natural language template (depicted in Figure~\\ref{fig:background}). \n", "We verbalize a knowledge triple \\((S, R, O)\\) into a question that involves substituting the subject \\(S\\) into a template manually written for the relationship type \\(R\\). \n", "The full list of templates is found in Table~\\ref{tab:list_of_instructions_ours} of the Appendix. \n", "The set of acceptable answers to the question is the set of entities \\(E\\) such that \\((S, R, E)\\) exists in the knowledge graph. \n", "We tried various templates and found that the results were fairly robust to the templates. \n", "\\vspace{.1cm}\n", "\\noindent {\\bf EntityQuestions.}\n", "We test on another popular open-domain QA dataset, EntityQuestions~\\cite{sciavolino2021simple}, which also covers a long-tail entity distribution. \n", "They use Wikipedia hyperlink counts as a proxy of the frequency of entities and sample knowledge triples from WikiData, from the frequency distributions.\n", "Unlike \\ours, EntityQuestions doesn't provide entity annotations, so we only use 82\\\n", "\\label{sec:analysis_parametric}\n", "affect those memorization behaviors \\ref{Q1}. \n" ], "output": { "What experiments do you suggest doing?": [ "1. The potential factors that might influence the accuracy: The authors should provide LLMs' accuracy regarding other factors ( e.g., relationship type).", "2. scaling result with regard to LLMs' performance: The authors should provide an analysis about whether models' scale sizes would influence models' performance.", "3. Accuracy with different augmentation methods: The authors should provide detailed LLMs' performance with different augmentation methods.", "4. Retrieval-based method vs LM's parametric memory: The authors should provide a detailed analysis of traditional retrieval models' performance vs LM's parametric memory.", "5. Results of LLMs with the proposed method: The authors should provide the proportion of questions for which various models use retrieval in their proposed setup on the dataset." ], "Why do you suggest these experiments?": [ "1. To examine whether there are factors influencing the LLMs\u2018 performance.", "2. To examine whether LLMs' scale sizes would influence LLMs' performance.", "3. To show the efficiency of current augmented strategies on PopQA.", "4. To give an in-depth understanding of what influences the performance of retrieval-augmented LMs.", "5. To demonstrate the improvement of the author's proposed adaptive retrieval method." 
] }, "paper_info": { "title": "When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories", "authors": [ "Alex Mallen", "Akari Asai", "Victor Zhong", "Rajarshi Das", "Daniel Khashabi", "Hannaneh Hajishirzi" ], "abstract": "Despite their impressive performance on diverse tasks, large language models\n(LMs) still struggle with tasks requiring rich world knowledge, implying the\nlimitations of relying solely on their parameters to encode a wealth of world\nknowledge. This paper aims to understand LMs' strengths and limitations in\nmemorizing factual knowledge, by conducting large-scale knowledge probing\nexperiments of 10 models and 4 augmentation methods on PopQA, our new\nopen-domain QA dataset with 14k questions. We find that LMs struggle with less\npopular factual knowledge, and that scaling fails to appreciably improve\nmemorization of factual knowledge in the long tail. We then show that\nretrieval-augmented LMs largely outperform orders of magnitude larger LMs,\nwhile unassisted LMs remain competitive in questions about high-popularity\nentities. Based on those findings, we devise a simple, yet effective, method\nfor powerful and efficient retrieval-augmented LMs, which retrieves\nnon-parametric memories only when necessary. Experimental results show that\nthis significantly improves models' performance while reducing the inference\ncosts.", "comments": "ACL 2023; Code and data available at\n https://github.com/AlexTMallen/adaptive-retrieval" }, "raw_data": { "context_before_exp": [ "\n", "\\pdfoutput=1\n", "\n", "\n", "\\documentclass[11pt]{article}\n", "\n", "\n", "\\usepackage[]{acl}\n", "\n", "\n", "\\usepackage{times}\n", "\\usepackage{latexsym}\n", "\\usepackage{graphicx}\n", "\n", "\n", "\\usepackage[T1]{fontenc}\n", "\n", "\n", "\n", "\n", "\n", "\\usepackage[utf8]{inputenc}\n", "\n", "\n", "\n", "\n", "\\usepackage{microtype}\n", "\n", "\\usepackage{times}\n", "\\usepackage{latexsym}\n", "\\usepackage{tcolorbox}\n", "\\usepackage{multirow}\n", "\\usepackage{tikz}\n", "\\usepackage{listings}\n", "\\usepackage{capt-of}\n", "\\usepackage{graphicx} \n", "\\usepackage{pgfplots}\n", "\\usepackage{overpic}\n", "\\pgfplotsset{compat=1.12}\n", "\\usepackage{amsmath}\n", "\n", "\n", "\n", "\\usepackage{multicol}\n", "\\usepackage{booktabs}\n", "\n", "\\usepackage{colortbl,array,xcolor}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\usepackage{enumitem}\n", "\\usepackage{xspace}\n", "\\usepackage{caption}\n", "\\usepackage{subcaption}\n", "\\usepackage{graphicx}\n", "\\usepackage{amsfonts}\n", "\\usepackage{booktabs}\n", "\\usepackage{xcolor}\n", "\\usepackage{soul}\n", "\\usepackage{graphicx}\n", "\\usepackage{xspace}\n", "\\usepackage{multirow} \n", "\\usepackage{amsmath}\n", "\n", "\\newcommand{\\nar}{NAR\\xspace}\n", "\\newcommand{\\ar}{AR\\xspace}\n", "\\newcommand{\\rib}{BERRI\\xspace}\n", "\\newcommand{\\model}{TART\\xspace}\n", "\\newcommand{\\entityquestions}{\\textsc{EntityQuestions}}\n", "\\newcommand{\\ours}{\\textsc{PopQA}}\n", "\\DeclareMathOperator*{\\argmin}{argmin} \n", "\\DeclareMathOperator*{\\argmax}{argmax} \n", "\\DeclareMathOperator*{\\argandmax}{(arg)max} \n", "\\DeclareMathOperator*{\\topk}{topk} \n", "\\usetikzlibrary{intersections} \n", "\\DeclareRobustCommand{\\hlg}[1]{{\\sethlcolor{lightgray}\\hl{#1}}}\n", "\\DeclareRobustCommand{\\hlp}[1]{{\\sethlcolor{magenta}\\hl{#1}}}\n", "\\newcommand{\\Sref}[1]{Section~\\ref{#1}}\n", "\\newcommand{\\Tref}[1]{Table~\\ref{#1}}\n", 
"\\newcommand{\\Fref}[1]{Figure~\\ref{#1}}\n", "\\newcommand{\\Eref}[1]{Eq.~\\ref{#1}}\n", "\n", "\\newcommand{\\subscript}[2]{$#1 _ #2$}\n", "\n", "\\newcommand{\\hlc}[2][yellow]{{\n", " \\colorlet{foo}{#1}\n", " \\sethlcolor{foo}\\hl{#2}}\n", "}\n", "\n", "\\definecolor{Gray}{gray}{0.85}\n", "\\definecolor{LightCyan}{rgb}{0.88,1,1}\n", "\n", "\\newcolumntype{a}{>{\\columncolor{Gray}}c}\n", "\\newcolumntype{b}{>{\\columncolor{LightCyan}}c}\n", "\n", "\\usetikzlibrary{shapes.geometric}\n", "\\newcommand{\\smax}{\\text{$\\text{S}_{\\text{max}}$}\\xspace}\n", "\\newcommand{\\smin}{\\text{$\\text{S}_{\\text{1}}$}\\xspace}\n", "\n", "\\newcommand{\\changed}[1]{{\\color{red}#1}}\n", "\\newcommand{\\draftonly}[1]{#1}\n", "\\newcommand{\\draftcomment}[3]{\\draftonly{{\\textcolor{#3}{[\\textbf{#1--\\textsc{#2}}]}}}}\n", "\\newcommand{\\mr}[1]{\\draftcomment{\\small #1}{MR}{magenta}}\n", "\\newcommand{\\akari}[1]{\\draftcomment{\\small #1}{akari}{purple}}\n", "\\newcommand{\\victor}[1]{\\draftcomment{\\small #1}{victor}{red}}\n", "\\newcommand{\\daniel}[1]{\\draftcomment{\\small #1}{daniel}{cyan}}\n", "\\newcommand{\\timo}[1]{\\draftcomment{\\small #1}{timo}{purple}}\n", "\\newcommand{\\alex}[1]{\\draftcomment{\\small #1}{alex}{blue}}\n", "\\newcommand{\\raj}[1]{\\draftcomment{\\small #1}{raj}{red}}\n", "\\usepackage{pifont}\n", "\n", "\\newcommand{\\xmark}{\\ding{55}}\n", "\\lstset{\n", " language=python,\n", " basicstyle=\\small,\n", " breaklines=true\n", " }\n", "\n", "\\usepackage{algorithm}\n", "\\usepackage{algpseudocode}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\title{\\vspace*{-0.5in}\n", "{{\\small \\hfill ACL 2023}\\\\\n", "\\vspace*{.25in}}\n", "When Not to Trust Language Models: Investigating Effectiveness of Parametric and Non-Parametric Memories}\n", "\n", "\n", "\n", "\n", "\n", "\\renewcommand\\ttdefault{cmtt}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\author{\\parbox{0.9\\linewidth}{\n", "\\centering{Alex Mallen$^{*\\diamondsuit}$ ~~~ Akari Asai$^{*\\diamondsuit}$ ~~~ Victor Zhong$^{\\diamondsuit}$ ~~~ Rajarshi Das$^{\\diamondsuit}$ \\\\\n", "Daniel Khashabi$^\\spadesuit$ ~~~ Hannaneh Hajishirzi$^{\\diamondsuit\\heartsuit}$ \n", "} \\\\\n", "{\\rm $^\\diamondsuit$University of Washington~~$^\\spadesuit$Johns Hopkins University\\\\$^\\heartsuit$Allen Institute for AI} \\\\\n", "\\texttt{\\{atmallen,akari,vzhong,rajarshi,hannaneh\\}@cs.washington.edu} \\\\\n", "\\texttt{danielk@jhu.edu} \\\\\n", "}\n", "}\n", "\n", "\\begin{document}\n", "\\maketitle\n", "\n", "\\begin{abstract}\n", "Despite their impressive performance on diverse tasks, large language models (LMs) still struggle with tasks requiring rich world knowledge, implying the difficulty of encoding a wealth of world knowledge in their parameters. \n", "This paper aims to understand LMs' strengths and limitations in memorizing factual knowledge, by conducting large-scale knowledge probing experiments on two open-domain entity-centric QA datasets: \\ours, our new dataset with 14k questions about long-tail entities, and EntityQuestions, a widely used open-domain QA dataset.\n", "We find that LMs struggle with less popular factual knowledge, and that retrieval augmentation helps significantly in these cases. 
Scaling, on the other hand, mainly improves memorization of popular knowledge, and fails to appreciably improve memorization of factual knowledge in the long tail.\n", "Based on those findings, we devise a new method for retrieval augmentation that improves performance and reduces inference costs by only retrieving non-parametric memories when necessary.\\footnote{Our code and data are available at \\url{https://github.com/AlexTMallen/adaptive-retrieval}. } \n", "\n", "\n", "\\end{abstract}\n", "\n", "\\section{Introduction}\n", "Large language models (LMs; ~\\citealt{brown2020language,raffel2020exploring}) have been shown to be competitive on diverse NLP tasks, including knowledge-intensive tasks that require fine-grained memorization of factual knowledge~\\cite{chowdhery2022palm,yu2022generate}. \n", "Meanwhile, LMs have also been shown to have limited memorization for less frequent entities~\\cite{llm_longtail}, \n", "are prone to hallucinations~\\cite{shuster2021retrieval}, and suffer from temporal degradation~\\cite{kasai2022realtime,jang2022temporalwiki}.\n", "Incorporating \\emph{non-parametric knowledge} (i.e., retrieved text chunks) largely helps address those issues stemming from reliance on LMs' \\emph{parametric knowledge}---knowledge stored in their parameters~\\cite{izacard2022few}---but it is unclear whether it is strictly superior or complementary to parametric knowledge. \n", "Understanding when we should \\emph{not} trust LMs' outputs is also crucial to safely deploying them in real-world applications~\\cite{kadavath2022language}.\n", "\n", "\\begin{figure}[t!]\n", "\\includegraphics[width=7.6cm, trim=0cm 0.6cm 0cm 0.2cm]{figures/teaser.pdf}\\caption{\n", "Relationship between subject entity popularity in a question and GPT-3 performance in open-domain QA, with and without retrieved passages. Adaptive Retrieval only retrieves when necessary (orange bars) based on the heuristically-decided threshold (red line). \n", "} \\label{fig:teaser}\n", "\\end{figure}\n", "\n", "This work conducts a large-scale knowledge probing of LMs on factual knowledge memorization, to understand when we should and should \\emph{not} rely on LMs' parametric knowledge, and how scaling and non-parametric memories (e.g., retrieval-augmented LMs) can help.\n", "In particular, we aim to address the following research questions: \n", "\\vspace{-0.2cm}\n", "\\begin{enumerate}[label=(\\subscript{RQ}{{\\arabic*}}),leftmargin=35pt]\n", "\\itemsep-0.4em \n", " \\item \\label{Q1} \n", " How much factual knowledge is memorized by LMs and what factors affect the memorization? (Section~\\ref{sec:analysis_parametric})\n", " \\item \\label{Q2} To what extent can non-parametric memories alleviate the shortcomings of parametric memories of LMs?\n", " (Section~\\ref{sec:analysis_non_parametric})\n", " \\item \\label{Q3} Can we build a system to adaptively combine non-parametric and parametric memories? 
(Section~\\ref{sec:adaptive_retrieval})\n", "\\end{enumerate}\n", "\\vspace{-0.2cm}\n", "\n", "We hypothesize that factual knowledge frequently discussed on the web is easily memorized by LMs, while the knowledge that is less discussed may not be well captured and thus they require retrieving external non-parametric memories.\n", "{\n", "We evaluate ten large LMs of three families (i.e., GPT-Neo, OPT, and GPT-3) with varying scales on the open-domain question answering (QA) task in a zero- or few-shot prompting manner.} \n", "\n", "We construct a new dataset, \\ours, consisting of 14k questions to cover factual information in the long tail that might have been missed in popular QA datasets~\\cite{kwiatkowski-etal-2019-natural}. \n", "{We use Wikipedia page views as a measure of popularity and convert knowledge triples from Wikidata, with diverse levels of popularity, into natural language questions, anchored to the original entities and relationship types.}\n", "{We also use EntityQuestions~\\cite{sciavolino2021simple}, an open-domain QA dataset with a long-tail distribution. \n", "\n", "}\n", "\n", "On both datasets, LMs' memorization {\\ref{Q1}} is often limited to the popular factual knowledge and even GPT-3 \\texttt{davinci-003} fails to answer the majority of the long-tail questions. \n", "\n", "Moreover, on such questions, scaling up models does \\emph{not} significantly improve the performance (e.g., for the 4,000 least popular questions in \\ours, GPT-j 6B has 16\\\n", "{This also suggests that we can predict if LMs memorize certain knowledge based on the information presented in the input question only. }\n", "\n", "\n", "\n", "\n", "\n", "\n", "We next investigate whether a semi-parametric approach that augments LMs with retrieved evidence can mitigate the low performance on questions about less popular entities {\\ref{Q2}}. \n", "\n", "Non-parametric memories largely improve performance on long-tail distributions across models. \n", "Specifically, we found that retrieval-augmented LMs are particularly competitive when subject entities are not popular: {a neural dense retriever}~\\cite{izacard2021towards}-augmented GPT-neo 2.7B outperforms GPT-3 \\texttt{davinci-003} on the 4,000 least popular questions.\n", "\n", "\n", "\n", "\n", "{Surprisingly, we also find that retrieval augmentation can hurt the performance of large LMs on questions about popular entities as the retrieved context can be misleading.} \n", "\n", "As a result, we devise a simple-yet-effective retrieval-augmented LM method, Adaptive Retrieval, which adaptively combines parametric and non-parametric memories based on {popularity}\n", " \n", "{\\ref{Q3}}.\n", "This method further improves performance on \\ours\\ by up to {10\\\n", "{\n", "indicating the potential for future research in more efficient and powerful retrieval-augmented LMs.\n", "\n", "}\n", "\\section{Related Work}\n", "\n", "\\paragraph{Parametric and non-parametric knowledge.}\n", "{\\citet{petroni2019language} demonstrate that large pre-trained LMs such as BERT~\\cite{devlin2018bert} memorize the significant amount of world knowledge in their parameters ({\\it parametric knowledge}), and \\citet{roberts2020much} show that fine-tuned T5 without any reference documents (closed-book QA) can achieve competitive performance on open-domain QA. 
\n", "More recent and powerful LMs~\\cite{brown2020language,chowdhery2022palm} further improve performance on diverse knowledge-intensive tasks, leveraging their strong parametric memories~\\cite{llm_longtail,yu2022generate}.}\n", "However, relying solely on their parameters to encode a wealth of world knowledge requires a prohibitively large number of parameters and the knowledge can become obsolete quickly~\\cite{kasai2022realtime,jang2022temporalwiki}. \n", "Recent work shows that augmenting LMs with non-parametric memories (i.e., retrieved text chunks) enables much smaller models to match the performance of larger models~\\cite{izacard2022few,khandelwal2019generalization,min2022nonparametric}{, although \\citet{chen2022rich} and \\citet{longpre-etal-2021-entity} show that even those models can ignore non-parametric knowledge and rely on parametric knowledge.\n", "}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\paragraph{Understanding memorization. }\n", "Several prior work establishes a positive relationship between string frequency in pre-training corpora and memorization~\\cite{Carlini2022QuantifyingMA,Razeghi2022ImpactOP}.\n", "\n", "\n", "\n", " \n", "\n", "Concurrent to our work, ~\\citet{llm_longtail} show that the co-occurrence of the question and answer entities in pretraining corpora has a positive correlation with models' QA accuracy on popular open-domain QA benchmarks such as Natural Questions~\\cite{kwiatkowski-etal-2019-natural}.\n", "\n", "\n", "This work, instead, attempts to predict memorization using the variables available in the input question only and uses popularity to obtain a proxy for how frequently an entity is likely to be discussed on the web.\n", "Importantly, by constructing a new dataset, we can conduct fine-grained controlled experiments across a wide range of popularities, allowing the investigation of hypotheses that might have been missed in prior analysis using existing open QA datasets.\n", "We further analyze the effectiveness and limitations of retrieval-augmented LMs and {introduce Adaptive Retrieval. \n", "}\n", "Prior work investigates the effectiveness of deciding when to use non-parametric memories at the token level in $k$NN LM~\\cite{he-etal-2021-efficient,drozdov-etal-2022-cant}. \n", "This work is the first work to study the effectiveness of deciding whether to retrieve for each query and show their effectiveness in retrieval-augmented LM prompting.\n", "\n", "\n", "\n", "\n", "\\if0{\n", "\\paragraph{LM Predictions Calibration.}\n", "Predicting the reliability of models' outputs is an outstanding issue in QA~\\cite{rajpurkar2018know,asai2020challenges}, LM memorization~\\cite{10.1162/tacl_a_00324}, and the wider NLP and machine learning community. \n", "Despite its impressive performance on wider tasks, large LMs often hallucinate while their outputs often look plausible, making it difficult to understand when their outputs are unreliable~\\cite{liu2021token}. \n", "Recently, \\citet{kadavath2022language} show that large LMs can predict when they do not know the answers to some extent, especially after fine-tuning on in-domain data. \n", "In this work, we show that adaptively combining retrieval based on when LMs are likely to fail can be an efficient and powerful approach, and those learned calibrations can be integrated with our adaptive retrieval framework. 
\n", "}\\fi\n", "\\begin{figure}[t!]\n", " \\centering\n", "\\includegraphics[width=0.95\\linewidth,keepaspectratio]{figures/figure2_final.pdf}\n", " \\caption{\\ours~is created by sampling knowledge triples from Wikidata and converting them to natural language questions, followed by popularity calculation. \n", " \n", " }\n", " \\label{fig:background}\n", "\\end{figure}\n", "\\section{Evaluation Setup}\n", "\\label{sec:dataset}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "{\n", "We evaluate LMs' ability to memorize factual knowledge through closed-book QA tasks with few-shot samples. \n", "We evaluate LMs on our new dataset, \\ours~(Figure~\\ref{fig:background}), and EntityQuestions, both of which have long-tail distributions (Figure~\\ref{fig:pop_distr}).\n", "\n", "\n", "\n", "}\n", "\n", "\\begin{figure}[ht!]\n", " \\centering\n", " \\includegraphics[width=0.9\\linewidth]{figures/popularity_distributions.pdf}\n", " \\caption{Distribution of subject entity popularity for EntityQuestions, \\ours,\\ and for NQ-open for reference. Details on NQ entities can be found in Appendix~\\ref{app_sec:dataset}.\n", " }\n", " \\label{fig:pop_distr}\n", "\\end{figure}\n", "\n", "\\subsection{Focus and Task}\n", "\\noindent {\\bf Focus: factual knowledge. }\n", "Among diverse types of world knowledge, this work focuses on factual knowledge~\\cite{adams2015bloom} of entities---knowledge about specific details of the target entities. \n", "We define factual knowledge as a triplet of (\\hlc[pink]{subject}, \\hlc[cyan!20]{relationship}, \\hlc[green!20]{object}) as in Figure~\\ref{fig:background} left.\n", "\n", "\\if0{\n", "We consider a model to have memorized factual knowledge when it generates the object entity given a subject entity and a relationship type in the form of a natural language question, as shown in Figure~\\ref{fig:background}.\\akari{this sentence is somewhat duplicated with the metrics paragraph. } \n", "}\\fi\n", "\n", "\n", "\\vspace{.1cm}\n", "\\noindent {\\bf Task format: open-domain QA. } \n", "We formulate the task as open-domain QA~\\cite{roberts2020much}: given a question, a model predicts an answer without any pre-given ground-truth paragraph.\\footnote{Some work conducts knowledge probing of encoder-only models by filling out \\texttt{[MASK]} tokens~\\cite{petroni2019language}. \n", "We use decoder-only models and thus do not use this \\texttt{fill-in-the-blank} scheme.} \n", "{As in \\citet{llm_longtail}, we study few-shot settings and prompt LMs without any parameter updates, instead of fine-tuning them on QA datasets such as in \\citet{roberts2020much}. }\n", "\n", "\n", "\n", "\n", "\\vspace{.1cm}\n", "\\noindent {\\bf Metrics: accuracy.}\n", "We mark a prediction as correct if any substring of the prediction is an exact match of any of the gold answers.\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\subsection{Dimensions of Analysis}\n", "\n", "We hypothesize that factual knowledge that is less frequently discussed on the web may not be well-memorized by LMs.\n", "Previous research often uses the term frequency of object entities in pretraining corpora to understand memorization~\\cite{fevry2020entities,llm_longtail, Razeghi2022ImpactOP}. 
\n", "{Instead, we investigate whether it's possible to predict memorization based on the input information only, and then apply the findings for modeling improvements, unlike prior analyses.}\n", "Therefore, our work focuses on the other two variables in a factual knowledge triple: the subject entity and the relationship type. \n", " \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\vspace{.1cm}\n", "\\noindent {\\bf Subject entity popularity.}\n", "We use the popularity of the entities measured by Wikipedia monthly page views as a proxy for how frequently the entities are likely to be discussed on the web, instead of using the occurrence of entities or strings in the pretraining corpus~\\cite{Carlini2022QuantifyingMA,llm_longtail,Razeghi2022ImpactOP}. \n", "{Calculating frequencies over large pretraining corpora requires massive computations to link entities over billions of tokens, or can result in noisy estimations.\\footnote{Moreover, several recent models like GPT-3 do not release their pretraining corpora, and it is an open question whether the frequencies in pretraining corpora reflect the frequencies in their private corpora. } }\n", "\n", "{Our initial studies show that this is much cheaper\\footnote{We can get page views by calling Wikipedia API. } and aligns well with our intuition.}\n", "\n", "\n", "\\vspace{.1cm}\n", "\\noindent {\\bf Relationship type.}\n", "We also consider the relationship types as key factors for factual knowledge memorization. \n", "For example, even given the same combinations of the subject and object entities, model performance can depend on the relationship types;\n", "\n", "relationship types widely discussed can be easier to be memorized, while types that are less discussed may not be memorized much. \n", "\n", "\\subsection{Benchmarks}\n", "\n", "\n", "\\vspace{.1cm}\n", "\\noindent {\\bf \\ours.}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "In our preliminary studies, we found that existing common open-domain QA datasets such as Natural Questions (NQ; \\citealt{kwiatkowski-etal-2019-natural}) are often dominated by subject entities with high popularity, and it is often hard to identify relationship types due to diverse question surface forms.\n", "To enable a fine-grained analysis of memorization based on the aforementioned analysis dimensions, we construct \\ours, a new large-scale entity-centric open-domain QA dataset about entities with a wide variety of popularity, as shown in Figure~\\ref{fig:pop_distr}.\n", "\n", "To construct \\ours, we randomly sample knowledge triples of 16 diverse relationship types from Wikidata and convert them into natural language questions, using a natural language template (depicted in Figure~\\ref{fig:background}). \n", "We verbalize a knowledge triple \\((S, R, O)\\) into a question that involves substituting the subject \\(S\\) into a template manually written for the relationship type \\(R\\). \n", "The full list of templates is found in Table~\\ref{tab:list_of_instructions_ours} of the Appendix. \n", "The set of acceptable answers to the question is the set of entities \\(E\\) such that \\((S, R, E)\\) exists in the knowledge graph. \n", "We tried various templates and found that the results were fairly robust to the templates. 
\n", "Since \\ours\\ is grounded to a knowledge base, links to Wikidata entities allow for reliable analysis of popularity and relationship types.\n", "\n", "\n", "\\vspace{.1cm}\n", "\\noindent {\\bf EntityQuestions.}\n", "We test on another popular open-domain QA dataset, EntityQuestions~\\cite{sciavolino2021simple}, which also covers a long-tail entity distribution. \n", "They use Wikipedia hyperlink counts as a proxy of the frequency of entities and sample knowledge triples from WikiData, from the frequency distributions.\n", "Unlike \\ours, EntityQuestions doesn't provide entity annotations, so we only use 82\\\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\section{Memorization Depends on Popularity and Relationship Type }\n", "\\label{sec:analysis_parametric}\n", "We evaluate a range of LMs with varying numbers of parameters, to quantify how much factual knowledge they memorize and how different factors \n", "\n", "affect those memorization behaviors \\ref{Q1}. \n", "\n" ], "context_after_exp": [ "\\subsection{Experimental Setup}\n", "\\paragraph{Models.}\n", "We evaluate ten models with a varying scale of model size: OPT (\\citealt{zhang2022opt}; 1.3, 2.7, 6.7, and 13 billion), GPT-Neo (\\citealt{black2022gpt}; 1.3, 2.7, 6, and 20 billion), and GPT-3 (\\citealt{brown2020language}; \\texttt{davinci-002}, \\texttt{davinci-003}) on our benchmark without any fine-tuning.\\footnote{We did not explore widely-used encoder-decoder models such as T5, as their supervised pretraining consists of QA.} \n", "\n", "\\paragraph{Instructions and demonstrations.}\n", "We use a simple template ``\\texttt{Q:~~A:}'' to format all of our questions for generative prediction. More sophisticated instructions were attempted in preliminary experiments but they did not improve upon the simple template significantly enough to merit using them, especially given that they may overfit to the model. \n", "While we use zero-shot prompting for GPT-3 to reduce API costs,\\footnote{Using 15-shot prompts for GPT-3 would cost upwards of \\$3000 for the combination of vanilla, Contriever, BM25, and GenRead evaluations on \\texttt{davinci-002} and \\texttt{davinci-003}.} we use 15-shot prompting for all GPT-neo and OPT models. \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\begin{figure*}[t!]\n", " \\centering\n", "\\includegraphics[width=\\linewidth,trim=0cm 0.9cm 0cm 0cm]{figures/relationship_and_model_correlations.pdf}\n", " \\caption{Per relationship type ($n =$ the number of questions) results on \\ours{} by model, showing overall accuracy and the correlation between accuracy and log popularity. \n", " \n", " We report the correlation between the question's subject entity log-popularity and the binary indicator of whether the question was answered correctly.\n", " We see that {\\bf both subject entity popularity and relationship type are strong predictors of memorization across models.}\n", " The correlation with popularity {\\bf exists across relationship types and is stronger for larger LMs}. 
\n", " We show a representative subset of relationship types and the \n", " \n", " complete results are in Figures~\\ref{fig:appendix_accuracy_breakdown} and \\ref{fig:appendix_correlation_breakdown} in Appendix~\\ref{app_sec:lm_results}{, including results on EntityQuestions.}}\n", " \\label{fig:relationship__all}\n", "\\end{figure*}\n", "\\subsection{Results}\n", "\n", "\\paragraph{Overall model performance.}\n", "The top left column of Figure~\\ref{fig:relationship__all} illustrates the overall performance on \\ours. \n", "As shown, even without using in-context examples, larger LMs exhibit reasonable performance: GPT-3 achieves 35\\\n", "This indicates that large LMs memorize factual knowledge in their parameters to some extent. \n", "This section examines which types of knowledge are better memorized and what factors influence memorization.\n", "\n", "\\paragraph{Subject entity popularity predicts memorization.}\n", "\n", "Figure~\\ref{fig:relationship__all} (bottom) shows that there is a positive correlation between subject entity popularity and models' accuracy for almost all relationship types. \n", "This supports our hypothesis that subject entity popularity can be a reliable indicator of LMs' factual knowledge memorization. \n", "In general, the correlations between subject entity popularity and accuracy are stronger for larger LMs; GPT-3 003 shows the highest positive correlation (roughly 0.4) while GPT-Neo-1.3B shows relatively weak positive correlations (approximately 0.1). \n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\paragraph{Relationship types affects memorization.}\n", "\n", "\n", "We find that models have a higher average performance for some relationship types than for others. \n", "While this is evidence that factual knowledge of some relationship types are more easily memorized than others, we also observe that questions of certain relationship types can be easily {\\it guessed} without memorizing the knowledge triple. \n", "Specifically, certain relationship types (e.g., nationalities) allow models to exploit surface-level artifacts in subject entity names~\\cite{poerner-etal-2020-e,cao-etal-2021-knowledgeable}. \n", "\n", "Additionally, models often output the most dominant answer entities for questions about relationship types with fewer answer entities (e.g., red for the color relationship type). \n", "In Figure~\\ref{fig:relationship__all}, relationships with lower correlation (e.g., country, sport) often shows higher accuracy, indicating that on those relationship types, models may exploit surface-level clues. \n", "\n", "On the other hand, for relationship types with relatively low accuracy (e.g., occupation, author, director), larger LMs often show a high correlation. \n", "Further details are in Appendix~\\ref{app_sec:lm_results}.\n", "\n", "\n", "\n", "\n", "\\paragraph{{Scaling may not help with tail knowledge.}}\n", "\n", "As seen in the left column of Figure~\\ref{fig:relationship__all}, there are clear overall performance improvements with scale on the \\ours{} dataset. \n", "However, Figure~\\ref{fig:lm_scale} shows that on both \\ours~and EntityQuestions, most of scaling's positive effect on parametric knowledge comes from questions with high popularity. 
\n", "Specifically, for the questions about the entities whose $\\log_{10}{(\\rm popularity)}$ is larger than 4, there is an improvement in accuracy as model size increases (red and yellow lines), while performance on questions with lower popularity remains relatively constant (blue and green lines). \n", "For the 4,000 least popular questions, GPT-Neo 6B, 20B, and GPT-3 \\texttt{davinci-003} have 15\\\n", "\n", " \\begin{figure}[b!]\n", "\\centering\n", " \\includegraphics[width=0.99\\linewidth,keepaspectratio]{figures/PopQA_acc_by_model_size.pdf}\n", " \\includegraphics[width=0.99\\linewidth,keepaspectratio]{figures/EntityQuestions_acc_by_model_size.pdf}\n", "\n", "\n", "\n", " \\captionsetup{width=0.95\\linewidth}\n", "\\caption{\\ours \\ scaling results, broken down by question popularity level. \n", "{\\bf Scaling mostly improves memorization of more popular factual knowledge.}\n", "Error bars are 95\\\n", "\\label{fig:lm_scale}\n", "\\end{figure}\n", "This somewhat dampens prior works' findings that scaling up models significantly improves their factual knowledge memorization~\\cite{roberts2020much,llm_longtail}.\n", "We hypothesize that this is because their evaluations are often conducted on QA datasets with popular entities.\n", "\n", "\\footnotetext{30 \\ours\\ \n", " and 26 EntityQuestions questions had popularity less than the smallest popularity bin, and are excluded to avoid showing results for small sample sizes. }\n", "{In sum, scaling lowers the threshold of popularity for knowledge to be reliably memorized, but is not projected to move the threshold far into the long tail for practical model scales.}\n", "\n", "\\paragraph{Relationship type results breakdown.}\n", "Figure~\\ref{fig:relationship_breakdown} provides a closer look at the relationship between popularity, accuracy, and relationship type; it shows model accuracy over the popularity distributions for director and country. \n", "For the first two types, we can see a clear positive trend between popularity and accuracy across models, and as the model size gets larger, the LMs memorize more. \n", "\n", "On the other hand, in the ``country'' relationship type, no models show trends, while overall the accuracy is high, indicating the LMs often exploit artifacts to answer less popular questions. \n", "\n", "We show example models' predictions in Appendix Section~\\ref{sec:qualitatitve_results}.\n", "\\section{Non-parametric Memory Complements Parametric Memory}\n", "\\label{sec:analysis_non_parametric}\n", "Our analysis indicates that even the current state-of-the-art LMs struggle with less popular subjects or certain relationship types, and increasing the model size does not lead to further performance improvements.\n", "In light of this, we extend our analysis to non-parametric sources of knowledge, as outlined in Section~\\ref{Q2}. Specifically, we investigate the effectiveness of retrieval-augmented LMs~\\cite{borgeaud2021improving,lewis2020retrieval}, which leverage non-parametric memories (i.e., retrieved text) to improve performance.\n", "\n", "\\subsection{Experimental Setup}\n", "\n", "\\noindent {\\bf Augmenting input.} \n", "In this work, we try a simple retrieval-augmented LM approach, where we run an off-the-shelf retrieval system off-line to retrieve context from Wikipedia relevant to a question,\\footnote{We use Wikipedia dump from December 2018.} and then we concatenate the retrieved context with the original question. 
\n", "Although increasing the context size often leads to performance gains~\\cite{izacard2020leveraging,asai2021evidentiality}, we only use the top one retrieved paragraph for simplicity. \n", "\n", "\\begin{figure}[t!]\n", " \\includegraphics[width=\\linewidth]{figures/relationship_and_model_breakdown.pdf}\n", " \\caption{\n", "Memorization versus popularity for three models and the relationship types with the largest and smallest correlations. \n", "Within a relationship type, generally, there is a {\\bf monotonically increasing link between popularity and performance}, except for ``country''. Error bars show Wilson 95\\\n", "} \n", "\\label{fig:relationship_breakdown}\n", "\\end{figure}\n", "\\vspace{.15cm}\n", "\\noindent {\\bf Retrieval models.} \n", "We use two widely-used retrieval systems: {\\bf BM25}~\\cite{robertson2009probabilistic} and {\\bf Contriever}~\\cite{izacard2021towards}. \n", "BM25 is a static term-based retriever without training, while Contriever is pretrained on large unlabeled corpora, followed by fine-tuning on MS MARCO~\\cite{bajaj2016ms}.\n", "We also experiment with a \\emph{parametric} augmentation method, {\\bf GenRead}~\\cite{yu2022generate}, which prompts LMs to generate rather than retrieve a contextual document to answer a question. We use the ten LMs in Section~\\ref{sec:analysis_parametric}, resulting in 40 LMs and retrieval-augmented LMs.\n", "\\begin{figure}[ht!]\n", " \\centering\n", " \\includegraphics[width=0.99\\linewidth]{figures/accuracy_by_method_and_model.pdf}\n", " \\caption{\\ours~accuracy of LMs augmented with BM25, Contriever, GenRead, and unassisted (vanilla). \n", " {\\bf Retrieving non-parametric memories significantly improves the performance of smaller models. }\n", " {Complete results on \\ours\\ are found in Figure~\\ref{fig:appendix_acc_by_model}. EntityQuestions results are in Figure~\\ref{fig:appendix_EQ_acc_by_model} of the Appendix. }}\n", " \\label{fig:acc_by_model}\n", "\\end{figure}\n", "\n", "\\subsection{Results}\n", "\n", "\\paragraph{Retrieval largely improves performance.} \n", "Figure~\\ref{fig:acc_by_model} shows that augmenting LMs with non-parametric memories significantly outperforms unassisted vanilla LMs. \n", "\n", "\n", "\n", "A much smaller LM (e.g., GPT-Neo 2.7B) augmented by the Contriever retrieval results outperforms vanilla GPT-3. \n", "Large LMs such as GPT-3 also enjoy the benefits of non-parametric memories. Contriever gives 7\\\n", "GenRead shows little-to-no performance improvement over vanilla parametric knowledge for smaller models, while the technique shows sizeable gains for GPT-3, especially \\texttt{davinci-003}. \n", "\n", "{In addition to its limited effectiveness with smaller LMs, }\n", "\n", "GenRead has potentially prohibitive inference time costs, with GPT-NeoX 20B taking 70 seconds per query.\n", "\n", "\n", "\n", "\\begin{figure}[ht!]\n", " \\centering\n", " \\includegraphics[width=0.95\\linewidth]{figures/PopQA_legend.pdf}\n", " \\includegraphics[width=0.95\\linewidth]{figures/PopQA_relative_popularity_linecharts.pdf}\n", " \\includegraphics[width=0.95\\linewidth]{figures/EntityQuestions_relative_popularity_linecharts.pdf}\n", " \n", " \\caption{ \n", " GPT-3 \\texttt{davinci-003} accuracy versus relative popularity \n", " (how popular a question is relative to other questions of its relationship type). 
\\textbf{Retrieval-augmented LMs (dashed) outperform LMs' parametric memory (solid) for less popular entities, while parametric memory is competitive for more popular entities.} \n", " \n", " \n", " Relative popularity is defined as the log-popularity of a question, normalized by the mean and standard deviation of log-popularity for the question's relationship type (smaller for less popular entities).\\footnotemark\\ Figure~\\ref{fig:appendix_correlation_breakdown} shows per-relationship results. \n", " \n", " \n", " }\n", " \\label{fig:retrieval_how_helps}\n", "\\end{figure}\n", "\\footnotetext{Error bars show Wilson 95\\\n", "\n", "\\paragraph{Non-parametric memories are effective for less popular facts.} \n", "How does retrieval augmentation lead to such significant improvements? \n", "Figure~\\ref{fig:retrieval_how_helps} shows the relationship between the entity popularity and models' QA performance. \n", "It can be seen that retrieval-augmented LMs guided by Contriever or BM25 have a clear advantage over unassisted vanilla LMs, especially on less popular entities, resulting in a significant performance gain. \n", "Overall, Contriever-guided LMs outperform BM25-based ones on \\ours, while the BM25-based models perform better on the least popular entities, consistent with the findings from \\citet{sciavolino2021simple}.\n", "On the other hand, for more popular entities, parametric knowledge shows equal or higher accuracy, indicating that the state-of-the-art LMs have already memorized the answers, and augmenting input with retrieved-context doesn't help much or even hurts the performance. \n", "Interestingly, GenRead generally outperforms vanilla LMs despite relying on LMs' parametric memory. This demonstrates the effectiveness of elicitive prompting~\\cite{wei2022chain,sun2022recitation}\n", "as observed in prior work. \n", "However, like vanilla LMs, GenRead shows low performance on less popular entities. \n", "\n", "\\begin{table}[t!]\n", "\\centering\n", "\\footnotesize\n", "\n", "\\begin{tabular}{l|ll}\\toprule\n", "\n", "\n", "\n", "& \\multicolumn{2}{c}{Contriever-augmented LM} \\\\ & succeeded & failed \\\\ \\midrule\n", "LM succeeded & 0.83 (24\\\n", "LM failed & 0.88 (17\\\n", "\\bottomrule\n", "\\end{tabular}\n", " \\caption{ The recall@1 of Contriever for questions that GPT-3 \\texttt{davinci-003} answered correctly and incorrectly with and without retrieval on \\ours. The percent of questions falling in each category is shown in parentheses. \\textbf{For 10\\\n", " \n", " \n", " }\n", " \\label{tab:four_quad}\n", "\\end{table}\n", "\n", "\\paragraph{Non-parametric memories can mislead LMs.} \n", "\n", "We conduct an in-depth analysis of why retrieval-augmented models suffer in more popular entities. \n", "We hypothesize that retrieval results may not always be correct or helpful, and can mislead LMs. \n", "{To test this hypothesis, we group the questions based on two axes: whether unassisted GPT-3 \\texttt{davinci-003} predict correctly or not, and whether retrieval-augmented predictions are correct or not. For each of the four categories, we calculate recall@1 (whether a gold answer is included in the top 1 document; ~\\citealt{karpukhin2020dense}). 
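The following sketch makes this breakdown concrete (the record field names are illustrative placeholders, not our exact analysis code):
\begin{lstlisting}
from collections import defaultdict

def recall_at_1(top1_passage, gold_answers):
    # Recall@1: does the top-1 retrieved passage contain a gold answer?
    return any(answer in top1_passage for answer in gold_answers)

def recall_by_quadrant(records):
    # records: one dict per question with parametric_correct,
    # retrieval_correct, top1_passage, and gold_answers.
    buckets = defaultdict(list)
    for r in records:
        key = (r["parametric_correct"], r["retrieval_correct"])
        buckets[key].append(recall_at_1(r["top1_passage"], r["gold_answers"]))
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}
\end{lstlisting}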
}\n", "\n", "Table~\\ref{tab:four_quad} shows recall@1 for each group with percentages of the questions falling into each of the categories.\n", "For 10\\\n", "We found that on those questions, recall@1 is significantly lower than the overall recall@1 (0.14 vs 0.42 overall), indicating that failed retrieval can result in performance drops. Conversely, for the 17\\\n", "We include examples of both cases in Appendix Section~\\ref{sec:qualitatitve_results}.\n", "\n", "\n", "\\section{Adaptive Retrieval: Using Retrieval Only Where It Helps}\n", "\\label{sec:adaptive_retrieval}\n", "While incorporating non-parametric memories helps in long-tail distributions, powerful LMs have already memorized factual knowledge for popular entities, and retrieval augmentation can be harmful. \n", "As outlined in \\ref{Q3}, can we achieve the best of both worlds?\n", "We propose a simple-yet-effective method, Adaptive Retrieval, which decides when to retrieve passages only based on input query information and augments the input with retrieved non-parametric memories only when necessary. \n", "We show that this is not only more powerful than LMs or retrieval-augmented LMs always retrieving context, but also more efficient than the standard retrieval-augmented setup.\n", "\n", "\n", "\\subsection{Method}\n", "Adaptive Retrieval is based on our findings: as the current best LMs have already memorized more popular knowledge, we can use retrieval only when they do not memorize the factual knowledge and thus need to find external non-parametric knowledge. \n", "In particular, we use retrieval for questions whose popularity is lower than a threshold ({\\it popularity threshold}), and for more popular entities, do not use retrieval at all. \n", "\n", "Using a development set, the threshold is chosen to maximize the adaptive accuracy, which we define as the accuracy attained by taking the predictions of the retrieval-augmented system for questions below the popularity threshold and the predictions based on parametric knowledge for the rest.\n", "We \n", "\n", "determine the popularity threshold independently for each relationship type.\n", "\n", "\n", "\n", "\n", "\\begin{figure}[ht!]\n", " \\centering\n", " \\includegraphics[width=0.99\\linewidth]{figures/PopQA_adaptive_accuracy.pdf}\n", " \\caption{\\ours\\ performance of GPT-neo models and GPT3 \\texttt{davinci-003}, with different retrieval methods. \n", " \n", " \n", " \\textbf{Adaptive Retrieval robustly outperforms approaches that always retrieve, especially for larger LMs}. \n", " \n", " \n", " }\n", " \\label{fig:popQA_adaptive}\n", "\\end{figure}\n", "\\subsection{Results}\n", "\n", "\\paragraph{Adaptive Retrieval improves performance.}\n", "Figure~\\ref{fig:popQA_adaptive} shows the results when we adaptively retrieve non-parametric memories based on the per-relationship type thresholds. \n", "We can see that adaptively retrieving non-parametric memories is effective for larger models.\n", "The best performance on \\ours~is using GPT-3 \\texttt{davinci-003} adaptively with GenRead and Contriever, yielding 46.5\\\n", "\n", "\n", "\\begin{figure}[ht!]\n", " \\centering\n", " \\includegraphics[width=0.99\\linewidth, trim=0cm 0.5cm 0cm 0.3cm]{figures/PopQA_adaptive_proportion_retrieval_model_size.pdf}\n", " \\caption{The proportion of questions for which various models use retrieval in the Adaptive Retrieval setup on \\ours. 
When using Adaptive Retrieval, small models must still rely on non-parametric memory for most questions, while larger models have more reliable parametric memories, enabling them to use retrieval less often.
 \label{fig:retr_savings_scale}}
\end{figure}

\paragraph{The threshold shifts with LM scale.}
While Adaptive Retrieval shows performance gains for larger models, smaller models do not realize the same benefits; as shown in Figure~\ref{fig:popQA_adaptive}, the performance gain from Adaptive Retrieval is much smaller for models with fewer than 10 billion parameters. Why does this happen? Figure~\ref{fig:retr_savings_scale} shows that smaller LMs almost always retrieve, indicating that there are few questions for which a small LM's parametric knowledge is more reliable than non-parametric memory.
In contrast, large models typically retrieve much less. For example, GPT-3 \texttt{davinci-003} retrieves for only 40\% of questions.

\paragraph{Adaptive Retrieval reduces inference-time costs.}
We also found that Adaptive Retrieval improves efficiency: when we know we do not need to retrieve documents, we can skip the retrieval component and the input becomes shorter, which improves latency in both the retrieval and language model components.
Figure~\ref{fig:adaptive_costs} shows the inference latency of GPT-J 6B and GPT-NeoX 20B, and the API costs of GPT-3.
Especially for larger LMs, concatenating retrieved context significantly increases latency (e.g., for GPT-J 6B, inference latency almost doubles).
Adaptive Retrieval reduces inference time by up to 9\%.
{We also observe cost reductions on EntityQuestions, as shown in Figure~\ref{fig:EQ_adaptive}.}

\begin{figure}[t!]
 \centering
 \includegraphics[width=0.99\linewidth]{figures/PopQA_adaptive_costs.pdf}
 \caption{\ours\ latency for large GPT-Neo models that were run on our machines, and API costs for GPT-3. \textbf{Adaptive Retrieval reduces latency and API costs.}}
 \label{fig:adaptive_costs}
\end{figure}

\begin{figure}[t!]
 \centering
 \includegraphics[width=0.9\linewidth]{figures/EntityQuestions_adaptive_acc_and_costs.pdf}
 \caption{Accuracy and cost savings of Adaptive Retrieval for EntityQuestions. Despite EntityQuestions' lack of popular entities (see Figure~\ref{fig:pop_distr}), Adaptive Retrieval is able to reduce API costs by 15\%.}
 \label{fig:EQ_adaptive}
\end{figure}
\section{Discussion and Conclusions}
This work conducts large-scale knowledge probing to examine the effectiveness and limitations of relying on LMs' parameters to memorize factual knowledge and to understand what factors affect such memorization.
Our results show that memorization is strongly correlated with entity popularity and that scaling up models on long-tail distributions may provide only marginal improvements.
We also demonstrate that non-parametric memories can greatly aid LMs on these long-tail distributions, but can also mislead LMs on questions about well-known entities, as powerful LMs have already memorized them in their parameters.
Based on these findings, we devise the simple-yet-effective Adaptive Retrieval method, which retrieves only when necessary, using a heuristic based on entity popularity and relationship type. Our experimental results show that this method is not only more powerful than unassisted LMs or previous retrieval-augmented LMs but also more efficient.
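As a concrete illustration of this heuristic, the sketch below selects a per-relationship popularity threshold on a development set and then decides, per question, whether to retrieve. It assumes that parametric-only and retrieval-augmented predictions have been pre-computed for the development questions; the function and field names are illustrative rather than taken from our released code.
\begin{lstlisting}
from collections import defaultdict

def choose_thresholds(dev_examples):
    # For each relationship type, pick the popularity threshold that
    # maximizes adaptive accuracy on the development set: below the
    # threshold we take the retrieval-augmented prediction, above it
    # the parametric (LM-only) prediction.
    by_rel = defaultdict(list)
    for ex in dev_examples:
        by_rel[ex["relationship"]].append(ex)

    thresholds = {}
    for rel, exs in by_rel.items():
        def adaptive_accuracy(threshold, exs=exs):
            hits = sum(ex["rag_correct"] if ex["popularity"] < threshold
                       else ex["lm_correct"] for ex in exs)
            return hits / len(exs)
        # Candidate thresholds: never retrieve (0), always retrieve
        # (infinity), or split at an observed popularity value.
        candidates = {0.0, float("inf")} | {ex["popularity"] for ex in exs}
        thresholds[rel] = max(candidates, key=adaptive_accuracy)
    return thresholds

def should_retrieve(question, thresholds):
    # At inference time, retrieve only for questions about entities
    # less popular than the threshold for their relationship type.
    return question["popularity"] < thresholds[question["relationship"]]
\end{lstlisting}
Setting a threshold to zero recovers the purely parametric setting and setting it to infinity recovers the always-retrieve setting, so the learned per-relationship thresholds interpolate between the two regimes.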
\n", "\\section*{Limitations}\n", "This work focuses on entity-centric factual knowledge and demonstrates that LMs' memorization is heavily affected by the popularity of the entities and the aspect of the entities being asked in the questions. \n", "It is important to emphasize that for running controlled experiments, we have relied on two synthetic datasets, and the extent to which our results apply to naturally occurring factual knowledge has not been firmly established. While we can be fairly confident about the relationship between scaling, retrieval, popularity, relationship type, and performance for the kinds of knowledge studied here, the effectiveness of Adaptive Retrieval will depend on many details of the question answering pipeline. \n", "Moreover, our work depends on a definition of popularity that is time-dependent and may not perfectly reflect how frequently entities are discussed on the web. Wikipedia page views are one possible definition of popularity for which we observe our results, and we invite others to improve upon it in future work. \n", "Further research can expand upon this simple approach, perhaps drawing on insights from~\\citet{kadavath2022language} to improve the effectiveness of Adaptive Retrieval.\n", "\n", "It is an open question if the same findings are applicable to other types of world knowledge such as commonsense. \n", "We conjecture that the concept of the subject topic (entity), as well as the aspect (relationship type), can be applied with some minor modifications, which future work can quantify memorization following our scheme. \n", "\n", "\\section*{Ethical Considerations}\n", "{\n", "Recent work~\\cite{huang2022large} shows that LMs memorize personal information available on the web, which has significant security issues. \n", "Our evaluation focuses on the memorization of general entity-centric knowledge, but our findings can be applicable to those areas. \n", "}\n", "{Our findings suggest that LMs are likely to have less reliable knowledge of minority groups. \\citet{parrish-etal-2022-bbq} established that models often rely on stereotypes to answer in uncertain cases, so our results indicate that LMs are likely to rely on stereotypes disproportionately for minority groups. Future work could investigate whether retrieval augmentation reduces bias in these cases.}\n", "\n", "\n", "\\section*{Acknowledgements}\n", "We thank the UW NLP group members for their helpful discussions, and Joongwon Kim, Wenya Wang, and Sean Welleck for their insightful feedback on this paper. \n", "This research was supported by NSF IIS-2044660, ONR N00014-18-1-2826, ONR MURI N00014- 18-1-2670, and Allen Distinguished Award. \n", "AM is funded by a Goldwater Scholarship and AA is funded by the IBM PhD Fellowship. \n", "\n", "\\bibliography{anthology,custom}\n", "\\bibliographystyle{acl_natbib}\n", "\n", "\\clearpage\n", "\\appendix\n", "\n", "\\section*{Appendix}\n", "\\label{sec:appendix}\n", "\\section{Details of \\ours~Constructions}\n", "\n", "\\label{app_sec:dataset}\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "\\paragraph{List of the relationship types and templates.}\n", "In this work, we use the following 16 relationship types, and the authors of this paper manually annotated templates to verbalize knowledge triple to natural language questions. \n", "We show the final list of the templates used to create \\ours~in Table~\\ref{tab:list_of_instructions_ours}. 
\n", "\n", "Figure~\\ref{fig:pop_distr} shows the distribution of subject popularity of \\ours and EntityQuestions versus the popular NQ benchmark. NQ may have multiple entities so the distribution of the least popular entity per question is shown. Subject entities from NQ were extracted using TagMe~\\cite{Ferragina2010TAGMEOA} on the NQ-open development set with a score threshold of 0.22. TagMe returns the title of a Wikidata entity which can be directly used to find popularity.\n", "\n", "\n", "\\begin{table}[h!]\n", "\\footnotesize\n", "\\renewcommand{\\arraystretch}{1.2}\n", "\\setlength{\\tabcolsep}{2pt}\n", " \\centering\n", " \\begin{tabular}{lr}\n", "\\toprule\n", "\\textbf{Relationship } & \\textbf{Template} \\\\\\midrule\n", "occupation & What is \\texttt{[subj]} 's occupation? \\\\\n", "place of birth & In what city was \\texttt{[subj]} born? \\\\\n", "genre & What genre is \\texttt{[subj]}? \\\\\n", "father & Who is the father of \\texttt{[subj]} ?\\\\\n", "country & In what country is \\texttt{[subj]} ? \\\\\n", "producer & Who was the producer of \\texttt{[subj]} ? \\\\\n", "director & Who was the director of \\texttt{[subj]} ? \\\\\n", "capital of & What is \\texttt{[subj]} the capital of? \\\\\n", "screenwriter & Who was the screenwriter for \\texttt{[subj]} ?\\\\\n", "composer & Who was the composer of \\texttt{[subj]} ? \\\\\n", "color & What color is \\texttt{[subj]} ? \\\\\n", "religion & What is the religion of \\texttt{[subj]} ? \\\\\n", "sport & What sport does \\texttt{[subj]} play? \\\\\n", "author & Who is the author of \\texttt{[subj]} ? \\\\\n", "mother & Who is the mother of \\texttt{[subj]} ? \\\\\n", "capital &What is the capital of \\texttt{[subj]} ? \\\\\n", "\\bottomrule\n", " \\end{tabular}\n", " \\caption{Full list of the manually annotated templated used for \\ours creations. \\texttt{[subj]} denotes a placeholder for subject entities. }\\label{tab:list_of_instructions_ours}\n", "\\end{table}\n", "\n", "\\paragraph{Knowledge triples sampling.} \n", "{In the construction of the \\ours dataset, knowledge triples are sampled with higher weight given to more popular entities, otherwise, the distribution would be dominated by the tail and we would not have enough high-popularity entities to complete our analysis. \n", "Specifically, when considering whether to sample a particular knowledge triple, we include the knowledge triple if and only if \\(f > \\exp(8R - 6)\\), where \\(R \\sim U(0, 1)\\) is a unit uniform pseudo-random number and \\(f\\) is the exact match term frequency of the subject entity's aliases in an 800 MB random sample of C4. \n", "To increase diversity, once 2000 knowledge triples of a particular relation type have been sampled, they are no longer sampled.} \n", "\n", "\n", "\n", "\\section{Experimental Details}\n", "\n", "\n", "\n", "\\paragraph{Computational resources and API costs.}\n", "\n", "\n", "\n", "\n", "\n", "GPT-3 API usage totaled to \\$275. We ran 14,282 questions through two GPT-3 \\texttt{davinci} models using four different methods: vanilla experiments cost \\$13 (\\$0.46 per 1000 questions), Contriever-augmented experiments cost \\$88 (\\$3.08 per 1000 questions), BM25-augmented experiments cost \\$81 (\\$2.80 per 1000 questions), and GenRead experiments cost \\$93 (\\$3.25 per 1000 questions).\n", "\n", "\n", "\n", "\n", "\n", "\n", "\n", "To run experiments using LMs larger than two billion parameters, we use a single V100 Volta GPU with 32GB GPU memories. 
We use int8 quantization~\cite{zeng2022glm} with the OPT 13 billion and GPT-NeoX 20 billion models to make them fit on our GPUs. In our preliminary experiments with GPT-J 6 billion, we did not observe a notable performance drop from using quantization.

\paragraph{Constructing few-shot contexts.} For \ours, we sample few-shot examples stratified by relationship type to diversify the samples: for each of the 15 relationship types other than the one in the test question, we sample one random question-answer pair to include in the context. For EntityQuestions, we take a simple random sample of 15 question-answer pairs because there are more than 16 relationship types.

\paragraph{Details of deciding thresholds.}
We use 75\% of the questions to determine the per-relationship popularity thresholds.
We then evaluate adaptive accuracy using the learned thresholds on the remaining 25\% of the questions.

\section{Detailed Results}
\subsection{LM results}
\label{app_sec:lm_results}

\paragraph{Full results of per-relationship-type accuracy and correlation.}
Figure~\ref{fig:appendix_accuracy_breakdown} shows the full per-relationship-type accuracy results for all relationship types in \ours.
Figure~\ref{fig:appendix_correlation_breakdown} shows the correlations for all relationship types. Figures~\ref{fig:appendix_EQ_accuracy_breakdown} and~\ref{fig:appendix_EQ_correlation_breakdown} show the same results for the EntityQuestions dataset.

\begin{figure}[t!]
 \centering
 \includegraphics[width=\linewidth]{figures/appendix_accuracy_by_method_and_model.pdf}
 \caption{Accuracy of LMs and retrieval-augmented LMs on \ours. This is an extension of Figure~\ref{fig:acc_by_model}.}
 \label{fig:appendix_acc_by_model}
\end{figure}

\begin{figure}[t!]
 \centering
 \includegraphics[width=\linewidth]{figures/EQ_acc_by_method_and_model.pdf}
 \caption{Accuracy of LMs and retrieval-augmented LMs on EntityQuestions.}
 \label{fig:appendix_EQ_acc_by_model}
\end{figure}

\begin{figure}[t!]
 \centering
 \includegraphics[width=\linewidth]{figures/EQ_adaptive_proportion_retrieval_model_size.pdf}
 \caption{The proportion of questions for which Adaptive Retrieval uses retrieval versus model size for EntityQuestions.}
 \label{fig:appendix_EQ_thresh_by_model}
\end{figure}

\begin{figure*}
 \centering
 \includegraphics[width=0.9\textwidth]{figures/appendix_relationship_and_model_accuracies.pdf}
 \caption{Accuracy on PopQA for all relationship types and models. This is an extension of Figure~\ref{fig:relationship__all}.}
 \label{fig:appendix_accuracy_breakdown}
\end{figure*}
\begin{figure*}
 \centering
 \includegraphics[width=0.9\textwidth]{figures/appendix_relationship_and_model_correlations.pdf}
 \caption{Correlations on PopQA for all relationship types and models.
This is an extension of Figure~\ref{fig:relationship__all}.}
 \label{fig:appendix_correlation_breakdown}
\end{figure*}

\begin{figure*}
 \centering
 \includegraphics[width=0.9\textwidth]{figures/appendix_EQ_relationship_and_model_accs.pdf}
 \caption{Accuracy on EntityQuestions for all relationship types and models.}
 \label{fig:appendix_EQ_correlation_breakdown}
\end{figure*}

\begin{figure*}
 \centering
 \includegraphics[width=0.9\textwidth]{figures/appendix_EQ_relationship_and_model_correlations.pdf}
 \caption{Correlations on EntityQuestions for all relationship types and models.}
 \label{fig:appendix_EQ_accuracy_breakdown}
\end{figure*}

\paragraph{{Negative correlations of capital on EntityQuestions.}}
{As shown in Figure~\ref{fig:appendix_EQ_accuracy_breakdown}, the capital relationship type shows negative correlations in EntityQuestions, while on \ours, this relationship shows relatively high correlations. We found that in EntityQuestions, the capital relationship type has many low-popularity questions whose answers are included in the subject entity names (e.g., subject=``canton of Marseille-Belsunce'', object=``Marseille'').}
{This causes performance to have a U-shaped relationship with popularity for the capital relationship type, so if most of the sampled questions come from the top half of the popularity range, the linear correlation will be positive, and vice versa.}

\subsection{Retrieval-augmented LM results} \label{app:retr_results}

\paragraph{Overall performance of retrieval-augmented LMs.}
Figure~\ref{fig:appendix_acc_by_model} shows the overall performance of 40 LMs and retrieval-augmented LMs on \ours.
Retrieval augmentation largely improves performance across different LMs, and much smaller models (e.g., GPT-Neo 1.3B) can perform on par with GPT-3.
{Figure~\ref{fig:appendix_EQ_acc_by_model} shows the results on EntityQuestions.} Due to computational and time constraints, we were only able to run vanilla and Contriever results for most models.

\paragraph{Adaptive Retrieval for EntityQuestions.} Figure~\ref{fig:appendix_EQ_thresh_by_model} shows the proportion of questions for which retrieval is used by various models under Adaptive Retrieval on EntityQuestions. Because EntityQuestions has a large quantity of low-popularity questions, models (especially smaller ones) must rely heavily on retrieval.

\paragraph{Full results on all relationship types.}
Figure~\ref{fig:appendix_pop_linecharts} shows the full results on \ours\ for the retrieval-augmented LMs and unassisted LMs on 16 relationship types using three different LMs as backbones. Figure~\ref{fig:appendix_EQ_linecharts} shows these results for GPT-3 \texttt{davinci-003} on EntityQuestions.

\begin{figure*}
 \centering
 \begin{overpic}[width=0.95\textwidth]{figures/appendix_PopQA_popularity_linecharts.pdf}
 \put(100.5, 70){\rotatebox{-90}{\small{GPT-neo 1.3B}}}
 \put(100.5, 44){\rotatebox{-90}{\small{OPT 13B}}}
 \put(100.5, 22){\rotatebox{-90}{\small{GPT-3 DaVinci 003}}}
 \end{overpic}
 \caption{Accuracy for 3 models on \ours\ versus popularity as shown in Figure~\ref{fig:retrieval_how_helps}, broken down by relationship type.
Popularity bins with less than 5 samples are excluded to avoid cluttering the figures with noisy results that have wide error bars.}\n", " \\label{fig:appendix_pop_linecharts}\n", "\\end{figure*}\n", "\n", "\\begin{figure*}\n", " \\centering\n", " \\includegraphics[width=0.95\\textwidth]{figures/appendix_EQ_popularity_linecharts.pdf}\n", " \\caption{Accuracy versus popularity for GPT-3 \\texttt{davinci-003} on EntityQuestions broken down by relationship type. Popularity bins with less than 5 samples are excluded.}\n", " \\label{fig:appendix_EQ_linecharts}\n", "\\end{figure*}\n", "\n", "\\subsection{Qualitative Results}\n", "\\label{sec:qualitatitve_results}\n", "\n", "Table~\\ref{tab:group_b} shows several examples on \\ours, where GPT-3 \\texttt{davinci-003} answers correctly while the Contriever-augmented version fails to answer. Along with the low recall@1 of 0.14 for this group, Table~\\ref{tab:group_b} suggests that the most common reason retrieval can be harmful is that it retrieves a document about a mistaken entity, such as a person with the same name as the subject, or an entity that simply is not relevant to the question (as in the case of ``Noel Black'').\n", "\n", "Table~\\ref{tab:group_c} shows several examples on \\ours, where GPT-3 \\texttt{davinci-003} answers correctly only when augmented with Contriever. The recall@1 for this case is 0.88, which is significantly higher than the overall recall. Note that in the second example, the retrieval caused the LM to answer correctly, but only by coincidence: the subject entity ``Pierre'' actually refers to the city in South Dakota, not the Basketball player. Otherwise, retrieval appears to be helpful because it provides the relevant information directly.\n", "\n", "\\begin{table*}[]\n", " \\centering\n", " \\small\n", " \\begin{tabular}{p{3cm}p{2cm}p{2cm}p{7cm}}\n", " \\toprule\n", " \n", " Question (answer) & LM & Retrieval-LM & Retrieval \\\\\\midrule\n", " Who was the director of Black? (Sanjay Leela Bhansali) &\n", "The director of Black is Sanjay Leela Bhansali. \\color{green}{\\checkmark} &\n", " Noel Black \\color{red}{\\xmark} & \n", "Noel Black (June 30, 1937 \u2013 July 5, 2014) was an American film and television director, screenwriter, and producer. Black was born in Chicago, Illinois. He won awards at the 1966 ... and subsequently concentrated on directing for television, occasionally directing films such as \"Private School\".\\\\\n", "\\\\\n", " Who was the producer of The Faculty? (Robert Rodriguez) \n", " & The Faculty was produced by Elizabeth Avellan and Robert Rodriguez. \\color{green}{\\checkmark} & The producer of The Faculty was Elizabeth Avellan. \\textcolor{red}{\\xmark} & The Faculty is a 1998 American science fiction teen horror film written by Kevin Williamson, directed by \\textcolor{blue}{\\bf \\underline {Robert Rodriguez}}, and starring Elijah Wood,... The film was released on Christmas Day, 1998, by Miramax Films through Dimension Films. ... The film has developed a cult following in the years since its release. \\\\ \n", " \\\\\n", " What is Michael Shelley's occupation?\n", " (singer-songwriter) & Michael Shelley is a singer-songwriter and musician. \\color{green}{\\checkmark} & \n", " Michael Shelley is an American applied mathematician. \\color{red}{\\xmark} &\n", "Michael Shelley (mathematician) Michael J. Shelley (born August 17, 1959) is an American applied mathematician who works on the modeling and simulation of complex systems arising in physics and biology. 
This has included free-boundary problems in fluids and materials science... He is also the co-founder and co-director of the Courant Institute's Applied Mathematics Lab. \\\\\n", "\\\\\n", "In what city was Zijah Sokolovi\u0107 born? (Sarajevo) & \n", "Zijah Sokolovi\u0107 was born in Sarajevo \\color{green}{\\checkmark} & Zijah Sokolovi\u0107 was born in Orahovac, Kingdom \\color{red}{\\xmark} & Ali Sokol ... (born 8 May 1921 in Orahovac, Kingdom of Serbs, Croats and Slovenes, died 23 September 1974) was a Yugoslav pulmonologist . Ali Sokol was born into an agricultural family. He was the third of four children of father and mother Hatixhes Solomon. It is unknown the exact date of birth but the most reliable date is May 8 year in 1921.\n", "\\\\ \n", "\\bottomrule\n", " \\end{tabular}\n", " \\caption{Qualitative examples of the questions where only retrieval-augmented LMs \\textit{fail} to answer correctly. The blue underlined text indicates the sub-strings matching the gold answers in the retrieved context. }\n", " \\label{tab:group_b}\n", "\\end{table*}\n", "\n", "\n", "\\begin{table*}[]\n", " \\centering\n", " \\small\n", " \\begin{tabular}{p{3cm}p{2cm}p{2cm}p{7cm}}\n", " \\toprule\n", " Question (answer) & LM & Retrieval-LM & Retrieval \\\\\\midrule\n", " What genre is Unknown? (fantasy) &\n", "Unknown is not a specific genre of music. It could refer to \\color{red}{\\xmark} &\n", " Unknown is a pulp fantasy fiction magazine. \\color{green}{\\checkmark} &\n", "Unknown (magazine) Unknown (also known as Unknown Worlds) was an American pulp \\textcolor{blue}{\\bf \\underline {fantasy}} fiction magazine, published from 1939 to 1943 by Street \\& Smith, and edited by John W. Campbell. ... The leading fantasy magazine in the 1930s was \"Weird Tales\", which focused on shock and horror.\\\\\n", "\\\\\n", " In what country is Pierre? (United States) &\n", "Pierre is in France. \\color{red}{\\xmark} & \n", " Pierre is from the United States. \\color{green}{\\checkmark} & \n", "Jonas Farley Pierre (born May 30, 1980) is an American professional basketball player. A center, he has played with three teams in the National Basketball League of Canada (NBL). Pierre last played with the Saint John Mill Rats in Canada. ... He then joined Lincoln University in Missouri for his third year. In the summer of 2003, he signed a letter of intent to play with Nicholls State in the NCAA Division I. \\\\ \n", "\\\\\n", "Who was the producer of The Cocoanuts? (Walter Wanger)\n", " & The Cocoanuts was produced by Florenz Ziegfeld. \\color{red}{\\xmark} &\n", " The Cocoanuts was produced for Paramount Pictures by Walter Wanger, who \\color{green}{\\checkmark} &\n", "The Cocoanuts is a 1929 musical comedy film starring the Marx Brothers. Produced for Paramount Pictures by \\textcolor{blue}{\\bf \\underline {Walter Wanger}}, who is not credited, the film stars the four Marx Brothers, Oscar Shaw, Mary Eaton, and Margaret Dumont. It was the first sound film to credit more than one director (Robert Florey and Joseph Santley), and was adapted to the screen by Morrie Ryskind from the George S. Kaufman Broadway musical play. ... \\\\\n", "\\\\\n", "Who was the director of The White Suit?\n", "(Lazar Ristovski) &\n", "The White Suit was directed by Sachin Kundalkar. \\color{red}{\\xmark} &\n", " Lazar Ristovski \\color{green}{\\checkmark}\n", "& In 1999 \"The White Suit\" an auteur film by Ristovski (director, writer, lead actor, and producer) was at the Cannes Film Festival in the Critics Week program. 
\"The White Suit\" was the Serbian entry for the 1999 Academy Awards. \\textcolor{blue}{\\bf \\underline {Lazar Ristovski}} is the sole owner of Zillion Film Company In 2006, he made a small appearance in the James Bond film \"Casino Royale\". He played Caruso in the 2004 movie \"King of Thieves\". He starred as \u0110or\u0111e in the award-winning 2009 film \"St. George Shoots the Dragon\".\n", "\\\\ \n", "\\bottomrule\n", " \\end{tabular}\n", " \\caption{Qualitative examples of the questions where only retrieval-augmented LMs \\textit{successfully} answer correctly.\n", " The blue underlined text indicates the sub-strings matching the gold answers in the retrieved context. }\n", " \\label{tab:group_c}\n", "\\end{table*}\n", "\n", "\n", "\\end{document}\n" ], "del_percentage": 0.12757 } }