{ "id": "2208.12306", "annotator": "xiaoxin", "input": [ "\\pdfoutput=1\n", "\\documentclass[11pt]{article}\n", "\\usepackage[]{ACL2023}\n", "\\usepackage{times}\n", "\\usepackage{latexsym}\n", "\\usepackage[T1]{fontenc}\n", "\\usepackage[utf8]{inputenc}\n", "\\usepackage{microtype}\n", "\\usepackage{inconsolata}\n", "\\usepackage{longtable}\n", "\\usepackage{tabu}\n", "\\usepackage{arydshln}\n", "\\usepackage{graphicx}\n", "\\usepackage{wrapfig}\n", "\\usepackage{CJK,algorithm,algorithmic,amssymb,amsmath,array,epsfig,graphics,float,subcaption,verbatim,epstopdf}\n", "\\usepackage{enumitem}\n", "\\usepackage{color,soul}\n", "\\usepackage{hhline}\n", "\\usepackage{multirow}\n", "\\usepackage{xcolor}\n", "\\usepackage{hyperref}\n", "\\usepackage{cleveref}\n", "\\usepackage{wrapfig,lipsum}\n", "\\usepackage{xurl}\n", "\\usepackage{tabularx}\n", "\\usepackage{bm}\n", "\\usepackage{seqsplit}\n", "\\usepackage{stmaryrd}\n", "\\usepackage{booktabs}\n", "\\usepackage{comment} \n", "\\usepackage{xparse}\n", "\\title{Multimedia Generative Script Learning for Task Planning \n", "}\n", "\\author{\n", "Qingyun Wang$^{1}$, \\ Manling Li$^1$, \\ Hou Pong Chan$^{2}$, \\ Lifu Huang$^{3}$, \\\\ \\ \\textbf{Julia Hockenmaier}$^1$, \\ \\textbf{Girish Chowdhary}$^1$, \\ \\textbf{Heng Ji}$^{1}$\\\\ \n", "$^{1}$ University of Illinois at Urbana-Champaign $^{2}$ University of Macau $^{3}$ Virginia Tech\\\\\n", "$^{1}$\\texttt{\\fontfamily{pcr}\\selectfont\\{qingyun4,manling2,juliahmr,girishc,hengji\\}@illinois.edu}\\\\\n", "$^{2}${\\fontfamily{pcr}\\selectfont hpchan@um.edu.mo},$^{3}${\\fontfamily{pcr}\\selectfont lifuh@vt.edu}\n", "}\n", "\\begin{document}\n", "\\maketitle\n", "\\begin{abstract}\n", "\\end{abstract}\n", "\\section{Introduction}\n", "\\begin{figure}[htb!]\n", "\\centering\n", "\\includegraphics[width=0.9\\linewidth]{fig/task.pdf}\n", "\\caption{\\textbf{Multimedia Generative Script Learning:} The upper box shows the task input, including the goal and multimedia step history. Each step contains a text description and an illustrative image. The output is the next step. \n", "\\label{img:task_example}\n", "\\end{figure}\n", "Robots rely on understanding the present real-world state and predicting the subsequent steps to better assist humans in daily stereotypical tasks such as meal preparation and gardening~\\citep{10.1007/978-981-15-7345-3_30,9720489}. As an example, Robohow~\\citep{robohow} uses articles from WikiHow\\footnote{\\url{https://www.wikihow.com} contains steps for a variety of tasks. } to assist robots in everyday tasks in human working and living environments. However, the problem is that not all daily tasks are well documented. Thus, generating a sequence of steps that lead to a given goal (i.e., goal-oriented generative script learning)~\\citep{lyu-etal-2021-goal,huang2022language, zoey1, zoey2, zoey3} has a fundamental importance in allowing robots to perform unseen tasks by understanding the patterns in previously observed similar tasks. \n", "\\begin{figure}[!bt]\n", "\\centering\n", "\\includegraphics[width=0.9\\linewidth]{fig/overview}\n", "\\caption{Architecture overview. We use the example in Figure \\ref{img:task_example} as the walking-through example. 
}\n", "\\label{img:overview}\n", "\\end{figure}\n", "Existing multimedia script learning work seeks to bridge this cross-media gap, but the task settings are multi-choice selection~\\citep{yang-etal-2021-visual} or ordering~\\citep{wu-etal-2022-understanding}, which require candidate steps as input so it is not a practical setting for real-life robots. \n", "To address these problems, we propose a new task, \\textbf{Multimedia Generative Script Learning} (Figure~\\ref{img:task_example}), that requires systems to generate future steps based on the goal and previous steps with visual scenes depicting their states. Specifically, given the goal and previous step history in the form of natural language sentences paired with descriptive images, the model should automatically generate the natural language instruction for the next step.\n", "A good script has three hallmarks:\n", "(3) \\underline{\\textit{Diverse}}: it displays distinct information at each step. We call it \\textit{diversity challenge}. \n", "Therefore, we introduce a novel \\textbf{diversity-oriented contrastive learning objective} to control all subsequent steps to convey different information. \n", "We treat all other steps in the given input and retrieved steps in other tasks similar to the given input as \\textit{hard} negatives.\n", "While the model design can be applied to any domain of interest, we experiment with the model on two domains \\textit{Gardening} and \\textit{Crafts}, where task planning has not been well researched. \n", "The contributions are threefold: \n", "\\begin{enumerate}\n", "\\item We propose the first \\textit{multimedia goal-oriented generative script learning task} to record historical steps in both text and images. We also release a new benchmark from WikiHow, featuring 5,652 tasks and 79,089 multimedia steps.\n", "\\item We propose a novel approach to produce \\textit{visually trackable}, \\textit{inductive}, and \\textit{diverse} scripts through a selective multimedia encoder, a retrieval augmented decoder, and a diversity-oriented contrastive learning objective. \n", "\\item We propose a new \\textit{multimodal-retrieval based metric} to evaluate the cross-modal semantic similarity and the inductive ability by checking factual correctness.\n", "\\end{enumerate}\n", "\\section{Problem Formulation}\n", "We propose a new multimedia generative script learning task: given an activity goal $G$, an optional subgoal $M$ that specifies the concrete needs, and the previous multimedia step history $\\mathcal{H}_n=\\{(S_1,V_1),...,(S_n,V_n)\\}$ with length $n$, a model is expected to predict the next possible step $S_{n+1}$, where $S_i$ is a text sequence and $V_i$ is an image. \n", "\\begin{table}[!htb]\n", "\\centering\n", "\\small\n", "\\begin{tabularx}{\\linewidth}{>{\\hsize=1.2\\hsize}X>{\\arraybackslash\\hsize=0.9\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=1\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n", "\\toprule\n", "\\textbf{Domain}&\\textbf{Split} & \\textbf{\\#Task} & \\textbf{\\#Pair}& $\\mathbf{\\overline{\\#Step}}$ & $\\mathbf{\\overline{\\#Token}}$ \\\\\n", "\\midrule\n", " &Train & 1,857 & 20,258 & 3.10 & 11.6 \\\\\n", "Gardening &Valid. & 237 & 2,428 & 3.03 & 10.6\\\\\n", " &Test & 238 & 2,684 & 2.88 & 11.2 \\\\\n", "\\hdashline\n", " &Train & 2,654 & 32,082 & 6.06 & 8.98 \\\\\n", "Crafts &Valid. 
"\\begin{table}[!htb]\n", "\\centering\n", "\\small\n", "\\begin{tabularx}{\\linewidth}{>{\\hsize=1.2\\hsize}X>{\\arraybackslash\\hsize=0.9\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=1\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n", "\\toprule\n", "\\textbf{Domain}&\\textbf{Split} & \\textbf{\\#Task} & \\textbf{\\#Pair}& $\\mathbf{\\overline{\\#Step}}$ & $\\mathbf{\\overline{\\#Token}}$ \\\\\n", "\\midrule\n", " &Train & 1,857 & 20,258 & 3.10 & 11.6 \\\\\n", "Gardening &Valid. & 237 & 2,428 & 3.03 & 10.6\\\\\n", " &Test & 238 & 2,684 & 2.88 & 11.2 \\\\\n", "\\hdashline\n", " &Train & 2,654 & 32,082 & 6.06 & 8.98 \\\\\n", "Crafts &Valid. & 333 & 4,061 & 6.12 & 9.10 \\\\\n", " &Test & 333 & 3,937 & 5.91 & 9.00 \\\\\n", "\\bottomrule\n", "\\end{tabularx}\n", "\\caption{Statistics of our dataset. $\\mathbf{\\overline{\\#Step}}$ denotes the average number of steps per sample. $\\mathbf{\\overline{\\#Token}}$ denotes the average number of words per step.\\label{tab:stat} }\n", "\\end{table}\n",
"\\section{Dataset Collection}\n", "Using articles from the \\textit{Gardening} and \\textit{Crafts} categories as case studies, we create a new dataset based on the English WikiHow dump (2021/05). There are typically three levels of hierarchy in a WikiHow article: \\textit{goals} which describe the overall task, \\textit{subgoals} which represent the intermediate process to accomplish a \\textit{goal}, and \\textit{steps} which are the specific actions to complete a \\textit{subgoal}. For each WikiHow article, we collect step-image pairs as well as their goals and subgoals\\footnote{We only keep steps that contain both images and text.}. We split the whole dataset based on the task categories, so the validation and test sets contain tasks that are not included in the training set. Table \\ref{tab:stat} shows the detailed data statistics.\n",
"\\section{Method}\n", "\\subsection{Model Architecture}\n", "The overall framework is illustrated in Figure \\ref{img:overview}. \n", "Given the activity goal $G$, optional subgoal $M$, and multimedia step history $\\mathcal{H}_n$, we first convert each image into a caption that captures its salient visual semantics. \n", "Then we propose a \\textit{selective multimedia encoder} by extending the BART encoder with a gated fusion layer to learn contextualized representations for the step history. \n", "A \\textit{retrieval augmented decoder} then attends to steps retrieved from similar tasks to guide the generation of the next step. \n", "The entire model is trained by our proposed \\textit{diversity-oriented contrastive loss} and cross-entropy loss.\n",
"\\subsection{Selective Multimedia Encoder}\n", "\\textbf{Image Encoding} Compared to step descriptions, which focus more on actions, captions provide more information about the visual environment and objects, such as \\textit{beads} in Step 1 of Figure \\ref{img:overview}.\n", "Because we are more concerned with the overall semantics of the salient objects in the image than with the details of every object, we adopt image captioners to encode visual features and track visual state changes. For instance, while multiple objects are present in Step 3 of Figure \\ref{img:task_example}, the \\textit{finger} object can be ignored as it does not represent the key information conveyed by the image. Specifically, we use the state-of-the-art image captioner BLIP~\\citep{li2022blip}, which is pretrained on a large-scale vision-and-language corpus with 129M images, to generate a caption $C_{i}$ for each image $V_{i}$ in the input step history $\\mathcal{H}_{n}$. After that, we obtain the \\textit{caption-enhanced step history} $\\mathcal{\\hat{H}}_{n}=\\{(S_1,C_1),...,(S_n,C_n)\\}$, where $C_i$ is the caption of the image $V_i$ in step $i$. \n",
"\\textbf{Selective Multimedia Encoding}\n", "To help the encoder capture the activity goal and subgoal information, we concatenate the goal $G$ and the optional subgoal $M$ to serve as the first sequence in the history, $X_0= [G, M]$. For each subsequent step in the history, we treat the step and its caption as two separate sequences, $X_{2i-1}=S_i$ and $X_{2i}=C_i$. To summarize the step history, we prepend a learnable $\\mathtt{[CLS]}$ token to the sequence, whose hidden state serves as a contextualized summary of the entire step history. The entire text sequence is then represented as $\\mathcal{X}=\\{\\mathtt{[CLS]},X_0,X_1,...,X_{2n}\\}$. We pass the text sequence $\\mathcal{X}$ into a BART encoder to obtain the contextualized hidden representation $\\mathbf{H}=\\{\\mathbf{h}_0,...,\\mathbf{h}^{2n}_{L_{X_{2n}}}\\}=\\mathrm{Enc}(\\mathcal{X})$. We denote $\\mathbf{H}_{X_j}=\\{\\mathbf{h}^j_1,...,\\mathbf{h}^j_{L_{X_j}}\\}$ as the hidden states for sequence $X_j$, where $L_{X_j}$ is the length of $X_j$.\n",
"Since the input sequence contains steps or captions that are not directly relevant to the future step, we need to mask those sentences based on the step/caption representations. For instance, in Figure \\ref{img:overview}, the step description for Step 1 is vague and needs to be masked. We treat the representation of the $\\mathtt{[CLS]}$ token, $\\mathbf{h}_0$, as the contextualized representation of the entire step history and use it to compute a mask that filters out the irrelevant step/caption information. Specifically, we use $\\mathbf{h}_0$ as the query and $\\mathbf{H}_{X_j}$ as both the key and value to compute Multi-Headed Attention ($\\mathrm{MultiHead}$) \\citep{NIPS2017_3f5ee243} over the hidden states of each sequence $X_j$: $\\hat{\\mathbf{h}}_{X_j} = \\mathrm{MultiHead}(\\mathbf{h}_0,\\mathbf{H}_{X_j},\\mathbf{H}_{X_j})$,\n", "where $\\mathbf{\\hat{h}}_{X_j}$ is the weighted representation for text sequence $X_j$.\n", "Then, for each sequence $X_j$, we calculate the mask probability as $\\alpha_j=\\sigma (\\mathbf{W}_\\alpha[\\mathbf{h}_0;\\hat{\\mathbf{h}}_{X_j}])$, where $\\mathbf{W}_\\alpha$ is a learnable parameter. Similar to \\citet{sengupta-etal-2021-gated-transformer}, we update the hidden states for each sequence $X_j$ as $\\mathbf{\\bar{H}}_{X_j} = \\alpha_j \\cdot \\mathbf{emb}_{\\mathtt{[MASK]}} + (1-\\alpha_j)\\mathbf{H}_{X_j}$, where $\\mathbf{emb}_{\\mathtt{[MASK]}}$ is the embedding of the $\\mathtt{[MASK]}$ token. \n", "The final hidden state sequence is $\\mathbf{\\bar{H}}=[\\mathbf{h}_0;\\mathbf{\\bar{H}}_{X_0};...;\\mathbf{\\bar{H}}_{X_{2n}}]$.\n",
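"For concreteness, the simplified PyTorch-style sketch below implements the gated masking described above; module and variable names are illustrative rather than taken from our implementation.\n",
"\\begin{verbatim}\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"class SelectiveFusion(nn.Module):\n",
"    # Attend to one step/caption sequence with the [CLS] state h0, predict\n",
"    # a mask probability, and softly replace the sequence hidden states\n",
"    # with the [MASK] embedding (illustrative sketch).\n",
"    def __init__(self, d_model, n_heads=8):\n",
"        super().__init__()\n",
"        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)\n",
"        self.w = nn.Linear(2 * d_model, 1)                  # W_alpha / W_beta\n",
"        self.mask_emb = nn.Parameter(torch.zeros(d_model))  # emb_[MASK]\n",
"\n",
"    def forward(self, h0, H_seq):\n",
"        # h0: (B, d) [CLS] state; H_seq: (B, L, d) one sequence X_j or R_i.\n",
"        h_hat, _ = self.attn(h0.unsqueeze(1), H_seq, H_seq)  # (B, 1, d)\n",
"        gate = torch.sigmoid(self.w(torch.cat([h0, h_hat.squeeze(1)], -1)))\n",
"        gate = gate.unsqueeze(1)                             # (B, 1, 1)\n",
"        return gate * self.mask_emb + (1 - gate) * H_seq     # (B, L, d)\n",
"\\end{verbatim}\n",
"The same gating is applied to the retrieved steps below, with $\\beta_i$ in place of $\\alpha_j$.\n",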
"\\subsection{Retrieval Augmented Decoder}\n", "\\label{sec:retrieve}\n", "To transfer knowledge from previously observed similar tasks, we retrieve steps $\\{R_1,...,R_k\\}$ from tasks that are similar to the given input and encode each retrieved step $R_i$ to obtain its hidden states $\\mathbf{H}_{R_i}$.\n", "Similarly, we use $\\mathbf{h}_0$ from the multimedia encoder as the query and $\\mathbf{H}_{R_i}$ as both the key and value to compute multi-headed attention over the hidden states of each retrieved step: $\\hat{\\mathbf{h}}_{R_i}=\\mathrm{MultiHead}(\\mathbf{h}_0,\\mathbf{H}_{R_i},\\mathbf{H}_{R_i})$,\n", "where $\\mathbf{\\hat{h}}_{R_i}$ is the weighted representation for step sequence $R_i$. \n", "We then calculate the mask probability as $\\beta_i=\\sigma (\\mathbf{W}_\\beta[\\mathbf{h}_0;\\mathbf{\\hat{h}}_{R_i}])$, \n", "where $\\mathbf{W}_\\beta$ is a learnable parameter, and update the hidden states for each sequence $R_i$ as $\\mathbf{\\bar{H}}_{R_i} = \\beta_i \\cdot \\mathbf{emb}_{\\mathtt{[MASK]}} + (1-\\beta_i)\\mathbf{H}_{R_i}$.\n", "The final hidden state sequence is $\\mathbf{\\bar{H}}_R=[\\mathbf{\\bar{H}}_{R_1};...;\\mathbf{\\bar{H}}_{R_k}]$.\n", "In the decoder, we compute the probability $P\\left(s_{q}|s_{