{
"id": "2208.12306",
"annotator": "xiaoxin",
"input": [
"\\pdfoutput=1\n",
"\\documentclass[11pt]{article}\n",
"\\usepackage[]{ACL2023}\n",
"\\usepackage{times}\n",
"\\usepackage{latexsym}\n",
"\\usepackage[T1]{fontenc}\n",
"\\usepackage[utf8]{inputenc}\n",
"\\usepackage{microtype}\n",
"\\usepackage{inconsolata}\n",
"\\usepackage{longtable}\n",
"\\usepackage{tabu}\n",
"\\usepackage{arydshln}\n",
"\\usepackage{graphicx}\n",
"\\usepackage{wrapfig}\n",
"\\usepackage{CJK,algorithm,algorithmic,amssymb,amsmath,array,epsfig,graphics,float,subcaption,verbatim,epstopdf}\n",
"\\usepackage{enumitem}\n",
"\\usepackage{color,soul}\n",
"\\usepackage{hhline}\n",
"\\usepackage{multirow}\n",
"\\usepackage{xcolor}\n",
"\\usepackage{hyperref}\n",
"\\usepackage{cleveref}\n",
"\\usepackage{wrapfig,lipsum}\n",
"\\usepackage{xurl}\n",
"\\usepackage{tabularx}\n",
"\\usepackage{bm}\n",
"\\usepackage{seqsplit}\n",
"\\usepackage{stmaryrd}\n",
"\\usepackage{booktabs}\n",
"\\usepackage{comment} \n",
"\\usepackage{xparse}\n",
"\\title{Multimedia Generative Script Learning for Task Planning \n",
"}\n",
"\\author{\n",
"Qingyun Wang$^{1}$, \\ Manling Li$^1$, \\ Hou Pong Chan$^{2}$, \\ Lifu Huang$^{3}$, \\\\ \\ \\textbf{Julia Hockenmaier}$^1$, \\ \\textbf{Girish Chowdhary}$^1$, \\ \\textbf{Heng Ji}$^{1}$\\\\ \n",
"$^{1}$ University of Illinois at Urbana-Champaign $^{2}$ University of Macau $^{3}$ Virginia Tech\\\\\n",
"$^{1}$\\texttt{\\fontfamily{pcr}\\selectfont\\{qingyun4,manling2,juliahmr,girishc,hengji\\}@illinois.edu}\\\\\n",
"$^{2}${\\fontfamily{pcr}\\selectfont [email protected]},$^{3}${\\fontfamily{pcr}\\selectfont [email protected]}\n",
"}\n",
"\\begin{document}\n",
"\\maketitle\n",
"\\begin{abstract}\n",
"\\end{abstract}\n",
"\\section{Introduction}\n",
"\\begin{figure}[htb!]\n",
"\\centering\n",
"\\includegraphics[width=0.9\\linewidth]{fig/task.pdf}\n",
"\\caption{\\textbf{Multimedia Generative Script Learning:} The upper box shows the task input, including the goal and multimedia step history. Each step contains a text description and an illustrative image. The output is the next step. \n",
"\\label{img:task_example}\n",
"\\end{figure}\n",
"Robots rely on understanding the present real-world state and predicting the subsequent steps to better assist humans in daily stereotypical tasks such as meal preparation and gardening~\\citep{10.1007/978-981-15-7345-3_30,9720489}. As an example, Robohow~\\citep{robohow} uses articles from WikiHow\\footnote{\\url{https://www.wikihow.com} contains steps for a variety of tasks. } to assist robots in everyday tasks in human working and living environments. However, the problem is that not all daily tasks are well documented. Thus, generating a sequence of steps that lead to a given goal (i.e., goal-oriented generative script learning)~\\citep{lyu-etal-2021-goal,huang2022language, zoey1, zoey2, zoey3} has a fundamental importance in allowing robots to perform unseen tasks by understanding the patterns in previously observed similar tasks. \n",
"\\begin{figure}[!bt]\n",
"\\centering\n",
"\\includegraphics[width=0.9\\linewidth]{fig/overview}\n",
"\\caption{Architecture overview. We use the example in Figure \\ref{img:task_example} as the walking-through example. }\n",
"\\label{img:overview}\n",
"\\end{figure}\n",
"Existing multimedia script learning work seeks to bridge this cross-media gap, but the task settings are multi-choice selection~\\citep{yang-etal-2021-visual} or ordering~\\citep{wu-etal-2022-understanding}, which require candidate steps as input so it is not a practical setting for real-life robots. \n",
"To address these problems, we propose a new task, \\textbf{Multimedia Generative Script Learning} (Figure~\\ref{img:task_example}), that requires systems to generate future steps based on the goal and previous steps with visual scenes depicting their states. Specifically, given the goal and previous step history in the form of natural language sentences paired with descriptive images, the model should automatically generate the natural language instruction for the next step.\n",
"A good script has three hallmarks:\n",
"(3) \\underline{\\textit{Diverse}}: it displays distinct information at each step. We call it \\textit{diversity challenge}. \n",
"Therefore, we introduce a novel \\textbf{diversity-oriented contrastive learning objective} to control all subsequent steps to convey different information. \n",
"We treat all other steps in the given input and retrieved steps in other tasks similar to the given input as \\textit{hard} negatives.\n",
"While the model design can be applied to any domain of interest, we experiment with the model on two domains \\textit{Gardening} and \\textit{Crafts}, where task planning has not been well researched. \n",
"The contributions are threefold: \n",
"\\begin{enumerate}\n",
"\\item We propose the first \\textit{multimedia goal-oriented generative script learning task} to record historical steps in both text and images. We also release a new benchmark from WikiHow, featuring 5,652 tasks and 79,089 multimedia steps.\n",
"\\item We propose a novel approach to produce \\textit{visually trackable}, \\textit{inductive}, and \\textit{diverse} scripts through a selective multimedia encoder, a retrieval augmented decoder, and a diversity-oriented contrastive learning objective. \n",
"\\item We propose a new \\textit{multimodal-retrieval based metric} to evaluate the cross-modal semantic similarity and the inductive ability by checking factual correctness.\n",
"\\end{enumerate}\n",
"\\section{Problem Formulation}\n",
"We propose a new multimedia generative script learning task: given an activity goal $G$, an optional subgoal $M$ that specifies the concrete needs, and the previous multimedia step history $\\mathcal{H}_n=\\{(S_1,V_1),...,(S_n,V_n)\\}$ with length $n$, a model is expected to predict the next possible step $S_{n+1}$, where $S_i$ is a text sequence and $V_i$ is an image. \n",
"\\begin{table}[!htb]\n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=1.2\\hsize}X>{\\arraybackslash\\hsize=0.9\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=1\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n",
"\\toprule\n",
"\\textbf{Domain}&\\textbf{Split} & \\textbf{\\#Task} & \\textbf{\\#Pair}& $\\mathbf{\\overline{\\#Step}}$ & $\\mathbf{\\overline{\\#Token}}$ \\\\\n",
"\\midrule\n",
" &Train & 1,857 & 20,258 & 3.10 & 11.6 \\\\\n",
"Gardening &Valid. & 237 & 2,428 & 3.03 & 10.6\\\\\n",
" &Test & 238 & 2,684 & 2.88 & 11.2 \\\\\n",
"\\hdashline\n",
" &Train & 2,654 & 32,082 & 6.06 & 8.98 \\\\\n",
"Crafts &Valid. & 3,33 & 4,061 & 6.12 & 9.10 \\\\\n",
" &Test & 3,33 & 3,937 & 5.91 & 9.00 \\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Statistics of our dataset. $\\mathbf{\\overline{\\#Step}}$ denotes average number of steps per sample. $\\mathbf{\\overline{\\#Token}}$ denotes average number of words per step.\\label{tab:stat} }\n",
"\\end{table}\n",
"\\section{Dataset Collection}\n",
"Using articles from \\textit{Gardening} and \\textit{Crafts} categories as case studies, we create a new dataset based on the English WikiHow dump (2021/05). There are typically three levels of hierarchy in a WikiHow article: \\textit{goals} which describe the overall task, \\textit{subgoals} which represent the intermediate process to accomplish a \\textit{goal}, and \\textit{steps} which are the specific actions to complete a \\textit{subgoal}. For each WikiHow article, we collect step-image pairs as well as their goals and methods\\footnote{We only keep steps that contain both images and texts.}. We split the whole dataset based on the task categories. Therefore, the validation and test sets contain tasks not included in the training set. Table \\ref{tab:stat} shows the detailed data statistics.\n",
"\\section{Method}\n",
"\\subsection{Model Architecture}\n",
"The overall framework is illustrated in Figure \\ref{img:overview}. \n",
"Given the activity goal $G$, optional subgoal $M$, and multimedia step history $\\mathcal{H}_n$, \n",
"Then we propose a \\textit{selective multimedia encoder} by extending the BART encoder with a gated fusion layer to learn contextualized representations for the step history. \n",
"The entire model is trained by our proposed \\textit{diversity-oriented contrastive loss} and cross-entropy loss.\n",
"\\subsection{Selective Multimedia Encoder}\n",
"\\textbf{Image Encoding} Compared to step descriptions which focus more on action description, captions provide more visual environment/object information such as \\textit{beads} in Step 1 from Figure \\ref{img:overview}.\n",
"Because we are more concerned with the overall semantics of the salient objects in the image rather than the details of every object, we adopt image captioners to encode visual features and track visual state changes. For instance, while multiple objects are present in Step 3 in Figure \\ref{img:task_example}, the \\textit{finger} object can be ignored in the third step as it does not represent the key information conveyed by the image. Specifically, we use the state-of-the-art image captioner BLIP~\\citep{li2022blip}, which is pretrained on a large-scale vision-and-language corpus with 129M images to generate a caption $C_{i}$ for each image $V_{i}$ in the input step history $\\mathcal{H}_{n}$. After that, we obtain the \\textit{caption-enhanced step history} $\\mathcal{\\hat{H}}_{n}=\\{(S_1,C_1),...,(S_n,C_n)\\}$, where $C_i$ is the caption of the image $V_i$ in step $i$. \n",
"\\textbf{Selective Multimedia Encoding}\n",
"To help the encoder capture the activity goal and subgoal information, we concatenate goal $G$ and optional subgoal $M$ to serve as the first sequence in the history $X_0= [G, M]$. For the subsequent steps in the history, we concatenate each step and caption as $X_{2i-1}=S_i$ and $X_{2i}=C_i$. To summarize the step history, we prepend a learnable $\\mathtt{[CLS]}$ token to the sequence as a contextualized vector. The entire text sequence is then represented as $\\mathcal{X}=\\{\\mathtt{[CLS]},X_0,X_1,...,X_{2n}\\}$. We pass the text sequence $\\mathcal{X}$ into a BART encoder to get the contextualized hidden representation $\\mathbf{H}=\\{\\mathbf{h}_0,...,\\mathbf{h}^{2n}_{L_{X_{2n}}}\\}=\\mathrm{Enc}(\\mathcal{X})$. We denote $\\mathbf{H}_{X_j}=\\{\\mathbf{h}^j_1,...,\\mathbf{h}^j_{L_{X_j}}\\}$ as the hidden states for sequence $X_j$, where $L_{X_j}$ is the length of $X_j$.\n",
"Since the input sequence contains steps or captions not directly relevant to the future step, we need to mask those sentences based on the step/caption representations. For instance, in Figure \\ref{img:overview}, the step description for Step 1 is vague and needs to be masked. We treat the representation of the $\\mathtt{[CLS]}$ token, $\\mathbf{h}_0$, as the contextualized representation of the entire step history and use it to compute a mask that filters out the irrelevant step/caption information. Specifically, we use $\\mathbf{h}_0$ as query and $\\mathbf{H}_{X_j}$ as both the key and value to compute Multi-Headed Attention ($\\mathrm{MultiHead}$) \\citep{NIPS2017_3f5ee243} for each sequence hidden states $\\mathbf{H}_{X_j}$: $\\hat{\\mathbf{h}}_{X_j} = \\mathrm{MultiHead}(\\mathbf{h}_0,\\mathbf{H}_{X_j},\\mathbf{H}_{X_j})$,\n",
"where $\\mathbf{\\hat{h}}_{X_j}$ is the weighted representation for text sequence $X_j$.\n",
"Then, for each sequence $X_j$, we can calculate the mask probability as:$\n",
"\\alpha_j=\\sigma (\\mathbf{W}_\\alpha[\\mathbf{\\mathbf{h}_0;\\hat{h}}_{X_j}])$, where $\\mathbf{W}_\\alpha$ is a learnable parameter. Similar to \\citet{sengupta-etal-2021-gated-transformer}, \n",
"we update the hidden states for each sequence $X_j$ as $\n",
"\\mathbf{\\bar{H}}_{X_j} = \\alpha_j \\cdot \\mathbf{emb}_{\\mathtt{[MASK]}} + (1-\\alpha_j )\\mathbf{H}_{X_j}\n",
"$,\n",
"where $\\mathbf{emb}_{\\mathtt{[MASK]}}$ is the embedding of the $\\mathtt{[MASK]}$ token. \n",
"The final hidden state sequences are $\\mathbf{\\bar{H}}=[h_0;\\mathbf{\\bar{H}}_1;...;\\mathbf{\\bar{H}}_{2n}]$.\n",
"\\label{sec:retrieve}\n",
"Similarly, we use $\\mathbf{h}_0$ in multimedia encoder as the query and $\\mathbf{H}_{R_i}$ as both the key and value to compute multi-headed attention for each sequence hidden states: $\\hat{\\mathbf{h}}_{R_i}=\\mathrm{MultiHead}(\\mathbf{h}_0,\\mathbf{H}_{R_i},\\mathbf{H}_{R_i})$,\n",
"where $\\mathbf{\\hat{h}}_{R_i}$ is the weighted representation for step sequence $R_i$. \n",
"Similarly, we can calculate the mask probability as: $\\beta_j=\\sigma (\\mathbf{W}_\\beta[\\mathbf{h}_0;\\mathbf{\\hat{h}}_{R_j}])$, \n",
"where $\\mathbf{W}_\\beta $ is a learnable parameter. We then update the hidden states for each sequence $R_j$ as $\\mathbf{\\bar{H}}_{R_i} = \\beta_j \\cdot \\mathbf{emb}_{\\mathtt{[MASK]}} + (1-\\beta_j )\\mathbf{H}_{R_i}$.\n",
"The final hidden state sequences is $\\mathbf{\\bar{H}}_R=[\\mathbf{\\bar{H}}_{R_1};...;\\mathbf{\\bar{H}}_{R_k}]$.\n",
"In the decoder, we compute the probability $P\\left(s_{q}|s_{<q},\\mathcal{\\hat{H}},G,M\\right)$ for the $q$-th token $s_q\\in S_{n+1}$.\n",
"Our retrieval-augmented decoder is similar to \\cite{liu-etal-2021-three}, which aims to capture historically relevant steps related to the next step based on previous decoder hidden states. Given $z_q^l$ which is the hidden state of $s_{q}$ in layer $l$, we first use a multi-head cross-attention to fuse the hidden states from the retrieved steps $\\mathbf{\\bar{H}_R}$: $ {z'_q}^l = \\mathrm{MultiHead}(z_q^l,\\mathbf{\\bar{H}}_R,\\mathbf{\\bar{H}}_R)$.\n",
"We also append a gating mechanism to control the knowledge from the retrieved steps and previous hidden states:\n",
"\\begin{equation}\n",
"\\begin{split}\n",
" \\gamma &= \\sigma(\\mathbf{W}_\\gamma[z_q^l;{z'_q}^l]) \\\\\n",
" {\\tilde{z}_q}^l &= \\gamma \\cdot \\mathrm{LN}({z'_q}^l) + (1-\\gamma)\\cdot (z_q^l)\n",
"\\end{split}\n",
"\\end{equation}\n",
"where $\\mathbf{W}_\\gamma$ is a learnable parameter and $\\mathrm{LN}(*)$ is the layer norm function. \n",
"Finally, the fused hidden states in the top layer are used to compute the generation probability.\n",
"We supervise the next step generation using the standard cross-entropy loss: \n",
"\\begin{equation}\n",
"\\mathcal{L}_{\\mathrm{gen}} = \\sum_{q=1}^{|S_{n+1}|} \\log P\\left(s_{q}|s_{<q},\\mathcal{\\hat{H}},G,M\\right)\n",
"\\end{equation}\n",
"\\subsection{Diversity-Oriented Contrastive Learning}\n",
"In the experiment, we observe that the model tends to keep generating similar future steps in a row given the beginning steps as input or just paraphrases the input steps. \n",
"Therefore, we propose a contrastive learning-based loss to encourage the model to return diverse step prediction results. \n",
"\\noindent\\textbf{Negative Sampling} Sequence-to-sequence models suffer from the ``exposure bias'' problem \\citep{ranzato2015sequence,an2022cont} because of \\textit{teacher forcing}. Contrastive loss provides an additional sequence level loss which can help models increase the diversity of the output steps. We adopt two types of negative sampling strategies to discourage the model from paraphrasing the previous step as the future step: \n",
"\\textit{self-negatives} \\citep{wang-etal-2022-simkgc} where we consider the input steps as negative samples and \\textit{retrieved-negatives} where we consider the retrieved steps from training corpus which are similar to the input step as negative samples. \n",
"For example, in Figure \\ref{img:task_example}, the goals and steps from the step history serve as the self-negatives. Given the last step, \u201ccut the thread\u201d, we retrieve similar steps from the training set as retrieved negatives which include \u201ccut your thread\u201d, \"cut off the extra thread\", etc. \n",
"\\noindent\\textbf{Diversity-Oriented Contrastive Loss} Since the model needs to distinguish between the ground truth and those negative samples, we design a novel diversity-oriented contrastive loss. Specifically, given an input sequence $\\mathcal{\\hat{H}},G,M$, the ground truth next step $S_{n+1}$, and a set of $K$ negative samples $\\{S_{n+1}^1, S_{n+1}^2,...,S_{n+1}^K\\}$, we aim to maximize the probability of classifying the positive sample correctly with the InfoNCE loss \\citep{oord2018representation}:\n",
"\\begin{align}\n",
"\\begin{split}\n",
" \\mathcal{L}_{\\mathrm{cl}} &= \\frac{\\exp{\\left(y^+/\\tau\\right)}}{\\sum_k \\exp{\\left(y^-_k/\\tau\\right)} +\\exp{\\left(y^+/\\tau\\right)} } \\\\\n",
" y^+&=\\sigma(\\mathrm{Avg}(\\mathbf{W}_y\\mathbf{\\bar{H}}^++\\mathbf{b}_y))\\\\\n",
" y^-_k&=\\sigma(\\mathrm{Avg}(\\mathbf{W}_y\\mathbf{\\bar{H}}^-_k+\\mathbf{b}_y))\\\\\n",
"\\end{split}\n",
"\\end{align}\n",
"where $\\mathbf{\\bar{H}}^+$ and $\\mathbf{\\bar{H}}_k^-$ are decoder hidden states from the positive and $k$-th negative samples, $\\mathbf{W}_y$ is a learnable parameter, $\\tau$ is the temperature, and $\\mathrm{Avg}(*)$ denotes the average pooling function.\n",
"\\subsection{Training Objective}\n",
"We jointly optimize the cross-entropy loss and our proposed diversity-oriented contrastive loss: $\\mathcal{L} = \\mathcal{L}_{\\mathrm{gen}} + \\lambda \\mathcal{L}_{\\mathrm{cl}}$, \n",
"where $\\lambda$ is a hyperparameter that controls the weight of the contrastive loss. \n",
"\\section{Evaluation Metrics}\n",
"\\noindent\n",
"In addition to evaluating whether the generated step matches the next step, we also check whether the generated step matches any subsequent step. \n",
"This enables the model to earn credit if it generates a step that appears in the future.\n",
"Additional details of the evaluation setup are in the Appendix \\ref{sec:evalmetrics}.\n",
"\\begin{table}[!htb]\n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=2.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X}\n",
"\\toprule\n",
"\\multirow{ 2}{*}{\\textbf{Model}}&\\multicolumn{2}{c}{\\textbf{Gardening}}&\\multicolumn{2}{c}{\\textbf{Crafts}}\\\\\n",
" &\\textbf{I@1}$\\uparrow$&\\textbf{T@1}$\\uparrow$&\\textbf{I@1}$\\uparrow$&\\textbf{T@1}$\\uparrow$\\\\ \n",
"\\midrule\n",
"+CP & 48.5 & 39.2 &48.2 & 31.5\\\\ \n",
"+CP+M & \\textbf{49.8} & 41.0 &\\textbf{50.3} & \\textbf{37.8}\\\\ \n",
"+CP+M+R & 48.1 & 38.9 &48.9 & 31.8\\\\ \n",
"+CP+M+R+CL & 49.5 & \\textbf{43.0} &49.0 & 33.9\\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"}\n",
"\\label{tab:fut}\n",
"\\end{table}\n",
"\\begin{table*}[!htb]\n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=2.4\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=1.1\\hsize}X>{\\centering\\arraybackslash\\hsize=0.7\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n",
"\\toprule\n",
"\\midrule\n",
"GPT-2 & 13.2 & 5.03 & 1.87 & 0.72 & 7.38 & 12.5 & -4.73 & 0.239\\\\\n",
"T5 & 17.6 & 9.05 & 4.92 & 2.87 & 9.41 & 16.5 & -4.45 & 0.300\\\\\n",
"Naive Retrieval & 10.9 & 4.14 & 1.93 & 1.10 & 6.33 & 10.0 & -4.88 & 0.180\\\\\n",
"GPT2-SIF & 11.6 & 5.10 & 2.43 & 1.28 & 6.85 & 10.8 & -4.80 & 0.233\\\\\n",
"BART & 17.0 & 8.21 & 4.45 & 2.61 & 8.93 & 15.7 & -4.52 & 0.277\\\\\n",
"\\hdashline\n",
"\\hspace{1mm}+CP & 16.9 & 8.79 & 4.99 & 3.03 & 9.23 & 16.5 & -4.41 & 0.300\\\\\n",
"\\hspace{1mm}+CP+M & 17.8 & 9.36 & 5.30 & 3.19 & 9.61 & \\textbf{17.4} & -4.38 & 0.305\\\\\n",
"\\hspace{1mm}+CP+M+R & 17.5 & 9.22 & 5.25 & 3.13 & 9.60 & 17.2 & \\textbf{-4.36} & 0.309\\\\\n",
"\\hspace{1mm}+CP+M+R+CL & \\textbf{18.4} & \\textbf{9.72} & \\textbf{5.51} & \\textbf{3.31} & \\textbf{9.91} & 17.3 & -4.37 & \\textbf{0.310}\\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\label{tab:step} }\n",
"\\end{table*}\n",
"\\begin{table*}[!htb]\n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=2.4\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=1.1\\hsize}X>{\\centering\\arraybackslash\\hsize=0.7\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n",
"\\toprule\n",
"\\midrule\n",
"GPT-2 & 15.5 & 5.40 & 1.98 & 0.93 & 7.63 & 14.0 & -4.67 & 0.218\\\\\n",
"T5 & 20.8 & 11.1 & 6.43 & 4.07 & 10.54 & 19.6 & -4.38 & 0.300\\\\\n",
"Naive Retrieval & 13.5 & 5.26 & 2.38 & 1.28 & 6.81 & 12.3 & -4.83 & 0.163\\\\\n",
"GPT2-SIF & 14.8 & 6.70 & 3.05 & 1.58 & 7.74 & 13.2 & -4.69 & 0.234\\\\\n",
"BART & 19.7 & 10.8 & 6.22 & 4.11 & 10.44 & 20.0 & -4.29 & 0.299\\\\\n",
"\\hdashline\n",
"\\hspace{1mm}+CP & 20.1 & 11.1 & 6.48 & 4.24 & 10.61 & 20.1 & -4.29 & 0.303\\\\\n",
"\\hspace{1mm}+CP+M & 20.5 & 11.1 & 6.61 & 4.40 & 10.79 & 20.1 & -4.28 & 0.305\\\\\n",
"\\hspace{1mm}+CP+M+R & 20.7 & 11.5 & 6.93 & 4.66 & 11.02 & \\textbf{20.5} & \\textbf{-4.25} & 0.309\\\\\n",
"\\hspace{1mm}+CP+M+R+CL & \\textbf{21.3} & \\textbf{11.8} & \\textbf{7.12} & \\textbf{4.85} & \\textbf{11.25} & 20.3 & -4.26 & \\textbf{0.313}\\\\ \n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\label{tab:step1} }\n",
"\\end{table*}\n"
],
"output": {
"What experiments do you suggest doing?": [
"1. Goal-oriented script generation with advanced pre-trained text-only generation models: The authors should design and implement a set of text-only baselines (such as GPT-2, BART, etc.) to compare the quality of the script learned by their approach and those generated by these models.",
"2. Retrieval baselines: The authors should include a set of baselines that directly retrieve relevant sentences and learn scripts based on these retrieval results. They should then compare the quality of the script generated by their approach and scripts generated by these baselines.",
"3. Baselines transforming visual scenes into embeddings instead of captions: The authors should include baselines that encode visual states as vectors like CLIP-BART. They then should compare if the script jointly learned via history sentences and image captions are better than those learned via history sentences and image embeddings.",
"4. Goal-oriented script generation with advanced text-only generation models fine-tuned on their dataset: The authors should include text-only baselines and fine-tune the baseline models on their dataset. Then, they should compare if the script generated by their approach is better than those generated by the fine-tined text-only models.",
"5. Approach with alternative module choices: The authors should replace some modules in their approach (such as text encoding module) to conduct ablation studies. They then should compare if the script generated by their approach is better than those generated with alternative module choices.",
"6. Automatic evaluation: The authors should include automatic metrics to evaluate the generated scripts. For example, BLEU can measure the matching level of generation and ground truth.",
"7. Human evaluation: The authors should also show human evaluation of their generated scripts. They should consider different dimensions of such evaluations like correctness, executability, diversity, and so on."
],
"Why do you suggest these experiments?": [
"1. To see if the accompanied visual scenes can really enhance or improve the quality of the generated script, thus revealing the efficacy of incorporating visual states in script learning.",
"2. To see if scripts learned by open-ended generation are better than those constructed by retrieved sentences, thus revealing the limitation of received-based script generation.",
"3. To see if the design of transforming visual states into captions first and handling them in the aligned text feature space is better than using image embeddings directly, thus revealing the captions' information is more applicable than image embeddings.",
"4. Although pre-trained models are often capable of many linguistic tasks, fine-tuning can enhance the model's capability. This comparison can thus further show if the visual scene-enhanced script learning is better than the pure-text script learning setting.",
"5. To check the impact of different backbones, thus revealing their choices of modules to form an optimal design.",
"6. To see if the generated scripts match ground truth well.",
"7. To complement the automatic evaluation. Due to the open-ended feature of script learning, qualitative evaluation by humans is essential."
]
},
"paper_info": {
"title": "Multimedia Generative Script Learning for Task Planning",
"authors": [
"Qingyun Wang",
"Manling Li",
"Hou Pong Chan",
"Lifu Huang",
"Julia Hockenmaier",
"Girish Chowdhary",
"Heng Ji"
],
"abstract": "Goal-oriented generative script learning aims to generate subsequent steps to\nreach a particular goal, which is an essential task to assist robots or humans\nin performing stereotypical activities. An important aspect of this process is\nthe ability to capture historical states visually, which provides detailed\ninformation that is not covered by text and will guide subsequent steps.\nTherefore, we propose a new task, Multimedia Generative Script Learning, to\ngenerate subsequent steps by tracking historical states in both text and vision\nmodalities, as well as presenting the first benchmark containing 5,652 tasks\nand 79,089 multimedia steps. This task is challenging in three aspects: the\nmultimedia challenge of capturing the visual states in images, the induction\nchallenge of performing unseen tasks, and the diversity challenge of covering\ndifferent information in individual steps. We propose to encode visual state\nchanges through a selective multimedia encoder to address the multimedia\nchallenge, transfer knowledge from previously observed tasks using a\nretrieval-augmented decoder to overcome the induction challenge, and further\npresent distinct information at each step by optimizing a diversity-oriented\ncontrastive learning objective. We define metrics to evaluate both generation\nand inductive quality. Experiment results demonstrate that our approach\nsignificantly outperforms strong baselines.",
"comments": "21 pages, Accepted by Findings of the Association for Computational\n Linguistics: ACL 2023, Code and Resources at\n https://github.com/EagleW/Multimedia-Generative-Script-Learning"
},
"raw_data": {
"context_before_exp": [
"\n",
"\\pdfoutput=1\n",
"\n",
"\n",
"\\documentclass[11pt]{article}\n",
"\n",
"\n",
"\\usepackage[]{ACL2023}\n",
"\n",
"\n",
"\\usepackage{times}\n",
"\\usepackage{latexsym}\n",
"\n",
"\n",
"\\usepackage[T1]{fontenc}\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\usepackage[utf8]{inputenc}\n",
"\n",
"\n",
"\n",
"\n",
"\\usepackage{microtype}\n",
"\n",
"\n",
"\n",
"\n",
"\\usepackage{inconsolata}\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\usepackage{longtable}\n",
"\\usepackage{tabu}\n",
"\\usepackage{arydshln}\n",
"\n",
"\\usepackage{graphicx}\n",
"\\usepackage{wrapfig}\n",
"\\usepackage{CJK,algorithm,algorithmic,amssymb,amsmath,array,epsfig,graphics,float,subcaption,verbatim,epstopdf}\n",
"\\usepackage{enumitem}\n",
"\\usepackage{color,soul}\n",
"\\usepackage{hhline}\n",
"\\usepackage{multirow}\n",
"\\usepackage{xcolor}\n",
"\\usepackage{hyperref}\n",
"\\usepackage{cleveref}\n",
"\n",
"\\usepackage{wrapfig,lipsum}\n",
"\n",
"\\usepackage{xurl}\n",
"\n",
"\\usepackage{tabularx}\n",
"\\usepackage{bm}\n",
"\\usepackage{seqsplit}\n",
"\\usepackage{stmaryrd}\n",
"\\usepackage{booktabs}\n",
"\n",
"\\usepackage{comment} \n",
"\n",
"\n",
"\\usepackage{xparse}\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\title{Multimedia Generative Script Learning for Task Planning \n",
"}\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\author{\n",
"Qingyun Wang$^{1}$, \\ Manling Li$^1$, \\ Hou Pong Chan$^{2}$, \\ Lifu Huang$^{3}$, \\\\ \\ \\textbf{Julia Hockenmaier}$^1$, \\ \\textbf{Girish Chowdhary}$^1$, \\ \\textbf{Heng Ji}$^{1}$\\\\ \n",
"$^{1}$ University of Illinois at Urbana-Champaign $^{2}$ University of Macau $^{3}$ Virginia Tech\\\\\n",
"$^{1}$\\texttt{\\fontfamily{pcr}\\selectfont\\{qingyun4,manling2,juliahmr,girishc,hengji\\}@illinois.edu}\\\\\n",
"$^{2}${\\fontfamily{pcr}\\selectfont [email protected]},$^{3}${\\fontfamily{pcr}\\selectfont [email protected]}\n",
"}\n",
"\n",
"\\begin{document}\n",
"\\maketitle\n",
"\\begin{abstract}\n",
"\n",
"Goal-oriented generative script learning aims to generate subsequent steps to reach a particular goal, which is an essential task to assist robots or humans in performing stereotypical activities. An important aspect of this process is the ability to capture historical states visually, which provides detailed information that is not covered by text and will guide subsequent steps. Therefore, we propose a new task, Multimedia Generative Script Learning, to generate subsequent steps by tracking historical states in both text and vision modalities, as well as presenting the first benchmark containing 5,652 tasks and 79,089 multimedia steps. This task is challenging in three aspects: the multimedia challenge of capturing the visual states in images, the induction challenge of performing unseen tasks, and the diversity challenge of covering different information in individual steps. We propose to encode visual state changes through a selective multimedia encoder to address the multimedia challenge, transfer knowledge from previously observed tasks using a retrieval-augmented decoder to overcome the induction challenge, and further present distinct information at each step by optimizing a diversity-oriented contrastive learning objective. We define metrics to evaluate both generation and inductive quality. Experiment results demonstrate that our approach significantly outperforms strong baselines\\footnote{The programs, data, and resources are publicly available for research purposes at: \\url{https://github.com/EagleW/Multimedia-Generative-Script-Learning}.}. \n",
"\\end{abstract}\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\section{Introduction}\n",
"\n",
"\n",
"\\begin{figure}[htb!]\n",
"\\centering\n",
"\\includegraphics[width=0.9\\linewidth]{fig/task.pdf}\n",
"\\caption{\\textbf{Multimedia Generative Script Learning:} The upper box shows the task input, including the goal and multimedia step history. Each step contains a text description and an illustrative image. The output is the next step. \n",
"We retrieve historically relevant steps from the training corpus. } \n",
"\\label{img:task_example}\n",
"\\end{figure}\n",
"\n",
"\n",
"Robots rely on understanding the present real-world state and predicting the subsequent steps to better assist humans in daily stereotypical tasks such as meal preparation and gardening~\\citep{10.1007/978-981-15-7345-3_30,9720489}. As an example, Robohow~\\citep{robohow} uses articles from WikiHow\\footnote{\\url{https://www.wikihow.com} contains steps for a variety of tasks. } to assist robots in everyday tasks in human working and living environments. However, the problem is that not all daily tasks are well documented. Thus, generating a sequence of steps that lead to a given goal (i.e., goal-oriented generative script learning)~\\citep{lyu-etal-2021-goal,huang2022language, zoey1, zoey2, zoey3} has a fundamental importance in allowing robots to perform unseen tasks by understanding the patterns in previously observed similar tasks. \n",
"\n",
"\\begin{figure}[!bt]\n",
"\\centering\n",
"\\includegraphics[width=0.9\\linewidth]{fig/overview}\n",
"\\caption{Architecture overview. We use the example in Figure \\ref{img:task_example} as the walking-through example. }\n",
"\\label{img:overview}\n",
"\\end{figure}\n",
"\n",
"\n",
"Despite this, previous goal-oriented generative script learning focuses solely on text~\\citep{lyu-etal-2021-goal,huang2022language}, which is commonly affected by reporting bias~\\citep{10.1145/2509558.2509563} as important details may be omitted in the source text. However, such information is often implicitly contained in images. For example, in Figure~\\ref{img:task_example}, the image of Step 1 illustrates the items needed to \\textit{make a bracelet}, which is not mentioned in the text but helps predict the action of \\textit{threading beads} as a future step. \n",
"Existing multimedia script learning work seeks to bridge this cross-media gap, but the task settings are multi-choice selection~\\citep{yang-etal-2021-visual} or ordering~\\citep{wu-etal-2022-understanding}, which require candidate steps as input so it is not a practical setting for real-life robots. \n",
"\n",
"\n",
"To address these problems, we propose a new task, \\textbf{Multimedia Generative Script Learning} (Figure~\\ref{img:task_example}), that requires systems to generate future steps based on the goal and previous steps with visual scenes depicting their states. Specifically, given the goal and previous step history in the form of natural language sentences paired with descriptive images, the model should automatically generate the natural language instruction for the next step.\n",
"A good script has three hallmarks:\n",
"\n",
"(1) \\underline{\\textit{Visual-State Trackable}}: it records the historical visual scenes and recognizes significant changes that impact future steps. We call it \\textit{multimedia challenge}. To address this challenge, we focus on salient differences in visual scenes, and propose a novel \\textbf{selective multimedia encoder}. Rather than learning directly from the visual details of each object, we first leverage an image captioner as an abstract summary of the image about global interactions among multiple objects. We then introduce a selection gate to focus on the selected captions and steps closely related to the future step. For instance, the second caption \\textit{``a child's hand with a measuring tape on it''} in Figure ~\\ref{img:task_example} can be filtered out by the selection gate because it is not closely related to the future steps.\n",
"\n",
"(2) \\underline{\\textit{Inductive}}: it transfers knowledge from a previously observed task to similar unseen tasks. We call it \\textit{induction challenge}. To induce procedural knowledge from previously observed tasks, we propose a \\textbf{retrieval augmented decoder} to obtain relevant steps to guide the subsequent step generation. For example, the future step in Figure~\\ref{img:task_example} closely resembles the scripts used in previous retrieved steps about \\textit{threading items}, thus transferring script knowledge to an unseen task. \n",
"\n",
"(3) \\underline{\\textit{Diverse}}: it displays distinct information at each step. We call it \\textit{diversity challenge}. \n",
"Existing pre-trained transformer-based language models such as T5 \\citep{JMLR:v21:20-074}, BART \\citep{lewis-etal-2020-bart}, and GPT-2 \\citep{radford2019language} tend to generate repeated or highly similar future steps as shown in Figure~\\ref{img:task_example}.\n",
"Therefore, we introduce a novel \\textbf{diversity-oriented contrastive learning objective} to control all subsequent steps to convey different information. \n",
"We treat all other steps in the given input and retrieved steps in other tasks similar to the given input as \\textit{hard} negatives.\n",
"\n",
"In addition to traditional generation-based metrics to evaluate task performance, we propose a new \\textit{multimodal-retrieval based metric} to capture cross-modal semantic similarity. \n",
"While the model design can be applied to any domain of interest, we experiment with the model on two domains \\textit{Gardening} and \\textit{Crafts}, where task planning has not been well researched. \n",
"Automatic evaluation shows that our generated step predictions are close to the human written ground truth. Human evaluation further confirms that our diversity-oriented contrastive learning objective leads to diverse and correct steps.\n",
"\n",
"The contributions are threefold: \n",
"\\begin{enumerate}\n",
"\\item We propose the first \\textit{multimedia goal-oriented generative script learning task} to record historical steps in both text and images. We also release a new benchmark from WikiHow, featuring 5,652 tasks and 79,089 multimedia steps.\n",
"\\item We propose a novel approach to produce \\textit{visually trackable}, \\textit{inductive}, and \\textit{diverse} scripts through a selective multimedia encoder, a retrieval augmented decoder, and a diversity-oriented contrastive learning objective. \n",
"\\item We propose a new \\textit{multimodal-retrieval based metric} to evaluate the cross-modal semantic similarity and the inductive ability by checking factual correctness.\n",
"\\end{enumerate}\n",
"\n",
"\n",
"\n",
"\n",
"\\section{Problem Formulation}\n",
"We propose a new multimedia generative script learning task: given an activity goal $G$, an optional subgoal $M$ that specifies the concrete needs, and the previous multimedia step history $\\mathcal{H}_n=\\{(S_1,V_1),...,(S_n,V_n)\\}$ with length $n$, a model is expected to predict the next possible step $S_{n+1}$, where $S_i$ is a text sequence and $V_i$ is an image. \n",
"\n",
"\n",
"\\begin{table}[!htb]\n",
"\n",
" \n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=1.2\\hsize}X>{\\arraybackslash\\hsize=0.9\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=0.8\\hsize}X>{\\centering\\arraybackslash\\hsize=1\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n",
"\\toprule\n",
"\\textbf{Domain}&\\textbf{Split} & \\textbf{\\#Task} & \\textbf{\\#Pair}& $\\mathbf{\\overline{\\#Step}}$ & $\\mathbf{\\overline{\\#Token}}$ \\\\\n",
"\\midrule\n",
" &Train & 1,857 & 20,258 & 3.10 & 11.6 \\\\\n",
"Gardening &Valid. & 237 & 2,428 & 3.03 & 10.6\\\\\n",
" &Test & 238 & 2,684 & 2.88 & 11.2 \\\\\n",
"\\hdashline\n",
" &Train & 2,654 & 32,082 & 6.06 & 8.98 \\\\\n",
"Crafts &Valid. & 3,33 & 4,061 & 6.12 & 9.10 \\\\\n",
" &Test & 3,33 & 3,937 & 5.91 & 9.00 \\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Statistics of our dataset. $\\mathbf{\\overline{\\#Step}}$ denotes average number of steps per sample. $\\mathbf{\\overline{\\#Token}}$ denotes average number of words per step.\\label{tab:stat} }\n",
"\n",
"\\end{table}\n",
"\\section{Dataset Collection}\n",
"\n",
"\n",
"\n",
"Using articles from \\textit{Gardening} and \\textit{Crafts} categories as case studies, we create a new dataset based on the English WikiHow dump (2021/05). There are typically three levels of hierarchy in a WikiHow article: \\textit{goals} which describe the overall task, \\textit{subgoals} which represent the intermediate process to accomplish a \\textit{goal}, and \\textit{steps} which are the specific actions to complete a \\textit{subgoal}. For each WikiHow article, we collect step-image pairs as well as their goals and methods\\footnote{We only keep steps that contain both images and texts.}. We split the whole dataset based on the task categories. Therefore, the validation and test sets contain tasks not included in the training set. Table \\ref{tab:stat} shows the detailed data statistics.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\section{Method}\n",
"\n",
"\n",
"\n",
"\\subsection{Model Architecture}\n",
"\n",
"The overall framework is illustrated in Figure \\ref{img:overview}. \n",
"Given the activity goal $G$, optional subgoal $M$, and multimedia step history $\\mathcal{H}_n$, \n",
"we first use an image captioner to map each input image into a precise caption and produce the caption-enhanced step history $\\mathcal{\\hat{H}}_{n}$. \n",
"Then we propose a \\textit{selective multimedia encoder} by extending the BART encoder with a gated fusion layer to learn contextualized representations for the step history. \n",
"After that, a retrieval module retrieves historically relevant steps from the training corpus and encodes them with a \\textit{retrieved step encoder}. \n",
"Finally, we introduce a \\textit{retrieval-augmented decoder}, which enhances the BART decoder with a retrieval gate fusion layer to fuse the representations of the input step history and retrieved steps to generate the next step.\n",
"The entire model is trained by our proposed \\textit{diversity-oriented contrastive loss} and cross-entropy loss.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\subsection{Selective Multimedia Encoder}\n",
"\\textbf{Image Encoding} Compared to step descriptions which focus more on action description, captions provide more visual environment/object information such as \\textit{beads} in Step 1 from Figure \\ref{img:overview}.\n",
"Because we are more concerned with the overall semantics of the salient objects in the image rather than the details of every object, we adopt image captioners to encode visual features and track visual state changes. For instance, while multiple objects are present in Step 3 in Figure \\ref{img:task_example}, the \\textit{finger} object can be ignored in the third step as it does not represent the key information conveyed by the image. Specifically, we use the state-of-the-art image captioner BLIP~\\citep{li2022blip}, which is pretrained on a large-scale vision-and-language corpus with 129M images to generate a caption $C_{i}$ for each image $V_{i}$ in the input step history $\\mathcal{H}_{n}$. After that, we obtain the \\textit{caption-enhanced step history} $\\mathcal{\\hat{H}}_{n}=\\{(S_1,C_1),...,(S_n,C_n)\\}$, where $C_i$ is the caption of the image $V_i$ in step $i$. \n",
"\n",
"\n",
"\\textbf{Selective Multimedia Encoding}\n",
"To help the encoder capture the activity goal and subgoal information, we concatenate goal $G$ and optional subgoal $M$ to serve as the first sequence in the history $X_0= [G, M]$. For the subsequent steps in the history, we concatenate each step and caption as $X_{2i-1}=S_i$ and $X_{2i}=C_i$. To summarize the step history, we prepend a learnable $\\mathtt{[CLS]}$ token to the sequence as a contextualized vector. The entire text sequence is then represented as $\\mathcal{X}=\\{\\mathtt{[CLS]},X_0,X_1,...,X_{2n}\\}$. We pass the text sequence $\\mathcal{X}$ into a BART encoder to get the contextualized hidden representation $\\mathbf{H}=\\{\\mathbf{h}_0,...,\\mathbf{h}^{2n}_{L_{X_{2n}}}\\}=\\mathrm{Enc}(\\mathcal{X})$. We denote $\\mathbf{H}_{X_j}=\\{\\mathbf{h}^j_1,...,\\mathbf{h}^j_{L_{X_j}}\\}$ as the hidden states for sequence $X_j$, where $L_{X_j}$ is the length of $X_j$.\n",
"\n",
"Since the input sequence contains steps or captions not directly relevant to the future step, we need to mask those sentences based on the step/caption representations. For instance, in Figure \\ref{img:overview}, the step description for Step 1 is vague and needs to be masked. We treat the representation of the $\\mathtt{[CLS]}$ token, $\\mathbf{h}_0$, as the contextualized representation of the entire step history and use it to compute a mask that filters out the irrelevant step/caption information. Specifically, we use $\\mathbf{h}_0$ as query and $\\mathbf{H}_{X_j}$ as both the key and value to compute Multi-Headed Attention ($\\mathrm{MultiHead}$) \\citep{NIPS2017_3f5ee243} for each sequence hidden states $\\mathbf{H}_{X_j}$: $\\hat{\\mathbf{h}}_{X_j} = \\mathrm{MultiHead}(\\mathbf{h}_0,\\mathbf{H}_{X_j},\\mathbf{H}_{X_j})$,\n",
"where $\\mathbf{\\hat{h}}_{X_j}$ is the weighted representation for text sequence $X_j$.\n",
"Then, for each sequence $X_j$, we can calculate the mask probability as:$\n",
"\\alpha_j=\\sigma (\\mathbf{W}_\\alpha[\\mathbf{\\mathbf{h}_0;\\hat{h}}_{X_j}])$, where $\\mathbf{W}_\\alpha$ is a learnable parameter. Similar to \\citet{sengupta-etal-2021-gated-transformer}, \n",
"we update the hidden states for each sequence $X_j$ as $\n",
"\\mathbf{\\bar{H}}_{X_j} = \\alpha_j \\cdot \\mathbf{emb}_{\\mathtt{[MASK]}} + (1-\\alpha_j )\\mathbf{H}_{X_j}\n",
"$,\n",
"where $\\mathbf{emb}_{\\mathtt{[MASK]}}$ is the embedding of the $\\mathtt{[MASK]}$ token. \n",
"The final hidden state sequences are $\\mathbf{\\bar{H}}=[h_0;\\mathbf{\\bar{H}}_1;...;\\mathbf{\\bar{H}}_{2n}]$.\n",
"\n",
"\n",
"\\subsection{Step Retrieval Augmentation} \n",
"\\label{sec:retrieve}\n",
"\\textbf{Historically Relevant Step Retrieval} In addition to the caption-enhanced step history, $\\mathcal{\\hat{H}}_{n}$, we retrieve historically relevant steps $\\mathcal{R}_{n+1}= \\{R_1, ..., R_k\\}$ from the training tasks, where $k$ is the number of retrieved relevant steps. We first use SentenceBERT~\\citep{reimers-gurevych-2019-sentence} to encode all steps. We then retrieve $k$ steps from the training corpus, which have the top-k highest cosine similarity to the previous step $S_n$ from the representation given by SentenceBERT\\footnote{We use the previous step $S_n$ instead of all history since it is more temporally correlated to the next step.}. Finally, we consider the immediate next step for each of those $k$ steps as potential relevant steps $\\mathcal{R}_{n+1}$. For instance, because Step 5 in Figure~\\ref{img:overview} is similar to \\textit{pull the thread out} in the training corpus, we choose its immediate next step \\textit{thread the bobbin} as a historically relevant step.\n",
"\n",
"\\noindent\\textbf{Retrieved Step Encoder} For historically relevant steps $\\mathcal{R}= \\{R_1, ..., R_k\\}$, we apply the BART encoder to get hidden states $\\mathbf{H}_R=\\{\\mathbf{H}_{R_1};....;\\mathbf{H}_{R_k}\\}$. \n",
"Similarly, we use $\\mathbf{h}_0$ in multimedia encoder as the query and $\\mathbf{H}_{R_i}$ as both the key and value to compute multi-headed attention for each sequence hidden states: $\\hat{\\mathbf{h}}_{R_i}=\\mathrm{MultiHead}(\\mathbf{h}_0,\\mathbf{H}_{R_i},\\mathbf{H}_{R_i})$,\n",
"where $\\mathbf{\\hat{h}}_{R_i}$ is the weighted representation for step sequence $R_i$. \n",
"Similarly, we can calculate the mask probability as: $\\beta_j=\\sigma (\\mathbf{W}_\\beta[\\mathbf{h}_0;\\mathbf{\\hat{h}}_{R_j}])$, \n",
"where $\\mathbf{W}_\\beta $ is a learnable parameter. We then update the hidden states for each sequence $R_j$ as $\\mathbf{\\bar{H}}_{R_i} = \\beta_j \\cdot \\mathbf{emb}_{\\mathtt{[MASK]}} + (1-\\beta_j )\\mathbf{H}_{R_i}$.\n",
"The final hidden state sequences is $\\mathbf{\\bar{H}}_R=[\\mathbf{\\bar{H}}_{R_1};...;\\mathbf{\\bar{H}}_{R_k}]$.\n",
"\n",
"\n",
"\\subsection{Retrieval-Augmented Decoder}\n",
"\n",
"In the decoder, we compute the probability $P\\left(s_{q}|s_{<q},\\mathcal{\\hat{H}},G,M\\right)$ for the $q$-th token $s_q\\in S_{n+1}$.\n",
"Our retrieval-augmented decoder is similar to \\cite{liu-etal-2021-three}, which aims to capture historically relevant steps related to the next step based on previous decoder hidden states. Given $z_q^l$ which is the hidden state of $s_{q}$ in layer $l$, we first use a multi-head cross-attention to fuse the hidden states from the retrieved steps $\\mathbf{\\bar{H}_R}$: $ {z'_q}^l = \\mathrm{MultiHead}(z_q^l,\\mathbf{\\bar{H}}_R,\\mathbf{\\bar{H}}_R)$.\n",
"We also append a gating mechanism to control the knowledge from the retrieved steps and previous hidden states:\n",
"\\begin{equation}\n",
"\\begin{split}\n",
" \\gamma &= \\sigma(\\mathbf{W}_\\gamma[z_q^l;{z'_q}^l]) \\\\\n",
" {\\tilde{z}_q}^l &= \\gamma \\cdot \\mathrm{LN}({z'_q}^l) + (1-\\gamma)\\cdot (z_q^l)\n",
"\\end{split}\n",
"\\end{equation}\n",
"where $\\mathbf{W}_\\gamma$ is a learnable parameter and $\\mathrm{LN}(*)$ is the layer norm function. \n",
"Finally, the fused hidden states in the top layer are used to compute the generation probability.\n",
"We supervise the next step generation using the standard cross-entropy loss: \n",
"\\begin{equation}\n",
"\\mathcal{L}_{\\mathrm{gen}} = \\sum_{q=1}^{|S_{n+1}|} \\log P\\left(s_{q}|s_{<q},\\mathcal{\\hat{H}},G,M\\right)\n",
"\\end{equation}\n",
"\n",
"\n",
"\\subsection{Diversity-Oriented Contrastive Learning}\n",
"\n",
"In the experiment, we observe that the model tends to keep generating similar future steps in a row given the beginning steps as input or just paraphrases the input steps. \n",
"Therefore, we propose a contrastive learning-based loss to encourage the model to return diverse step prediction results. \n",
"\n",
"\n",
"\\noindent\\textbf{Negative Sampling} Sequence-to-sequence models suffer from the ``exposure bias'' problem \\citep{ranzato2015sequence,an2022cont} because of \\textit{teacher forcing}. Contrastive loss provides an additional sequence level loss which can help models increase the diversity of the output steps. We adopt two types of negative sampling strategies to discourage the model from paraphrasing the previous step as the future step: \n",
"\\textit{self-negatives} \\citep{wang-etal-2022-simkgc} where we consider the input steps as negative samples and \\textit{retrieved-negatives} where we consider the retrieved steps from training corpus which are similar to the input step as negative samples. \n",
"For example, in Figure \\ref{img:task_example}, the goals and steps from the step history serve as the self-negatives. Given the last step, \u201ccut the thread\u201d, we retrieve similar steps from the training set as retrieved negatives which include \u201ccut your thread\u201d, \"cut off the extra thread\", etc. \n",
"\n",
"\n",
"\\noindent\\textbf{Diversity-Oriented Contrastive Loss} Since the model needs to distinguish between the ground truth and those negative samples, we design a novel diversity-oriented contrastive loss. Specifically, given an input sequence $\\mathcal{\\hat{H}},G,M$, the ground truth next step $S_{n+1}$, and a set of $K$ negative samples $\\{S_{n+1}^1, S_{n+1}^2,...,S_{n+1}^K\\}$, we aim to maximize the probability of classifying the positive sample correctly with the InfoNCE loss \\citep{oord2018representation}:\n",
"\\begin{align}\n",
"\\begin{split}\n",
" \\mathcal{L}_{\\mathrm{cl}} &= \\frac{\\exp{\\left(y^+/\\tau\\right)}}{\\sum_k \\exp{\\left(y^-_k/\\tau\\right)} +\\exp{\\left(y^+/\\tau\\right)} } \\\\\n",
" y^+&=\\sigma(\\mathrm{Avg}(\\mathbf{W}_y\\mathbf{\\bar{H}}^++\\mathbf{b}_y))\\\\\n",
" y^-_k&=\\sigma(\\mathrm{Avg}(\\mathbf{W}_y\\mathbf{\\bar{H}}^-_k+\\mathbf{b}_y))\\\\\n",
"\\end{split}\n",
"\\end{align}\n",
"where $\\mathbf{\\bar{H}}^+$ and $\\mathbf{\\bar{H}}_k^-$ are decoder hidden states from the positive and $k$-th negative samples, $\\mathbf{W}_y$ is a learnable parameter, $\\tau$ is the temperature, and $\\mathrm{Avg}(*)$ denotes the average pooling function.\n",
"\\subsection{Training Objective}\n",
"\n",
"We jointly optimize the cross-entropy loss and our proposed diversity-oriented contrastive loss: $\\mathcal{L} = \\mathcal{L}_{\\mathrm{gen}} + \\lambda \\mathcal{L}_{\\mathrm{cl}}$, \n",
"where $\\lambda$ is a hyperparameter that controls the weight of the contrastive loss. \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\section{Evaluation Metrics}\n",
"\n",
"\n",
"\n",
"\\textbf{Generation Quality Evaluation} Following common practice in text generation, we first evaluate our model with BLEU \\citep{papineni-etal-2002-bleu}, ROUGE \\citep{lin-2004-rouge}, and METEOR \\citep{denkowski-lavie-2014-meteor} scores to examine the content overlap between generated steps and ground truth. \n",
"\n",
"\\noindent\n",
"\\textbf{Inductive Quality Evaluation} \n",
"In order to determine whether the inferred subsequent steps are factually correct, we further evaluate the models with BARTScore \\citep{NEURIPS2021_e4d2b6e6} and the semantic similarity score \\citep{thakur-etal-2021-augmented}. The semantic similarity score uses a cross-encoder pretrained on STSBenchmark \\citep{cer-etal-2017-semeval} to calculate the semantic similarity between two sentences.\n",
"\n",
"\n",
"In addition to evaluating whether the generated step matches the next step, we also check whether the generated step matches any subsequent step. \n",
"This enables the model to earn credit if it generates a step that appears in the future.\n",
"We propose a \\textit{Multimodal-Retrieval based metric}: for each generated step, we use it as a query to search all corresponding step-image pairs under the same subgoal/goal from the testing set. We then compute HIT@1 for results that fall into ground-truth future step-image pairs. Similar to Section \\ref{sec:retrieve}, we use SBERT~\\citep{reimers-gurevych-2019-sentence} to rank the most similar steps under the same subgoal to get Text@1 (T@1). To compute Image@1 (I@1), we use CLIP \\citep{pmlr-v139-radford21a} to rank the most similar images under the same subgoal. If the top-1 retrieval results appear in the subsequent steps, we consider it a HIT. The retrieval-based metric captures normalized semantic similarity concerning all related steps under certain subgoals. The CLIP-based retrieval metric also enables the evaluation of the cross-modality semantic similarity. \n",
"Additional details of the evaluation setup are in the Appendix \\ref{sec:evalmetrics}.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\begin{table}[!htb]\n",
"\n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=2.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X}\n",
"\\toprule\n",
"\\multirow{ 2}{*}{\\textbf{Model}}&\\multicolumn{2}{c}{\\textbf{Gardening}}&\\multicolumn{2}{c}{\\textbf{Crafts}}\\\\\n",
" &\\textbf{I@1}$\\uparrow$&\\textbf{T@1}$\\uparrow$&\\textbf{I@1}$\\uparrow$&\\textbf{T@1}$\\uparrow$\\\\ \n",
"\\midrule\n",
"BART & 44.6 & 40.0 &48.2 & 29.9 \\\\\n",
"+CP & 48.5 & 39.2 &48.2 & 31.5\\\\ \n",
"+CP+M & \\textbf{49.8} & 41.0 &\\textbf{50.3} & \\textbf{37.8}\\\\ \n",
"+CP+M+R & 48.1 & 38.9 &48.9 & 31.8\\\\ \n",
"+CP+M+R+CL & 49.5 & \\textbf{43.0} &49.0 & 33.9\\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Multimodal-retrieval based evaluation (\\\n",
"}\n",
"\\label{tab:fut}\n",
"\n",
"\n",
"\\end{table}\n",
"\n",
"\\begin{table*}[!htb]\n",
"\\centering\n",
"\\small\n",
"\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=2.4\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=1.1\\hsize}X>{\\centering\\arraybackslash\\hsize=0.7\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n",
"\\toprule\n",
"\\textbf{Model}&\\textbf{B-1}$\\uparrow$&\\textbf{B-2}$\\uparrow$&\\textbf{B-3}$\\uparrow$&\\textbf{B-4}$\\uparrow$&\\textbf{METEOR}$\\uparrow$&\\textbf{R-L}$\\uparrow$&\\textbf{BARTScore}$\\uparrow$&\\textbf{Semantic}$\\uparrow$\\\\ \n",
"\\midrule\n",
"GPT-2 & 13.2 & 5.03 & 1.87 & 0.72 & 7.38 & 12.5 & -4.73 & 0.239\\\\\n",
"T5 & 17.6 & 9.05 & 4.92 & 2.87 & 9.41 & 16.5 & -4.45 & 0.300\\\\\n",
"Naive Retrieval & 10.9 & 4.14 & 1.93 & 1.10 & 6.33 & 10.0 & -4.88 & 0.180\\\\\n",
"CLIP-BART & 14.4 & 7.10 & 3.77 & 2.22 & 8.28 & 13.8 & -4.44 & 0.256\\\\\n",
"Retrieval BART & 16.8 & 8.68 & 4.80 & 2.24 & 9.15 & 16.0 & -4.43 & 0.295\\\\\n",
"GPT2-SIF & 11.6 & 5.10 & 2.43 & 1.28 & 6.85 & 10.8 & -4.80 & 0.233\\\\\n",
"BART & 17.0 & 8.21 & 4.45 & 2.61 & 8.93 & 15.7 & -4.52 & 0.277\\\\\n",
"\\hdashline\n",
"\\hspace{1mm}+CP & 16.9 & 8.79 & 4.99 & 3.03 & 9.23 & 16.5 & -4.41 & 0.300\\\\\n",
"\\hspace{1mm}+CP+M & 17.8 & 9.36 & 5.30 & 3.19 & 9.61 & \\textbf{17.4} & -4.38 & 0.305\\\\\n",
"\\hspace{1mm}+CP+M+R & 17.5 & 9.22 & 5.25 & 3.13 & 9.60 & 17.2 & \\textbf{-4.36} & 0.309\\\\\n",
"\\hspace{1mm}+CP+M+R+CL & \\textbf{18.4} & \\textbf{9.72} & \\textbf{5.51} & \\textbf{3.31} & \\textbf{9.91} & 17.3 & -4.37 & \\textbf{0.310}\\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Results with automatic evaluation on next step prediction for the gardening domain (\\\n",
"\\label{tab:step} }\n",
"\\end{table*}\n",
"\n",
"\\begin{table*}[!htb]\n",
"\\centering\n",
"\\small\n",
"\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=2.4\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=1.1\\hsize}X>{\\centering\\arraybackslash\\hsize=0.7\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X>{\\centering\\arraybackslash\\hsize=1.2\\hsize}X}\n",
"\\toprule\n",
"\\textbf{Model}&\\textbf{B-1}$\\uparrow$&\\textbf{B-2}$\\uparrow$&\\textbf{B-3}$\\uparrow$&\\textbf{B-4}$\\uparrow$&\\textbf{METEOR}$\\uparrow$&\\textbf{R-L}$\\uparrow$&\\textbf{BARTScore}$\\uparrow$&\\textbf{Semantic}$\\uparrow$\\\\ \n",
"\\midrule\n",
"GPT-2 & 15.5 & 5.40 & 1.98 & 0.93 & 7.63 & 14.0 & -4.67 & 0.218\\\\\n",
"T5 & 20.8 & 11.1 & 6.43 & 4.07 & 10.54 & 19.6 & -4.38 & 0.300\\\\\n",
"Naive Retrieval & 13.5 & 5.26 & 2.38 & 1.28 & 6.81 & 12.3 & -4.83 & 0.163\\\\\n",
"CLIP-BART & 17.9 & 9.13 & 5.21 & 3.40 & 9.37 & 16.4 & -4.56 & 0.245\\\\\n",
"Retrieval BART & 18.7 & 9.78 & 5.52 & 3.52 & 9.89 & 18.2 & -4.38 & 0.285\\\\\n",
"GPT2-SIF & 14.8 & 6.70 & 3.05 & 1.58 & 7.74 & 13.2 & -4.69 & 0.234\\\\\n",
"BART & 19.7 & 10.8 & 6.22 & 4.11 & 10.44 & 20.0 & -4.29 & 0.299\\\\\n",
"\\hdashline\n",
"\\hspace{1mm}+CP & 20.1 & 11.1 & 6.48 & 4.24 & 10.61 & 20.1 & -4.29 & 0.303\\\\\n",
"\\hspace{1mm}+CP+M & 20.5 & 11.1 & 6.61 & 4.40 & 10.79 & 20.1 & -4.28 & 0.305\\\\\n",
"\\hspace{1mm}+CP+M+R & 20.7 & 11.5 & 6.93 & 4.66 & 11.02 & \\textbf{20.5} & \\textbf{-4.25} & 0.309\\\\\n",
"\\hspace{1mm}+CP+M+R+CL & \\textbf{21.3} & \\textbf{11.8} & \\textbf{7.12} & \\textbf{4.85} & \\textbf{11.25} & 20.3 & -4.26 & \\textbf{0.313}\\\\ \n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Automatic evaluation results on next step prediction for the crafts domain (\\\n",
"\\label{tab:step1} }\n",
"\\end{table*}\n",
"\n",
"\n",
"\n"
],
"context_after_exp": [
"\\section{Experiments}\n",
"\n",
"\n",
"\\subsection{Baselines}\n",
"\n",
"\n",
"\n",
"We first compare our model with \\textbf{(1) state-of-the-art pretrained text-only generation models} to examine the results without tracking visual states, including GPT-2 \\citep{radford2019language}, T5 \\citep{JMLR:v21:20-074}, and BART \\citep{lewis-etal-2020-bart}. We then compare our model with the \\textbf{(2) retrieval baselines} including a naive retrieval baseline which directly uses retrieved historically relevant sentences as discussed in Section \\ref{sec:retrieve}, and retrieval BART which takes in the concatenation of the retrieved historically relevant sentences with the original text input. We also include \\textbf{(3) multi-modal generation baselines} that can take image embedding instead of captions as input, which is equivalent to CLIP-BART \\citep{Sung_2022_CVPR}. The CLIP-BART has a similar backbone as VL-BART \\citep{pmlr-v139-cho21a} but instead replacing the Faster R-CNN \\citep{NIPS2015_14bfa6bb} with ViT-B/32 CLIP encoder \\citep{pmlr-v139-radford21a} which has a better image-text alignment. Additionally, we compare our model with a state-of-the-art script learning model: GPT2-SIF \\cite{sancheti-rudinger-2022-large} finetuned on our dataset. Finally, we include the variances of our model as \\textbf{(4) baselines for ablation}. \n",
"We select BART over T5 as the base model due to the performance and parameter size. Due to the large number of parameters in T5 (222M) compared to BART (139M), given similar model performance in Table \\ref{tab:step} and \\ref{tab:step1}, we choose BART instead of T5. \n",
"The hyperparameters, training details, and additional ablation study are presented in the Appendix \\ref{sec:hyper}, \\ref{sec:train}, and \\ref{sec:abl}. \n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\begin{table}[!htb]\n",
"\n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=4.2\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X}\n",
"\\toprule\n",
"\\multirow{ 2}{*}{\\textbf{Model}}&\\multicolumn{4}{c}{\\textbf{Gardening}}&\\multicolumn{4}{c}{\\textbf{Crafts}}\\\\\n",
"&\\textbf{1}$\\downarrow$&\\textbf{2}$\\downarrow$&\\textbf{3}$\\downarrow$&\\textbf{4}$\\downarrow$&\\textbf{1}$\\downarrow$&\\textbf{2}$\\downarrow$&\\textbf{3}$\\downarrow$&\\textbf{4}$\\downarrow$\\\\\n",
"\\midrule\n",
"Ground Truth & 37.0 & 3.08 & 0.42 & 0.18 & 30.6 & 1.07 & 0.05 & 0.00\\\\\n",
"\\hdashline\n",
"BART & 45.2 & 6.94 & 1.39 & 0.73 & 39.2 & 2.18 & 0.26 & 0.10\\\\\n",
"+CP & \\textbf{43.1} & 5.88 & 1.00 & 0.39 & \\textbf{36.0} & \\textbf{1.81} & 0.05 & 0.02 \\\\\n",
"+CP+M & 43.6 & \\textbf{5.75} & \\textbf{0.78} & \\textbf{0.20} & 36.4 & 1.97 & \\textbf{0.02} & 0.01 \\\\\n",
"+CP+M+R & 44.2 & 6.32 & 1.12 & 0.38 & 36.9 & 2.03 & 0.06 & \\textbf{0.01} \\\\\n",
"+CP+M+R+CL & 43.3 & 6.23 & 1.01 & 0.35 & 36.2 & 1.91 & 0.05 & 0.02 \\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Percent (\\\n",
"\\end{table}\n",
"\n",
"\n",
"\n",
"\n",
"\\subsection{Automatic Evaluation}\n",
"As shown in Table \\ref{tab:step} and \\ref{tab:step1}, our model outperforms baselines. Since our task is open-ended and we are testing on unseen activities, our generated sentences usually contain paraphrases. Therefore, the BLEU scores, which rely on the exact word $n$-grams match \\citep{inlg2018bleu}, are not high. In particular, because our ground truth only has an average length of 11 which contains less $4$-grams than the text in other tasks, our BLEU-4 is lower than other text generation tasks. The substantial gap between CLIP-BART and BART or BART with caption indicates that captions usually carry more specific information than images, and the current multimodal encoders still cannot perfectly embed text and images into the same semantic space. Meanwhile, the low performance of the retrieval baselines shows that simple retrieval methods are insufficient to predict accurate next steps. \n",
"\n",
"\n",
"\\begin{table}[!htb]\n",
"\n",
"\\centering\n",
"\\small\n",
"\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=4.2\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X}\n",
"\\toprule\n",
"\\multirow{ 2}{*}{\\textbf{Model}}&\\multicolumn{4}{c}{\\textbf{Gardening}}&\\multicolumn{4}{c}{\\textbf{Crafts}}\\\\\n",
"&\\textbf{1}$\\downarrow$&\\textbf{2}$\\downarrow$&\\textbf{3}$\\downarrow$&\\textbf{4}$\\downarrow$&\\textbf{1}$\\downarrow$&\\textbf{2}$\\downarrow$&\\textbf{3}$\\downarrow$&\\textbf{4}$\\downarrow$\\\\\n",
"\\midrule\n",
"Ground Truth & 87.1 & 60.1 & 36.1 & 23.6 & 91.3 & 68.7 & 41.6 & 27.7 \\\\\n",
"\\hdashline\n",
"BART & 93.7 & 84.3 & 72.9 & 64.2 & 96.9 & 90.6 & 80.6 & 73.5 \\\\\n",
"+CP & 92.8 & 81.3 & 68.9 & 60.5 & 96.3 & 89.3 & 79.2 & 72.5 \\\\\n",
"+CP+M & 96.2 & 89.9 & 81.4 & 73.9 & \\textbf{95.9} & \\textbf{87.8} & 76.6 & 68.5 \\\\\n",
"+CP+M+R & \\textbf{92.3} & \\textbf{80.5} & \\textbf{67.9} & \\textbf{57.8}& 96.9 & 89.6 & 78.6 & 71.1 \\\\\n",
"+CP+M+R+CL & 95.1 & 87.2 & 77.1 & 68.6 & 96.3 & 88.0 & \\textbf{75.8 }& \\textbf{67.3} \\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Self-BLEU (\\\n",
"\\end{table}\n",
"\n",
"\n",
"\n",
"Among our model variants, adding selective encoding leads to a further performance increase, showing that selective encoding helps the model focus on the content in step history that is most related to future steps. The superior performance on BARTScore and semantic similarity of the retrieval-augmented model indicates the effectiveness of the guidance from historically relevant steps. Our contrastive learning model achieves larger gains compared to baselines for BLEU and METEOR, suggesting that our contrastive loss helps the model generate results similar to the ground truth. \n",
"\n",
"\\noindent\\textbf{Automatic Evaluation with Future Steps} \n",
"We evaluate whether the predicted step is related to any future steps. Our contrastive learning model outperforms other ablations significantly on text retrieval for the Gardening domain, as shown in Table~\\ref{tab:fut}. These results imply that the contrastive learning objective encourages the model to generate more informative future steps. The decrease in n-gram overlap between input step history and step predictions~(Table \\ref{tab:over}) suggests that the contrastive learning objective also decreases the model's paraphrasing tendency. Interestingly, the performance decreases when adding the retrieval augmentation to the model because the retrieval model introduces additional information related to the step history, which makes the model generate results similar to previous steps~(Table \\ref{tab:over}).\n",
"\n",
"\n",
"\\begin{table}[!htb]\n",
"\n",
"\\centering\n",
"\\small\n",
"\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=4.2\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X}\n",
"\\toprule\n",
"\\multirow{ 2}{*}{\\textbf{Model}}&\\multicolumn{4}{c}{\\textbf{Gardening}}&\\multicolumn{4}{c}{\\textbf{Crafts}}\\\\\n",
"&\\textbf{1}$\\uparrow$&\\textbf{2}$\\uparrow$&\\textbf{3}$\\uparrow$&\\textbf{4}$\\uparrow$&\\textbf{1}$\\uparrow$&\\textbf{2}$\\uparrow$&\\textbf{3}$\\uparrow$&\\textbf{4}$\\uparrow$\\\\\n",
"\\midrule\n",
"Ground Truth & 11.4 & 50.9 & 80.8 & 92.2& 8.46 & 44.4 & 77.9 & 90.9\\\\\n",
"\\hdashline\n",
"BART & 4.75 & 17.7 & 32.4 & 42.6& 5.11 & 22.6 & 42.8 & 53.8 \\\\\n",
"+CP & \\textbf{5.17} & 19.2 & 33.7 & 42.7& 5.12 & 22.6 & 42.7 & 53.8\\\\\n",
"+CP+M & 4.94 & 18.6 & 32.8 & 41.8 & 4.92 & 22.4 & 42.3 & 53.8\\\\\n",
"+CP+M+R & 5.06 & 19.2 & 34.6 & 44.3 & \\textbf{5.23} & \\textbf{23.3} & 43.9 & 55.2\\\\\n",
"+CP+M+R+CL & 5.02 & \\textbf{19.3} & \\textbf{35.0} & \\textbf{45.2} & 5.07 & 23.3 & \\textbf{44.2} & \\textbf{56.1}\\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Unique $n$-grams in human or system steps(\\\n",
"\\end{table}\n",
"\\noindent\\textbf{Automatic Evaluation on Diversity} To evaluate the diversity between generated steps in the test sets, we employ two diversity metrics: self-BLEU \\citep{10.1145/3209978.3210080} (Table~\\ref{tab:selfbleu}) and unique $n$-grams \\citep{fedus2018maskgan} (Table~\\ref{tab:gram}). The self-BLEU evaluates whether a model produces similar $n$-grams in different samples by measuring the similarity between one sentence and the rest in the test set. The retrieval model achieves the best results for the Gardening domain because it acquires additional knowledge from the retrieved steps and thus diversifies the output. The contrastive learning model achieves the best self-BLEU for 3,4 grams for the Crafts domain, implying our model's effectiveness. The unique $n$-grams calculate the percentage of distinct $n$-grams. It considers the repetition of n-grams within a generated step and across samples. The contrastive learning model achieves the highest distinct scores for 3,4 grams for both domains, indicating the effectiveness of our diversity-based contrastive loss in generating more diverse steps. \n",
"\\subsection{Human Evaluation}\n",
"\\label{sec:human}\n",
"\n",
"\n",
"\\begin{table}[!htb]\n",
"\\centering\n",
"\\small\n",
"\\begin{tabularx}{\\linewidth}{>{\\hsize=4.2\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X>{\\centering\\arraybackslash\\hsize=0.6\\hsize}X}\n",
"\\toprule\n",
"\\multirow{ 2}{*}{\\textbf{Model}}&\\multicolumn{4}{c}{\\textbf{Gardening}}&\\multicolumn{4}{c}{\\textbf{Crafts}}\\\\\n",
"&\\textbf{N.}$\\downarrow$&\\textbf{F.}$\\downarrow$&\\textbf{D.}$\\downarrow$&\\textbf{E.}$\\downarrow$&\\textbf{N.}$\\downarrow$&\\textbf{F.}$\\downarrow$&\\textbf{D.}$\\downarrow$&\\textbf{E.}$\\downarrow$\\\\\n",
"\\midrule\n",
"BART & 1.92 & 2.05 & 2.43 & 1.60 & 1.90 & 2.03 & 2.29 & 1.76\\\\\n",
"+CP & 1.78 & 1.93 & 2.70 & 1.39 & 1.70 & 1.85 & 2.86 & 1.65 \\\\\n",
"+CP+M & 1.77 & 1.95 & 2.41 & 1.37 & 2.15 & 2.04 & 4.11 & 1.77\\\\\n",
"+CP+M+R & 1.48 & 1.55 & 2.66 & 1.29 & 1.93 & 2.13 & 2.89 & 1.63 \\\\\n",
"+CP+M+R+CL & \\textbf{1.31} & \\textbf{1.37} & \\textbf{1.27}& \\textbf{1.18}& \\textbf{1.55} & \\textbf{1.84} & \\textbf{1.57} & \\textbf{1.52} \\\\\n",
"\\bottomrule\n",
"\\end{tabularx}\n",
"\\caption{Human evaluations on with average ranking of next step correctness (N.), future steps correctness (F.), diversity (D.), executability (E.). Ties are allowed.\n",
"\\label{tab:human}\n",
"}\n",
"\\end{table}\n",
"\n",
"Since script learning is an open-ended task that is inherently difficult for automatic metrics to measure the correctness of generated scripts~\\cite{huang2022language}, we further conduct a human evaluation. \n",
"We hire four proficient English speakers as human annotators to independently rank the generation results from 1 (best) to 5 (worst) for: (1) \\textit{next step correctness} which measures whether the generated results match the next step; (2) \\textit{future steps correctness} measuring whether the generated results match any of the future steps; (3) \\textit{diversity} which measures the diversity of generated results under the same subgoal; (4) \\textit{executability} which checks the generated results repeat or conflict with step history. We randomly select ten subgoals, including 41 and 44 generated steps from the test set for Gardening and Crafts separately.\n",
"\n",
"The human evaluation results\\footnote{The Krippendorff-$\\alpha$ inter-annotator agreement scores \\citep{krippendorff2018content} and detailed guidelines of human evaluations are in the Appendix \\ref{sec:humanevald}} are shown in Table \\ref{tab:human}. Our contrastive learning model performs best over all metrics on two datasets. By adding each component of our model, we observe a consistent trend in correctness to ground truth. However, we also observe that scores for selective encoding decrease because the output space with selective encoding is more constrained than the BART baseline, and the length of our generated sequence is not very long. \n",
"\n",
"\n",
"\n",
"\\subsection{Discussions}\n",
"\n",
"\\noindent\\textbf{Impact of Selective Multimedia Encoder} The caption input helps the model understand the general step descriptions better. \n",
"For example, given the activity \\textit{``cure azaleas of leaf gall''}, the step text only shows a generic instruction: \\textit{``rule out other diseases''}. However, the BLIP captioner generates \\textit{``a green leaf with white dots on it''} which helps the model generate \\textit{``remove the leaf gall from the shrub''} instead of \\textit{``keep your shrub healthy''}. \n",
"Furthermore, in Figure \\ref{img:task_example}, the finger object is absent from caption 3, indicating that the caption model has the ability to eliminate extraneous information from the image.\n",
"The selective gate can filter out unrelated steps which are not directly related to the current subgoal. \n",
"For example, in Figure \\ref{img:task_example}, our model successfully predicts a low masking weight of 0.049324 for the step \u201ccut the thread\u201d, while assigning a much higher masking weight of 0.134498 to its uninformative caption \u201ca pair of scissors and a measuring tape\u201d. The results imply that the selective gate successfully guides the model to focus on the related information.\n",
"\n",
"\n",
"\\noindent\\textbf{Impact of Retrieval Augmentation} The retrieved steps provide relevant knowledge from similar tasks: given the subgoal \\textit{``finding or growing roses''} because the retrieved sentence mentioned \\textit{``fertilizer''} and \\textit{``mulch''}, the model successfully generates \\textit{``fertilize your roses''}. Additionally, the model also benefits from retrieval augmentation with an analogy, e.g., the model generates \\textit{``know when to harvest''} given the retrieved step \\textit{``plant the bulbs when you get them''}. \n",
"\n",
"\\noindent\\textbf{Impact of Contrastive Learning} In addition to the improvement in diversity from the previous section, we observe that contrastive learning helps the model generate results closer to ground truth compared to other baselines. For example, it generates \\textit{``pick creeping charlie plants from the ground''}, similar to ground truth \\textit{``pick your creeping charlie leaves''}. The addition of contrastive learning also helps our model generates instructions with more details than other baselines by stating \\textit{``place the plant in the hole and cover it with soil''} instead of \\textit{``place the plant in the hole''}.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\\section{Related Work} \n",
"Previous script learning tasks fall into two forms: selective and generative. \n",
"The selective script learning tasks focus on modeling the script interactions given a list of candidates, including multi-choice goal step inference/ordering \\citep{zhou-etal-2019-learning-household,zhang-etal-2020-reasoning}, script retrieval \\citep{lyu-etal-2021-goal,zhou-etal-2022-show}, action anticipation \\citep{Damen2018EPICKITCHENS,Damen2021RESCALING}, procedure segmentation \\cite{8578725,10.5555/3504035.3504965,ghoddoosian2022hierarchical}, multi-choice visual goal-step inference \\citep{yang-etal-2021-visual}, multimedia procedure planning~\\citep{Zhao_2022_CVPR}, multimedia step ordering \\citep{NEURIPS2021_c6d4eb15,wu-etal-2022-understanding}, instructional video retrieval \\citep{ier}, and step classification \\citep{Lin_2022_CVPR}. Despite promising results, their performance heavily relies on the given candidates, making them difficult to generalize for unseen activities. The second category is text-based generative script learning \\citep{tandon-etal-2020-dataset,lyu-etal-2021-goal,huang2022language,li2020connecting,li2021future,jin2022event,sancheti-rudinger-2022-large}. However, this is the first work to provide a multimedia goal-oriented generative script learning along with a new multimodal-retrieval based metric. Different from \\citet{Sener_2019_ICCV}, which uses a video to generate the next step, our new task uses step image-text pairs as input. Unlike previous multimedia script learning frameworks with a multimedia encoder to capture visual and textual information, we use a captioner to convert images into captions summarizing the important objects in images. \n",
"The GOSC dataset~\\cite{lyu-etal-2021-goal} contains the steps of daily stereotypical tasks, but most of the steps (52.6\\\n",
"\n",
"To handle irrelevant sentences in the input, instead of using a token-level gating mechanism that only depends on the token itself \\citep{sengupta-etal-2021-gated-transformer}, we introduce a sentence (step/caption) level gating mechanism whose gates depend on global context and weighted sentence representations. \n",
"Our work is also related to retrieval-augmented text generation models~\\citep{wang-etal-2019-paperrobot,NEURIPS2020_6b493230,liu-etal-2021-three}. However, instead of retrieving knowledge from an external corpus, we use steps from similar tasks in training data to guide the generation process. Moreover, we introduce a new contrastive learning loss to increase diversity. \n",
"Previous contrastive learning-based text generation methods usually use negative samples constructed by sequence manipulation \\citep{cao-wang-2021-cliff,hu-etal-2022-planet} or perturbation \\citep{lee2021contrastive}. Inspired by \\citet{wang-etal-2022-simkgc} which uses self-negatives for knowledge graph completion and that the generation output tends to repeat the input, we extend self-negatives for sequence-to-sequence contrastive learning. We also retrieve similar steps from the training set as additional hard negatives.\n",
"\n",
"\\section{Conclusion}\n",
"We propose a novel Multimedia Generative Script Learning task with the first benchmark featuring step and descriptive image pairs to generate subsequent steps given historical states in both text and vision modalities. Moreover, we build a new script learning framework consisting of a selective multimedia encoder, a retrieval-augmented decoder, and a diversity-oriented contrastive learning objective to generate the next steps. Furthermore, we define a new \\textit{multimodal-retrieval based metric} which can be used for multimedia script learning tasks. Automatic and human evaluation results demonstrate consistent performance improvements. \n",
"\n",
"\\section{Limitations}\n",
"\\subsection{Limitations of Data Collection}\n",
"Regarding data collection, we crawled the English WikiHow website from Jan 2021 to May 2021. The number of available activities is limited by the data we crawled from WikiHow. We currently only choose \\textit{Gardening} and Crafts categories as case studies. Because we focus on multimedia image-step pairs, we remove steps \\textit{that} are not attached to any illustrative images. We also observe that a small portion of activities in the dataset do not follow chronological order. \n",
"\n",
"\n",
"Since our task focuses on the daily stereotypical tasks which usually require the model to understand the visual environment, the model design can be directly applied to support other domains, such as steps in the cooking videos. In addition, our model can also adapt to scenarios without visual images because the performance of our model only decreases slightly if no caption is provided. \n",
"We plan to expand our model to other categories written in other languages. \n",
"\n",
"\\subsection{Limitations of System Performance}\n",
"The model might generate incorrect nouns because of the occurrence of patterns (e.g., \\textit{``refrigerate the \\textbf{slane} for up to 1 year''} instead of \\textit{``refrigerate the \\textbf{purslane} for up to 1 year''}). In addition, our model sometimes tends to generate generic step descriptions because of insufficient input information, e.g., given the last step \\textit{``lay the t-shirt out on a clean, flat surface.''}, the model generates \\textit{``cut the shirt out''} which is vague compared to ground truth \\textit{``carefully cut around the sleeve''}. Moreover, the pretrained model might focus more on language modeling instead of inherent logic: for the activity of \\textit{``make paint can planters''}, after \\textit{``removing the label''} from the paint can, the BART+CAP generates \\textit{``read the label''}. In addition, there is still a small chance that the model generates the same output for various similar inputs.\n",
"\n",
"\n",
"Because we rely on image captions and retrieval results for step prediction, the upper bound of our generation quality is limited by the performance of the image caption and sentence retrieval modules. Our framework also needs to improve on imbalanced topics in the dataset. For example, the dataset contains more activities about \\textit{tree} for the gardening domain than other gardening-related plants. Because our multimedia generative script learning is a new task, we cannot compare our model with other established state-of-the-art models. Moreover, because WikiHow is a crowd-sourcing website, some everyday activities might have better human annotations than the remaining activities. We plan to include a fine-grained human written step prediction as an upper bound to address this issue.\n",
"\n",
"\\subsection{Limitations of Evaluation} \n",
"\n",
"The automatic metrics we chose, including BLEU \\cite{papineni-etal-2002-bleu}, ROUGE \\cite{lin-2004-rouge}, METEOR \\cite{denkowski-lavie-2014-meteor}, BARTScore \\cite{NEURIPS2021_e4d2b6e6}, self-BLEU \\cite{10.1145/3209978.3210080}, and unique $n$-grams \\cite{fedus2018maskgan}, might not be the best metrics to evaluate our results. Some other metrics, such as semantic similarity and multimodal-retrieval based metrics, are based on pretrained models, including Augmented SBERT \\cite{thakur-etal-2021-augmented}, SentenceBert \\cite{reimers-gurevych-2019-sentence}, and CLIP \\cite{pmlr-v139-radford21a}. Those metrics might not align with human judgment and might be biased toward pretrained datasets. While we complement it with human evaluation, we only focus on relevance to ground truth and diversity. Although we found fluency is not an issue, it is likely we still need to cover all aspects of generation results.\n",
"\n",
"\\section{Ethics and Broader Impact}\n",
"The type of multimedia script learning framework we have designed in this paper is limited to WikiHow articles, and they might not be applicable to other scenarios. \n",
"\\subsection{Usage Requirement}\n",
"Our multimedia script learning framework provides investigative leads for multimedia script prediction. Therefore, it is not intended to be used for any activity related to any human subjects. Instead, our system aims to generate step predictions with unseen activities similar to those in the training set. Accordingly, domain experts might use this tool as an assistant to write more constructive instructional scripts that would be too time-consuming for a human to create from scratch. \n",
"Experts can also use this system to improve writing instruction by adding missing instructions. However, our system does not perform fact-checking or incorporate any external knowledge, which we leave as future work. The IRB board should first approve human subjects who follow instructions generated by our system. \n",
"\n",
"\\subsection{Data Collection}\n",
"We collect data by crawling the raw official English WikiHow website, which is under \\textit{Attribution-Noncommercial-Share Alike 3.0 Creative Commons License}\\footnote{\\url{https://www.wikihow.com/wikiHow:Creative-Commons}}. We ensure that our data collection procedure follows the Terms of Use located at \\url{https://www.wikihow.com/wikiHow:Terms-of-Use}. Therefore our dataset can only be used for non-commercial purposes. As mentioned in Section \\ref{sec:human}, we perform the human evaluation. All annotators involved in the human evaluation are voluntary participants and receive a fair wage.\n",
"\\section*{Acknowledgement}\n",
"This work is supported by Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no.1024178 from the USDA National Institute of Food and Agriculture, and by U.S. DARPA KAIROS Program No. FA8750-19-2-\n",
"1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.\n",
"Hou Pong Chan was supported in part by the Science and Technology Development Fund, Macau SAR (Grant Nos. FDCT/060/2022/AFJ, FDCT/0070/2022/AMJ) and the Multi-year Research Grant from the University of Macau (Grant No. MYRG2020-00054-FST). \n",
"\n",
"\\bibliography{anthology,custom}\n",
"\\bibliographystyle{acl_natbib}\n",
"\n",
"\\input{Appendix}\n",
"\n",
"\\end{document}\n"
],
"del_percentage": 0.11983
}
}