# The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems

Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau (arXiv:1506.08909, SIGDIAL 2015)

This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response.
# Appendix A: Dialogue excerpts
Top: raw chat room log.

| Time  | User     | Utterance |
|-------|----------|-----------|
| 03:44 | Old      | I dont run graphical ubuntu, I run ubuntu server. |
| 03:45 | kuja     | Taru: Haha sucker. |
| 03:45 | Taru     | Kuja: ? |
| 03:45 | bur[n]er | Old: you can use "ps ax" and "kill (PID#)" |
| 03:45 | kuja     | Taru: Anyways, you made the changes right? |
| 03:45 | Taru     | Kuja: Yes. |
| 03:45 | LiveCD   | or killall speedlink |
| 03:45 | kuja     | Taru: Then from the terminal type: sudo apt-get update |
| 03:46 | _pm      | if i install the beta version, how can i update it when the final version comes out? |
| 03:46 | Taru     | Kuja: I did. |

Bottom: disentangled conversations.

| Sender   | Recipient | Utterance |
|----------|-----------|-----------|
| Old      |           | I dont run graphical ubuntu, I run ubuntu server. |
| bur[n]er | Old       | you can use "ps ax" and "kill (PID#)" |

| Sender | Recipient | Utterance |
|--------|-----------|-----------|
| kuja   | Taru      | Haha sucker. |
| Taru   | Kuja      | ? |
| kuja   | Taru      | Anyways, you made the changes right? |
| Taru   | Kuja      | Yes. |
| kuja   | Taru      | Then from the terminal type: sudo apt-get update |
| Taru   | Kuja      | I did. |
Figure 4: Example chat room conversation from the #ubuntu channel of the Ubuntu Chat Logs (top), with the disentangled conversations for the Ubuntu Dialogue Corpus (bottom).

Top: raw chat room log.

| Time    | User  | Utterance |
|---------|-------|-----------|
| [12:21] | dell  | well, can I move the drives? |
| [12:21] | cucho | dell: ah not like that |
| [12:21] | RC    | dell: you can't move the drives |
| [12:21] | RC    | dell: definitely not |
| [12:21] | dell  | ok |
| [12:21] | dell  | lol |
| [12:21] | RC    | this is the problem with RAID :) |
| [12:21] | dell  | RC haha yeah |
| [12:22] | dell  | cucho, I guess I could just get an enclosure and copy via USB... |
| [12:22] | cucho | dell: i would advise you to get the disk |

Bottom: disentangled conversations.

| Sender | Recipient | Utterance |
|--------|-----------|-----------|
| dell   |           | well, can I move the drives? |
| cucho  | dell      | ah not like that |
| dell   | cucho     | I guess I could just get an enclosure and copy via USB |
| cucho  | dell      | i would advise you to get the disk |

| Sender | Recipient | Utterance |
|--------|-----------|-----------|
| dell   |           | well, can I move the drives? |
| RC     | dell      | you can't move the drives. definitely not. this is the problem with RAID :) |
| dell   | RC        | haha yeah |
# A Neural Network Approach to Context-Sensitive Generation of Conversational Responses*
# Alessandro Sordoni 1†‡, Michel Galley 2‡, Michael Auli 3†, Chris Brockett 2, Yangfeng Ji 4†, Margaret Mitchell 2, Jian-Yun Nie 1, Jianfeng Gao 2, Bill Dolan 2

1DIRO, Université de Montréal, Montréal, QC, Canada; 2Microsoft Research, Redmond, WA, USA; 3Facebook AI Research, Menlo Park, CA, USA; 4Georgia Institute of Technology, Atlanta, GA, USA
# Abstract
We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.
# Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books
Yukun Zhu*,1 Ryan Kiros*,1 Richard Zemel1 Ruslan Salakhutdinov1 Raquel Urtasun1 Antonio Torralba2 Sanja Fidler1 (*denotes equal contribution)
1University of Toronto 2Massachusetts Institute of Technology
{yukun,rkiros,zemel,rsalakhu,urtasun,fidler}@cs.toronto.edu, [email protected]
context: because of your game ?
message: yeah i'm on my way now
response: ok good luck !
Figure 1: Example of three consecutive utterances occurring between two Twitter users A and B.
# Introduction
Until recently, the goal of training open-domain conversational systems that emulate human conversation has seemed elusive. However, the vast quantities of conversational exchanges now available on social media websites such as Twitter and Reddit raise the prospect of building data-driven models that can begin to communicate conversationally. The work of Ritter et al. (2011), for example, demonstrates that a response generation system can be constructed from Twitter conversations using statistical machine translation techniques, where a status post by a Twitter user is "translated" into a plausible looking response.
*This paper appeared in the proceedings of NAACL-HLT 2015 (submitted December 4, 2014, accepted February 20, 2015, and presented June 1, 2015).
# Abstract
Books are a rich source of both fine-grained information (what a character, an object, or a scene looks like) as well as high-level semantics (what someone is thinking and feeling, and how these states evolve through a story). This paper aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book. We propose a context-aware CNN to combine information from multiple sources. We demonstrate good quantitative performance for movie/book alignment and show several qualitative examples that showcase the diversity of tasks our model can be used for.
# 1. Introduction
However, an approach such as that presented in Ritter et al. (2011) does not address the challenge of generating responses that are sensitive to the context of the conversation. Broadly speaking, context may be linguistic or involve grounding in the physical or virtual world, but we here focus on linguistic context. The ability to take into account previous utterances is key to building dialog systems that can keep conversations active and engaging. Figure 1 illustrates a typical Twitter dialog where the contextual information is crucial: the phrase "good luck" is plainly motivated by the reference to "your game" in the first utterance. In the MT model, such contextual sensitivity is difficult to capture; moreover, naive injection of context information would entail unmanageable growth of the phrase table at the cost of increased sparsity, and skew towards rarely-seen context pairs. In most statistical approaches to machine translation, phrase pairs do not share statistical weights regardless of their intrinsic semantic commonality.
†The entirety of this work was conducted while at Microsoft Research.
Figure 1: Shot from the movie Gone Girl, along with the subtitle, aligned with the book. We reason about the visual and dialog (text) alignment between the movie and a book.
Books provide us with very rich, descriptive text that conveys both fine-grained visual details (what people or scenes look like) as well as high-level semantics (what people think and feel, and how their states evolve through a story). This source of knowledge, however, does not come with associated visual information that would enable us to ground it with descriptions. Grounding descriptions in books to vision would allow us to get textual explanations or stories behind visual information rather than the simplistic captions available in current datasets. It can also provide us with an extremely large amount of data (with tens of thousands of books available online).
A truly intelligent machine needs to not only parse the surrounding 3D environment, but also understand why people take certain actions, what they will do next, what they could possibly be thinking, and even try to empathize with them. In this quest, language will play a crucial role in grounding visual information to high-level semantic concepts. Only a few words in a sentence may convey really rich semantic information. Language also represents a natural means of interaction between a naive user and our vision algorithms, which is particularly important for applications such as social robotics or assistive driving.
Combining images or videos with language has gotten significant attention in the past year, partly due to the creation of CoCo [18], Microsoft's large-scale captioned image dataset. The field has tackled a diverse set of tasks such as captioning [13, 11, 36, 35, 21], alignment [11, 15, 34], Q&A [20, 19], visual model learning from textual descriptions [8, 26], and semantic visual search with natural multi-sentence queries [17].
In this paper, we exploit the fact that many books have been turned into movies. Books and their movie releases have a lot of common knowledge, and they are complementary in many ways. For instance, books provide detailed descriptions about the intentions and mental states of the characters, while movies are better at capturing visual aspects of the settings.
… to compactly encode semantic and syntactic similarity. We argue that embedding-based models afford flexibility to model the transitions between consecutive utterances and to capture long-span dependencies in a domain where traditional word and phrase alignment is difficult (Ritter et al., 2011). To this end, we present two simple, context-sensitive response-generation models utilizing the Recurrent Neural Network Language Model (RLM) architecture of Mikolov et al. (2010). These models first encode past information in a hidden continuous representation, which is then decoded by the RLM to promote plausible responses that are simultaneously fluent and contextually relevant. Unlike typical complex task-oriented multi-modular dialog systems (Young, 2002; Stent and Bangalore, 2014), our architecture is completely data-driven and can easily be trained end-to-end using unstructured data without requiring human annotation, scripting, or automatic parsing.
The first challenge we need to address, and the focus of this paper, is to align books with their movie releases in order to obtain rich descriptions for the visual content. We aim to align the two sources with two types of information: visual, where the goal is to link a movie shot to a book paragraph, and dialog, where we want to find correspondences between sentences in the movie's subtitle and sentences in the book. We formulate the problem of movie/book alignment as finding correspondences between shots in the movie as well as dialog sentences in the subtitles and sentences in the book (Fig. 1). We introduce a novel sentence similarity measure based on a neural sentence embedding trained on millions of sentences from a large corpus of books.
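To make the alignment formulation concrete, here is a minimal sketch of how dialog sentences in a subtitle file could be matched to book sentences by cosine similarity between sentence embeddings. The embeddings here are random stand-ins for any sentence encoder (the paper trains its own neural embedding); everything in the sketch, including the greedy argmax matching and the threshold, is an illustrative assumption rather than the authors' pipeline.

```python
import numpy as np

def cosine_similarity_matrix(a, b):
    """Rows of `a` and `b` are sentence embeddings; returns |a| x |b| similarities."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def align_subtitles_to_book(subtitle_emb, book_emb, threshold=0.5):
    """Greedy dialog alignment: each subtitle sentence is matched to the most
    similar book sentence, kept only if the similarity clears a threshold."""
    sim = cosine_similarity_matrix(subtitle_emb, book_emb)
    matches = []
    for i, row in enumerate(sim):
        j = int(np.argmax(row))
        if row[j] >= threshold:
            matches.append((i, j, float(row[j])))
    return matches

# Toy example with random "embeddings" in place of a learned encoder.
rng = np.random.default_rng(0)
subs, book = rng.normal(size=(5, 16)), rng.normal(size=(100, 16))
print(align_subtitles_to_book(subs, book))
```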
This paper makes the following contributions. We present a neural network architecture for response generation that is both context-sensitive and data-driven. As such, it can be trained from end to end on massive amounts of social media data. To our knowledge, this is the first application of a neural-network model to open-domain response generation, and we believe that the present work will lay the groundwork for more complex models to come. We additionally introduce a novel multi-reference extraction technique that shows promise for automated evaluation.
# 2 Related Work
On the visual side, we extend the neural image-sentence embeddings to the video domain and train the model on DVS descriptions of movie clips. Our approach combines different similarity measures and takes into account contextual information contained in the nearby shots and book sentences. Our final alignment model is formulated as an energy minimization problem that encourages the alignment to follow a similar timeline. To evaluate the book-movie alignment model we collected a dataset with 11 movie/book pairs annotated with 2,070 shot-to-sentence correspondences. We demonstrate good quantitative performance and show several qualitative examples that showcase the diversity of tasks our model can be used for.
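The energy-minimization view can be pictured as a shortest-path problem over (shot, sentence) pairs, where unary terms reward high similarity and pairwise terms penalize deviations from a shared timeline. The following dynamic-programming sketch is a deliberately simplified monotone variant of that idea (the paper's actual model combines several learned similarity measures and a context-aware CNN, and also allows "no match" states); the penalty weight is an arbitrary assumption.

```python
import numpy as np

def monotone_alignment(sim, jump_penalty=0.1):
    """Align shots (rows) to book sentences (cols) so that matched sentence
    indices never decrease, maximizing total similarity minus jump penalties.

    Returns one sentence index per shot. A toy stand-in for the paper's
    energy minimization over shot/sentence correspondences.
    """
    n_shots, n_sents = sim.shape
    score = np.full((n_shots, n_sents), -np.inf)
    back = np.zeros((n_shots, n_sents), dtype=int)
    score[0] = sim[0]
    for t in range(1, n_shots):
        for j in range(n_sents):
            # best predecessor j' <= j, penalized by the size of the jump
            prev = score[t - 1, :j + 1] - jump_penalty * (j - np.arange(j + 1))
            back[t, j] = int(np.argmax(prev))
            score[t, j] = prev[back[t, j]] + sim[t, j]
    path = [int(np.argmax(score[-1]))]
    for t in range(n_shots - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

rng = np.random.default_rng(1)
print(monotone_alignment(rng.random((4, 10))))
```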
The alignment model can have multiple applications. Imagine an app which allows the user to browse the book as the scenes unroll in the movie: perhaps its ending or acting are ambiguous, and one would like to query the book for answers. Vice versa, while reading the book one might want to switch from text to video, particularly for the juicy scenes. We also show other applications of learning from movies and books such as book retrieval (finding the book that goes with a movie and finding other similar books), and captioning CoCo images with story-like descriptions.
# 2. Related Work
Our work naturally lies in the path opened by Ritter et al. (2011), but we generalize their approach by exploiting information from a larger context. Ritter et al. and our work represent a radical paradigm shift from other work in dialog. More traditional dialog systems typically tease apart dialog management (Young, 2002) from response generation (Stent and Bangalore, 2014), while our holistic approach can be considered a first attempt to accomplish both tasks jointly. While there are previous uses of machine learning for response generation (Walker et al., 2003), dialog state tracking (Young et al., 2010), and user modeling (Georgila et al., 2006), many components of typical dialog systems remain hand-coded: in particular, the labels and attributes defining dialog states. In contrast, the dialog state in our neural network model is completely latent and directly optimized towards end-to-end performance. In this sense, we believe the framework of this paper is a significant milestone towards more data-driven and less hand-coded dialog processing.
Most effort in the domain of vision and language has been devoted to the problem of image captioning. Older work made use of fixed visual representations and translated them into textual descriptions [6, 16]. Recently, several approaches based on RNNs emerged, generating captions via a learned joint image-text embedding [13, 11, 36, 21]. These approaches have also been extended to generate descriptions of short video clips [35]. In [24], the authors go beyond describing what is happening in an image and provide explanations about why something is happening.
For text-to-image alignment, [15, 7] find correspondences between nouns and pronouns in a caption and visual objects using several visual and textual potentials. Lin et al. [17] do so for videos. In [11], the authors use RNN embeddings to find the correspondences. [37] combines neural embeddings with soft attention in order to align the words to image regions.
Continuous representations of words and phrases estimated by neural network models have been applied to a variety of tasks ranging from Information Retrieval (IR) (Huang et al., 2013; Shen et al., 2014) and Online Recommendation (Gao et al., 2014b) to Machine Translation (MT) (Auli et al., 2013; Cho et al., 2014; Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014) and Language Modeling (LM) (Bengio et al., 2003; Collobert and Weston, 2008). Gao et al. (2014a) successfully use an embedding model to refine the estimation of rare phrase-translation probabilities, which is traditionally affected by sparsity problems. Robustness to sparsity is a crucial property of our method, as it allows us to capture context information while avoiding unmanageable growth of model parameters.
Early work on movie-to-text alignment includes dynamic time warping for aligning movies to scripts with the help of subtitles [5, 4]. Sankar et al. [28] further developed a system which identified sets of visual and audio features to align movies and scripts without making use of the subtitles. Such alignment has been exploited to provide weak labels for person naming tasks [5, 30, 25].
Closest to our work is [34], which aligns plot synopses to shots in TV series for story-based content retrieval. This work adopts a similarity function between sentences in plot synopses and shots based on person identities and keywords in subtitles.
Our work extends the Recurrent Neural Network Language Model (RLM) of Mikolov et al. (2010), which uses continuous representations to estimate a probability function over natural language sentences. We propose a set of conditional RLMs where contextual information (i.e., past utterances) is encoded in a continuous context vector to help generate the response. Our models differ from most previous work in the way the context vector is constructed. For example, Mikolov and Zweig (2012) and Auli et al. (2013) use a pre-trained topic model. In our models, the context vector is learned along with the conditional RLM that generates the response. Additionally, the learned context encodings do not exclusively capture contentful words. Indeed, even "stop words" can carry discriminative power in this task; for example, all words in the utterance "how are you?" are commonly characterized as stop words, yet this is a contentful dialog utterance.
# 3 Recurrent Language Model
We give a brief overview of the Recurrent Language Model (RLM) (Mikolov et al., 2010) architecture that our models extend. An RLM is a generative model of sentences, i.e., given a sentence $s = s_1, \dots, s_T$, it estimates:
Our work differs from theirs in several important aspects. First, we tackle the more challenging problem of movie/book alignment. Unlike plot synopses, which closely follow the storyline of movies, books are more verbose and might vary in the storyline from their movie release. Furthermore, we use learned neural embeddings to compute the similarities rather than hand-designed similarity functions. Parallel to our work, [33] aims to align scenes in movies to chapters in the book. However, their approach operates on a very coarse level (chapters), while ours does so on the sentence/paragraph level. Their dataset thus evaluates on 90 scene-chapter correspondences, while our dataset draws 2,070 shot-to-sentence alignments. Furthermore, the approaches are inherently different. [33] matches the presence of characters in a scene to those in a chapter, and uses hand-crafted similarity measures between sentences in the subtitles and dialogs in the books, similarly to [34].
$$p(s) = \prod_{t=1}^{T} p(s_t \mid s_1, \dots, s_{t-1}) \qquad (1)$$
The model architecture is parameterized by three weight matrices, $\Theta_{RNN} = (W_{in}, W_{hh}, W_{out})$: an input matrix $W_{in}$, a recurrent matrix $W_{hh}$, and an output matrix $W_{out}$, which are usually initialized randomly. The rows of the input matrix $W_{in} \in \mathbb{R}^{V \times K}$ contain the $K$-dimensional embeddings for each word in the language vocabulary of size $V$. Let us denote by $s_t$ both the vocabulary token and its one-hot representation, i.e., a zero vector of dimensionality $V$ with a 1 corresponding to the index of the $s_t$ token. The embedding for $s_t$ is then obtained by $s_t^\top W_{in}$. The recurrent matrix $W_{hh} \in \mathbb{R}^{K \times K}$ keeps a history of the subsequence that has already been processed. The output matrix $W_{out} \in \mathbb{R}^{K \times V}$ projects the hidden state $h_t$ into the output layer $o_t$, which has an entry for each word in the vocabulary $V$. This value is used to generate a probability distribution for the next word in the sequence. Specifically, the forward pass proceeds with the following recurrence, for $t = 1, \dots, T$:
$$h_t = \sigma(s_t^\top W_{in} + h_{t-1} W_{hh}), \qquad o_t = h_t W_{out} \qquad (2)$$
Rohrbach et al. [27] recently released the Movie Description dataset, which contains clips from movies, each time-stamped with a sentence from DVS (Descriptive Video Service). The dataset contains clips from over 100 movies, and provides a great resource for captioning techniques. Our effort here is to align movies with books in order to obtain longer, richer and more high-level video descriptions.
We start by describing our new dataset, and then explain our proposed approach.
# 3. The MovieBook and BookCorpus Datasets
We collected two large datasets, one for movie/book alignment and one with a large number of books.
The MovieBook Dataset. Since no prior work or data exist on the problem of movie/book alignment, we collected a new dataset with 11 movies along with the books on which they were based. For each movie we also have a subtitle file, which we parse into a set of time-stamped sentences. Note that no speaker information is provided in the subtitles. We automatically parse each book into sentences, paragraphs (based on indentation in the book), and chapters (we assume a chapter title has indentation, starts on a new page, and does not end with an end symbol).
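As an illustration of the kind of layout heuristics described above, a toy book parser might look like the sketch below. The indentation and page-break conventions are the assumptions stated in the text; the regular expressions, the page-as-string input format, and the function name are ours and would need tuning for real e-book formats.

```python
import re

def parse_book(pages):
    """Split a book, given as a list of page strings, into sentences,
    paragraphs, and chapter starts using simple layout heuristics."""
    paragraphs, chapter_starts = [], []
    for page_no, page in enumerate(pages):
        lines = page.splitlines()
        current = []
        for i, line in enumerate(lines):
            indented = line.startswith((" ", "\t"))
            if indented and current:
                paragraphs.append(" ".join(current))   # indentation opens a new paragraph
                current = []
            # chapter heuristic: indented line at the top of a page that does
            # not end with sentence-final punctuation
            if indented and i == 0 and not re.search(r'[.!?]"?\s*$', line):
                chapter_starts.append((page_no, len(paragraphs)))
            if line.strip():
                current.append(line.strip())
        if current:
            paragraphs.append(" ".join(current))
    sentences = [s for p in paragraphs for s in re.split(r"(?<=[.!?])\s+", p) if s]
    return sentences, paragraphs, chapter_starts

pages = ["   Chapter One\n   It was a cold morning. The road was empty.\nHe walked on.",
         "   The town appeared at noon. It was silent."]
print(parse_book(pages))
```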
where $\sigma$ is a non-linear function applied element-wise, in our case the logistic sigmoid. The recurrence is seeded by setting $h_0 = 0$, the zero vector. The probability distribution over the next word given the previous history is obtained by applying the softmax activation function:
$$P(s_t = w \mid s_1, \dots, s_{t-1}) = \frac{\exp(o_{tw})}{\sum_{v=1}^{V} \exp(o_{tv})} \qquad (3)$$
The RLM is trained to minimize the negative log-likelihood of the training sentence $s$:
$$L(s) = -\sum_{t=1}^{T} \log P(s_t \mid s_1, \dots, s_{t-1}) \qquad (4)$$
The recurrence is unrolled backwards in time using the back-propagation through time (BPTT) algorithm (Rumelhart et al., 1988), and gradients are accumulated over multiple time-steps.
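To make equations (1)-(4) concrete, here is a minimal numpy sketch of the RLM forward pass and loss for one sentence. Matrix shapes follow the text ($W_{in} \in \mathbb{R}^{V \times K}$, $W_{hh} \in \mathbb{R}^{K \times K}$, $W_{out} \in \mathbb{R}^{K \times V}$); the random initialization scale and the toy vocabulary are our own assumptions, and training via BPTT is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rlm_neg_log_likelihood(tokens, W_in, W_hh, W_out):
    """Forward pass of the RLM: returns L(s) from eq. (4) for one sentence.

    `tokens` are integer word ids; each step predicts tokens[t] from the
    hidden state summarizing tokens[:t], following eqs. (2) and (3).
    """
    K = W_hh.shape[0]
    h = np.zeros(K)                              # h_0 = 0 seeds the recurrence
    loss, prev = 0.0, None
    for w in tokens:
        if prev is not None:
            h = sigmoid(W_in[prev] + h @ W_hh)   # eq. (2): embed + recur
        o = h @ W_out                            # output layer o_t
        p = np.exp(o - o.max()); p /= p.sum()    # eq. (3): softmax
        loss -= np.log(p[w])                     # eq. (4): accumulate NLL
        prev = w
    return loss

V, K = 50, 8                                     # toy vocabulary and hidden size
rng = np.random.default_rng(0)
W_in, W_hh, W_out = (rng.normal(0, 0.1, s) for s in [(V, K), (K, K), (K, V)])
print(rlm_neg_log_likelihood([3, 14, 15, 9], W_in, W_hh, W_out))
```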
Figure 2: Compact representation of an RLM (left) and unrolled representation for two time steps (right).
# 4 Context-Sensitive Models
Our annotators had the movie and a book opened side by side. They were asked to iterate between browsing the book and watching a few shots/scenes of the movie, and trying to find correspondences between them. In particular, they marked the exact time (in seconds) of correspondence in the movie and the matching line number in the book file, indicating the beginning of the matched sentence. On the video side, we assume that the match spans across a shot (a video unit with smooth camera motion). If the match was longer in duration, the annotator also indicated the ending time of the match. Similarly for the book, if more sentences matched, the ending line was indicated as well.
We distinguish three linguistic entities in a conversation between two users A and B: the context¹ c, the message m, and the response r. The context c represents a sequence of past dialog exchanges of any length; then B emits a message m to which A reacts by formulating its response r (see Figure 1).
We use three context-based generation models to estimate a generation model of the response $r = r_1, \dots, r_T$, conditioned on past information c and m:
$$p(r \mid c, m) = \prod_{t=1}^{T} p(r_t \mid r_1, \dots, r_{t-1}, c, m) \qquad (5)$$
These three models differ in the manner in which they compose the context-message pair (c, m).
# 4.1 Tripled Language Model
In our first model, dubbed RLMT, we straightforwardly concatenate each utterance c, m, r into a single sentence s and train the RLM to minimize L(s). Given c and m, we compute the probability of the response as follows: we perform the forward propagation over the known utterances c and m to obtain a hidden state encoding useful information about previous utterances. Subsequently, we compute the likelihood of the response from that hidden state.
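Reusing the `rlm_neg_log_likelihood` sketch (and its toy weights) from Section 3, RLMT scoring reduces to a chain-rule identity: the negative log-probability of the response equals the NLL of the concatenated sequence minus the NLL of the context-message prefix. This is an illustrative sketch, not the authors' implementation; a real system would also insert special boundary tokens between utterances, which we omit here.

```python
def rlmt_response_nll(context, message, response, W_in, W_hh, W_out):
    """-log p(r | c, m) under RLMT: score the concatenation and subtract the
    prefix, which by the chain rule leaves only the response terms."""
    prefix = context + message
    full = prefix + response
    return (rlm_neg_log_likelihood(full, W_in, W_hh, W_out)
            - rlm_neg_log_likelihood(prefix, W_in, W_hh, W_out))

# With the toy weights above: lower is a more plausible response.
print(rlmt_response_nll([3, 14], [15, 9], [2, 6], W_in, W_hh, W_out))
```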
Statistics of the MovieBook dataset (book text on the left, movie on the right):

| Title | # sent. | # words | # unique words | avg. # words per sent. | max # words per sent. | # paragraphs | # shots | # sent. in subtitles |
|---|---|---|---|---|---|---|---|---|
| Gone Girl | 12,603 | 148,340 | 3,849 | 15 | 153 | 3,927 | 2,604 | 2,555 |
| Fight Club | 4,229 | 48,946 | 1,833 | 14 | 90 | 2,082 | 2,365 | 1,864 |
| No Country for Old Men | 8,050 | 69,824 | 1,704 | 10 | 68 | 3,189 | 1,348 | 889 |
| Harry Potter and the Sorcerers Stone | 6,458 | 78,596 | 2,363 | 15 | 227 | 2,925 | 2,647 | 1,227 |
| Shawshank Redemption | 2,562 | 40,140 | 1,360 | 18 | 115 | 637 | 1,252 | 1,879 |
| The Green Mile | 9,467 | 133,241 | 3,043 | 17 | 119 | 2,760 | 2,350 | |
| American Psycho | 11,992 | 143,631 | 4,632 | 16 | 422 | 3,945 | 1,012 | |
| One Flew Over the Cuckoo Nest | 7,103 | 112,978 | 2,949 | 19 | 192 | 2,236 | 1,671 | |
| The Firm | 15,498 | 135,529 | 3,685 | 11 | 85 | 5,223 | 2,423 | |
| Brokeback Mountain | 638 | 10,640 | 470 | 20 | 173 | 167 | 1,205 | |
| The Road | 6,638 | 58,793 | 1,580 | 10 | 74 | 2,345 | 1,108 | |
An issue with this simple approach is that the concatenated sentence s will be very long on average, especially if the context comprises multiple utterances. Modelling such long-range dependencies with an RLM is difficult and is still considered an open problem (Pascanu et al., 2013). We will consider RLMT as an additional context-sensitive baseline for the models we present next.
¹In this work, the context is purely linguistic, but future work might integrate further contextual information, e.g., geographical location, time information, or other forms of grounding.
Figure 3: Compact representations of DCGM-I (left) and DCGM-II (right). The decoder RLM receives a bias from the context encoder. In DCGM-I, we encode the bag-of-words representation of both c and m in a single vector b_cm. In DCGM-II, we concatenate the representations b_c and b_m on the first layer to preserve order information.
RLMT as an additional context-sensitive baseline for the models we present next.
# 4.2 Dynamic-Context Generative Model I | 1506.06714#12 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06714 | 13 | RLMT as an additional context-sensitive baseline for the models we present next.
# 4.2 Dynamic-Context Generative Model I
The above limitation of RLMT can be addressed by strengthening the context bias. In our second model (DCGM-I), the context and the message are encoded into a fixed-length vector representation that is used by the RLM to decode the response. This is illustrated in Figure 3 (left). First, we consider c and m as a single sentence and compute a single bag-of-words representation $b_{cm} \in \mathbb{R}^V$. Then, $b_{cm}$ is provided as input to a multilayered non-linear forward architecture that produces a fixed-length representation that is used to bias the recurrent state of the decoder RLM. At training time, both the context encoder and the RLM decoder are learned so as to minimize the negative log-probability of the generated response. The parameters of the model are $\Theta_{DCGM-I} = \langle W_{in}, W_{hh}, W_{out}, \{W^f_\ell\}^L_{\ell=1} \rangle$, where $\{W^f_\ell\}^L_{\ell=1}$ are the weights for the L layers of the feed-forward context networks. The fixed-length context vector $k_L$ is obtained by forward propagation of the network:
$k_1 = b_{cm} W^f_1$, $k_\ell = \sigma(k_{\ell-1} W^f_\ell)$ for $\ell = 2, \dots, L$ (6)
# The rows of W 1 | 1506.06714#13 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
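A minimal sketch of the DCGM-I context encoder of Eq. (6) above, assuming toy sizes and the 512-256-512 layer widths mentioned in the model-training details later in the paper; the names and the logistic squashing non-linearity are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 50                                 # toy vocabulary size
sizes = [V, 512, 256, 512]             # assumed 512-256-512 layer widths
W_f = [rng.normal(0, 0.01, (sizes[i], sizes[i + 1])) for i in range(3)]

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))    # assumed squashing non-linearity

def encode_context(b_cm):
    k = b_cm @ W_f[0]                  # first layer is linear (footnote 2)
    for W in W_f[1:]:
        k = sigma(k @ W)               # k_l = sigma(k_{l-1} W_l^f), Eq. (6)
    return k                           # k_L, used to bias the decoder RLM

b_cm = np.zeros(V)
b_cm[[1, 4, 7]] = 1.0                  # bag-of-words of c and m together
print(encode_context(b_cm).shape)      # (512,)
```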
1506.06724 | 13 | Table 1: Statistics for our MovieBook Dataset with ground-truth for alignment between books and their movie releases.
| # of books | 11,038 |
| # of sentences | 74,004,228 |
| # of words | 984,846,357 |
| # of unique words | 1,316,420 |
| mean # of words per sentence | 13 |
| median # of words per sentence | 11 |
Table 2: Summary statistics of our BookCorpus dataset. We use this corpus to train the sentence embedding model.
matched, the annotator indicated from which to which line a match occurred. Each alignment was also tagged, indicating whether it was a visual, dialogue, or an audio match. Note that even for dialogs, the movie and book versions are semantically similar but not exactly the same. Thus deciding on what defines a match or not is also somewhat subjective and may slightly vary across our annotators. Altogether, the annotators spent 90 hours labeling 11 movie/book pairs, locating 2,070 correspondences.
lished authors. We only included books that had more than 20K words in order to filter out perhaps noisier shorter stories. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus.
# 4. Aligning Books and Movies | 1506.06724#13 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 14 | $k_1 = b_{cm} W^f_1$, $k_\ell = \sigma(k_{\ell-1} W^f_\ell)$ for $\ell = 2, \dots, L$ (6)
The rows of $W^f_1$ contain the embeddings of the vocabulary.² These are different from those employed in the RLM and play a crucial role in promoting the specialization of the context encoder to a distinct task. The hidden layer of the decoder RLM takes the following form:
$h_t = \sigma(h_{t-1} W_{hh} + k_L + s_t W_{in})$ (7a)
$o_t = h_t^\top W_{out}$ (7b)
$p(s_{t+1} | s_1, \dots, s_t, c, m) = \mathrm{softmax}(o_t)$ (7c)
This model conditions on the previous utterances via biasing the hidden layer state on the context representation kL. Note that the context representation does not change through time. This is useful because: (a) it forces the context encoder to produce a representation general enough to be useful for generating all words in the response and (b) it helps the RLM decoder to remember context information when generating long responses.
# 4.3 Dynamic-Context Generative Model II | 1506.06714#14 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
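A minimal sketch of one decoder step of Eqs. (7a)-(7c) above, with assumed toy shapes; note the fixed context vector k_L biasing the hidden state at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 50, 512
W_in = rng.normal(0, 0.01, (V, K))
W_hh = rng.normal(0, 0.01, (K, K))
W_out = rng.normal(0, 0.01, (K, V))
k_L = rng.normal(0, 0.01, K)          # fixed context vector from the encoder

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def decoder_step(h_prev, word_id):
    h = sigma(h_prev @ W_hh + k_L + W_in[word_id])   # Eq. (7a)
    o = h @ W_out                                     # Eq. (7b)
    p = np.exp(o - o.max())
    p /= p.sum()                                      # Eq. (7c), softmax
    return h, p

h, p = decoder_step(np.zeros(K), 3)
print(p.shape, round(float(p.sum()), 6))              # (50,) 1.0
```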
1506.06724 | 14 | # 4. Aligning Books and Movies
Table 1 presents our dataset, while Fig. 8 shows a few ground-truth alignments. One can see the complexity and diversity of the data: the number of sentences per book varies from 638 to 15,498, even though the movies are similar in duration. This indicates a huge diversity in descriptiveness across literature, and presents a challenge for matching. The sentences also vary in length, with the sentences in Brokeback Mountain being twice as long as those in The Road. The longest sentence in American Psycho has 422 words and spans over a page in the book. | 1506.06724#14 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 15 | # 4.3 Dynamic-Context Generative Model II
Because DCGM-I does not distinguish between c and m, that model has the propensity to underestimate the strong dependency that holds between m and r. Our third model (DCGM-II) addresses this issue by concatenating the two linear mappings of the bag-of-words representations b_c and b_m in the input layer of the feed-forward network representing c and m (see Figure 3 right). Concatenating continuous representations prior to deep architectures is a common strategy to obtain order-sensitive representations (Bengio et al., 2003; Devlin et al., 2014).
The forward equations for the context encoder are:
$k_1 = [b_c W^f_1, b_m W^f_1]$ (8)
$k_\ell = \sigma(k_{\ell-1} W^f_\ell)$ for $\ell = 2, \dots, L$
where [x, y] denotes the concatenation of x and y vectors. In DCGM-II, the bias on the recurrent hidden state and the probability distribution over the next token are computed as described in Eq. 7.
²Notice that the first layer of the encoder network is linear. We found that this helps learning the embedding matrix as it reduces the vanishing gradient effect partially due to stacking of squashing non-linearities (Pascanu et al., 2013).
# 5 Experimental Setting
# 5.1 Dataset Construction | 1506.06714#15 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
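A minimal sketch of the DCGM-II input layer of Eq. (8) above (toy shapes assumed): b_c and b_m are mapped separately and concatenated, preserving the context/message distinction.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H = 50, 256
W1_f = rng.normal(0, 0.01, (V, H))       # shared first-layer mapping
W2_f = rng.normal(0, 0.01, (2 * H, H))   # one further layer, for illustration

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

b_c = np.zeros(V); b_c[[1, 2]] = 1.0     # bag-of-words of the context
b_m = np.zeros(V); b_m[[3, 4]] = 1.0     # bag-of-words of the message

k1 = np.concatenate([b_c @ W1_f, b_m @ W1_f])   # k_1 = [b_c W_1^f, b_m W_1^f]
k2 = sigma(k1 @ W2_f)                           # k_l = sigma(k_{l-1} W_l^f)
print(k1.shape, k2.shape)                       # (512,) (256,)
```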
1506.06724 | 15 | Aligning movies with books is challenging even for humans, mostly due to the scale of the data. Each movie is on average 2h long and has 1,800 shots, while a book has on average 7,750 sentences. Books also have different styles of writing, formatting, different and challenging language, slang (going vs goin', or even was vs 'us), etc. As one can see from Table 1, finding visual matches turned out to be particularly challenging. This is because the visual descriptions in books can be either very short and hidden within longer paragraphs or even within a longer sentence, or very verbose, in which case they get obscured with the surrounding text, and are hard to spot. Of course, how close the movie follows the book is also up to the director, which can be seen through the number of alignments that our annotators found across different movie/books. | 1506.06724#15 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 16 | # 5 Experimental Setting
# 5.1 Dataset Construction
For computational efficiency and to alleviate the burden of human evaluators, we restrict the context sequence c to a single sentence. Hence, our dataset is composed of 'triples' $\tau \equiv (c_\tau, m_\tau, r_\tau)$ consisting of three sentences. We mined 127M context-message-response triples from the Twitter FireHose, covering the 3-month period June 2012 through August 2012. Only those triples where context and response were generated by the same user were extracted. To minimize noise, we selected triples that contained at least one frequent bigram that appeared more than 3 times in the corpus. This produced a corpus of 29M Twitter triples. Additionally, we hired crowdsourced raters to evaluate approximately 33K candidate triples. Judgments on a 5-point scale were obtained from 3 raters apiece. This yielded a set of 4232 triples with a mean score of 4 or better that was then randomly binned into a tuning set of 2118 triples and a test set of 2114 triples³. The mean length of responses in these sets was approximately 11.5 tokens, after cleanup (e.g., stripping of emoticons), including punctuation.
# 5.2 Automatic Evaluation | 1506.06714#16 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
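A sketch of the bigram-frequency noise filter described in the chunk above; the tokenization and the exact counting scheme are assumptions.

```python
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

def filter_triples(triples, min_count=4):
    """Keep triples containing at least one bigram seen more than 3 times."""
    counts = Counter(bg for triple in triples
                     for utt in triple for bg in bigrams(utt))
    frequent = {bg for bg, n in counts.items() if n >= min_count}
    return [t for t in triples
            if any(bg in frequent for utt in t for bg in bigrams(utt))]

triples = [("how are you", "i am fine", "glad to hear"),
           ("how are you", "i am ok", "good to know"),
           ("how are you", "i am great", "nice one"),
           ("how are you", "i am tired", "rest up"),
           ("zzz qqq", "qqq zzz", "zzz")]
print(len(filter_triples(triples)))   # 4: the noisy last triple is dropped
```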
1506.06724 | 16 | Our approach aims to align a movie with a book by exploiting visual information as well as dialogs. We take shots as video units and sentences from subtitles to represent dialogs. Our goal is to match these to the sentences in the book. We propose several measures to compute similarities between pairs of sentences as well as shots and sentences. We use our novel deep neural embedding trained on our large corpus of books to predict similarities between sentences. Note that an extended version of the sentence embedding is described in detail in [14] showing how to deal with million-word vocabularies, and demonstrating its performance on a large variety of NLP benchmarks. For comparing shots with sentences we extend the neural embedding of images and text [13] to operate in the video domain. We next develop a novel contextual alignment model that combines information from various similarity measures and a larger time-scale in order to make better local alignment predictions. Finally, we propose a simple pairwise Conditional Random Field (CRF) that smooths the alignments by encouraging them to follow a linear timeline, both in the video and book domain.
We first explain our sentence embedding, followed by our joint video-to-text embedding. We next propose our contextual model that combines similarities and discuss the CRF in more detail.
# 4.1. Skip-Thought Vectors | 1506.06724#16 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 17 | # 5.2 Automatic Evaluation
We evaluate all systems using BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005), and supplement these results with more targeted human pairwise comparisons in Section 6.3. A major challenge in using these automated metrics for response generation is that the set of reasonable responses in our task is potentially vast and extremely diverse. The dataset construction method just described yields only a single reference for each status. Accordingly, we extend the set of references using an IR approach to mine potential responses, after which we have human judges rate their appropriateness. As we see in Section 6.3, it turns out that by optimizing systems towards BLEU using mined multi-references, BLEU rankings align well with human judgments. This lays groundwork for interesting future correlation studies.
Multi-reference extraction We use the following algorithm to better cover the space of reasonable responses. Given a test triple $\tau \equiv (c_\tau, m_\tau, r_\tau)$, our
³The Twitter ids of the tuning and test sets along with the code for the neural network models may be obtained from http://research.microsoft.com/convo/
| Corpus | # Triples | Avg # Ref | [Min, Max] # Ref |
| Tuning | 2118 | 3.22 | [1, 10] |
| Test | 2114 | 3.58 | [1, 10] |
| 1506.06714#17 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 17 | # 4.1. Skip-Thought Vectors
The BookCorpus Dataset. In order to train our sentence similarity model we collected a corpus of 11,038 books from the web. These are free books written by yet unpublished authors. In order to score the similarity between two sentences, we exploit our architecture for learning unsupervised representations of text [14]. The model is loosely inspired by
[Figure 2 schematic: encoder-decoder over the sentence tuple "a door confronted her" / "she stopped and tried to pull it open" / "it didnt budge"; see caption below.]
Figure 2: Sentence neural embedding [14]. Given a tuple $(s_{i-1}, s_i, s_{i+1})$ of consecutive sentences in text, where $s_i$ is the i-th sentence, we encode $s_i$ and aim to reconstruct the previous sentence $s_{i-1}$ and the following sentence $s_{i+1}$. Unattached arrows are connected to the encoder output. Colors depict which components share parameters. $\langle eos \rangle$ is the end of sentence token. | 1506.06724#17 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 18 |
| Corpus | # Triples | Avg # Ref | [Min, Max] # Ref |
| Tuning | 2118 | 3.22 | [1, 10] |
| Test | 2114 | 3.58 | [1, 10] |
Table 1: Number of triples, average, minimum and maximum number of references for tuning and test corpora.
goal is to mine other responses $\{r_{\hat{\tau}}\}$ that fit the context and message pair $(c_\tau, m_\tau)$. To this end, we first select a set of 15 candidate triples $\{\hat{\tau}\}$ using an IR system. The IR system is calibrated in order to select candidate triples $\hat{\tau}$ for which both the message $m_{\hat{\tau}}$ and the response $r_{\hat{\tau}}$ are similar to the original message $m_\tau$ and response $r_\tau$. Formally, the score of a candidate triple is:
$s(\hat{\tau}, \tau) = d(m_{\hat{\tau}}, m_\tau) \, (\alpha \, d(r_{\hat{\tau}}, r_\tau) + (1 - \alpha) \epsilon)$ (9) | 1506.06714#18 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
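A sketch of the candidate-scoring formula of Eq. (9) above; a plain token-overlap similarity stands in for the paper's bag-of-words BM25, and the alpha and eps values are illustrative assumptions.

```python
def d(a, b):
    """Stand-in similarity (the paper uses bag-of-words BM25)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def score(candidate, reference, alpha=0.8, eps=0.1):
    """s(tau_hat, tau) = d(m_hat, m) * (alpha * d(r_hat, r) + (1 - alpha) * eps)."""
    (_, m_hat, r_hat), (_, m, r) = candidate, reference
    return d(m_hat, m) * (alpha * d(r_hat, r) + (1 - alpha) * eps)

ref = ("ctx", "see you at the game tonight", "sounds good see you there")
cand = ("ctx", "see you at the game", "see you there then")
print(round(score(cand, ref), 4))   # higher when message AND response match
```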
1506.06724 | 18 |
Query: he drove down the street off into the distance .
- he started the car , left the parking lot and merged onto the highway a few miles down the road .
- he shut the door and watched the taxi drive off .
- she watched the lights flicker through the trees as the men drove toward the road .
- he jogged down the stairs , through the small lobby , through the door and into the street .
Query: the most effective way to end the battle .
- a messy business to be sure , but necessary to achieve a fine and noble end .
- they saw their only goal as survival and logically planned a strategy to achieve it .
- there would be far fewer casualties and far less destruction .
- the outcome was the lisbon treaty .
Table 3: Qualitative results from the sentence embedding model. For each query sentence on the left, we retrieve the 4 nearest neighbor sentences (by inner product) chosen from books the model has not seen before. | 1506.06724#18 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 19 | $s(\hat{\tau}, \tau) = d(m_{\hat{\tau}}, m_\tau) \, (\alpha \, d(r_{\hat{\tau}}, r_\tau) + (1 - \alpha) \epsilon)$ (9)
where d is the bag-of-words BM25 similarity function (Robertson et al., 1995), α controls the impact of the similarity between the responses and ε is a smoothing factor that avoids zero scores for candidate responses that do not share any words with the reference response. We found that this simple formula provided references that were both diverse and plausible. Given a set of candidate triples $\{\hat{\tau}\}$, human evaluators are asked to rate the quality of the response within the new triples $\{(c_\tau, m_\tau, r_{\hat{\tau}})\}$. After human evaluation, we retain the references for which the score is 4 or better on a 5 point scale, resulting in 3.58 references per example on average (Table 1). The average lengths for the responses in the multi-reference tuning and test sets are 8.75 and 8.13 tokens respectively.
# 5.3 Feature Sets
The response generation systems evaluated in this paper are parameterized as log-linear models in a framework typical of statistical machine translation (Och and Ney, 2004). These log-linear models comprise the following feature sets: | 1506.06714#19 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 19 | the skip-gram [22] architecture for learning representations of words. In the word skip-gram model, a word $w_i$ is chosen and must predict its surrounding context (e.g. $w_{i+1}$ and $w_{i-1}$ for a context window of size 1). Our model works in a similar way but at the sentence level. That is, given a sentence tuple $(s_{i-1}, s_i, s_{i+1})$ our model first encodes the sentence $s_i$ into a fixed vector, then conditioned on this vector tries to reconstruct the sentences $s_{i-1}$ and $s_{i+1}$, as shown in Fig. 2. The motivation for this architecture is inspired by the distributional hypothesis: sentences that have similar surrounding context are likely to be both semantically and syntactically similar. Thus, two sentences that have similar syntax and semantics are likely to be encoded to a similar vector. Once the model is trained, we can map any sentence through the encoder to obtain vector representations, then score their similarity through an inner product. | 1506.06724#19 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 20 | MT MT features are derived from a large response generation system built along the lines of Ritter et al. (2011), which is based on a phrase-based MT decoder similar to Moses (Koehn et al., 2007). Our MT feature set includes the following features that are common in Moses: forward and backward maximum likelihood "translation" probabilities, word and
| System | BLEU |
| RANDOM | 0.33 |
| MT | 3.21 |
| HUMAN | 6.08 |
Table 2: Multi-reference corpus-level BLEU obtained by leaving one reference out at random.
phrase penalties, linear distortion, and a modified Kneser-Ney language model (Kneser and Ney, 1995) trained on Twitter responses. For the translation probabilities, we built a very large phrase table of 160.7 million entries by first filtering out Twitterisms (e.g., long sequences of vowels, hashtags), and then selecting candidate phrase pairs using Fisher's exact test (Ritter et al., 2011). We also included MT decoder features specifically motivated by the response generation task: Jaccard distance between source and target phrase, Fisher's exact probability, and a score relating the lengths of source and target phrases. | 1506.06714#20 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 20 | The learning signal of the model depends on having contiguous text, where sentences follow one another in sequence. A natural corpus for training our model is thus a large collection of books. Given the size and diversity of genres, our BookCorpus allows us to learn very general representations of text. For instance, Table 3 illustrates the nearest neighbours of query sentences, taken from held out books that the model was not trained on. These qualitative results demonstrate that our intuition is correct, with the resulting nearest neighbors corresponding largely to syntactically and semantically similar sentences. Note that the sentence embedding is general and can be applied to other domains not considered in this paper, which is explored in [14]. | 1506.06724#20 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 21 | IR We also use an IR feature built from an index of triples, whose implementation roughly matches the IRstatus approach described in Ritter et al. (2011): For a test triple $\tau$, we choose $r_{\hat{\tau}}$ as the candidate response iff $\hat{\tau} = \arg\max_{\hat{\tau}} d(m_\tau, m_{\hat{\tau}})$.
CMM Neither MT nor IR traditionally take into account contextual information. Therefore, we take into consideration context and message matches (CMM), i.e., exact matches between c, m and r. We define 8 features as the [1-4]-gram matches between c and the candidate reply r and the [1-4]-gram matches between m and the candidate reply r. These exact matches help capture and promote contextual information in the replies.
RLMT, DCGM-I, DCGM-II We consider the RLM trained on the concatenated triples, denoted as RLMT (Section 4.1), to be a context-sensitive RLM baseline. Each neural network model contributes an additional feature corresponding to the likelihood of the candidate response given context and message.
# 5.4 Model Training | 1506.06714#21 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
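A sketch of the 8 CMM features defined above ([1-4]-gram matches of the candidate reply against c and m); counting distinct n-gram types (rather than token occurrences) is an assumption.

```python
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def cmm_features(c, m, r):
    c, m, r = c.split(), m.split(), r.split()
    feats = [len(ngrams(c, n) & ngrams(r, n)) for n in range(1, 5)]
    feats += [len(ngrams(m, n) & ngrams(r, n)) for n in range(1, 5)]
    return feats   # 8 features promoting contextual matches in the reply

print(cmm_features("going to the gym later",
                   "want to come to the gym",
                   "see you at the gym later"))   # [3, 2, 1, 0, 2, 1, 0, 0]
```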
1506.06724 | 21 | vanishing gradient problem, through the use of gates to control the flow of information. The LSTM unit explicitly employs a cell that acts as a carousel with an identity weight. The flow of information through a cell is controlled by input, output and forget gates which control what goes into a cell, what leaves a cell and whether to reset the contents of the cell. The GRU does not use a cell but employs two gates: an update and a reset gate. In a GRU, the hidden state is a linear combination of the previous hidden state and the proposed hidden state, where the combination weights are controlled by the update gate. GRUs have been shown to perform just as well as LSTM on several sequence prediction tasks [3] while being simpler. Thus, we use GRU as the activation function for our encoder and decoder RNNs. Assume we are given a sentence tuple $(s_{i-1}, s_i, s_{i+1})$; let $w^t_i$ denote the t-th word in sentence $s_i$ and let $x^t_i$ denote its word embedding. We describe the model in three parts: the encoder, the decoders, and the objective function. Encoder. Let $w^1_i, \dots, w^N_i$ denote the words in sentence $s_i$, with N the number of words in the sentence. The encoder produces a hidden state $h^t_i$ at each time step which forms the representation of the sequence $w^1_i, \dots, w^t_i$. | 1506.06724#21 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 23 | approximating the probability of the target word (Gutmann and Hyvärinen, 2010). Parameter optimization is done using Adagrad (Duchi et al., 2011) with a mini-batch size of 100 and a learning rate α = 0.1, which we found to work well on held-out data. In order to stabilize learning, we clip the gradients to a fixed range [−10, 10], as suggested in Mikolov et al. (2010). All the parameters of the neural models are sampled from a normal distribution N(0, 0.01) while the recurrent weight W_hh is initialized as a random orthogonal matrix and scaled by 0.01. To prevent overfitting, we evaluate performance on a held-out set during training and stop when the objective increases. The size of the RLM hidden layer is set to K = 512, where the context encoder is a 512, 256, 512 multilayer network. The bottleneck in the middle compresses context information that leads to similar responses and thus achieves better generalization. The last layer embeds the context vector into the hidden space of the decoder RLM.
# 5.5 Rescoring Setup | 1506.06714#23 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
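A sketch of the optimization recipe summarized above (Adagrad with alpha = 0.1, gradient clipping to [-10, 10], N(0, 0.01) initialization, and a scaled random-orthogonal recurrent matrix), applied to a toy quadratic loss; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8
W_hh = 0.01 * np.linalg.qr(rng.normal(size=(K, K)))[0]   # orthogonal, scaled
theta = rng.normal(0, 0.01, K)       # remaining parameters, N(0, 0.01)
accum = np.zeros(K)                  # Adagrad accumulator

def grad(theta):
    return 2 * theta                 # gradient of a toy quadratic loss

for _ in range(100):
    g = np.clip(grad(theta), -10, 10)           # clip to [-10, 10]
    accum += g ** 2
    theta -= 0.1 * g / (np.sqrt(accum) + 1e-8)  # Adagrad, alpha = 0.1

Q = W_hh / 0.01                      # check the orthogonal initialization
print(bool(np.allclose(Q @ Q.T, np.eye(K))), float(np.abs(theta).max()))
```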
1506.06724 | 23 | $h^t = (1 - z^t) \odot h^{t-1} + z^t \odot \tilde{h}^t$ (1)
To construct an encoder, we use a recurrent neural network, inspired by the success of encoder-decoder models for neural machine translation [10, 2, 1, 31]. Two kinds of activation functions have recently gained traction: long short-term memory (LSTM) [9] and the gated recurrent unit (GRU) [3]. Both types of activation successfully solve the
where $\tilde{h}^t$ is the proposed state update at time t, $z^t$ is the update gate and $\odot$ denotes a component-wise product. The update gate takes values between zero and one. In the extreme cases, if the update gate is the vector of ones, the previous hidden state is completely forgotten and $h^t = \tilde{h}^t$. Alternatively, if the update gate is the zero vector, then the hidden state from the previous time step is simply copied over, that is $h^t = h^{t-1}$. The update gate is computed as
$z^t = \sigma(W_z x^t + U_z h^{t-1})$ (2)
where Wz and Uz are the update gate parameters. The proposed state update is given by
$\tilde{h}^t = \tanh(W x^t + U(r^t \odot h^{t-1}))$ (3)
where $r^t$ is the reset gate, which is computed as | 1506.06724#23 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
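A minimal sketch of one GRU encoder step implementing Eqs. (1)-(4) above, with assumed toy shapes.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 10, 8   # word-embedding and hidden sizes
W, U = rng.normal(0, 0.1, (K, D)), rng.normal(0, 0.1, (K, K))
W_z, U_z = rng.normal(0, 0.1, (K, D)), rng.normal(0, 0.1, (K, K))
W_r, U_r = rng.normal(0, 0.1, (K, D)), rng.normal(0, 0.1, (K, K))

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x):
    z = sigma(W_z @ x + U_z @ h_prev)              # update gate, Eq. (2)
    r = sigma(W_r @ x + U_r @ h_prev)              # reset gate, Eq. (4)
    h_tilde = np.tanh(W @ x + U @ (r * h_prev))    # proposed update, Eq. (3)
    return (1 - z) * h_prev + z * h_tilde          # Eq. (1)

h = np.zeros(K)
for x in rng.normal(size=(5, D)):   # encode a 5-word toy sentence
    h = gru_step(h, x)
print(h.shape)                      # h is the sentence vector h_i^N
```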
1506.06714 | 24 | # 5.5 Rescoring Setup
We evaluate the proposed models by rescoring the n-best candidate responses obtained using the MT phrase-based decoder and the IR system. In contrast to MT, the candidate responses provided by IR have been created by humans and are less affected by fluency issues. The different n-best lists will provide a comprehensive testbed for our experiments. First, we augment the n-best list of the tuning set with the scores of the model of interest. Then, we run an iteration of MERT (Och, 2003) to estimate the log-linear weights of the new features. At test time, we rescore the test n-best list with the new weights.
# 6 Results
# 6.1 Lower and Upper Bounds
Table 2 shows the expected upper and lower bounds for this task as suggested by BLEU scores for human responses and a random response baseline. The RANDOM system comprises responses randomly extracted from the triples corpus. HUMAN is computed by choosing one reference amongst the multi-reference set for each context-status pair.⁴ Although the scores
⁴For the human score, we compute corpus-level BLEU with a sampling scheme that randomly leaves out one reference - the human sentence to score - for each reference set. This sampling scheme (repeated with 100 trials) is also applied for the MT and | 1506.06714#24 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
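A sketch of the rescoring setup described above: each n-best hypothesis carries a feature vector (MT/IR, CMM, and neural-model log-likelihood features), and the MERT-tuned log-linear score picks the reply. The features and weights shown are illustrative assumptions.

```python
import numpy as np

def rescore(nbest, weights):
    """nbest: list of (response, feature_vector); returns the best response."""
    scores = [np.dot(weights, feats) for _, feats in nbest]
    return nbest[int(np.argmax(scores))][0]

nbest = [("yeah sounds good", np.array([-2.1, 3.0, -11.2])),
         ("lol",              np.array([-1.0, 0.0, -14.8])),
         ("see you there",    np.array([-2.5, 4.0, -10.1]))]
weights = np.array([0.4, 0.3, 0.2])   # e.g., from one MERT iteration
print(rescore(nbest, weights))        # "see you there"
```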
1506.06724 | 24 | $\tilde{h}^t = \tanh(W x^t + U(r^t \odot h^{t-1}))$ (3)
where $r^t$ is the reset gate, which is computed as
$r^t = \sigma(W_r x^t + U_r h^{t-1})$ (4)
If the reset gate is the zero vector, then the proposed state update is computed only as a function of the current word. Thus after iterating this equation sequence for each word, we obtain a sentence vector $h^N_i$. Decoder. The decoder computation is analogous to the encoder, except that the computation is conditioned on the sentence vector $h_i$. Two separate decoders are used, one for the previous sentence $s_{i-1}$ and one for the next sentence $s_{i+1}$. These decoders use different parameters to compute their hidden states but both share the same vocabulary matrix V that takes a hidden state and computes a distribution over words. Thus, the decoders are analogous to an RNN language model but conditioned on the encoder sequence. Alternatively, in the context of image caption generation, the encoded sentence $h_i$ plays a similar role as the image. | 1506.06724#24 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 25 | MT n-best:
| System | BLEU (%) | METEOR (%) |
| MT 9 feat. | 3.60 (-9.5%) | 9.19 (-0.9%) |
| CMM 8 feat. | 3.33 (-16%) | 9.34 (+0.7%) |
| ▷ MT + CMM 17 feat. | 3.98 (-) | 9.28 (-) |
| RLMT | 4.13 (+3.7%) | 9.54 (+2.7%) |
| DCGM-I | 4.26 (+7.0%) | 9.55 (+2.9%) |
| DCGM-II | 4.11 (+3.3%) | 9.45 (+1.8%) |
| DCGM-I + CMM | 4.44 (+11%) | 9.60 (+3.5%) |
IR n-best:
| System | BLEU (%) | METEOR (%) |
| IR 2 feat. | 1.51 (-55%) | 6.25 (-22%) |
| CMM 8 feat. | 3.39 (-0.6%) | 8.20 (+0.6%) |
| ▷ IR + CMM 10 feat. | 3.41 (-) | 8.04 (-) |
| RLMT | 2.85 (-16%) | 7.38 (-8.2%) |
| DCGM-I | 3.36 (-1.5%) | 7.84 (-2.5%) |
| DCGM-II | 3.37 (-1.1%) | 8.22 (+2.3%) |
| 1506.06714#25 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 25 | We describe the decoder for the next sentence $s_{i+1}$ (computation for $s_{i-1}$ is identical). Let $h^t_{i+1}$ denote the hidden state of the decoder at time t. The update and reset gates for the decoder are given as follows (we drop the subscript i + 1):
$z^t = \sigma(W^d_z x^{t-1} + U^d_z h^{t-1} + C_z h_i)$
$r^t = \sigma(W^d_r x^{t-1} + U^d_r h^{t-1} + C_r h_i)$
the hidden state $h^t_{i+1}$ is then computed as:
$\tilde{h}^t = \tanh(W^d x^{t-1} + U^d(r^t \odot h^{t-1}) + C h_i)$ (7)
$h^t_{i+1} = (1 - z^t) \odot h^{t-1} + z^t \odot \tilde{h}^t$ (8)
Given $h^t_{i+1}$, the probability of word $w^t_{i+1}$ given the previous t - 1 words and the encoder vector is
$P(w^t_{i+1} | w^{<t}_{i+1}, h_i) \propto \exp(v_{w^t_{i+1}} h^t_{i+1})$ (9)
where $v_{w^t_{i+1}}$ denotes the row of V corresponding to the word $w^t_{i+1}$. An analogous computation is performed for the previous sentence $s_{i-1}$. Objective. Given $(s_{i-1}, s_i, s_{i+1})$, the objective optimized is the sum of log-probabilities for the next and previous sentences conditioned on the representation of the encoder: | 1506.06724#25 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
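A minimal sketch of one conditioned decoder step of Eqs. (7)-(9) above, where the C matrices inject the encoder vector h_i at every step; shapes and values are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, V = 10, 8, 20   # embedding size, hidden size, vocabulary size
Wd, Ud, C = (rng.normal(0, 0.1, s) for s in [(K, D), (K, K), (K, K)])
Wz, Uz, Cz = (rng.normal(0, 0.1, s) for s in [(K, D), (K, K), (K, K)])
Wr, Ur, Cr = (rng.normal(0, 0.1, s) for s in [(K, D), (K, K), (K, K)])
Vocab = rng.normal(0, 0.1, (V, K))    # shared vocabulary matrix V

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def decoder_step(h_prev, x_prev, h_i):
    z = sigma(Wz @ x_prev + Uz @ h_prev + Cz @ h_i)
    r = sigma(Wr @ x_prev + Ur @ h_prev + Cr @ h_i)
    h_tilde = np.tanh(Wd @ x_prev + Ud @ (r * h_prev) + C @ h_i)  # Eq. (7)
    h = (1 - z) * h_prev + z * h_tilde                            # Eq. (8)
    logits = Vocab @ h                                            # Eq. (9)
    p = np.exp(logits - logits.max()); p /= p.sum()
    return h, p

h, p = decoder_step(np.zeros(K), rng.normal(size=D), rng.normal(size=K))
print(p.shape, round(float(p.sum()), 6))   # (20,) 1.0
```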
1506.06724 | 26 | $\sum_t \log P(w^t_{i+1} | w^{<t}_{i+1}, h_i) + \sum_t \log P(w^t_{i-1} | w^{<t}_{i-1}, h_i)$ (10)
The total objective is the above summed over all such training tuples. The Adam algorithm [12] is used for optimization.
# 4.2. Visual-semantic embeddings of clips and DVS
The model above describes how to obtain a similarity score between two sentences, whose representations are learned from millions of sentences in books. We now discuss how to obtain similarities between shots and sentences. Our approach closely follows the image-sentence ranking model proposed by [13]. In their model, an LSTM is used for encoding a sentence into a fixed vector. A linear mapping is applied to image features from a convolutional network. A score is computed based on the inner product between the normalized sentence and image vectors. Correct image-sentence pairs are trained to have high score, while incorrect pairs are assigned low scores. | 1506.06724#26 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
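A sketch of the per-tuple objective of Eq. (10) above; decode_probs is a hypothetical stand-in for running the forward and backward decoders over a sentence.

```python
import numpy as np

def decode_probs(h_i, sentence, which):
    """Stand-in: per-word probabilities from the forward/backward decoder."""
    rng = np.random.default_rng(len(sentence) + (0 if which == "prev" else 1))
    return rng.uniform(0.05, 0.95, size=len(sentence))

def tuple_objective(h_i, s_prev, s_next):
    """Eq. (10): sum of log-probs of the previous and next sentences."""
    logp_next = np.log(decode_probs(h_i, s_next, "next")).sum()
    logp_prev = np.log(decode_probs(h_i, s_prev, "prev")).sum()
    return logp_next + logp_prev   # summed over all tuples, maximized with Adam

h_i = np.zeros(8)                  # encoder vector for s_i
print(tuple_objective(h_i, ["a", "door", "confronted", "her"],
                      ["it", "didnt", "budge"]))
```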
1506.06714 | 27 | Table 3: Context-sensitive ranking results on both MT (left) and IR (right) n-best lists, n = 1000. The subscript feat. indicates the number of features of the models. The log-linear weights are estimated by running one iteration of MERT. We mark by (+%) the relative improvements with respect to the reference system (▷).
are lower than those usually reported in SMT tasks, the ranking of the three systems is unambiguous.
# 6.2 BLEU and METEOR
The results of automatic evaluation using BLEU and METEOR are presented in Table 3, where some broad patterns emerge. First, both metrics indicate that a phrase-based MT decoder outperforms a purely IR approach. Second, adding CMM features to the baseline systems helps. Third, the neural network models contribute measurably to improvement: RLMT and DCGM models outperform baselines, and DCGM models provide more consistent gains than RLMT. | 1506.06714#27 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 27 | In our case, we learn a visual-semantic embedding between movie clips and their DVS description. DVS ("Descriptive Video Service") is a service that inserts audio descriptions of the movie between the dialogs in order to enable the visually impaired to follow the movie like anyone else. We used the movie description dataset of [27] for learning our embedding. This dataset has 94 movies, and 54,000 described clips. We represent each movie clip as a vector corresponding to mean-pooled features across each frame in the clip. We used the GoogLeNet architecture [32] as well as hybrid-CNN [38] for extracting frame features. For DVS, we pre-processed the descriptions by removing names and replacing these with a someone token.
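A minimal sketch of the clip representation and DVS preprocessing described above, assuming numpy; `frame_features` stands for any per-frame CNN feature matrix (e.g. GoogLeNet activations), and the name list is a placeholder:

```python
# Mean-pool per-frame CNN features into one vector per shot, and replace
# character names in DVS descriptions with a `someone` token.
import numpy as np

def clip_vector(frame_features: np.ndarray) -> np.ndarray:
    """frame_features: (num_frames, feat_dim) -> (feat_dim,) clip vector."""
    return frame_features.mean(axis=0)

def anonymize_dvs(text: str, names: set) -> str:
    """Replace character names in a DVS description with `someone`."""
    return " ".join("someone" if w in names else w for w in text.split())
```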
The LSTM architecture in this work is implemented using the following equations. As before, we represent a word embedding at time t of a sentence as x^t: | 1506.06724#27 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 28 | MT vs. IR BLEU and METEOR scores indicate that the phrase-based MT decoder outperforms a purely IR approach, despite the fact that IR proposes fluent human generated responses. This may be because the IR model only loosely captures important patterns between message and response: It ranks candidate responses solely by the similarity of their message with the message of the test triple (§5.3). As a result, the top ranked response is likely to drift from the purpose of the original conversation. The MT approach, by contrast, more directly models statistical patterns between message and response. | 1506.06714#28 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 28 | The LSTM architecture in this work is implemented using the following equations. As before, we represent a word embedding at time t of a sentence as x^t:
$$i^t = \sigma(W_{xi} x^t + W_{mi} m^{t-1} + W_{ci} c^{t-1}) \quad (11)$$
$$f^t = \sigma(W_{xf} x^t + W_{mf} m^{t-1} + W_{cf} c^{t-1}) \quad (12)$$
$$a^t = \tanh(W_{xc} x^t + W_{mc} m^{t-1}) \quad (13)$$
$$c^t = f^t \odot c^{t-1} + i^t \odot a^t \quad (14)$$
$$o^t = \sigma(W_{xo} x^t + W_{mo} m^{t-1} + W_{co} c^t) \quad (15)$$
$$m^t = o^t \odot \tanh(c^t) \quad (16)$$
where $\sigma$ denotes the sigmoid activation function and $\odot$ indicates component-wise multiplication. The states $(i^t, f^t, c^t, o^t, m^t)$ correspond to the input, forget, cell, output and memory vectors, respectively. If the sentence is of length N, then the vector $m^N = m$ is the vector representation of the sentence. | 1506.06724#28 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
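A minimal sketch of one step of the LSTM cell in Eqs. (11)-(16) of the chunk above, assuming numpy; the weight-matrix keys mirror the reconstructed subscripts and are illustrative, not the authors' released code:

```python
# One LSTM step: input/forget gates with peephole terms, cell update,
# output gate, and the memory vector m^t (Eqs. 11-16).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, m_prev, c_prev, W):
    """W: dict of weight matrices keyed as in Eqs. (11)-(16)."""
    i_t = sigmoid(W["xi"] @ x_t + W["mi"] @ m_prev + W["ci"] @ c_prev)  # (11)
    f_t = sigmoid(W["xf"] @ x_t + W["mf"] @ m_prev + W["cf"] @ c_prev)  # (12)
    a_t = np.tanh(W["xc"] @ x_t + W["mc"] @ m_prev)                     # (13)
    c_t = f_t * c_prev + i_t * a_t                                      # (14)
    o_t = sigmoid(W["xo"] @ x_t + W["mo"] @ m_prev + W["co"] @ c_t)     # (15)
    m_t = o_t * np.tanh(c_t)                                            # (16)
    return m_t, c_t

# Running this over the N words of a sentence yields m^N = m, the sentence
# representation used for the clip-sentence score.
```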
1506.06714 | 29 | CMM MT+CMM, totaling 17 features (9 from MT + 8 CMM), improves 0.38 BLEU points, a 9.5% relative improvement, over the baseline MT model. IR+CMM, with 10 features (IR + word penalty + 8 CMM), benefits even more, attaining 1.8 BLEU points and 1.5 METEOR points over the IR baseline. Figure 4 (a) and (b) plots the magnitude of the learned CMM feature weights for MT+CMM and IR+CMM. CMM features help in both these hypothesis spaces and especially on the IR n-best list. Figure 4 (b) supports the hypothesis formulated in the previous paragraph: Since IR solely captures inter-message similarities, the matches between message and response are important, while context matches help in providing additional gains. The phrase-based statistical patterns captured by the MT system do a good job in explaining away 1-gram and 2-gram message matches (Figure 4 (a)) and the performance gain mainly comes from context matches. On the other hand, we observe that 4-gram matches may be important in selecting appropriate responses. Inspection of the tuning set reveals | 1506.06714#29 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 29 | Let q denote a movie clip vector, and let v = W_I q be the embedding of the movie clip. We define a scoring function s(m, v) = m · v, where m and v are first scaled to have unit norm (making s equivalent to cosine similarity). We then optimize the following pairwise ranking loss:
$$\min_{\theta} \sum_{m} \sum_{k} \max\{0, \alpha - s(m, v) + s(m, v_k)\} + \sum_{v} \sum_{k} \max\{0, \alpha - s(v, m) + s(v, m_k)\} \qquad (17)$$
with m_k a contrastive (non-descriptive) sentence vector for a clip embedding v, and vice-versa with v_k. We train our model with stochastic gradient descent without momentum.
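A minimal sketch of the pairwise ranking loss in Eq. (17), assuming PyTorch; `m` and `v` are batches of matching sentence and clip embeddings, and in-batch negatives stand in for the contrastive samples m_k and v_k:

```python
# Hinge ranking loss over normalized embeddings: correct pairs should beat
# contrastive pairs by a margin alpha, in both directions of Eq. (17).
import torch
import torch.nn.functional as F

def ranking_loss(m, v, alpha=0.2):
    m = F.normalize(m, dim=1)          # unit norm, so the score is cosine
    v = F.normalize(v, dim=1)
    scores = m @ v.t()                 # s(m_i, v_j) for all pairs
    pos = scores.diag().unsqueeze(1)   # matching pairs on the diagonal
    off = 1 - torch.eye(len(m), device=m.device)
    # contrastive clips for each sentence, and contrastive sentences per clip
    loss_s = (F.relu(alpha - pos + scores) * off).sum()
    loss_v = (F.relu(alpha - pos.t() + scores) * off).sum()
    return loss_s + loss_v

# As in the text, plain SGD without momentum could be used, e.g.
# torch.optim.SGD(params, lr=0.01, momentum=0.0).
```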
# 4.3. Context aware similarity | 1506.06724#29 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 30 | On the other hand, we observe that 4-gram matches may be important in selecting appropriate responses. Inspection of the tuning set reveals instances where responses contain long subsequences of their corresponding messages, e.g., m = "good night best friend, I love you", r = "I love you too, good night best friend". Although infrequent, such higher-order n-gram matches, when they occur, may provide a more robust signal of the quality of the response than 1- and 2-gram matches, given the highly conversational nature of our dataset. | 1506.06714#30 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 30 | # 4.3. Context aware similarity
We employ the clip-sentence embedding to compute similarities between each shot in the movie and each sentence in the book. For dialogs, we use several similarity measures, each capturing a different level of semantic similarity. We compute BLEU [23] between each subtitle and book sentence to identify nearly identical matches. Similarly to [34], we use a tf-idf measure to find near duplicates but weighing down the influence of the less frequent words. Finally, we use our sentence embedding learned from books to score pairs of sentences that are semantically similar but may have a very different wording (i.e., paraphrasing).
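A minimal sketch of the tf-idf channel of these similarity measures, assuming scikit-learn; BLEU and the learned embeddings would fill the remaining channels:

```python
# tf-idf cosine similarity between every subtitle sentence and every book
# sentence; rare-word weighting is handled by the tf-idf transform itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_similarity(subtitle_sentences, book_sentences):
    """Returns S with S[i, j] = tf-idf cosine of subtitle i vs. book sentence j."""
    vec = TfidfVectorizer().fit(subtitle_sentences + book_sentences)
    return cosine_similarity(vec.transform(subtitle_sentences),
                             vec.transform(book_sentences))
```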
These similarity measures indicate the alignment between the two modalities. However, at the local, sentence level, alignment can be rather ambiguous. For example, despite being a rather dark book, Gone Girl contains 15 occurrences of the sentence "I love you". We exploit the fact that a match is not completely isolated but that the sentences (or shots) around it are also to some extent similar. | 1506.06724#30 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 31 | RLMT and DCGM Both RLMT and DCGM models outperform their respective MT and IR baselines. Both models also exhibit similar performance and show improvements over the MT+CMM models, albeit using a lower dimensional feature space. We believe that their similar performance is due to the limited diversity of the MT n-best list together with gains in fluency stemming from the strong language model provided by the RLM. In the case of IR models, on the other hand, there is more headroom for improvement and fluency is already guaranteed. Any
[Figure 4 residue: bar plots of learned CMM feature weights for 1- to 4-gram message and context matches; panels (a) MT+CMM, (b) IR+CMM, (c) DCGM-II+CMM on MT, (d) DCGM-II+CMM on IR.]
Figure 4: Comparison of the weights of learned CMM features for the MT+CMM and IR+CMM systems ((a) and (b)) and for DCGM-II+CMM on MT and IR ((c) and (d)). | 1506.06714#31 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 31 | We design a context aware similarity measure that takes into account all individual similarity measures as well as a fixed context window in both the movie and book domain, and predicts a new similarity score. We stack a set of M similarity measures into a tensor S(i, j, m), where i, j, and m are the indices of sentences in the subtitle, in the book, and individual similarity measures, respectively. In particular, we use M = 9 similarities: visual and sentence embedding, BLEU1-5, tf-idf, and a uniform prior. We want to predict a combined score score(i, j) = f(S(I, J, M)) at each location (i, j) based on all measurements in a fixed volume defined by I around i, J around j, and 1, . . . , M. Evaluating the function f(·) at each location (i, j) on a 3-D tensor S is very similar to applying a convolution using a kernel of appropriate size. This motivates us to formulate the function f(·) as a deep convolutional neural network (CNN). In this paper, we adopt a 3-layer CNN as illustrated in Figure 3. We | 1506.06724#31 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
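A minimal sketch of a context-aware CNN of the kind described in the chunk above, assuming PyTorch; kernel sizes and channel widths are illustrative. The M similarity measures form the input channels, and a 3-layer conv net with a sigmoid on top predicts one combined score per (subtitle sentence, book sentence) location:

```python
# 3-layer CNN over the M-channel similarity tensor S(i, j, m): each output
# value is the combined score for one (subtitle i, book sentence j) location.
import torch.nn as nn

class ContextCNN(nn.Module):
    def __init__(self, m_channels=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(m_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # sigmoid layer on top, as in Figure 3
        )

    def forward(self, S):  # S: (batch, M, num_subtitle, num_book)
        return self.net(S).squeeze(1)  # combined score at each (i, j)
```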
1506.06714 | 32 | System A / System B / Gain (%) / CI
HUMAN vs. MT+CMM: 13.6* [12.4, 14.8]
DCGM-II vs. MT: 1.9* [0.8, 2.9]
DCGM-II+CMM vs. MT: 3.1* [2.0, 4.3]
DCGM-II+CMM vs. MT+CMM: 1.5* [0.5, 2.5]
DCGM-II vs. IR: 5.2* [4.0, 6.4]
DCGM-II+CMM vs. IR: 5.3* [4.1, 6.6]
DCGM-II+CMM vs. IR+CMM: 2.3* [1.2, 3.4]
Table 4: Pairwise human evaluation scores between System A and B. The first (second) set of results refers to the MT (IR) hypothesis list. The asterisk means agreement between human preference and BLEU rankings. | 1506.06714#32 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06714 | 33 | gains must come from context and message matches. Hence, RLMT underperforms with respect to both DCGM and IR+CMM. The DCGM models appear to have better capacity to retain contextual information and thus achieve similar performance to IR+CMM despite their lack of exact n-gram match information. In the present experimental setting, no striking performance difference can be observed between the two versions of the DCGM architecture. If multiple sequences were used as context, we expect that the DCGM-II model would likely benefit more owing to the separate encoding of message and context.
DCGM+CMM We also investigated whether mixing exact CMM n-gram overlap with semantic information encoded by the DCGM models can bring additional gains. DCGM-{I-II}+CMM systems each totaling 10 features show increases of up to 0.48 BLEU points over MT+CMM and up to 0.88 BLEU over the model based on Ritter et al. (2011). METEOR improvements similarly align with BLEU improvements both for MT and IR lists. We take this | 1506.06714#33 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 33 | # 4.4. Global Movie/Book Alignment
So far, each shot/sentence was matched independently. However, most shots in movies and passages in the books follow a similar timeline. We would like to incorporate this prior into our alignment. In [34], the authors use dynamic time warping by enforcing that the shots in the movie can only match forward in time (to plot synopses in their case). However, the storyline of the movie and book can have crossings in time (Fig. 8), and the alignment might contain
Figure 3: Our CNN for context-aware similarity computation. It has 3 conv. layers and a sigmoid layer on top.
giant leaps forwards or backwards. Therefore, we formulate the movie/book alignment problem as inference in a Conditional Random Field that encourages nearby shot/dialog alignments to be consistent. Each node y_i in our CRF represents an alignment of the shot in the movie with its corresponding subtitle sentence to a sentence in the book. Its state space is thus the set of all sentences in the book. The CRF energy of a configuration y is formulated as:
$$-\log p(\mathbf{x}, \mathbf{y}; \omega) = \sum_{i=1}^{K} \omega_u\, \phi_u(y_i) + \sum_{i=1}^{K} \sum_{j \in N(i)} \omega_p\, \phi_p(y_i, y_j)$$ | 1506.06724#33 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 34 | as evidence that CMM exact matches and DCGM semantic matches interact positively, a finding that comports with Gao et al. (2014a), who show that semantic relationships mined through phrase embeddings correlate positively with classic co-occurrence-based estimations. Analysis of CMM feature weights in Figure 4 (c) and (d) suggests that 1-gram matches are explained away by the DCGM model, but that higher order matches are important. It appears that DCGM models might be improved by preserving word-order information in context and message encodings.
# 6.3 Human Evaluation
Human evaluation was conducted using crowd-sourced annotators. Annotators were asked to compare the quality of system output responses pairwise ("Which is better?") in relation to the context and message strings in the 2114-item test set. Identical strings were held out, so that the annotators only saw those outputs that differed. Paired responses were presented in random order to the annotators, and each pair of responses was judged by 5 annotators. | 1506.06714#34 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 34 | $$-\log p(\mathbf{x}, \mathbf{y}; \omega) = \sum_{i=1}^{K} \omega_u\, \phi_u(y_i) + \sum_{i=1}^{K} \sum_{j \in N(i)} \omega_p\, \phi_p(y_i, y_j)$$
where K is the number of nodes (shots), and N(i) are the left and right neighbors of y_i. Here, $\phi_u(\cdot)$ and $\phi_p(\cdot)$ are unary and pairwise potentials, respectively, and $\omega = (\omega_u, \omega_p)$. We directly use the output of the CNN from 4.3 as the unary potential $\phi_u(\cdot)$. For the pairwise potential, we measure the time span $d_s(y_i, y_j)$ between two neighbouring sentences in the subtitle and the distance $d_b(y_i, y_j)$ of their state space in the book. One pairwise potential is defined as:
$$\phi_p(y_i, y_j) = \frac{\big(d_s(y_i, y_j) - d_b(y_i, y_j)\big)^2}{\big(d_s(y_i, y_j) - d_b(y_i, y_j)\big)^2 + \sigma^2} \qquad (18)$$ | 1506.06724#34 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 35 | Table 4 summarizes the results of human evaluation, giving the difference in mean scores (pairwise preference margin) between systems and 95% confidence intervals generated using Welch's t-test. Identical strings not shown to raters are incorporated with an automatically assigned score of 0.5. The pattern in these results is clear and consistent: context-sensitive systems (+CMM) outperform non-context-sensitive systems, with preference gains as high as approximately 5.3% in the case of DCGM-II+CMM versus IR, and about 3.1% in the case of DCGM-II+CMM versus MT. Similarly, context-sensitive DCGM systems outperform non-DCGM context-sensitive systems by 1.5% (MT) and 2.3% (IR). These results are consistent with the automated BLEU rankings and confirm that our best performing DCGM models outperform both the raw baseline and the context-sensitive baseline using CMM features.
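For reference, a minimal sketch of the significance test named above, assuming SciPy; the per-item score arrays are placeholders for the pairwise preference scores:

```python
# Welch's unequal-variance t-test over per-item preference scores of two
# systems; returns the test statistic and p-value.
from scipy.stats import ttest_ind

def welch_test(scores_a, scores_b):
    """scores_a/b: per-item preference scores for systems A and B."""
    return ttest_ind(scores_a, scores_b, equal_var=False)  # Welch's t-test
```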
# 6.4 Discussion
Table 5 provides examples of responses generated on the tuning corpus by the MT-based DCGM-II+CMM system, our best system in terms of both BLEU and human evaluation. Responses from this system are on average shorter (8.95 tokens) than the original human | 1506.06714#35 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 35 | Here $\sigma^2$ is a robustness parameter to avoid punishing giant leaps too harshly. Both $d_s$ and $d_b$ are normalized to [0, 1]. In addition, we also employ another pairwise potential $\phi_q(y_i, y_j) = \frac{d_b(y_i, y_j)^2}{d_b(y_i, y_j)^2 + \sigma^2}$ to encourage state consistency between nearby nodes. This potential is helpful when there is a long silence (no dialog) in the movie.
Inference. Our CRF is a chain, thus exact inference is possible using dynamic programming. We also prune some states that are very far from the uniform alignment (over 1/3 length of the book) to further speed up computation.
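A minimal sketch of this exact chain inference (Viterbi-style dynamic programming), assuming numpy; `unary` holds the CNN scores as energies and `trans` any per-step transition cost such as Eq. (18), both of which are assumptions about the data layout:

```python
# Minimum-energy decoding on a chain CRF: dynamic programming over book
# sentences (states) for each shot (node), then backtracking.
import numpy as np

def viterbi_align(unary, trans):
    """unary: (K, S) energy of shot i taking book sentence s;
    trans[i]: (S, S) transition cost from the state of shot i-1 to shot i.
    Returns the minimum-energy state sequence (one book sentence per shot)."""
    K, S = unary.shape
    cost = unary[0].copy()
    back = np.zeros((K, S), dtype=int)
    for i in range(1, K):
        total = cost[:, None] + trans[i] + unary[i][None, :]  # (S, S)
        back[i] = total.argmin(axis=0)   # best previous state per state
        cost = total.min(axis=0)
    path = [int(cost.argmin())]
    for i in range(K - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1]
```

Pruning states far from the uniform alignment, as the text describes, would simply restrict the columns considered at each step.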
Learning. Since ground-truth is only available for a sparse set of shots, we regard the states of unobserved nodes as hidden variables and learn the CRF weights with [29].
# 5. Experimental Evaluation
We evaluate our model on our dataset of 11 movie/book pairs. We train the parameters in our model (CNN and CRF) | 1506.06724#35 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 36 | # Context c / Message m / Generated Response r
1. c: noone can help this time its just i need a support system / m: well im here to support you. whenever you need me. / r: thank you, i appreciate it thank you
2. c: nothin much, and how's the book?! / m: its good but i'm only like halfway through cuz i don't feel like reading. i'm so bored ... / r: that's good! i have the book but i'm bored too.
3. c: i know! what have you been up to? / m: nothing really. just enjoying summer and working. what about you? / r: that's good! nothing much just listening to music and you?
4. c: thanks, your way better though / m: i'm just good at expressing it that's but trust me you are freakin amazing and waay better / r: i know, i don't trust you, i love you too! thank you
5. c: thanks. yeah i am good thanks! good, but angry! / m: sorry you're angry, been there, i hope it dies down. / r: i hope so too, i'm not angry!!
6. c: figures. bass. never ate bass. / m: bass is extremely mild with very few bones. my favorite after catfish / r: that's one of the bones, chuck bass is mine too. | 1506.06714#36 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 36 | # 5. Experimental Evaluation
We evaluate our model on our dataset of 11 movie/book pairs. We train the parameters in our model (CNN and CRF)
on Gone Girl, and test our performance on the remaining 10 movies. In terms of training speed, our video-text model "watches" 1,440 movies per day and our sentence model reads 870 books per day. We also show various qualitative results demonstrating the power of our approach. We provide more results in the Appendix of the paper.
# 5.1. Movie/Book Alignment | 1506.06724#36 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 37 | 6. (cont.) c: bass. never ate bass. / m: bass is extremely mild with very few bones. my favorite after catfish / r: that's one of the bones, chuck bass is mine too.
7. c: today i will walk for miles. clear sky clear mind. / m: love walking. it's too hot to walk this month. i haven't moved. frozen. heat freeze. / r: it's too hot for you to get up to honey if you have frozen yogurt to you.
8. c: arresting people over 'nasty tweets'? please. i'm not condoning what the lad did but arresting him is a tad silly. / m: it appears he may have sent death threats to others while all this was kicking off too. / r: i did! he was armed with nuclear threats? that's what's happening to you. | 1506.06714#37 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 37 | Evaluating the performance of movie/book alignment is an interesting problem on its own. This is because our ground-truth is far from exhaustive: around 200 correspondences were typically found between a movie and its book, and likely a number of them got missed. Thus, evaluating the precision is rather tricky. We thus focus our evaluation on recall, similar to existing work on retrieval. For each shot that has a GT correspondence in the book, we check whether our prediction is close to the annotated one. We evaluate recall at the paragraph level, i.e., we say that the GT paragraph was recalled if our match was at most 3 paragraphs away, and the shot was at most 5 subtitle sentences away. As a noisier measure, we also compute recall and precision at multiple alignment thresholds and report AP (avg. prec.). The results are presented in Table 4. Columns show different instantiations of our model: we show the leave-one-feature-out setting (✓
indicates that all features were used), compare how different depths of the context-aware CNN influence the performance, and compare it to our full model (CRF) in the last column. We get the highest boost by adding more layers to | 1506.06724#37 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
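A minimal sketch of the recall protocol described in the chunk above; the dictionary layout is an assumption for illustration:

```python
# A ground-truth correspondence counts as recalled if the predicted match is
# within 3 paragraphs in the book and 5 subtitle sentences in the movie.
def alignment_recall(predictions, ground_truth,
                     max_para_dist=3, max_sub_dist=5):
    """predictions/ground_truth: {shot_id: (paragraph_idx, subtitle_idx)}."""
    hits = sum(
        abs(predictions[s][0] - p) <= max_para_dist
        and abs(predictions[s][1] - d) <= max_sub_dist
        for s, (p, d) in ground_truth.items() if s in predictions
    )
    return hits / max(len(ground_truth), 1)
```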
1506.06724 | 38 | CNN influence the performance, and compare it to our full model (CRF) in the last column. We get the highest boost by adding more layers to the CNN: recall improves by 14%, and AP doubles. Generally, each feature helps performance. Our sentence embedding (BOOK) helps by 4%, while the noisier video-text embedding helps by 2% in recall. The CRF, which encourages temporal smoothness, generally helps (but not for all movies), bringing an additional 2%. We also show how a uniform timeline performs on its own. That is, for each shot (measured in seconds) in the movie, we find the sentence at the same location (measured in lines) in the book. We add another baseline to evaluate the role of context in our model. Instead of using our CNN that considers contextual information, we build a linear SVM to combine different similarity measures in a single node (shot); the final similarity is used as a unary potential in our CRF alignment model. The Table shows that our CNN contextual model outperforms the SVM baseline by 30% in recall, and doubles the AP. We plot alignment for a few movies in Fig. 8. | 1506.06724#38 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 39 | responses in the tuning set (11.5 tokens). Overall, the outputs tend to be generic or commonplace, but are often reasonably plausible in the context as in examples 1-3, especially where context and message contain common conversational elements. Example 2 illustrates the impact of context-sensitivity: the word "book" in the response is not found in the message. Nonetheless, longer generated responses are apt to degrade both syntactically and in terms of content. We notice that longer responses are likely to present information that conflicts either internally within the response itself, or is at odds with the context, as in examples 4-5. This is not surprising, since our model lacks mechanisms both for reflecting agent intent in the response and for maintaining consistency with respect to sentiment polarity. Longer context and message components may also result in responses that wander off-topic or lapse into incoherence as in 6-8, especially when relatively low frequency unigrams ("bass", "threat") are echoed in the response. In general, we expect that larger datasets and incorporation of more extensive contexts into the model will help yield more coherent results in these cases. Consistent representation of agent intent is outside the scope of this work, but will likely remain a significant challenge.
# 7 Conclusion | 1506.06714#39 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 39 | Running Times. We show the typical running time of each component in our model in Table 5. For each movie-book pair, calculating the BLEU score takes most of the time. Note that BLEU does not contribute significantly to the performance and is of optional use. With respect to the rest, extracting visual features VIS (mean pooling GoogLeNet features over the shot frames) and SCENE features (mean pooling hybrid-CNN features [38] over the shot frames), | 1506.06724#39 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 40 | # 7 Conclusion
We have formulated a neural network architecture for data-driven response generation trained from social media conversations, in which generation of responses is conditioned on past dialog utterances that provide contextual information. We have proposed a novel multi-reference extraction technique allowing for robust automated evaluation using standard SMT metrics such as BLEU and METEOR. Our context-sensitive models consistently outperform both context-independent and context-sensitive baselines by up to 11% relative improvement in BLEU in the MT setting and 24% in the IR setting, albeit using a minimal number of features. As our models are completely data-driven and self-contained, they hold the potential to improve fluency and contextual relevance in other types of dialog systems.
Our work suggests several directions for future research. We anticipate that there is much room for improvement if we employ more complex neural network models that take into account word order within the message and context utterances. Direct generation from neural network models is an interesting and potentially promising next step. Future progress in this area will also greatly benefit from thorough study of automated evaluation metrics.
# Acknowledgments
We thank Alan Ritter, Ray Mooney, Chris Quirk, Lucy Vanderwende, Susan Hendrich and Mouni Reddy for helpful discussions, as well as the three anonymous reviewers for their comments. | 1506.06714#40 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 40 | [Table (garbled in extraction): matrix of movie-vs-book alignment scores; row labels include Fight Club, Green Mile, Harry Potter, American Psy., One Flew..., Shawshank..., The Firm, Brokeback..., No Country..., The Road; in each row one book scores 100.0 and the rest score lower (roughly 36-85).] | 1506.06724#40 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 41 | # References
[Auli et al.2013] Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proc. of EMNLP, pages 1044-1054.
[Banerjee and Lavie2005] Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc. of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Jun.
[Bengio et al.2003] Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2003. A neural probabilistic language model. Journ. Mach. Learn. Res., 3:1137-1155. [Cho et al.2014] Kyunghyun Cho, Bart van Merriënboer, Çağlar Gülçehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Proc. of EMNLP. | 1506.06714#41 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 41 | [Table, continued (garbled in extraction): further rows of the movie-vs-book alignment score matrix (Harry Potter, American Psy., One Flew..., Shawshank...), each with one book at 100.0 and the rest lower.] | 1506.06724#41 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06714 | 42 | [Collobert and Weston2008] Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proc. of ICML, pages 160–167. ACM.
[Devlin et al.2014] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proc. of ACL.
[Duchi et al.2011] John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journ. Mach. Learn. Res., 12:2121–2159.
[Gao et al.2014a] Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014a. Learning continuous phrase representations for translation modeling. In Proc. of ACL, pages 699–709.
[Gao et al.2014b] Jianfeng Gao, Patrick Pantel, Michael Gamon, Xiaodong He, and Li Deng. 2014b. Modeling interestingness with deep neural networks. In Proc. of EMNLP, pages 2–13. | 1506.06714#42 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 42 | Table 6 (second part): Shawshank... (overlap with the previous chunk): Brokeback... 77.8; Green Mile 76.9. The Firm: The Firm 100.0; Shawshank... 66.0; Fight Club 62.0; Brokeback... 61.4; One Flew... 60.9; American Psy. 59.1; Harry Potter 58.0. Brokeback...: Brokeback... 100.0; One Flew... 75.0; Fight Club 73.9; American Psy. 73.7; Green Mile 71.5; The Firm 71.4; Shawshank... 68.5. The Road: The Road 100.0; The Firm 54.8; One Flew... 52.2; No Country... 51.9; Fight Club 50.9; Shawshank... (score truncated). A ranking sketch follows this record. | 1506.06724#42 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
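The two table chunks above follow a simple recipe: score every candidate book against a movie with the alignment model, rescale so the best book gets 100, and rank. Below is a minimal sketch of that rescaling and ranking step (not the authors' code); the raw similarities are hypothetical stand-ins, not values from the paper.

```python
# Minimal sketch of the Table 6 book "retrieval" step: rescale raw
# movie-to-book alignment similarities so the top book scores 100,
# then rank. The raw scores below are hypothetical stand-ins.

def rank_books(raw_similarity):
    """Return (book, score) pairs, rescaled so the best book scores 100."""
    top = max(raw_similarity.values())
    scaled = {book: 100.0 * s / top for book, s in raw_similarity.items()}
    return sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical raw alignment similarities for the movie "The Firm".
raw = {"The Firm": 0.500, "Shawshank...": 0.330, "Fight Club": 0.310,
       "Brokeback...": 0.307, "One Flew...": 0.304}
for book, score in rank_books(raw):
    print(f"{book}: {score:.1f}")
```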
1506.06714 | 43 | [Georgila et al.2006] Kallirroi Georgila, James Henderson, and Oliver Lemon. 2006. User simulation for spoken dialogue systems: Learning and evaluation. In Proc. of Interspeech/ICSLP.
[Gutmann and Hyvärinen2010] Michael Gutmann and Aapo Hyvärinen. 2010. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proc. of AISTATS, pages 297–304.
[Huang et al.2013] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proc. of CIKM, pages 2333–2338.
[Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. Proc. of EMNLP, pages 1700–1709.
[Kneser and Ney1995] Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for M-gram language modeling. In Proc. of ICASSP, pages 181–184, May. | 1506.06714#43 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06714 | 44 | [Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proc. of ACL Demo and Poster Sessions, pages 177–180.
[Mikolov and Zweig2012] Tomas Mikolov and Geoffrey Zweig. 2012. Context Dependent Recurrent Neural Network Language Model. In Proc. of SLT.
[Mikolov et al.2010] Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. of INTERSPEECH, pages 1045–1048.
[Och and Ney2004] Franz Josef Och and Hermann Ney. 2004. The alignment template approach to machine translation. Comput. Linguist., 30(4):417–449.
[Och2003] Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL, pages 160–167. | 1506.06714#44 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06714 | 45 | [Och2003] Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL, pages 160–167.
[Papineni et al.2002] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL, pages 311–318.
[Pascanu et al.2013] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. Proc. of ICML, pages 1310–1318.
[Ritter et al.2011] Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proc. of EMNLP, pages 583–593.
[Robertson et al.1995] Stephen E. Robertson, Steve Walker, Susan Jones, et al. 1995. Okapi at TREC-3. In TREC.
[Rumelhart et al.1988] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1988. Learning representations by back-propagating errors. In James A. Anderson and Edward Rosenfeld, editors, Neurocomputing: Foundations of Research, pages 696–699. MIT Press, Cambridge, MA, USA. | 1506.06714#45 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06714 | 46 | [Shen et al.2014] Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proc. of CIKM, pages 101–110.
[Stent and Bangalore2014] Amanda Stent and Srinivas Bangalore. 2014. Natural Language Generation in Interactive Systems. Cambridge University Press.
[Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. Proc. of NIPS.
[Walker et al.2003] Marilyn A. Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In Proc. of EUROSPEECH.
[Young et al.2010] Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Comput. Speech Lang., 24(2):150–174.
[Young2002] Steve Young. 2002. Talking to machines (statistically speaking). In Proc. of INTERSPEECH. | 1506.06714#46 | A Neural Network Approach to Context-Sensitive Generation of Conversational Responses | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines. | http://arxiv.org/pdf/1506.06714 | Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | cs.CL, cs.AI, cs.LG, cs.NE | A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell,
J.-Y. Nie, J. Gao, B. Dolan. 2015. A Neural Network Approach to
Context-Sensitive Generation of Conversational Responses. In Proc. of
NAACL-HLT. Pages 196-205 | null | cs.CL | 20150622 | 20150622 | [] |
1506.06724 | 46 | Table 6: Book "retrieval". For a movie (left), we rank books with respect to their alignment similarity with the movie. We normalize similarity to be 100 for the highest scoring book.
takes most of the time (about 80% of the total time).
We also report training times for our contextual model (CNN) and the CRF alignment model. Note that the times are reported for one movie/book pair since we used only one such pair to train all our CNN and CRF parameters. We chose Gone Girl for training since it had the best balance between the dialog and visual correspondences.
# 5.2. Describing Movies via the Book
We next show qualitative results of our alignment. In particular, we run our model on each movie/book pair, and visualize the passage in the book that a particular shot in the movie aligns to. We show best matching paragraphs as well as a paragraph before and after (a small retrieval sketch follows this record). The results are shown in Fig. 8. One can see that our model is able to retrieve a semantically meaningful match despite large dialog deviations from those in the book, and the challenge of matching a visual representation to the verbose text in the book. | 1506.06724#46 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
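A minimal sketch of the lookup just described (not the paper's implementation): given a precomputed alignment from movie shots to book paragraph indices, return the best-matching paragraph plus one paragraph of context on each side, as rendered in Fig. 4. The `shot_to_paragraph` and `paragraphs` inputs are hypothetical.

```python
# Minimal sketch, assuming a precomputed shot-to-paragraph alignment.
# Returns the best-matching paragraph plus one paragraph of context
# before and after, mirroring the Fig. 4 layout. All inputs are
# hypothetical stand-ins, not the paper's data structures.

def describe_shot(shot_id, shot_to_paragraph, paragraphs):
    """Best-matching paragraph with one paragraph of context on each side."""
    i = shot_to_paragraph[shot_id]
    lo, hi = max(0, i - 1), min(len(paragraphs), i + 2)
    return paragraphs[lo:hi]

book = ["An earlier paragraph...",
        '"Certainly, Mr. Cheswick. A vote is now before the group..."',
        "A later paragraph..."]
alignment = {"00:43:16-00:43:19": 1}
print(describe_shot("00:43:16-00:43:19", alignment, book))
```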
1506.06724 | 47 | [00:43:16:00:43:19] Okay, I wanna see the hands. Come on. "Certainly, Mr. Cheswick. A vote is now before the group. Will a show of hands be adequate, Mr. McMurphy, or are you going to insist on a secret ballot?" "I want to see the hands. I want to see the hands that don't go up, too." "Everyone in favor of changing the television time to the afternoon, raise his hand."
[02:14:29:02:14:32] Good afternoon, Harry. ... He realized he must be in the hospital wing. He was lying in a bed with white linen sheets, and next to him was a table piled high with what looked like half the candy shop. "Tokens from your friends and admirers," said Dumbledore, beaming. "What happened down in the dungeons between you and Professor Quirrell is a complete secret, so, naturally, the whole school knows. I believe your friends Misters Fred and George Weasley were responsible for trying to send you a toilet seat. No doubt they thought it would amuse you. Madam Pomfrey, however, felt it might not be very hygienic, and confiscated it." | 1506.06724#47 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 48 | [00:43:16:00:43:19] Okay, I wanna see the hands. Come on. [01:00:02:01:00:04] Are you saying my life is in danger? [01:13:05:01:13:06] Right, Caleb? ...group. Will a show of hands be adequate, Mr. McMurphy, or are you going to insist on a secret ballot?" "I want to see the hands. I want to see the hands that don't go up, too." "Everyone in favor of changing the television time to the afternoon, raise his hand." Mitch braced himself and waited. "Mitch, no lawyer has ever left your law firm alive. Three have tried, and they were killed. Two were about to leave, and they died last summer. Once a lawyer joins Bendini, Lambert & Locke, he never leaves, unless he retires and keeps his mouth shut. And by the time they retire, they are a part of the conspiracy and cannot talk. The Firm has an extensive surveillance operation on the fifth floor. Your house and car are bugged. Your phones are tapped. Your desk and office are wired. Virtually every word you utter is heard and recorded on the fifth | 1506.06724#48 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 49 | fifth floor. Your house and car are bugged. Your phones are tapped. Your desk and office are wired. Virtually every word you utter is heard and recorded on the fifth ... you, and sometimes your wife. They are here in Washington as we speak. You see, Mitch, The Firm is more than a firm. It is a division of a very large business, a very profitable business. A very illegal business. The Firm is not owned by the partners." Mitch turned and watched him closely. The Director looked at the frozen pond as he spoke. A huge, circular scar ran out of his hair, down his forehead, through one dead and indifferently cocked eye, and to the corner of his mouth, which had been disfigured into the knowing leer of a gambler or perhaps a whoremaster. One cheek was smooth and pretty; the other was bunched up like the stump of a tree. I guessed there had been a hole in it, but that, at least, had healed. "He has the one eye," Hammersmith said, caressing the boy's bunched cheek with a lover's kind fingers. "I suppose he's lucky not to be blind. We get down on our knees and thank God for that | 1506.06724#49 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 50 | bunched cheek with a lover's kind fingers. "I suppose he's lucky not to be blind. We get down on our knees and thank God for that much, at least. Eh, Caleb?" "Yes, sir," the boy said shyly - the boy who would be beaten mercilessly on the play-yard by laughing, jeering bullies for all his miserable years of education, the boy who would never be asked to play Spin the Bottle or Post Office and would probably never sleep with a woman not bought and paid for once he was grown to manhood's times and needs, the boy who would always stand outside the warm and lighted circle of his peers, the boy who would look at himself in his mirror for the next fifty or sixty or seventy years of his life and think ugly, ugly, ugly. [02:14:29:02:14:32] Good afternoon, Harry. [02:15:24:02:15:26] <i>You remember the name of the town, don't you?</i> [01:26:19:01:26:22] You're not the one that has to worry about everything. ...was lying in a bed with white linen | 1506.06724#50 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 51 | you?</i> [01:26:19:01:26:22] You're not the one that has to worry about everything. ...was lying in a bed with white linen sheets, and next to him was a table piled high with what looked like half the candy shop. "Tokens from your friends and admirers," said Dumbledore, beaming. "What happened down in the dungeons between you and Professor Quirrell is a complete secret, so, naturally, the whole school knows. I believe your friends Misters Fred and George Weasley were responsible for trying to send you a toilet seat. No doubt they thought it would amuse you. Madam Pomfrey, however, felt it might not be very hygienic, and confiscated it." I took the envelope and left the rock where Andy had left it, and Andy's friend before him. Dear Red, If you're reading this, then you're out. One way or another, you're out. And if you've followed along this far, you might be willing to come a little further. I think you remember the name of the town, don't you? I could use a good man to help me get my project on wheels. Meantime, have a drink on me - and do think it over. I will be keeping an eye out for you. Remember that hope is a good thing, Red, maybe the best of things, and no good thing ever dies. I will be hoping that this letter finds you, and finds you well. Your friend, Peter Stevens. I didn't read that letter in the field. The man squatted and looked at him. I'm scared, he said. Do you understand? I'm scared. The boy didn't answer. He just sat there with his head bowed, sobbing. You're not the one who has to worry about everything. | 1506.06724#51 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 52 | I could use a good man to help me get my project on wheels. Meantime, have a drink on me - and do think it over. I will be keeping an eye out for you. Remember that hope is a good thing, Red, maybe the best of things, and no good thing ever dies. I will be hoping that this letter finds you, and finds you well. Your friend, Peter Stevens. I didn't read that letter in the field. The man squatted and looked at him. I'm scared, he said. Do you understand? I'm scared. The boy didn't answer. He just sat there with his head bowed, sobbing. You're not the one who has to worry about everything. | 1506.06724#52 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 53 | [01:00:02:01:00:04] Are you saying my life is in danger? Mitch braced himself and waited. "Mitch, no lawyer has ever left your law firm alive. Three have tried, and they were killed. Two were about to leave, and they died last summer. Once a lawyer joins Bendini, Lambert & Locke, he never leaves, unless he retires and keeps his mouth shut. And by the time they retire, they are a part of the conspiracy and cannot talk. The Firm has an extensive surveillance operation on the fifth floor. Your house and car are bugged. Your phones are tapped. Your desk and office are wired. Virtually every word you utter is heard and recorded on the fifth ... you, and sometimes your wife. They are here in Washington as we speak. You see, Mitch, The Firm is more than a firm. It is a division of a very large business, a very profitable business. A very illegal business. The Firm is not owned by the partners." Mitch turned and watched him closely. The Director looked at the frozen pond as he spoke. | 1506.06724#53 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 54 | [02:15:24:02:15:26] <i>You remember the name of the town, don't you?</i> I took the envelope and left the rock where Andy had left it, and Andy's friend before him. Dear Red, If you're reading this, then you're out. One way or another, you're out. And if you've followed along this far, you might be willing to come a little further. I think you remember the name of the town, don't you? I could use a good man to help me get my project on wheels. Meantime, have a drink on me - and do think it over. I will be keeping an eye out for you. Remember that hope is a good thing, Red, maybe the best of things, and no good thing ever dies. I will be hoping that this letter finds you, and finds you well. Your friend, Peter Stevens. I didn't read that letter in the field | 1506.06724#54 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 55 | [01:13:05:01:13:06] Right, Caleb? A huge, circular scar ran out of his hair, down his forehead, through one dead and indifferently cocked eye, and to the corner of his mouth, which had been disfigured into the knowing leer of a gambler or perhaps a whoremaster. One cheek was smooth and pretty; the other was bunched up like the stump of a tree. I guessed there had been a hole in it, but that, at least, had healed. "He has the one eye," Hammersmith said, caressing the boy's bunched cheek with a lover's kind fingers. "I suppose he's lucky not to be blind. We get down on our knees and thank God for that much, at least. Eh, Caleb?" "Yes, sir," the boy said shyly - the boy who would be beaten mercilessly on the play-yard by laughing, jeering bullies for all his miserable years of education, the boy who would never be asked to play Spin the Bottle or Post Office and would probably never sleep with a woman not bought and paid for once he was grown to manhood's times and needs, the boy who would always stand outside the warm and lighted circle of his peers, the boy who would look at himself in his mirror for the next fifty or sixty or seventy years of his life and think ugly, ugly, ugly. | 1506.06724#55 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 56 | [01:26:19:01:26:22] You're not the one that has to worry about everything. The man squatted and looked at him. I'm scared, he said. Do you understand? I'm scared. The boy didn't answer. He just sat there with his head bowed, sobbing. You're not the one who has to worry about everything.
Figure 4: Describing movie clips via the book: we align the movie to the book, and show a shot from the movie and its corresponding paragraph (plus one before and after) from the book. | 1506.06724#56 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 57 | American.Psycho [00:13:29:00:13:33] Lady, if you don't shut your fucking mouth, I will kill you. Batman.Begins [02:06:23:02:06:26] - I'm sorry I didn't tell you, Rachel. - No. No, Bruce... [00:30:16:00:30:19] Prolemuris. They're aggressive. Fight.Club I have your license. I know who you are. I know where you live. I'm keeping your license, and I'm going to check on you, mister Raymond K. Hessel. In three months, and then in six months, and then in a year, and if you aren't back in school on your way to being a veterinarian, you will be dead. You didn't say anything. Harry.Potter.and.the.Sorcerers.Stone [00:05:46:00:05:48] I'm warning you now, boy Bane Chronicles-2 "She has graciously allowed me into her confidence." Magnus could read between the lines. Axel didn't kiss and tell, which made him only more attractive. "The escape is to be made on Sunday," Axel went on. "The plan is simple, but exacting. We have arranged it so | 1506.06724#57 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 58 | which made him only more attractive. "The escape is to be made on Sunday," Axel went on. "The plan is simple, but exacting. We have arranged it so the guards have seen certain people leaving by certain exits at certain times. On ... Adventures of Tom Bombadil Of crystal was his habergeon, his scabbard of chalcedony; with silver tipped at plenilune his spear was hewn of ebony. His javelins were of malachite and stalactite - he brandished them, and went and fought the dragon-flies of Paradise, and vanquished them. He battled with the Dumbledors, the Hummerhorns, and Honeybees, and won the Golden Honeycomb; and running home on sunny seas in ship of leaves and gossamer with blossom for a canopy, he sat... Batman.Begins [01:38:41:01:38:44] I'm gonna give you a sedative. You'll wake up back at home. Batman.Begins [01:09:31:01:09:34] I'm going to have to Fight.Club You didn't say anything. Get out of here, and do your little life, but remember I'm watching you, Raymond Hessel, and I'd rather kill you | 1506.06724#58 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |
1506.06724 | 59 | have to Fight.Club You didn't say anything. Get out of here, and do your little life, but remember I'm watching you, Raymond Hessel, and I'd rather kill you than see you working a shit job for just enough money to buy cheese and watch television. Now, I'm going to walk away so don't turn around. A Captive's Submission "I believe you will enjoy your time here. I am not a harsh master but I am strict. When we are with others, I expect you to present yourself properly. What we do here in your room and in the dungeon is between you and I. It is a testament to the trust and respect we have for each other and no one else needs to know about our arrangement. I'm sure the past few days have been overwhelming thus far but I have tried to give you as much information as possible. Do you have any questions?" A Dirty Job "This says 'Purveyor of Fine Vintage Clothing and Accessories.'" "Right! Exactly!" He knew he should have had a second set of business cards printed up. "And where do you think I get those things? From the dead. You see?" "Mr. Asher, I'm going to have to ask you to leave." | 1506.06724#59 | Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books | Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for. | http://arxiv.org/pdf/1506.06724 | Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler | cs.CV, cs.CL | null | null | cs.CV | 20150622 | 20150622 | [
{
"id": "1502.03044"
}
] |